The Model Theory of Falsification
This paper is concerned with the question of when a theory is refutable with certainty on the basis of a sequence of primitive observations. Beginning with the simple definition of falsifiability as the ability to be refuted by some finite collection of observations, I assess the literature on falsification and its descendants within the context of the dividing lines of contemporary model theory. The static case is broadly concerned with the question of how much of a theory can be subjected to falsifying experiments. In much of the literature, this question is tied up with whether the theory in question is axiomatizable by a collection of universal first-order sentences. I argue that this is too narrow a conception of falsification by demonstrating that a natural class of theories of distinct model-theoretic interest, the so-called NIP theories, are themselves highly falsifiable.
Introduction
Popper's [14] solution to the demarcation problem says that the distinguishing feature of a scientific theory, construed as an empirical hypothesis, is its falsifiability. Various accounts of falsification have emerged over the years; in this chapter, I aim to provide a model-theoretic account of falsification that will aid us in understanding both long-run and short-run properties of various falsificationist strategies.
Broadly speaking, falsification centers on the following question: given a class K of possible worlds, is there some finite collection of observations about our world W that would allow us to infer W ∉ K? If the answer is "yes," the class K is said to be falsifiable. Throughout, we suppose that L is a signature and K ⊆ Str(L) a class of L-structures. Epistemically, L plays the role of the collection of observable relations, functions, and constants that relate objects in the world. We suppose that the world W is itself an L-structure.
An observable formula ϕ(x_1, …, x_n) corresponds to a finite Boolean combination of atomic L-formulas. We say that ϕ(x_1, …, x_n) is K-forbidden just in case no W ∈ K realizes ϕ. Model-theoretically, this is the same as saying that K ⊨ (∀x_1, …, x_n)¬ϕ(x_1, …, x_n). The present paper focuses on the static case of falsification; a subsequent paper will address the dynamic case.
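For reference, the definitions just given can be collected into a single display (a restatement of the above, with ϕ ranging over observable formulas):

```latex
% K-forbiddenness and falsifiability, restated; phi ranges over
% observable formulas (finite Boolean combinations of atomic L-formulas).
\varphi \text{ is } K\text{-forbidden}
  \iff \text{no } W \in K \text{ realizes } \varphi
  \iff K \models (\forall \bar{x})\,\neg\varphi(\bar{x}),
\qquad
K \text{ is falsifiable}
  \iff \text{some satisfiable observable } \varphi \text{ is } K\text{-forbidden}.
```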
In discussing the static case, we will primarily discuss variants surrounding the first-order, universal axiomatization of a class. After all, if a class K is specified by a universal set of axioms, then each of the theory's axioms expresses the incompatibility of K with some observation. Others, such as Simon and Groen [16] and Chambers et al. [4], propose constraints on K even more stringent than universal axiomatizability before deeming it falsifiable.
Falsifiability and Unfalsifiability in Mechanics and Economics
As a warm-up to our investigation of falsification, we examine falsificational phenomena in physics and economics by way of an analysis of Newtonian Mechanics and the theory of choice.
Newtonian Mechanics
We begin by showing that, in a strong sense, the framework of Newtonian Mechanics is unfalsifiable relative to the class of kinematic motions. For the definition of a Newtonian system, I follow the formalism given by Arnold [1].
Definition 2.1. [1, p. 8] An n-particle motion is a smooth function x : ℝ → ℝ^{3n} such that the graphs of the trajectories of each particle are non-intersecting.
A Newtonian system of n particles is a motion x : ℝ → ℝ^{3n} such that there exists a vector field F : ℝ^{3n} × ℝ^{3n} × ℝ → ℝ^{3n} with x″(t) = F(x(t), x′(t), t) for all t ∈ ℝ.
Let K be the class of Newtonian systems. Note here that since we do not require K to be an elementary class in order to be falsifiable, we do not have to exhibit a first-order axiomatization of K.
By an n-particle kinematic datum e I mean an equality (x(t_0), x′(t_0), x″(t_0)) = v, where v ∈ ℝ^{9n} and t_0 ∈ ℝ. Intuitively, a kinematic datum is a specification of the numerical values of the vector (x(t), x′(t), x″(t), t) ∈ ℝ^{9n} × ℝ. For a set E of kinematic data, let π_t(E) be the set of times occurring as values in E. For an element e ∈ E, let t_e be the value of the time coordinate of e. We say that a set E of kinematic data is motional provided all sentences in E are satisfied by some motion.
Proposition 2.1. Let E be a finite motional set of kinematic data. Then there is an n-particle Newtonian system x satisfying every datum e ∈ E.
Thus, the class of Newtonian systems of n particles is unfalsifiable relative to the class of n-particle motions.
Proof. First, we note that if E is motional we may replace all inequalities of E with equalities to prove the claim. Let Y_i denote the trajectory of the i-th particle.
Thus, e is equivalent to a system of equations of the form Y_i(t_j) = p_{ij}, Y′_i(t_j) = v_{ij}, Y″_i(t_j) = a_{ij}, where p_{ij}, v_{ij}, and a_{ij} represent the position, velocity, and acceleration of the i-th particle at time t_j.
Since the data E is motional, there exist n smooth functions Y_i : ℝ → ℝ^3 such that the positions of the i-th particle satisfy Y_i(t_j) = p_{ij}.
We now show that we may alter this trajectory to ensure that Y′_i(t_j) = v_{ij} and Y″_i(t_j) = a_{ij} for each i, j. The following argument ensures that we can locally alter the Y_i's without intersecting the graphs of the Y_i.
Let I be a closed interval of finite length containing the interval [min_j(t_j), max_j(t_j)]. Since the Y_i are all continuous, there exists a compact box (that is, a product of intervals) R ⊆ ℝ^4 such that a neighborhood of each graph Γ_i of each Y_i restricted to I is contained in R.
Since R is a compact metrizable space and the graphs Γ_i are closed and disjoint, there exists an r ∈ ℝ such that the tubular neighborhoods U_i(r) = {(t, y) ∈ R : d((t, y), Γ_i) < r} of the graphs Γ_i are pairwise disjoint. Now, by the existence and uniqueness theorem for ODEs, there locally exists a unique solution to the initial value problem (x_i, x′_i, x″_i)(t_j) = (p_{ij}, v_{ij}, a_{ij}). For each particle i, by taking a small enough interval I_{i,j} around t_j, we let Z_{i,j} : I_{i,j} → ℝ^3 be the solution to this ODE and have the graph of Z_{i,j} contained in the neighborhood U_i(r) constructed above. Perhaps by shrinking the interval on which Z_{i,j} solves the ODE, we may smoothly extend Z_{i,j} to all of ℝ in a manner such that Z_{i,j}(t) = 0 for all t ∉ I_{i,j}. Now, by the existence of smooth bump functions, there exists a smooth bump function b_{i,j} such that b_{i,j}(t) = 0 on some open interval containing t_j and b_{i,j}(t) = 1 for all t > max(I_{i,j}) and t < min(I_{i,j}). We define Ỹ_i(t) = b_{i,j}(t)Y_i(t) + (1 − b_{i,j}(t))Z_{i,j}(t). Then Ỹ_i(t) satisfies the required differential equations.
Finally, we must argue that there exists a force function F : ℝ^{3n} × ℝ^{3n} × ℝ → ℝ^{3n} such that Ỹ″_i(t) = F_i(Ỹ(t), Ỹ′(t), t) for all i. Intuitively, we would like to define F to be Ỹ″_i(t) along the trajectory of each particle and 0 away from the trajectories,
but this is not a continuous function. However, by the construction of the U_i(r), shrinking r as necessary, we can ensure that this map is continuous by defining F to agree with the accelerations Ỹ″_i on each tubular neighborhood U_i(r), to vanish outside the neighborhoods, and to interpolate continuously in between.

This proposition indicates that no matter how many finite points of data we collect regarding the kinematics of the system, there will always be some Newtonian theory which accommodates that data. This is the sense in which the framework of Newtonian mechanics fails to be falsifiable.
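The interpolation step at the heart of Proposition 2.1 is easy to exhibit computationally for a single coordinate of a single particle. The sketch below, assuming scipy is available, matches prescribed positions, velocities, and accelerations at finitely many times with one piecewise-polynomial trajectory; it produces a C² curve rather than the smooth one of the proof, and it ignores the non-intersection bookkeeping, so it is an illustration rather than the construction itself. The data values are hypothetical.

```python
# Sketch of the interpolation engine behind Proposition 2.1 for a single
# coordinate of a single particle: prescribe position, velocity, and
# acceleration at finitely many times and produce one trajectory matching
# all of them. Requires scipy; the kinematic data below are hypothetical.
import numpy as np
from scipy.interpolate import BPoly

times = np.array([0.0, 1.0, 2.5])
data = [
    [0.0, 1.0, -0.5],   # x(0),   x'(0),   x''(0)
    [2.0, 0.0, 3.0],    # x(1),   x'(1),   x''(1)
    [-1.0, 2.0, 0.0],   # x(2.5), x'(2.5), x''(2.5)
]

# BPoly.from_derivatives builds a piecewise polynomial matching the listed
# derivatives at each breakpoint (C^2 here, since three values are given
# per node).
traj = BPoly.from_derivatives(times, data)

for t, (p, v, a) in zip(times, data):
    assert np.isclose(traj(t), p)
    assert np.isclose(traj.derivative(1)(t), v)
    assert np.isclose(traj.derivative(2)(t), a)
print("every kinematic datum accommodated")
```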
However, there are natural strengthenings of the class K of Newtonian motions which are falsifiable.
For example, suppose that our hypothesis is that a particle P is a free particle with motion x with zero initial acceleration relative to a fixed observer's frame of reference. This implies that for all t the resultant force F(x, x′, t) is identically zero. Thus, the motion x must follow a straight line.
This hypothesis is highly falsifiable. Since a line in ℝ^3 is determined by two points, the theory of the free particle entails that if x(t_1) and x(t_2) are the positions of the first two observations of the particle, all subsequent observations of the particle must lie on the line L(x(t_1), x(t_2)) ⊆ ℝ^3. Thus, for every n > 2, each subsequent observation carries with it the chance of refuting the claim that the particle is free. This is an instance of the notions of always-falsifiability and VC finiteness, which we will discuss in our treatment of the dynamic case of falsification in section 2.3.
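The resulting test is trivially mechanizable. The following sketch (illustrative names, with a numerical tolerance standing in for exact observation) refutes the free-particle hypothesis as soon as a third observation leaves the line spanned by the first two.

```python
# Sketch of the free-particle test: the first two observed positions
# determine a candidate line; each later observation either lies on it or
# refutes the hypothesis. Names and the tolerance are illustrative.
import numpy as np

def refutes_free_particle(positions, tol=1e-9):
    """positions: observed 3D positions, in order. True iff the data are
    inconsistent with motion along a straight line."""
    if len(positions) < 3:
        return False   # two points always span a line
    p0, p1 = np.asarray(positions[0], float), np.asarray(positions[1], float)
    d = p1 - p0
    for q in positions[2:]:
        # q lies on L(p0, p1) iff (q - p0) is parallel to d, i.e. the
        # cross product vanishes.
        if np.linalg.norm(np.cross(np.asarray(q, float) - p0, d)) > tol:
            return True
    return False

print(refutes_free_particle([[0, 0, 0], [1, 1, 1], [2, 2, 2]]))  # False
print(refutes_free_particle([[0, 0, 0], [1, 1, 1], [2, 2, 3]]))  # True
```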
Theory of Choice
We now turn our attention from the falsification of physical theories to the falsification of economic theories of choice. To keep things simple, we model preference as a binary relation ≺ on a set of choices C, where x ≺ y is interpreted as "y is strictly preferred to x." The data (C, ≺) is called a preference structure.
A frequently assumed necessary condition for a preference to be considered rational is that the preference relation is acyclic; namely, that there is no finite chain x_1 ≺ x_2 ≺ ⋯ ≺ x_n ≺ x_1 (in particular, no x with x ≺ x). Let K be the class of acyclic preference structures.
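Detecting a falsifying configuration in a finite body of observed comparisons is a routine computation, as in the following sketch (illustrative names; standard depth-first search for a directed cycle).

```python
# Sketch: a finite set of observed strict preferences falsifies
# acyclicity exactly when it contains a directed cycle. Standard
# depth-first search; names are illustrative.
def has_cycle(edges):
    """edges: pairs (x, y) recording x < y (y strictly preferred to x).
    Returns True iff the observed relation is cyclic."""
    graph = {}
    for x, y in edges:
        graph.setdefault(x, set()).add(y)
        graph.setdefault(y, set())
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}

    def visit(v):
        color[v] = GREY
        for w in graph[v]:
            if color[w] == GREY or (color[w] == WHITE and visit(w)):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in graph)

# A falsifying configuration: a < b, b < c, c < a.
print(has_cycle([("a", "b"), ("b", "c"), ("c", "a")]))  # True
```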
The class K is falsifiable: if one observes a configuration c_1 ≺ c_2 ≺ ⋯ ≺ c_n ≺ c_1, then one can conclude that the underlying choice structure (C, ≺) ∉ K. In fact, K is universally axiomatizable, axiomatized by the collection {(∀x_1, …, x_n)¬(x_1 ≺ x_2 ∧ ⋯ ∧ x_{n−1} ≺ x_n ∧ x_n ≺ x_1)}_{n∈ω}.

On the other hand, there are common rationality assumptions which are not falsifiable. For example, consider the axioms invoked to prove the representability of a preference structure (C, ≺) by a utility function u : C → ℝ, that is, x ≺ y if and only if u(x) < u(y). Gilboa [9, p. 51] gives the axioms of Completeness, (∀x, y)(x ⪯ y ∨ y ⪯ x); Transitivity, (∀x, y, z)(x ⪯ y ∧ y ⪯ z → x ⪯ z); and Separability, the existence of a countable ⪯-dense subset of C. Completeness and Transitivity are both ∀_1 sentences in the language L = {≺}, but Separability is naturally expressed as a second-order sentence.
Gilboa makes a very interesting argument for the admissibility of the Separability axiom: its unfalsifiability "suggests that [Separability] has no empirical content and therefore does not restrict our theory... Rather, the axiom is a price we have to pay if we want to use a certain mathematical model" [9, p. 52].
The following theorem makes this argument precise.
Theorem 2.1. Let K = Mod(T) be the class of models of an L = {≺}-theory T. Let K_sep be the class of M ∈ K satisfying Separability. Then K_sep is unfalsifiable relative to K.
Proof. Let T be a first-order theory and ϕ a ∀_1 sentence such that ϕ ∉ ∀_1(K). We wish to show that K_sep ⊭ ϕ.
Since ϕ ∉ ∀_1(K), there exists some M ∈ K such that M ⊨ ¬ϕ. Since ϕ is a universal sentence, ¬ϕ is existential. Let m ∈ M^k witness ¬ϕ. By the Löwenheim-Skolem theorem there exists a countable elementary substructure M′ ⪯ M containing m of size ℵ_0. Since M′ ∈ K and is of size ℵ_0, M′ trivially satisfies Separability (a countable structure is ⪯-dense in itself), so M′ ∈ K_sep and M′ ⊨ ¬ϕ. Hence ϕ ∉ ∀_1(K_sep).

Thus, not only is the Separability axiom unfalsifiable relative to the class of all L-structures, it is unfalsifiable relative to any first-order axiomatizable theory of preference. In this way the Separability axiom is empirically harmless: we may freely adjoin the Separability axiom to any first-order theory T of choice structures without inadvertently strengthening the observable consequences of T.
Falsifiability and the Randomness of the Universe
As an application of the results of the previous section, we argue that for many ways of making precise the assertion that "the world is, at a fundamental level, random," the assertion is unfalsifiable.
Defining what it means to be "random," however, poses a great difficulty. To this end, I consider two different formalizations of randomness as it pertains to structures: 1. The evolution of the universe is generic in a suitable sense, and 2. The evolution of the universe is generated by a stochastic process.
For a suitable formulation of each of the above cases we will see unfalsifiability arise. To make sense of these two notions we define the notion of a time-indexed structure.
Definition 3.1. Let L be a relational language. Then the time-indexed language L_τ is given by adding to L a unary predicate τ(x) picking out a sort of Times together with an order < on the Times, and replacing each n-ary relation symbol R ∈ L by an (n+1)-ary symbol whose final argument is a Time. A time-indexed structure is an L_τ-structure satisfying the theory T_τ given by axioms expressing that: 1. Objects and Times are different sorts, 2. < linearly orders the Times, and 3. each relation of L relates Objects only, relative to a Time.
The Richness of the Universe
We first consider the notion of the randomness of the universe as specified by the notion of a Fraïssé limit. For a Fraïssé class K, there is a unique, highly homogeneous, countable structure K_lim into which all and only the members of K embed, called the Fraïssé limit. It is characterized by the properties that: 1. age(K_lim) = K, i.e., the finitely generated substructures of K_lim are, up to isomorphism, exactly the members of K, 2. |K_lim| ≤ ℵ_0, and 3. every isomorphism between finitely generated substructures M_1, M_2 ⊆ K_lim extends to an automorphism of K_lim.
Thus, a Fraïssé limit is extremely rich, able to accommodate any finite number of observations. Moreover, when a Fraïssé limit exists for a class K, the first-order theory of K_lim is ∀_1-conservative over K.
Proposition 3.1. Let K be a Fraïssé class of L-structures where L is a finite relational language. If ϕ is a ∀_1 L-sentence, then K_lim ⊨ ϕ if and only if K ⊨ ϕ.
Proof. If K_lim ⊨ ϕ, then since every M ∈ K embeds into K_lim and ϕ is ∀_1, K ⊨ ϕ. Conversely, suppose that K_lim ⊭ ϕ. Then ¬ϕ is existential, so there is some witness m ⊂ K_lim to the falsity of ϕ. Since K = age(K_lim) and m is (the domain of) a finitely generated substructure of K_lim, N = m ∈ K and N ⊨ ¬ϕ. Thus K ⊭ ϕ.
Thus, no new universal sentences are entailed by the Fraïssé limit of K. We show that the class of finite time-indexed structures forms a Fraïssé class, which we shall see yields unfalsifiability of the generic theory relative to the class of time-indexed structures.

Proof. We need to show that the class of finite models of T_τ is a Fraïssé class.
First, since the class Mod(T_τ) is universally axiomatizable, its finite models satisfy (HP). By the axioms of T_τ, each model M ∈ Mod_fin(T_τ) can be expressed as a finite linear order of times t_1 < ⋯ < t_k together with an L-structure W_i attached to each time t_i (recall that L_τ was obtained from L). A necessary and sufficient condition for a map f : M → N, with M, N ∈ Mod(T_τ), to be an embedding is that f restricted to the Times is an order embedding and that, for each time t_i, f(W_i) ⊆ W_{f(t_i)} is an embedding of L-structures. From this decomposition of embeddings it is clear that the joint embedding property and the amalgamation property hold, as the class of finite L-structures and the class of finite linear orders are both Fraïssé classes: to jointly embed two finite models of T_τ, first jointly embed their temporal components and then jointly embed their L-structures at each time in the intersection of the embedding. Likewise, one may amalgamate by first amalgamating the temporal component and then amalgamating the L-structures over each time.
Thus the theory T_τ,lim of the Fraïssé limit exists and is model complete by [10, Theorem 7.4.2]. It remains to show that T_τ,lim is a model companion of T_τ.
Clearly every model of T_τ,lim is a model of T_τ, so it suffices to show that every model of T_τ embeds into a model of T_τ,lim. Suppose M ⊨ T_τ. Then since T_τ is ∀_1-axiomatizable, all finitely generated substructures of M are models of T_τ. Moreover, M embeds into an ultraproduct of its finite substructures since the language is relational, and in turn each finite substructure embeds into the Fraïssé limit of the class. Thus M embeds into an ultrapower of the Fraïssé limit of the class and hence, since T_τ,lim is elementary, into a model of T_τ,lim.
As a corollary, we have the following.
Corollary 3.1. The theory T_τ,lim is ∀_1-conservative over T_τ; thus, T_τ,lim is relatively unfalsifiable over T_τ. Therefore, for this sense of genericity, "the Universe is a generic time-indexed L-structure" is not falsifiable relative to the theory of time-indexed structures.
The Stochasticity of the Universe
We now turn to probabilistically generated models of the evolution of the universe.
We work with a distinguished class of L^d_τ-structures M, namely those such that: 1. the sort of Times is (ω, <) equipped with the successor function S and initial time 0, 2. at each time t ∈ ω the instantaneous world (W, t) is an L-structure on a fixed finite domain [n], 3. the world (W, 0) is drawn from a probability distribution μ on the state space Σ = Str_L([n]), and 4. the world (W, t+1) is obtained from (W, t) by way of a time-homogeneous memoryless Markov process, i.e., there exists a stochastic matrix ρ on the state space Σ such that P((W, t+1) = W′ | (W, t) = W) = ρ(W, W′).

As a simple illustration, consider a universe consisting of a single coin c. Let μ be any measure on {H, T}, i.e., an assignment of p_H, p_T ∈ [0, 1] such that p_H + p_T = 1, and let the stochastic transition matrix ρ be given by ρ(x, y) = p_y for x, y ∈ {H, T}, so that successive flips are independent. Such a stochastic process generates an L^d_τ-structure on domain ω ∪ {c}.

Now, let C be a set of pairs (μ, ρ), where μ is a probability distribution on Σ and ρ is a stochastic matrix on Σ. The choice of μ and ρ induces a unique probability measure P_{μ,ρ} on Σ^ω. The existential C-theory T_C in L^d_τ is given by the set of ∃_1 sentences that hold with P_{μ,ρ}-probability one for every (μ, ρ) ∈ C. Let C^+ be the class of pairs (μ, ρ) of initial distributions μ on Σ with μ(W) > 0 for each W ∈ Σ and stochastic matrices ρ, with rows and columns indexed by Σ, such that ρ(W, W′) > 0 for all W, W′ ∈ Σ.
Let ϕ_M(x, t) be the sentence saying that at time t the L-structure N(t) is isomorphic to M. To show that every L^d_τ-satisfiable ∃_1 formula in the language L^d_τ is a member of T_C, it suffices to show that every formula of the form ⋀_{i<m} ϕ_{M_i}(x̄, s_i), where 1. each M_i is an L-structure on [n], and 2. each s_i is a term in the language of the successor function {S(x)}, is realized with probability one for each P ∈ C^+. We demonstrate this by studying an auxiliary Markov process on Σ^m, where m is the number of terms s_i occurring in the formula ϕ.
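For a concrete feel for the probability-one claim, here is a simulation sketch of the single-coin universe (parameter names are mine; a simulation of course illustrates rather than proves the claim):

```python
# Sketch of the single-coin universe: states H/T evolve by a
# time-homogeneous Markov chain whose transition probabilities are all
# positive, so every finite pattern of observations eventually occurs
# with probability one. The simulation illustrates (not proves) this.
import random

def run_universe(p_stay=0.5, steps=100_000, seed=0):
    rng = random.Random(seed)
    state = "H" if rng.random() < 0.5 else "T"   # initial draw from mu
    history = [state]
    for _ in range(steps):
        # rho(W, W') > 0 for all W, W': stay with prob p_stay, else flip.
        if rng.random() >= p_stay:
            state = "T" if state == "H" else "H"
        history.append(state)
    return "".join(history)

history = run_universe()
pattern = "H" * 10   # a highly "ordered" finite observation
print(pattern in history)   # True, with overwhelming probability
```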
The theories T_C naturally occur as a formal model of the universe as a thermodynamic fluctuation. The idea that the universe is merely a fluctuation has been discarded by many prominent physicists such as Feynman and Carroll; it is worth investigating how their arguments dovetail with the present discussion of falsifiability.
Feynman argues that we can refute this hypothesis, writing: Thus one possible explanation of the high degree of order in the present-day world is that it is just a question of luck. Perhaps our universe happened to have had a fluctuation of some kind in the past, in which things got somewhat separated, and now they are running back together again...
[F]rom the hypothesis that the world is a fluctuation, all of the predictions are that if we look at a part of the world we have never seen before, we will find it mixed up, and not like the piece we just looked at. If our order were due to a fluctuation, we would not expect order anywhere but where we have just noticed it... Every day they turn their telescopes to other stars, and the new stars are doing the same thing as the other stars. We therefore conclude that the universe is not a fluctuation. [8]

On this account, from the fact that we observe order (the aggregate of all of our observations of the universe) we can conclude that the universe is not a fluctuation.
At first glance this argument appears to be an argument from falsification:
1. The Fluctuation Hypothesis entails that the universe is disordered.
2. We observe order in the universe.
3. The Fluctuation Hypothesis is false.
After all, it appears to be framed as a reductio ad absurdum, but the inference is more subtle than that. If by the fluctuation hypothesis we understand it to mean that the universe is generated probabilistically in the manner described above, then observing order of arbitrarily large complexity is in fact a deductive consequence of the theory T C .
The tension here comes from a quirk of the probabilistic framework and its relation to first-order logic: while the probability of a specific observer witnessing a given highly-ordered conjunction of atomic formulas and negations of atomic formulas will be quite low, the theory nevertheless predicts that all such observations will eventually be witnessed. In other words, two notions of prediction are at play: in one sense, the theory entails that with probability 1 every such state will be observed, all the while entailing that any given observer witnesses a sequence of low-probability states. Carroll [3] refers to this latter property of the fluctuation theory as rendering observers "cognitively unstable" in the sense that the theory in question actively thwarts inductive reasoning as understood by Bayesian confirmation theory.
What Feynman has in mind, most likely, is an anthropic principle of the kind that says we should only affirm or consider theories T which themselves make it highly probable that our own inductive reasoning is conducive to truth.
Much ink has been spilled over anthropic principles in connection with the hypothesis that the universe is in some manner random [2], but the results of this section indicate that such theories suffer the defect of unfalsifiability. While being unfalsifiable does not refute the truth of the hypothesis, it does show that the hypothesis is not amenable to being refuted by way of finitary modes of data acquisition.
How Much of a Theory is Falsifiable?
The static models of falsifiability typically concern themselves with questions of how close a theory is to being universally axiomatizable; after all, the more universal sentences a theory implies, the more falsifiable, in principle, the theory becomes.
Simon and Groen [16], in their work on Ramsification and the second-order definability of theories, isolate a notion on pseudoelementary classes K that they call FITness, which they claim isolates the ideal scientific theories: on their account, a pseudoelementary class K is FIT if and only if it is a scientific theory. They show that for pseudoelementary K, being FIT implies universal axiomatizability. Generalizing their definition to arbitrary classes of L-structures K closed under isomorphism, I show that for finite languages, K being FIT entails that K is elementary and, in fact, universally axiomatizable. This substantially generalizes their result in the setting of finite languages.
I then turn my attention to an argument of Chambers et al. [4] that being universally axiomatizable is not sufficient grounds to call a theory falsifiable. Instead, they identify the falsifiable sentences with a class of universal sentences they call UNCAF (universal negations of conjunctions of atomic formulas). In turn, I argue that their argument implicitly assumes that the underlying predicates P ∈ L exhibit merely Σ_1 behavior, and thus that it reaches too far in its conclusions.
As a final foray into the static case of falsification, I consider how falsification intersects with the dividing lines of classification theory. It is not too difficult to show that under very mild restrictions on the language L, NIP theories entail a great many nontrivial ∀_1 sentences and are highly falsifiable. Of note, in NIP theories each formula ϕ is equipped with a notion of dimension known as the VC-dimension, which in a sense measures the effective falsifiability of membership in the class of hypotheses it defines.
On the other hand, recent work of Kruckman and Ramsey [12] and, independently, Jeřábek [11] yields examples of NSOP_1 and simple theories which are unfalsifiable. While there are many NSOP_1 theories which are falsifiable, in a sense NIP is individuated among the dividing lines in model theory as a class of highly falsifiable theories.
FITness: The Finite Signature Case
We begin our investigation of the static case of falsification by exploring the notion of FITness: the finite and irrevocable testability of a theory. Simon and Groen [16] argue that, at least when K is a pseudoelementary class, K being FIT is necessary and sufficient for K to be a scientific theory. They purport to show that if K is FIT and pseudoelementary, then K is ∀_1-axiomatizable. In this section I show that so long as the signature L is finite, the requirement that K be pseudoelementary is unnecessary; all that is needed is that K is closed under L-isomorphism.
Definition 4.1. Let L be a language and K a class of L-structures. K is said to be FIT provided that
i. K is finitely testable, i.e., K is nontrivial (∅ ≠ K ≠ Str(L)) and, for every M ∈ Str(L), if every finite substructure of M is a member of K then M ∈ K; and
ii. K is irrevocably testable, i.e., for every M ∈ Str(L), if M ∈ K then every finite substructure of M is a member of K.

Theorem 4.1. In the case of a finite relational language L, any FIT class K closed under isomorphism is universally axiomatizable.

This substantially weakens the assumption on K given in the original paper of Simon and Groen, at the cost of working within a more limited class of languages.

Proof. We begin by giving a first-order axiomatization of K. For each finite N ∈ K of size ≤ n and each enumeration of N by an n-tuple (a_1, …, a_n), possibly with repetitions, let ϕ_N(x_1, …, x_n) be the conjunction of all atomic and negated atomic formulas satisfied by (a_1, …, a_n) in N. This formula expresses the isomorphism type of N relative to the fixed enumeration. Let K[n] denote the set of isomorphism classes of members of K of size ≤ n. Since the language L is finite and relational, for each n ∈ ω there are only finitely many isomorphism classes in K of size ≤ n, so K[n] is finite. Let ψ_n be the sentence

ψ_n := (∀x_1, …, x_n) ⋁_{[N]∈K[n]} ϕ_N(x_1, …, x_n),

where the disjunction ranges over representatives of K[n] together with their possible enumerations. By construction, each ψ_n is a universal sentence, as the disjunction ⋁_{[N]∈K[n]} ϕ_N is a disjunction of finitely many Boolean combinations of atomic formulas.
Let T_K = {ψ_n}_{n∈ω}. I claim that K = Mod(T_K). To see this, suppose that M ⊨ T_K. Because K is finitely testable, it suffices to show that every finite substructure of M is a member of K. Let N be a substructure of M of size n. Since ψ_n is universal, N ⊨ ψ_n, and so N ⊨ ϕ_{N′} for some N′ isomorphic to a member of K. Since K is closed under isomorphism, N ∈ K. Thus M ∈ K.
Conversely, suppose that M ∈ K. To show that M ⊨ T_K, it suffices to show that M ⊨ ψ_n for each n. Let (m_1, …, m_n) ∈ M^n be a variable assignment. The set N = {m_1, …, m_n} has size ≤ n and is a substructure of M. By the irrevocable testability of K, N ∈ K. Thus N ⊨ ϕ_{[N]}, and so M ⊨ ψ_n.
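The finiteness of K[n] admits a direct computational illustration. The following brute-force sketch (function names are mine, not the paper's) enumerates the isomorphism classes of finite structures in the language of a single binary relation, confirming, for instance, that the 16 binary relations on a two-element domain collapse to 10 isomorphism classes.

```python
# Sketch: for the finite relational language L = {E} with E binary,
# K[n] is finite. We enumerate isomorphism classes of all binary-relation
# structures on {0, ..., n-1} by brute force. Names are illustrative.
from itertools import permutations, product

def iso_classes(n):
    """One canonical representative per isomorphism class of
    binary-relation structures on an n-element domain."""
    pairs = [(i, j) for i in range(n) for j in range(n)]
    reps = set()
    for bits in product([0, 1], repeat=len(pairs)):
        rel = [p for p, b in zip(pairs, bits) if b]
        # Canonical form: lexicographically least relabeling under S_n.
        canon = min(
            tuple(sorted((pi[i], pi[j]) for (i, j) in rel))
            for pi in permutations(range(n))
        )
        reps.add(canon)
    return reps

# 2^(n^2) structures collapse to far fewer isomorphism types:
print(len(iso_classes(1)), len(iso_classes(2)))  # 2 10
```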
Moreover, a similar argument establishes the following.

Theorem 4.2. A FIT class closed under isomorphism over an arbitrary finite language is universally axiomatizable.

Proof. The proof is the same as the above, but with the axiom scheme ψ_n defined as follows. For a function symbol f, we denote the arity of f by ar(f).
Let χ_n(x_1, …, x_n) be the formula

χ_n := ⋀_{f∈L} ⋀_{i_1,…,i_{ar(f)}≤n} ⋁_{j≤n} f(x_{i_1}, …, x_{i_{ar(f)}}) = x_j ∧ ⋀_{c∈L} ⋁_{j≤n} c = x_j,

where the conjunctions over f ∈ L and c ∈ L are understood to be ⊤ in case L contains no function or constant symbols, respectively. This formula expresses that the set {x_1, …, x_n} is (the domain of) an L-structure of size ≤ n, as it expresses that {x_1, …, x_n} is closed under all function symbols f ∈ L and contains all constants c ∈ L. Since L is finite, this is a quantifier-free first-order formula.
As above, let T_K = {ψ_n}_{n∈ω}, where now ψ_n is the sentence (∀x_1, …, x_n)(χ_n(x_1, …, x_n) → ⋁_{[N]∈K[n]} ϕ_N(x_1, …, x_n)). A nearly identical argument as before shows that K = Mod(T_K). Suppose that M ⊨ T_K. Because K is finitely testable, it suffices to show that every finite substructure of M is a member of K. Let N be a substructure of M of size n. Since ψ_n is universal, N ⊨ ψ_n. Since N is an L-structure of size at most n, N ⊨ χ_n, so N ⊨ ϕ_{N′} for some N′ isomorphic to a member of K. Since K is closed under isomorphism, N ∈ K. Thus M ∈ K.
Conversely, suppose that M ∈ K. To show that M ⊨ T_K, it suffices to show that M ⊨ ψ_n for each n. Let (m_1, …, m_n) ∈ M^n be a variable assignment, and let N = {m_1, …, m_n}. If N is not closed under the functions and constants of L, then N ⊭ χ_n and ψ_n is vacuously satisfied at this assignment. If N is an L-structure, then N is a substructure of M ∈ K, so by the irrevocable testability of K, N ∈ K and N ⊨ ϕ_{[N]}. In either case M ⊨ ψ_n.
FITness: The Arbitrary Language Case
The setting in which Simon and Groen work involves a distinction between observational and theoretical scientific terms. Let L = L_o ∪ L_t be a language partitioned into the observational language L_o and theoretical language L_t, and let Σ be an L-theory. There is, of course, the class of models of the theory, Mod(Σ) = {M ∈ Str(L) : M ⊨ Σ}. By definition, this class is elementary, meaning that it is first-order axiomatizable. However, the class Mod(Σ) is not the appropriate class of structures to look at, for if there is a true o/t distinction then the scientist only has epistemic access to the observable structure. Instead, Sneed [18] isolates the fundamental relation between a scientific L-theory Σ and an L_o-structure N of observations, that of application: say that Σ applies to N just in case the L_o-structure N can be expanded to a full L-structure N* such that N* ⊨ Σ. The pseudoelementary class of such structures is given by Mod*(Σ) = {M ↾ L_o : M ∈ Mod(Σ)}.

In the case of pseudoelementary classes K = Mod*(Σ), we are able to drop the hypothesis that L is a finite language and conclude that an L_o-FIT K is ∀_1-axiomatizable. This is the original result of Simon and Groen [16].

Proof. We recall a theorem of model theory [10, Theorem 6.6.7]: let L be a first-order language and K a pseudoelementary class of L-structures. If K is closed under taking substructures, then K is axiomatized by a set of ∀_1 L-sentences.
Since Mod*(Σ) is pseudoelementary, it suffices to show that Mod*(Σ) is closed under substructures. Let M ∈ Mod*(Σ) and let N ⊆ M be an L_o-substructure. To show that N ∈ Mod*(Σ), the finite testability implied by L_o-FITness tells us that we need only check that every finite substructure N_k ⊆ N satisfies N_k ∈ Mod*(Σ). Since every such N_k is an L_o-substructure of M, the irrevocability of L_o-FITness ensures that N_k ∈ Mod*(Σ).
That is, FITness implies that the pseudoelementary class of L_o-structures expandable to models of Σ is not only elementary, but in fact axiomatizable by universal axioms.
A partial converse can be given for the case of relational observational languages L_o: if L_o is relational and Mod*(Σ) is nontrivial and universally axiomatizable, then Σ is L_o-FIT.

Proof. To show that Σ is L_o-FIT we must show both finite testability and irrevocable testability; nontriviality holds by assumption. Say Mod*(Σ) = Mod(T) for a set T of ∀_1 L_o-sentences. For irrevocable testability, universal sentences are preserved under substructures, so every finite substructure of a model of T is a model of T. For finite testability, suppose every finite substructure of M is a model of T and, toward a contradiction, that M ⊭ ϕ for some ϕ ∈ T. Then some finite tuple of M witnesses ¬ϕ; since L_o is relational, that tuple is the domain of a finite substructure of M which also falsifies ϕ, a contradiction. Hence Σ is L_o-FIT.
The two properties defining FITness warrant scrutiny in virtue of their strong implications. We may view the finite testability hypothesis as a local compactness principle: in the stated form it says that if every finite M_k ⊆ M is consistently expandable to a model of Σ, so too is M. The irrevocability hypothesis expresses the closure of the class Mod*(Σ) under finite substructures, which together with finite testability implies closure under substructures.
Moreover, when working with a relational language, the semantic criterion of FITness is equivalent to the universal axiomatizability of the observable consequences of the theory. Thus, on the Simon-Groen view, given a universally axiomatizable L_o-theory T, any L_o-conservative extension of T to an L-theory T′ is scientific. For instance, consider adding unary theoretical predicate symbols P_1, …, P_n and extending T to a theory T_m constraining only the new predicates: the L_o-consequences of T_m are exactly the L_o-consequences of T, and so T_m is FIT and therefore scientific. By construction, however, the truth of the new axioms of T_m is independent of any collection of observational data.
FITness and Finite Generation
The definition of FITness required that membership in a class K be witnessed by all finite substructures themselves being members of K. However, outside the relational case, a finitely generated substructure need not be finite. In this section we consider the analogous notion obtained by replacing "finite" with "finitely generated" everywhere in the definition of FITness.
Definition 4.2. Let L be a language and K a class of L-structures. K is said to be fg-FIT provided that
i. K is fg-testable, i.e., K is nontrivial and, for every M ∈ Str(L), if every finitely generated substructure of M is a member of K then M ∈ K; and
ii. K is fg-irrevocably testable, i.e., for every M ∈ Str(L), if M ∈ K then every finitely generated substructure of M is a member of K.

The FITness and fg-FITness of a class are generally inequivalent. For example, let K be the class of rings of positive characteristic. Then K is fg-FIT but not first-order axiomatizable. In particular, K is not FIT.
Proof. To show that K is fg-FIT, it suffices to show that for a ring R, R ∈ K just in case every finitely generated subring of R is in K. Suppose that R ∈ K, say of characteristic n > 0. This is witnessed by the quantifier-free sentence 1 + ⋯ + 1 (n times) = 0, which passes to any subring R′ ⊆ R, so R′ ∈ K as well. Conversely, if R ∉ K then R has characteristic zero, so the subring generated by 1 is isomorphic to ℤ; this finitely generated subring is not a member of K. To show that K is not first-order axiomatizable, it suffices to show that K is not closed under ultraproducts by [5, Theorem 4.1.12]. Note that each finite field F_p is a member of K. Let U be a nonprincipal ultrafilter on the set of primes. Then F = ∏_U F_p is a field of characteristic zero, thus F ∉ K. Hence K is not first-order axiomatizable. Therefore, by Theorem 4.2, K is not FIT.
Moreover, unlike the FIT case, every universally axiomatizable theory is fg-FIT.
Remarks on Signatures in FITness
In the above discussions regarding FITness, fg-FITness, and universal axiomatizability, it was shown that in the case of a finite relational language these notions are equivalent without any background assumption on the class K beyond closure under L-isomorphism. However, these notions begin to decouple in the case of languages with constant and function symbols. This behavior is not so surprising: when converting a function symbol f to a relation symbol R_f by the identification ∀x∀y(R_f(x, y) ↔ f(x) = y), eliminating the function symbol f from the language completely requires one to include an ∀_2 totality axiom of the form (∀x)(∃y)R_f(x, y), which in general will not be equivalent to a ∀_1 sentence. Thus, implicit in the inclusion of function symbols in the language is an ∀_2 axiom in a purely relational language.
UNCAF Theories
Motivated by theories of revealed preference in economics, Chambers et al. [4] argue that the empirical content of a theory is captured not by general universal sentences but instead by a special kind of universal sentence they term UNCAF. Perhaps surprisingly, on their account a sentence of the form (∀x)P(x) is not falsifiable, by virtue of not being UNCAF, while (∀x)¬P(x) is.
To argue this point, they write that substructures are unsatisfactory as mathematical models for observed data since they correspond to a situation in which the scientist observes the presence or absence of every possible relation among the elements in his data and, therefore, cannot accommodate partial observability.
While I agree with this general point, the conclusion that only UNCAF sentences have empirical content is too strong. For example, let S(x) be the predicate "x is a swan" and W(x) the predicate "x is white." The sentence "all swans are white," when formalized, is equivalent to (∀x)¬(S(x) ∧ ¬W(x)), which is not UNCAF owing to the presence of the negated atomic ¬W(x) as a nested subformula. To conclude that this sentence has no falsificational content seems to run counter to the usual conception of falsification: after all, if I were able to produce an example c such that S(c) ∧ ¬W(c), I would immediately be able to infer that W ∉ K. However, on their model it is as if, when I go to the local bird sanctuary, I am told that I may only record instances of white swans. One should not expect to be able to produce a counterexample to "all swans are white" under such constraints! The way that Chambers et al. circumvent this worry is to note that for each predicate P one may add a new relation symbol P_¬ together with the axiom ∀x(¬P(x) ↔ P_¬(x)).
While this approach does formally work, it is somewhat awkward that this axiom itself is not UNCAF, as we see by rewriting it as (∀x)¬(P(x) ∧ P_¬(x)) ∧ (∀x)(P(x) ∨ P_¬(x)), whose second conjunct is not UNCAF.

Their understanding of falsification qua UNCAF-expressibility entangles two separate considerations: first, whether there is in principle any falsificational strategy on the basis of some configuration being witnessed by a finite set of data, and second, whether the model of knowledge acquisition allows one to actually carry out the falsificational strategy. Their account corresponds to a model of knowledge acquisition in which at each stage one gains (at most) one positive (relative to L) observation at a time, in a semidecidable fashion.
As an example, suppose that a researcher is observing an agent Ashley and wishes to falsify whether or not her preference relation is complete: ∀x, y ((x ≤ y) ∨ (y ≤ x)).
To do so, the observer waits each day d to see whether the agent exhibits some preference relation between a can of Guayakí Enlighten Mint ready-to-drink Yerba Mate and a can of Guayakí Revel Berry that are sitting side-by-side in the office fridge, with no other items in potential consideration.
This experiment, as construed, is doomed never to falsify the hypothesis. After all, if there is some day d on which Ashley surveys the fridge and takes a can of Enlighten Mint but not Revel Berry (resp. Revel Berry but not Enlighten Mint), then EM ≥ RB (resp. RB ≥ EM), and therefore no refutation of the completeness axiom is possible in this context. Likewise, if the day that Ashley takes a can out of the fridge never comes, that also does nothing to falsify the completeness axiom.
So, what went wrong? Implicit in their semantics for the experiment is a suppressed existential quantifier. Let R_A(x, y, d) be the relation that says "on day d, agent A expressed a weak preference for x over y." Then the formula x ≥ y in Chambers' terminology would not be ∀_1 but instead properly ∀_2: ∀x, y ((∃t)R_A(x, y, t) ∨ (∃t)R_A(y, x, t)).
Therefore, the purported example of an unfalsifiable ∀_1 sentence is better and more directly modeled as an unfalsifiable ∀_2 sentence, fully compatible with the standard account of falsification as a universal over an in-principle decidable primitive. What their point indicates is that the standard revealed preference relations in economics are not in-principle decidable, but instead are ∃_1-definable relative to the empirical relation R_A(x, y, t) via x ≥ y := (∃t)R_A(x, y, t).
If we take as epistemically primitive an ∃_1-definable relation R(x, y) defined by an L-formula (∃c)ϕ(x, y, c) with ϕ quantifier-free, then their result is clear. A sentence of the form (∀x, y)R(x, y) is a ∀_2 sentence, while an UNCAF sentence in the language L_R = {R(x, y)}, say (∀x_1, …, x_n)¬⋀_{(i,j)∈I} R(x_i, x_j) with I ⊆ [n]^2 finite, is equivalent to a ∀_1 sentence in L, as the following computation shows.
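The quantifier bookkeeping behind this last claim is routine but worth displaying:

```latex
% An UNCAF sentence over the existentially defined primitive
% R(x,y) := (\exists c)\,\varphi(x,y,c) unwinds to a universal L-sentence:
(\forall \bar{x})\,\neg\bigwedge_{(i,j)\in I} R(x_i, x_j)
  \;\equiv\; (\forall \bar{x})\bigvee_{(i,j)\in I} (\forall c)\,\neg\varphi(x_i, x_j, c)
  \;\equiv\; (\forall \bar{x})(\forall \bar{c})\bigvee_{(i,j)\in I} \neg\varphi(x_i, x_j, c_{ij}).
```

The final step uses a fresh tuple of variables c_{ij} for each disjunct: a disjunction of universally quantified formulas with disjoint bound variables is equivalent to the universal quantification of the disjunction.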
Strength of Theories and their Falsifiability
Contrary to mere falsifiability, FITness, fg-FITness, and UNCAF-axiomatizability are typically not preserved when a theory is strengthened.
Proposition 4.6. Let K be a universally axiomatizable FIT or UNCAF class such that K has both finite and infinite models. Then the class K ′ ⊂ K of infinite members of K is not FIT, fg-FIT, or UNCAF.
Proof. Let T be a universal axiomatization of K. Then K′ is axiomatized by T ∪ {ψ_n}_{n∈ω}, where ψ_n is the sentence (∃x_1, …, x_n) ⋀_{1≤i<j≤n} x_i ≠ x_j. Since K is closed under substructures and admits finite models, K′ necessarily fails to be closed under substructures. Thus K′ is not universally axiomatizable, and in particular is neither FIT nor UNCAF.
Falsification and NIP
In the preceding sections, we have considered a sequence of refinements to the basic notion of falsifiability, and we have seen, under mild conditions on the signature L and the class K, a web of implications running from FITness through universal axiomatizability to the possession of a nontrivial UNCAF theory. However, there are a great many hypotheses which do not readily fall into this framework at first glance.
For example, we are often interested in testing whether or not a (basic) relation R(x_1, …, x_n) ∈ L is equivalent to some other (basic) relation S(x_1, …, x_n) ∈ L. This is easy to handle directly in our account of falsification; after all, (∀x_1, …, x_n)(R(x_1, …, x_n) ↔ S(x_1, …, x_n)) is a ∀_1 sentence in L by assumption.
What if, instead, we are probing a more complicated question, such as whether or not R(x_1, x_2) is a line in ℝ^2? In the language of ordered rings augmented by an additional relation symbol, L = {+, ×, 0, 1, <, R(x, y)}, this is most easily expressed by the ∃_2 sentence (∃a, b, c)(∀x, y)(R(x, y) ↔ L(x, y; a, b, c)), where L(x, y; a, b, c) is the formula ay + bx + c = 0. Despite being ∃_2, this sentence has a great deal of falsificational content owing to the structure of the parametric family L(x, y; a, b, c). From Euclidean geometry we know that between any two distinct points there is a unique line, so the hypothesis entails that any three points of R are collinear:

(∀x_1, y_1, x_2, y_2, x_3, y_3)(R(x_1, y_1) ∧ R(x_2, y_2) ∧ R(x_3, y_3) → (x_2 − x_1)(y_3 − y_1) = (x_3 − x_1)(y_2 − y_1)),
which is a nontrivial ∀_1 sentence. Thus, while T_L is ∃_2, it has nontrivial ∀_1 consequences. These properties of lines are an example of the VC finiteness of the class. For the remainder of the section, we assume that K is an elementary class, axiomatized by some first-order set of sentences T.

Definition 4.4. [17, pp. 7-8] Let ϕ(x; y) be a first-order formula in disjoint sets of free variables x, y. With respect to this partition, we say that ϕ is a partitioned formula.
Let M ∈ K. We say that ϕ(x; y) M-shatters a set X ⊆ M^{|x|} just in case there is a set Y ⊆ M^{|y|} such that for every subset X′ ⊆ X there exists y′ ∈ Y such that, for all x ∈ X, M ⊨ ϕ(x; y′) if and only if x ∈ X′. A partitioned formula is NIP provided that for every M ∈ K, no infinite set is M-shattered by ϕ.
The formula ϕ has Vapnik-Chervonenkis (VC) dimension VC(ϕ) ≤ n just in case for all M ∈ K, no set of size n is M-shattered. If ϕ has finite VC dimension, then ϕ is said to be VC finite.
A theory T is NIP just in case every formula ϕ is NIP in the class K = Mod(T ).
For elementary classes K, a formula being NIP is related to its VC finiteness: Proposition 4.7. Let K be an elementary class and ϕ(x; y) a partitioned first-order formula. If ϕ is NIP, then ϕ has finite VC dimension.
Proof. This is an elementary consequence of the compactness theorem of first-order logic [17, Remark 2.3].
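Because Definition 4.4 is purely combinatorial over finite set systems, the VC dimension of a finite family can be computed by exhaustive search. The sketch below (illustrative names; exponential time, so only for toy instances; it returns the largest cardinality of a shattered set, the usual convention) checks the textbook example of threshold sets on a line.

```python
# Sketch: brute-force VC dimension of a finite set system, the
# combinatorial core of Definition 4.4. Here `family` is a list of the
# sets {a : phi(a; b) holds} as b ranges over parameters; we return the
# largest cardinality of a shattered subset. Toy instances only.
from itertools import combinations

def is_shattered(points, family):
    """True iff every subset of `points` arises as S ∩ points, S ∈ family."""
    traces = {frozenset(S & set(points)) for S in family}
    return all(
        frozenset(sub) in traces
        for r in range(len(points) + 1)
        for sub in combinations(points, r)
    )

def vc_dimension(X, family):
    dim = 0
    for r in range(1, len(X) + 1):
        if any(is_shattered(pts, family) for pts in combinations(X, r)):
            dim = r
    return dim

# Threshold sets {x : x <= b} on a 5-point line (plus the empty set):
# a single point is shattered, but no pair {a < a'} is, since no
# threshold contains a' without a. Hence VC dimension 1.
X = range(5)
family = [set(x for x in X if x <= b) for b in X] + [set()]
print(vc_dimension(X, family))  # 1
```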
A first-order formula ϕ(x; y) having VC dimension ≤ n is expressible by a first-order sentence VC_n(ϕ); moreover, if ϕ is quantifier-free then VC_n(ϕ) is a ∀_1 sentence.
Proposition 4.8. A formula ϕ(x; y) having VC dimension ≤ n is first-order expressible, in any language containing that of ϕ, by a sentence VC_n(ϕ). Moreover, if ϕ is ∃_m (resp. ∀_m), then VC_n(ϕ) is at most ∀_{m+1}.
In particular, if ϕ is quantifier-free then VC_n(ϕ) is a ∀_1 sentence.
Proof. First, the proposition "x_1, …, x_n is shattered by {y_J}_{J⊆[n]} in ϕ" is a Boolean combination of instances of ϕ:

Shatter_n(x̄; ȳ) := ⋀_{J⊆[n]} (⋀_{i∈J} ϕ(x_i; y_J) ∧ ⋀_{i∉J} ¬ϕ(x_i; y_J)).

The proposition VC_n(ϕ) is then expressed by the first-order sentence

VC_n(ϕ) := (∀x_1, …, x_n)(∀(y_J)_{J⊆[n]}) ¬Shatter_n(x̄; ȳ).

Proposition 4.9. Let T be a complete NIP theory in a language L containing an m-ary relation symbol R for some m > 1. Then T implies a nontrivial universal sentence.
Proof. Since T is complete and NIP, T entails the ∀_1 sentence VC_n(R(x; y)) for some n ∈ ω. Since R is non-unary, VC_n(R(x; y)) is not a first-order validity: the bipartite graph G_n on the disjoint vertex set [n] ∪ 2^[n], with R(i, X) ↔ i ∈ X, satisfies G_n ⊨ ¬VC_n(R(x; y)).
In fact, since VC_n(R(x; y)) → VC_m(R(x; y)) for all m > n, for a VC finite relation we get a nested chain of ∀_1 sentences. As we will see in our account of the dynamic case of falsification, this simple observation has very strong consequences for understanding small-sample falsificational problems.
To explain the restriction on the language, we note that there are NIP unary theories entailing no nontrivial ∀_1 sentence. Recall [13, Definition 4.2.17] that a theory T is κ-stable for a cardinal κ if for every model M ⊨ T, n ∈ ω, and A ⊆ M of size κ, the space of n-types with parameters in A has size κ: |S_n(A)| = κ.
A theory is stable provided it is κ-stable for some infinite κ. It is well known that stability implies NIP [15,Theorem 4.7].
Proposition 4.10. There exists a stable theory T in a unary language which entails no nontrivial universal sentence.
Proof. Let T be the theory in the language L = {P(x)} axiomatized by the sentences asserting that both P and its complement are infinite, {(∃^{≥n}x)P(x) ∧ (∃^{≥n}x)¬P(x)}_{n∈ω}. This theory is clearly ℵ_0-categorical: any countable model M is partitioned by P into two countably infinite pieces, and any bijection between models respecting this partition is an isomorphism. Moreover, T has no finite models, so by Vaught's test [13, Theorem 2.2.6] T is complete. Clearly, ∀_1(T) contains only first-order validities. This theory is ω-stable: let A be a set of size ≤ ℵ_0. The types over A are determined by specifying which coordinates x_i are equal to an element of A and, for those x_i ∉ A, whether P(x_i) or ¬P(x_i). Thus, there are at most (|A| + 2)^n ≤ ℵ_0 n-types over A.
Therefore, again under mild conditions on the language, NIP yields nontrivial ∀_1 consequences. We observe, however, that VC finiteness is not equivalent to universal axiomatizability.

Proposition 4.11. There exists an NIP theory T which is not universally axiomatizable, and there exists a universally axiomatizable T which is not NIP.
Proof. Let DLO be the theory of dense linear orders in the language L = {<}. Then T = DLO is NIP but not universally axiomatizable, as all of its models are infinite and the language is relational. Concretely, we know ℚ ⊨ DLO, but no finite subset X ⊆ ℚ is a model of DLO, and since L is relational, X is a substructure.
On the other hand, let T be the (incomplete) theory of acyclic directed graphs in the language {R(x, y)}. T is universally axiomatizable by the collection of sentences ϕ_n := (∀x_1, …, x_n)¬(R(x_1, x_2) ∧ ⋯ ∧ R(x_{n−1}, x_n) ∧ R(x_n, x_1)). This class is not NIP, since for each n the bipartite digraph G_n on the disjoint vertex set [n] ∪ 2^[n], with R(i, X) ↔ i ∈ X, satisfies G_n ⊨ ¬VC_n(R(x; y)) and is a model of T.
Useful Examples of NIP Theories and VC finite Classes
The preceding section describes the relationship between NIP theories, VC finite classes, and falsification, but further argument is required to demonstrate that these phenomena actually appear in the kinds of hypotheses we seek to falsify. To this end, the following powerful theorem proves the VC finiteness of a very wide class of geometrically-definable hypotheses.
We assume that the reader is familiar with the notion of an analytic function. The restricted analytic exponential real field is the real ordered field expanded by the exponential function and all restricted analytic functions, i.e., the structure (ℝ, +, ×, <, 0, 1, exp, (f̃)_f), where for each function f analytic on a neighborhood of [0, 1]^m, f̃ denotes the function agreeing with f on [0, 1]^m and set to 0 outside it. The theory R_an,exp is the theory of this structure.
In this structure, parametric families of equalities and inequalities between analytic functions on compact rectangular domains are definable. The following result shows that such families have finite VC dimension.

Theorem 4.3. R_an,exp is NIP. Consequently, every first-order definable set in the theory of R_an,exp has finite VC dimension.
Proof. That R_an,exp is o-minimal is a classical theorem of van den Dries and Miller [6]; the result then follows from the fact that every o-minimal theory is NIP, which can be found in Simon's book [17, Theorem A.6].

This theorem is extremely useful because it implies that not only is any parametric family of algebraic equations over a real field VC finite, but in fact any parametric family of semianalytic inequalities is VC finite. This is of the utmost importance for examples stemming from physics, as it implies that even definable classes where one includes a model of measurement error can be VC finite. As an illustration, we consider the example of the family of fat lines. By a fat line I mean a set L̃ ⊆ ℝ^2 such that (x, y) ∈ L̃(a, b, c, r) just in case (x, y) is within distance r of the line ax + by + c = 0. The family of fat lines is VC finite.

Proof. By Theorem 4.3, any set definable in the theory of R_an,exp is VC finite. Thus it suffices to give an explicit definition of this family. This is easily done: the formula ϕ(x, y; a, b, c, r) given by |ax + by + c|^2 < r^2(a^2 + b^2) is a formula in the language of R_an,exp and defines the family of fat lines.
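A minimal executable rendering of the fat-line predicate, under the squared-distance normalization used above (function and parameter names are mine):

```python
# Sketch: the fat-line predicate. The squared distance from (x, y) to the
# line ax + by + c = 0 is (ax + by + c)^2 / (a^2 + b^2), so membership in
# the fat line of radius r is the polynomial condition below, definable in
# R_an,exp (indeed already in the ordered real field).
def in_fat_line(x, y, a, b, c, r):
    assert (a, b) != (0, 0), "degenerate line parameters"
    return (a * x + b * y + c) ** 2 < r ** 2 * (a ** 2 + b ** 2)

# Strip of radius 0.1 around the diagonal x - y = 0:
print(in_fat_line(1.0, 1.05, a=1, b=-1, c=0, r=0.1))  # True
print(in_fat_line(1.0, 1.50, a=1, b=-1, c=0, r=0.1))  # False
```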
This suggests an explanation for why many physical theories are so readily falsifiable: many of the predictions of physical theories can be cast in terms of determining membership in a real semianalytic set which expresses being some bounded error away from an analytic or algebraic set.
A theory T is stable (resp. NSOP 1 ) provided every formula in T is stable (resp. NSOP 1 ).
Jeřábek [11] and, independently, Kruckman and Ramsey [12] showed that the empty theory in a language L has a model companion T_∅^L, the theory of existentially closed L-structures, and located such theories within the NSOP_1 and simple regions of the classification hierarchy. We observe that T_∅^L entails no nontrivial ∀_1 sentence.

Proof. Suppose that ϕ is a ∀_1 sentence in L that is not a validity. We wish to show that T_∅^L ⊭ ϕ. Without loss of generality we assume that ϕ = (∀x_1, …, x_n)ψ(x_1, …, x_n) with ψ(x_1, …, x_n) quantifier-free. Since ϕ is not a first-order validity, ¬ϕ is satisfiable and is an ∃_1 sentence. Let M be an L-structure such that M ⊨ ¬ϕ. Since T_∅^L is the theory of existentially closed L-structures, M embeds into a model M′ ⊨ T_∅^L. Since ¬ϕ is existential and preserved under embeddings, M′ ⊨ ¬ϕ, so ϕ ∉ T_∅^L.
To be sure, there exist falsifiable NSOP_1 classes. The theory of the random graph, for instance, contains the universal theory of graphs, which in particular includes the universal sentence ∀x∀y(R(x, y) ↔ R(y, x)), which is not a logical validity.
In short, NIP classes of structures yield examples of falsifiable classes incomparable to the notions we have thus far discussed. Thus, under mild assumptions on K and L, our picture of the relationships among the notions of falsifiability is enriched: alongside the chain running from FITness through universal axiomatizability to having a nontrivial UNCAF consequence, NIP now sits as an independent source of nontrivial ∀_1 consequences.
Conclusion
Over the course of this paper we investigated various strengthenings of the basic notion of falsifiability that have been defined and studied. The static models of falsifiability typically concern themselves with questions of how close a theory is to being universally axiomatizable; after all, the more universal sentences a theory implies, the more falsifiable, in principle, the theory becomes. We first studied Simon and Groen's [16] notion of FITness, which they claim isolates the ideal scientific theories. They show that for pseudoelementary K, FITness implies universal axiomatizability. Generalizing their definition to arbitrary classes of L-structures K closed under isomorphism, I showed that for finite languages, K being FIT entails that K is elementary and, in fact, universally axiomatizable. This substantially generalizes their result in the setting of finite languages.
I then turned my attention to an argument of Chambers et al. [4] that being universally axiomatizable is not sufficient grounds to call a theory falsifiable. Instead, they identify the falsifiable sentences with a class of universal sentences they call UNCAF (universal negations of conjunctions of atomic formulas). In turn, I argued that their argument implicitly assumes that the underlying predicates P ∈ L exhibit Σ_1 behavior, and thus that it reaches too far in its conclusions.
As a final foray into the static case of falsification, I considered how falsification intersects with the dividing lines of classification theory. Under very mild restrictions on the language L, NIP theories entail a number of nontrivial ∀_1 sentences, yielding the falsifiability of NIP theories. Of note, in NIP theories each formula ϕ is equipped with a notion of dimension known as the VC-dimension, which in a sense measures the effective falsifiability of membership in the class of hypotheses it defines.
Finally, we considered recent work of Kruckman and Ramsey [12] and, independently, Jeřábek [11], which yields examples of NSOP_1 and simple theories which are unfalsifiable. While there are many NSOP_1 theories which are falsifiable, in a sense NIP is individuated among the dividing lines in model theory as a class of highly falsifiable theories.
Commentary on: Cross-linked Hyaluronic Acid for Cleft Lip and Palate Aesthetic Correction: A Preliminary Report
This paper presents a case series of 15 patients treated with hyaluronic acid (HA) fillers of varying degrees of cross-linkage for aesthetic concerns of the lip and nose due to cleft lip and palate.1 The average volume of the filler was 4.2 mL per patient (range, 2-10 mL). The majority of patients were treated in 1 or 2 sessions, but 1 patient had 5 sessions separated by 4-6 weeks (Video).
Persistent asymmetries of the lip and nose due to cleft lip and palate present significant reconstructive challenges. Insufficient volume of the upper lip is commonly observed in the presence of a satisfactory scar and adequate vermilion height at the site of the cleft lip repair. The lip may appear deflated and/or flat rather than convex, particularly in patients with bilateral cleft palate. Dermal fat grafting and/or fat grafting have been the predominant approach to correct the observed volume deficiency of the upper lip. There are a number of common features of the cleft lip nasal deformity but nasal asymmetry is the dominating feature that stigmatizes patients. The bony pyramid and nasal dorsum are typically deviated away from the side of the cleft in the unilateral cleft lip. In addition, there is asymmetry of the nasal tip and alae. In the bilateral cleft lip, the nose is typically wide and the nasal tip is bulbous, over-rotated, and under-projected.
As pointed out by the author, these patients frequently express fatigue at the number of surgeries that they have had over their lifetime, and this may lead them to consider nonsurgical alternatives to improve their appearance. Fillers have the advantage of minimal downtime and recovery. The obvious downside of fillers is the durability of volume augmentation and the long-term cost associated with the expected need for retreatment. The author was also careful to highlight the potential complications associated with filler injections, which can include skin necrosis, particularly when injecting the nose in the presence of excessive scarring. In this study, a blunt cannula was always used for nasal injections.
The author shows a number of cases that demonstrate a remarkable improvement in the appearance of the lip and nose with treatment. The author also reported that patients had a high degree of satisfaction at the end of treatment, immediately after completion of the HA injections.
This study suffers from a small sample size and heterogeneity of the patient population with regard to age, gender, and type of cleft lip. However, there is convincing qualitative evidence that there may be a role for HA filler in improving the appearance of both the lip and nose for patients with residual deformities of the lip and nose due to cleft lip. How expansive that role should be is the product of a detailed and transparent discussion of the patient's goals and the setting of patient expectations for what can be achieved with fillers, including both durability and long-term cost.
Although this was not assessed in the study, the case photos demonstrate that the lip is more amenable to change with HA fillers than the nose. The initial improvements in the appearance of the upper lip were on par with those achieved by fat grafting and/or dermal fat grafting. The current paper suggests that patients with residual stigmata of cleft lip in the lip and nose who are seeking improvement without surgery should consider HA filler.
Supplemental Material
This article contains supplemental material located online at www.asjopenforum.com.
Disclosures
The author declared no potential conflicts of interest with respect to the research, authorship, and publication of this article.
Funding
The author received no financial support for the research, authorship, and publication of this article, including payment of the article processing charge.
T–Follicular Helper–like Cells in Sarcoidosis: Lending a Helping Hand
T-follicular helper (Tfh) cells were first defined 12 years ago as a specialized subset of CD4 T cells that expresses CXCR5, PD-1, and ICOS and the lineage-defining transcription factor, Bcl-6 (1-3). Tfh cells are strategically placed in lymphoid organs and are considered nonmigratory compared with nonfollicular T-helper cells (4), which migrate from secondary lymphoid organs to nonlymphoid organs. In the past years, Tfh cells have gained attention as the major cell type regulating germinal center formation and B-cell antibody production and are involved in multiple immune disorders and infections (5). In this issue of the Journal, Bauer and colleagues (pp. 1403-1417) report the presence of Tfh-like cells in the BAL fluid of patients with sarcoidosis (6). These cells display features of Tfh cells while lacking Bcl-6 and CXCR5, the receptor which distinguishes Tfh cells from other T-cell subsets and is required for T-cell migration toward the T cell-B cell border of secondary lymphoid organs. In vitro, coculture of these Tfh-like cells with blood B cells induced B-cell proliferation and antibody production. This article thus adds an important role for Tfh-like cells in pulmonary sarcoidosis and potentially other immune-mediated lung disorders.

Sarcoidosis is a granulomatous lung disease that occurs in individuals of all ages, sexes, and ethnic backgrounds and is characterized by the accumulation of large numbers of activated CD4 T cells and B cells in the lung (7). Although the cause of sarcoidosis remains unknown, a recent study identified a T-cell epitope derived from Aspergillus nidulans and suggested a potential role of this organism in driving Löfgren syndrome, an acute form of sarcoidosis (8). Multiple studies have delineated the phenotype of CD4 T cells in the BAL fluid of patients with sarcoidosis, including the presence of CD4 T cells expressing T-helper cell type 1 (Th1) and/or Th17 phenotypes as well as FoxP3-expressing regulatory T cells (9). Regulatory T cells have also been shown to be dysfunctional in sarcoidosis (10), and this dysfunctional state may contribute to the increased frequency of Th1- and/or Th17-polarized CD4 T cells in the lung. A role for B cells also exists in sarcoidosis on the basis of the presence of increased IgA levels and of antivimentin antibodies (11). However, the subset of CD4 T cells involved in helping B-cell differentiation and maintenance in sarcoidosis has not been defined.

Bauer and colleagues (6) sought to understand the function of pulmonary Tfh and germinal center-like lymphocytes in sarcoidosis and used flow cytometry to identify CXCR5, PD-1, ICOS, CD40L, and IL-21 expression on Tfh cells in the BAL fluid. Unfortunately, CD4 T cells in the BAL fluid lacked expression of CXCR5 and Bcl-6, the two signature molecules that define Tfh cells. However, these cells expressed CD40L and IL-21, which promote B-cell expansion and plasma-cell differentiation. On the basis of CD40L and IL-21 expression, the authors named these cells Tfh-like cells. Future studies assessing the presence of additional transcription factors, such as TCF-1 (T-cell factor 1) and the lymphoid enhancer-binding factor LEF1, in these cells will be useful to further characterize their Tfh-like status. Cytokine analysis of in vitro-stimulated CD4 T cells revealed the presence of multiple subsets of T cells displaying Th1 (IFN-γ-secreting), Th17 (IL-17-secreting), and presumably Tfh-like (IL-21-secreting) phenotypes in the sarcoidosis BAL fluid.
Secretion of IL-2, which is considered as a negative regulator of Tfh-cell differentiation (12), was also present. Therefore, future studies assessing the presence of CD4 T cells that express either IL-21 alone or both IL-2 and IL-21 will surely add information about their role in sarcoidosis. Analysis of memorymarkers, CXCR3 and CD69, revealed a tissue-resident phenotype of these CD4 T cells that was confirmed by RNA sequencing. Among the various tissues, BAL-fluid CD4 T cells showed a dominantmemory phenotype as compared with tonsils or blood. Transcriptome analysis of BAL-fluid T cells was not of great help in assessing the relationship between Tfh-like cells and classical Tfh cells because both the surface marker and the lineage marker for Tfh cells, CXCR5 and Bcl-6, were absent. However, in vitro T cell–B cell coculture assays confirmed the ability of Tfh-like T cells in BAL fluid to induce B-cell plasmablast formation, similar to classical Tfh cells in the tonsils (13). In addition, in vitro IL-21–blocking experiments confirmed the ability of IL-21–producing Tfh-like cells to induce antibody production by B cells, indicating that these cells share functional homology with classical Tfh cells (Figure 1). Tfh cells contact B cells in organized structures present in the secondary lymphoid organs. In a nonlymphoid organ such as the lung, both sterile and pathogenic inflammation induce the formation of aggregates called ectopic lymphoid aggregates or tertiary lymphoid structures, which are comprised of T cells, B cells, and follicular dendritic cells (14). In ectopic lymphoid aggregates or loosely arranged aggregates, T cells exist in close contact with B cells, allowing T cell–B cell cooperation and T cell–mediated help to B cells. To find such aggregates and confirm the cooperation of Tfh-like cells with B cells, Bauer and colleagues (6) documented the close contact between Tfh-like cells and B cells. In contrast to ectopic lymphoid aggregates, most of the aggregates in the lungs of subjects with sarcoidosis were nonectopic (i.e., lacking follicular dendritic cells). The important highlight of this current manuscript is the presence of IL-21–producing Tfh-like cells in the lung of patients with sarcoidosis that are functionally similar to but phenotypically distinct from classical Tfh cells and provide necessary help to B cells to undergo class switching and formation of plasmablasts. Tfh-like cells have also been identified in the murine lung, playing a role in T cell–B cell cooperation (15). Future studies involving lineage tracing and gene depletion in mouse models will allow a further understanding the mechanism(s) involved in Tfh-cell differentiation and function that will significantly advance the field of Tfh-like cell biology. This study This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0. For commercial usage and reprints, please e-mail Diane Gern (dgern@thoracic.org).
T-follicular helper (Tfh) cells were first defined 12 years ago as a specialized subset of CD4+ T cells that expresses CXCR5, PD-1, and ICOS and the lineage-defining transcription factor Bcl-6 (1-3). Tfh cells are strategically placed in lymphoid organs and are considered nonmigratory compared with nonfollicular T-helper cells (4), which migrate from secondary lymphoid organs to nonlymphoid organs. In recent years, Tfh cells have gained attention as the major cell type regulating germinal center formation and B-cell antibody production, and they are involved in multiple immune disorders and infections (5). In this issue of the Journal, Bauer and colleagues (pp. 1403-1417) report the presence of Tfh-like cells in the BAL fluid of patients with sarcoidosis (6). These cells display features of Tfh cells while lacking Bcl-6 and CXCR5, the receptor that distinguishes Tfh cells from other T-cell subsets and is required for T-cell migration toward the T cell-B cell border of secondary lymphoid organs. In vitro, coculture of these Tfh-like cells with blood B cells induced B-cell proliferation and antibody production. This article thus ascribes an important role to Tfh-like cells in pulmonary sarcoidosis and potentially other immune-mediated lung disorders.
Sarcoidosis is a granulomatous lung disease that occurs in individuals of all ages, sexes, and ethnic backgrounds and is characterized by the accumulation of large numbers of activated CD4+ T cells and B cells in the lung (7). Although the cause of sarcoidosis remains unknown, a recent study identified a T-cell epitope derived from Aspergillus nidulans and suggested a potential role of this organism in driving Löfgren syndrome, an acute form of sarcoidosis (8). Multiple studies have delineated the phenotype of CD4+ T cells in the BAL fluid of patients with sarcoidosis, including the presence of CD4+ T cells expressing T-helper cell type 1 (Th1) and/or Th17 phenotypes as well as FoxP3-expressing regulatory T cells (9). Regulatory T cells have also been shown to be dysfunctional in sarcoidosis (10), and this dysfunctional state may contribute to the increased frequency of Th1- and/or Th17-polarized CD4+ T cells in the lung. A role for B cells in sarcoidosis is also supported by the presence of increased IgA levels and antivimentin antibodies (11). However, the subset of CD4+ T cells involved in helping B-cell differentiation and maintenance in sarcoidosis has not been defined.
Bauer and colleagues (6) sought to understand the function of pulmonary Tfh and germinal center-like lymphocytes in sarcoidosis and used flow cytometry to identify CXCR5, PD-1, ICOS, CD40L, and IL-21 expression on Tfh cells in the BAL fluid. Unfortunately, CD4+ T cells in the BAL fluid lacked expression of CXCR5 and Bcl-6, the two signature molecules that define Tfh cells. However, these cells expressed CD40L and IL-21, which promote B-cell expansion and plasma-cell differentiation. On the basis of CD40L and IL-21 expression, the authors termed these cells Tfh-like cells. Future studies assessing the presence of additional transcription factors, such as TCF-1 (T-cell factor 1) and the lymphoid enhancer-binding factor LEF1, in these cells will be useful to further characterize their Tfh-like status. Cytokine analysis of in vitro-stimulated CD4+ T cells revealed the presence of multiple subsets of T cells displaying Th1 (IFN-γ-secreting), Th17 (IL-17-secreting), and presumably Tfh-like (IL-21-secreting) phenotypes in the sarcoidosis BAL fluid. Secretion of IL-2, which is considered a negative regulator of Tfh-cell differentiation (12), was also present. Therefore, future studies assessing the presence of CD4+ T cells that express either IL-21 alone or both IL-2 and IL-21 will surely add information about their role in sarcoidosis. Analysis of the memory markers CXCR3 and CD69 revealed a tissue-resident phenotype of these CD4+ T cells, which was confirmed by RNA sequencing. Among the various tissues, BAL-fluid CD4+ T cells showed a dominant memory phenotype as compared with tonsils or blood.
Transcriptome analysis of BAL-fluid T cells was not of great help in assessing the relationship between Tfh-like cells and classical Tfh cells, because both the surface marker and the lineage marker for Tfh cells, CXCR5 and Bcl-6, were absent. However, in vitro T cell-B cell coculture assays confirmed the ability of Tfh-like T cells in BAL fluid to induce B-cell plasmablast formation, similar to classical Tfh cells in the tonsils (13). In addition, in vitro IL-21-blocking experiments confirmed the ability of IL-21-producing Tfh-like cells to induce antibody production by B cells, indicating that these cells share functional homology with classical Tfh cells (Figure 1). Tfh cells contact B cells in organized structures present in the secondary lymphoid organs. In a nonlymphoid organ such as the lung, both sterile and pathogenic inflammation induce the formation of aggregates called ectopic lymphoid aggregates or tertiary lymphoid structures, which are composed of T cells, B cells, and follicular dendritic cells (14). In ectopic lymphoid aggregates or loosely arranged aggregates, T cells exist in close contact with B cells, allowing T cell-B cell cooperation and T cell-mediated help to B cells. To find such aggregates and confirm the cooperation of Tfh-like cells with B cells, Bauer and colleagues (6) documented the close contact between Tfh-like cells and B cells. In contrast to ectopic lymphoid aggregates, most of the aggregates in the lungs of subjects with sarcoidosis were nonectopic (i.e., lacking follicular dendritic cells). The important highlight of this manuscript is the presence of IL-21-producing Tfh-like cells in the lung of patients with sarcoidosis that are functionally similar to, but phenotypically distinct from, classical Tfh cells and provide the necessary help to B cells to undergo class switching and formation of plasmablasts. Tfh-like cells have also been identified in the murine lung, playing a role in T cell-B cell cooperation (15). Future studies involving lineage tracing and gene depletion in mouse models will allow a further understanding of the mechanism(s) involved in Tfh-cell differentiation and function that will significantly advance the field of Tfh-like cell biology. This study
Cell Therapy with the Cell or without the Cell for Premature Infants? Time Will Tell
Bronchopulmonary dysplasia (BPD) remains one of the main complications in preterm infants born before 28 weeks' gestational age (GA) (1). Advances in perinatal care since the original description of BPD more than 50 years ago have allowed the survival of preterm infants as young as 22 weeks' GA. The corollary is that these infants are now born at the limit of biological viability, because their lungs are still at the late canalicular stage, when blood vessels and airways are just becoming juxtaposed. The task of protecting the ever more immature lung is becoming increasingly challenging. In a sense, neonatologists are victims of their own success. Not surprisingly, an increasing number of reports describe the long-term consequences of BPD in young adults. Pulmonary vascular disease, cardiac dysfunction, and emphysematous changes may result from early disruption of normal lung development, impaired repair processes, and early aging (2-4). Although incremental improvements in the use of our current therapies, such as less-invasive surfactant administration (5), can have an immediate positive impact on the incidence and severity of BPD, additional innovative treatments may be required to prevent and/or repair lung damage to substantially improve the respiratory outcome of micropremies. Cell therapies for regenerative benefits represent such a promising approach. Mesenchymal stromal cells (MSCs) in particular have attracted attention, in part because of their ease of isolation, culture, and expansion and because of their putative pleiotropic effects (6-8). Yet it is the immune-modulatory and reparative effects of MSCs that provided the biological plausibility for these cells to be tested in diseases with a strong inflammatory component, such as the acute respiratory distress syndrome (9, 10) and BPD (11). Furthermore, MSCs do not engraft but rather act via a "hit-and-run" mechanism through cell-to-cell contact and the release of bioactive molecules contained in nano-sized particles termed exosomes or small extracellular vesicles (12, 13). These observations opened exciting prospects for cell therapies without the cell.
In this issue of the Journal (14), Willis and colleagues (pp. 1418-1432) follow up on their original findings (15) to explore in more detail the molecular mechanisms by which MSC-derived small extracellular vesicles (MEx) exert their lung-protective effects in a well-established lung injury model in newborn mice exposed to hyperoxia. Biodistribution studies after intravenous injection revealed that MEx localize mostly to the liver and the lung. MEx interact with lung myeloid cells, restore the apportionment of alveolar macrophages, and attenuate proinflammatory cytokine production. In a series of elegant experiments, the group demonstrates that MEx promote an immunosuppressive bone marrow-derived myeloid cell (BMDMy) phenotype: adoptive transfer of MEx-educated BMDMy, but not naive BMDMy, preserved alveolar architecture, blunted fibrosis and pulmonary vascular remodeling, and improved exercise capacity in this model. These findings provide further evidence for the antiinflammatory and reparative mechanisms of action of MSCs and their MEx.
Based on the above results, it is not surprising that MEx were found to accumulate mostly in the liver within 24 hours. Whether the liver could be the exclusive site of further macrophage/myeloid cell education, or whether MEx migrate to the bone marrow (BM) to directly interact with immune cells in this location, deserves further exploration. Likewise, lineage tracing studies may answer the question of whether educated cells subsequently migrate from the BM to the lungs or whether MEx only affect circulating immune cells. MEx administration early during the disease process was also able to blunt fibrosis, arguing in favor of early intervention and thus providing some clinical direction for these findings. Finally, it is uncertain whether identification of the MEx biological cargo will be critical for the clinical application of MEx therapy, although a better understanding of the RNA and protein components that are most therapeutic might advance more focused therapies for preventing BPD in micropremies.
Although these observations demonstrate that much more needs to be learned about the biology of MSCs and their nanovesicles, the time is ripe for well-designed early-phase clinical trials to test the feasibility and safety of MSC-based therapies in preterm infants at risk of BPD. The results of the very first phase I trials suggest feasibility and short-term safety of a proprietary cord blood-derived MSC product administered as early as 10 days of life via the intratracheal route (16-18). Results of a phase II trial testing this same product in 66 preterm infants at 23-28 weeks' GA did not show a significant improvement in the primary outcome of death or moderate/severe BPD with MSCs compared with placebo (19). In that study, a subgroup analysis suggested an improvement in the secondary outcome of severe BPD (from 53% [8/15] to 19% [3/16]) with MSCs in the 23-24 weeks' GA group, but the study was underpowered, prompting a larger trial focused on these lower GA categories. Other cell products such as human amnion epithelial cells
A Cross-sectional Study of Common Psychiatric Morbidity in Children Aged 5 to 14 Years in an Urban Slum
Aim: To study the prevalence of common psychiatric disorders in children aged 5 to 14 years in a health post area of an urban slum. Objectives: (1) To study the frequency of specific psychiatric disorders in the study population; (2) To study the relationship between sociodemographic variables and psychiatric morbidity. Settings and Design: The present study was conducted in one of the five health posts of an urban slum, which is a field practice area of the teaching medical institute. It was a cross-sectional study. Materials and Methods: The sample size was estimated by using 20% as the prevalence of psychiatric morbidity, obtained from previous studies done in developing countries. The household was used as the sampling unit, and a systematic random sampling method was used for selecting households. A total of 257 children aged 5 to 14 years were included in the study. A pre-designed, semi-structured diagnostic interview schedule based on DSM-IV criteria was used for data collection. Statistical Analysis Used: The tests of significance used were the Chi-square test and logistic regression analysis. Results: The prevalence of psychiatric morbidity in this study was 14.8%. Non-organic enuresis, attention deficit hyperactivity disorder, conduct disorder, and mental retardation were identified as the common mental health problems. Conclusions: Factors like nuclear family, parents not living together, large family size, and a positive family history of psychiatric disorder were associated with psychiatric morbidity in children.
Introduction
The burden of mental disorders is great, as they are prevalent in all societies. They create a substantial personal burden for affected individuals and their families, and produce significant economic and social hardships that affect society as a whole. Studies of children and adolescents have also demonstrated a high prevalence of mental disorders in primary care settings. Despite the large number of people who attend primary care settings with mental disorders, their recognition and treatment are generally inadequate. Estimates of the prevalence of mental disorders, the burden they impose if left untreated, and the existence of effective primary care-based treatments are important issues for the integration of mental health into primary care. [1] Without early and effective identification and intervention, childhood mental disorders can persist and lead to a downward spiral of school failure, poor employment opportunities, and poverty in adulthood. The World Health Organization has noted a paucity of information on the prevalence and burden of major mental and behavioral disorders in all countries, particularly in developing countries. [2] Children under 16 years of age constitute more than 40% of India's population, and information about their mental health needs is a national imperative. Community surveys have the advantage of being more representative; they include children and adolescents who do not attend school and those who do not access mental health services. [3] Hence, the present study was carried out in the community to estimate the magnitude of childhood psychopathology. In addition, it also aimed to study the sociodemographic correlates of the psychiatric disorders.
Materials and Methods
A cross-sectional study was conducted at an urban health post in an urban slum, which is a field practice area of the Department of Community Medicine of a teaching medical institute in Mumbai. This health post caters to 110,000 people residing in 20,000 households.
According to the World Health Organization, [4] community-based studies have revealed an overall prevalence rate for mental disorders of about 20% in several national and cultural contexts. Early Indian community-based studies reported prevalence rates of psychiatric disorders among children ranging from 2.6% to 35.6%. [5][6][7][8] Accordingly, by assuming a prevalence of psychiatric morbidity of 20%, a sample size of 246 was calculated, and a systematic random sampling technique was used with the household as the sampling unit.
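The reported figure of 246 is consistent with the standard single-proportion sample-size formula under common defaults. The Python sketch below reproduces it assuming a 95% confidence level (Z = 1.96) and an absolute precision of 5%; neither value is stated explicitly in the text, so both are assumptions.

```python
import math

def sample_size_proportion(p, d=0.05, z=1.96):
    """n = z^2 * p * (1 - p) / d^2 for estimating a single proportion.

    p: expected prevalence (0.20, from earlier community-based studies)
    d: absolute precision (assumed, not stated in the text)
    z: normal deviate for 95% confidence (assumed)
    """
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

print(sample_size_proportion(0.20))  # 246, matching the reported sample size
```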
A pre-designed interview schedule was used for data collection. It had two components: an Introductory Interview and a Diagnostic Screening Interview. [9] The Introductory Interview consisted of sociodemographic variables, developmental history, health status of the child, history of psychiatric disorder in the family or child, and the school profile of the child. The Diagnostic Screening Interview is a standardized tool for diagnosing psychiatric disorders. It is a semi-structured diagnostic interview designed to assess current and past episodes of psychopathology in children and adolescents based on DSM-IV criteria. A child was said to be suffering from a psychiatric disorder if s/he satisfied any of the following conditions: (a) any child who satisfied DSM-IV criteria under any domain, (b) a known case of psychiatric illness, and (c) a history of epilepsy.
Interviews were conducted after obtaining informed consent. In stage 1, a single interviewer collected data through face-to-face interviews after being trained in common childhood and adolescent psychiatric illnesses in the Department of Psychiatry of a teaching medical institute. In stage 2, children and adolescents who fulfilled the DSM-IV criteria under any domain for the presence of a psychiatric disorder, but for whom a specific diagnosis was not available, were referred to the Child Guidance Clinic (CGC) at the Department of Psychiatry, where a psychiatrist examined them to arrive at the specific diagnosis. For those who fulfilled the DSM-IV criteria but did not report to the CGC, the case history and response sheet were discussed with the same psychiatrist to arrive at the specific diagnosis. Limitations of the study were: (a) responses given by the respondents were relied upon; and (b) it was not possible to study all the factors that may have a relationship with psychiatric morbidity (e.g., family cohesiveness and peer relationships).
Data classification and analysis were done using SPSS software.
The tests of significance used were Chi-square and Logistic regression analysis.
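As an illustration of the two tests named above, the sketch below runs a chi-square test on a hypothetical 2x2 table and a logistic regression on simulated binary predictors. The counts, coefficients, and variable roles are invented for demonstration only; they are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Chi-square test of association on a hypothetical 2x2 table
# (rows: exposure present/absent; columns: morbidity present/absent).
table = np.array([[25, 115],
                  [13, 104]])
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, P = {p:.3f}")

# Logistic regression of morbidity on four binary sociodemographic
# factors (simulated stand-ins for nuclear family, parents not living
# together, large family size, and family history of psychiatric disorder).
rng = np.random.default_rng(0)
n = 257
X = rng.integers(0, 2, size=(n, 4))
logits = -2.0 + X @ np.array([0.8, 1.2, 0.6, 0.9])   # invented coefficients
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(np.exp(fit.params[1:]))  # odds ratios for the four factors
```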
Results
Of the 148 households that gave informed consent for the study, 35 (23.7%) had no children between 5 and 14 years of age. Thus, from 113 households, 306 respondents were approached. Of the 306, 29 were not available at home even after three visits, five could not be interviewed due to language barriers or communication problems, and 15 discontinued the interview midway. Thus, a total of 257 respondents were included. The response rate was calculated as 84%.
Non-organic enuresis (6.2%), attention deficit hyperactivity disorder (ADHD; 4.3%), conduct disorder (CD; 2.7%), and mental retardation (MR; 2.3%) were identified as the common psychiatric disorders [Table 1]. The relationship with sociodemographic variables [Table 2] showed that the prevalence of psychiatric morbidity was highest in children aged 8 to 11 years and was higher in males than in females (χ² = 3.90; df = 1; P < 0.05), in children of illiterate mothers (χ² = 8.92; df = 2; P < 0.05), in children residing in nuclear families (χ² = 6.50; df = 1; P < 0.05), in children with more siblings and larger family size, in children whose parents were not living together (χ² = 17.14; df = 1; P < 0.05), and in children with a family history of psychiatric disorders (χ² = 6.98; df = 1; P < 0.05). Multivariate logistic regression analysis identified that nuclear family, parents not living together, large family size, and a positive family history of psychiatric disorder had higher odds for the presence of psychiatric morbidity [Table 3].
Discussion
The prevalence of psychiatric morbidity in the present study (14.8%) was lower than the prevalence (about 20%) in previous community-based studies cited by the WHO. [4] This difference in psychiatric morbidity rate could be due to various reasons. First, the present study was conducted in an Indian setting, and community-based studies in India such as those by Srinath et al., [3] Rahi et al., [7] and Anita et al. [8] reported the prevalence of psychiatric morbidity in children as 12.0% (4-16 years), 16.5% (4-14 years), and 16.5% (6-14 years), respectively. Second, the Diagnostic Screening Interview [9] and DSM-IV criteria were used to identify cases in the present study, while other studies used different instruments and criteria. Prevalence rates can vary markedly with the screening instruments used, changes in the assessment questions used in community surveys, minor changes in diagnostic criteria, the number of informants, and the sampling methodology. [10] Third, it is stated that behavioral and emotional problems in children may differ from one cultural context [11] to another, and this finding reinforces that concept. Other reasons could include a lack of incentives or privacy, the time taken for the interview, children's and adolescents' perceptions of the mental health profession, a higher threshold of tolerance for certain symptoms, and other sociocultural factors.
Srinath et al. [3] reported a prevalence of 6.2% for non-organic enuresis, which was the commonest disorder found in children aged 4 to 16 years. Bansal and Barman [12] reported a prevalence of 4.5% for non-organic enuresis. In our study, non-organic enuresis was also the most common psychiatric disorder, found in 6.2% of children. The findings indicate a significant presence of the problem. There is thus a great need to emphasize the importance of recognizing this condition to the caretakers of children. The prevalence rate of ADHD in studies conducted in developed countries is reported to be 4% (range, 1.7%-17.8%). [13] Costello et al. [14] reported an increase in the median prevalence of ADHD from 3% to 4%. Malhotra et al. [15] found an increase in the number of registrations of hyperkinetic disorders (5%). Srinath et al. [3] and Bansal and Barman [12] reported point prevalence estimates for hyperkinetic disorder of 1.6% and 6%, respectively, while we found that 4.3% of children had ADHD. The higher prevalence of ADHD in our study may reflect its recognition, among parents, teachers, and physicians, as a medical disorder impacting academic achievement. Ford et al. [16] found an increased prevalence of ADHD in boys. We also found that male children (3.1%) were more affected by ADHD than female children (1.2%). Male children have a higher frequency of externalizing disorders, which are more easily recognized due to their disruptiveness; this may be the reason behind the male preponderance. Previous studies [16,17] in developed nations reported that CD and Oppositional Defiant Disorder (ODD) were prevalent in 1.5-3.3% (more in boys) and 2.3-5.5% (gender difference less clear), respectively. In our study, CD and ODD were found in 2.7% and 1.2% of children, respectively, with no gender difference. The prevalence of MR reported in previous Indian studies was 0.9%, [3] 3.25%, [8] and 1.5%, [12] while it was 2.3% in our study area. The prevalence of stuttering (1.9%) and epilepsy (0.4%) was comparable to the ICMR [3] research (1.5% and 0.7%, respectively) in children aged 4 to 16 years. For Major Depression (MD), prevalence rates of 0.1% were reported by Srinath et al. [3] and 0.37% by Anita et al., [8] with 0.6-3.0% reported in newer studies. [13] In our study, the prevalence of MD was 0.4%. The reasons for the low prevalence of MD need to be explored in the context of the increasing evidence of suicidal behavior in the young Indian population. The present study might have underestimated psychiatric morbidity among adolescents, particularly among adolescent girls, whose vulnerability to emotional or internalizing disorders is well documented. [18] In line with previous studies, [3,7,15] a higher prevalence rate was seen in the 8 to 11 years age group, but this was not significant. Factors like the increasing burden of studies in schools, emotional disturbances related to early adolescence, or mothers' perception of any resultant undesired change in behavior as abnormal may contribute to the high prevalence in children aged 8 to 11 years. Psychiatric morbidity was predominantly found in male as compared with female children. This is in agreement with earlier studies. [7,19] Male predominance may be due to psychological or biological factors, since greater attention is often paid to male children and parents notice any abnormal behavior earlier, resulting in early identification.
Education of the mother was found to have a negative relationship with psychiatric morbidity. Other researchers [7,20] have observed a significantly higher prevalence of psychopathological disorders in children of illiterate mothers. This may be explained by the fact that education and awareness increase a mother's perception of any developmental or behavioral deviance of the child at an earlier stage, when it is still amenable to prevention and/or treatment. Belonging to a nuclear family was found to be conducive to the development of mental disorders in children, probably due to the relative lack of attention paid to them in the absence of grandparents, uncles, aunts, etc.; thus, they might have been emotionally deprived. Similar findings are reported by Anita et al. [8] and Bansal and Barman [12] in children living in nuclear families. The presence of a family history of psychiatric disorder was significantly (χ² = 6.98; df = 1; P < 0.05) associated with psychiatric morbidity in children, which is in line with previous studies. [3,7,13] There was a significantly higher rate of psychiatric morbidity among children whose parents were not living together (separated parents and deaths of parents as psychosocial stress factors). Similar findings were reported by Rahi et al. [7] and Merikangas et al. [20] Separation of a child from the parents causes not only physical loss but also emotional deprivation. This separation experience may cause persistent defects in the ability to form relationships, and intellectual functioning may become impaired.
Further analysis can enhance the understanding of the patterns of comorbidity and its sociodemographic correlates. An incidence study may be the next step required in understanding the epidemiology of psychiatric disorders in children.
Coastal wetlands can be saved from sea level rise by recreating past tidal regimes
Climate-change-driven sea level rise (SLR) is creating a major global environmental crisis in coastal ecosystems; however, few practical solutions have been offered to prevent or mitigate the impacts. Here, we propose a novel eco-engineering solution to protect highly valued vegetated intertidal ecosystems. The new 'Tidal Replicate Method' involves the creation of a synthetic tidal regime that mimics the desired hydroperiod for intertidal wetlands. This synthetic tidal regime can then be applied via automated tidal control systems, "SmartGates", at suitable locations. As a proof-of-concept study, this method was applied at an intertidal wetland with the aim of re-establishing saltmarsh vegetation at a location representative of SLR. Results from aerial drone surveys and on-ground vegetation sampling indicated that the Tidal Replicate Method effectively established saltmarsh onsite over a 3-year period post-restoration, showing that the method is able to protect endangered intertidal ecosystems from submersion. If applied globally, this method can protect high value coastal wetlands with similar environmental settings, including over 1,184,000 ha of Ramsar coastal wetlands. This equates to a saving of US$230 billion in ecosystem services per year. This solution can play an important role in the global effort to conserve coastal wetlands under accelerating SLR.
Option 1 (status quo) also covers the 'no action' strategy, which may lead to the ecosystem perishing (depending on accretion rates) as it becomes permanently inundated. Option 2 (horizontal migration) carries significant uncertainty regarding the availability of space, sediment type, slope, and plant physiological response 6,30. This 'retreat' option is particularly challenging for ecosystems of international importance, such as Ramsar wetlands, which are geographically fixed in a location and may be limited by the area's topography or upland barriers. Option 3 (vertical accretion) is also associated with large uncertainties, as accretion processes are highly complex and variable across space and time, including inter- and intra-annual variations 31. Past accretion rates may not be reliable indicators of potential future rates, as they may represent a period of significantly higher or lower suspended sediment delivery, in part due to historic anthropogenic activities 32,33. As such, future accretion rates are challenging to predict. Overall, the uncertainty in accretion rates, the presence of physical barriers, and land management complexities suggest that both the horizontal migration and vertical accretion management strategies may not be viable solutions for managing high priority intertidal ecosystems under SLR 33.
In intertidal wetlands, where ecosystems are aligned with tidal inundation patterns, future SLR will alter a site's hydrology and impact existing vegetation communities 26. One solution to this pressure is to preserve the existing tidal hydrology by artificially manipulating the tidal regime. In many locations worldwide, this could be achieved by implementing a synthetic tide using hydraulic control gates. While alternative methods that minimize intervention, impact, and resources should be preferred, this method can be suitable where existing intertidal ecosystems and their services are at risk and no other alternative is feasible.
In this study, we present an eco-engineering solution to offset SLR impacts in high priority intertidal ecosystems via a synthetic tidal regime. This is achieved by assessing the existing tidal dynamics of the intertidal ecosystem of interest and then replicating these conditions at a location threatened by elevated sea levels (Supplementary Figure 1). A conceptual diagram illustrating how vegetated intertidal ecosystems can be restored using this "Tidal Replicate Method" is presented in Fig. 1.
The proposed method has the potential to preserve large areas of intertidal wetlands around the world in response to SLR. Focusing on Ramsar wetlands of international importance, we show that this method provides a practical solution to protect many valuable intertidal wetlands from permanent inundation, thereby potentially saving billions of dollars in ecosystem services globally. Additionally, the method has the capacity to work in conjunction with natural accretion rates providing a backup solution if the natural accretion rate is exceeded. Considering the high level of uncertainty related to the potential horizontal migration of intertidal wetlands to more elevated adjacent lands, this eco-engineering solution could play an important role in adaptively managing the global effort to conserve coastal wetlands in the face of accelerating SLR over the twenty-first century.
Results
Synthetic tidal regime. The proposed Tidal Replicate Method requires synthesising the tidal dynamics of the desired vegetated intertidal ecosystem, based on the hydroperiod of the vegetation species. This involves understanding the hydroperiod conditions of an intertidal community (e.g. saltmarsh or mangroves), including the frequency, depth, and duration of inundation in relation to the elevation of the area of interest. The synthesised tide can then be replicated onsite by installing an automated tidal control system, which we refer to as a 'SmartGate', at the entrance of the wetland or connecting channel (Supplementary Figure 2). Through a series of water level triggers, the SmartGate imposes the tidal conditions necessary to encourage the recruitment and establishment of target vegetation species.
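The control logic of the SmartGate is not specified in detail here; one minimal reading, sketched below in Python, is that the gate admits the external tide while levels remain below a trigger and closes once the trigger would be exceeded, so the internal signal is approximately the external tide clipped at the trigger elevation. Leakage, gate travel time, and internal drainage are ignored, and the trigger value and tide parameters are assumptions for illustration.

```python
import numpy as np

def smartgate_level(external_tide_m_ahd, trigger_m_ahd=0.3):
    """Approximate internal water level behind a trigger-operated gate:
    the external tide, clipped at the trigger elevation (m AHD)."""
    return np.minimum(external_tide_m_ahd, trigger_m_ahd)

# One day of a semidiurnal tide (~12.42 h period), 15-min samples;
# a 1 m amplitude is assumed purely for demonstration.
t_hours = np.arange(0, 24, 0.25)
external = 1.0 * np.sin(2 * np.pi * t_hours / 12.42)
internal = smartgate_level(external, trigger_m_ahd=0.3)
print(external.max(), internal.max())  # ~1.0 m external vs 0.3 m AHD cap
```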
The synthetic tidal regime is initially developed based on the existing relationship between the intertidal ecosystem and the tidal dynamics. Tidal planes at the site of interest are used and analysed to calculate tidal inundation patterns (for example, tidal planes for the Hunter River estuary in eastern Australia are shown in Supplementary Figure 3 and Table 1). The aim of this analysis is to develop a relationship between tidal ranges and vegetation species which, in turn, provides the number of tides per year that would inundate a site and the inundation depth.
The Tidal Replicate Method was applied to a study site over a period of 3 years, and the results are presented here. At the study site, field survey results showed that saltmarsh is abundant above mean high water (MHW), while mangroves dominate at lower elevations (Supplementary Figure 4). Saltmarsh habitats primarily occurred between MHW and the highest astronomical tide (HAT), with a 50th percentile (median) elevation of +0.77 m Australian Height Datum (AHD) and a 95th percentile elevation of +1.1 m AHD. Mangroves occurred throughout the whole tidal envelope; however, a 50th percentile elevation of +0.44 m AHD was observed from the selected points, with a 95th percentile elevation of +0.89 m AHD. The ingress of some mangroves into saltmarsh communities was observed at several locations. For the study site, to maximise saltmarsh extent, topographic surveys were used to delineate between the intertidal wetland area and the main tidal channel (crest at +0.3 m AHD). This elevation trigger ensured that the tide could rise to 0.3 m AHD onsite, allowing regular water exchange and connectivity without impacting the intertidal area. As such, the baseline minimal trigger was set at a threshold of +0.3 m AHD (Supplementary Figure 5). Thereafter, any water levels above the 0.3 m AHD trigger were taken to be indicative of saltmarsh inundation patterns. Based on the surveys, the median tidal level for saltmarsh at reference sites was +0.77 m AHD. Therefore, the tidal inundation regime of the reference sites was superimposed onto the study site, with the +0.3 m AHD trigger being the base level. In other words, the tidal regime of the reference sites (with natural saltmarsh vegetation) was replicated at the study site. This resulted in a synthetic tidal regime being created in which water levels exceed +0.3 m AHD 2.8% of the time (equivalent to approximately 110 tides per year exceeding +0.3 m AHD). This is equivalent to an inundation frequency of water levels above mean spring high tide. To optimise the saltmarsh area and limit mangrove encroachment at the site, additional trigger levels (rather than a single trigger) were created based on the topography of the site and the desired tidal inundation regime (equivalent to the natural tidal regime of the reference sites for saltmarsh). Table 1 provides the estimated annual inundation rate for the specified elevations at the study site. These levels were successfully applied onsite via a SmartGate structure over a 3-year period (Supplementary Figure 6). During this time, the surveyed reference sites remained saltmarsh.
Saltmarsh vegetation development/response. Aerial imagery from drone surveys indicated a positive trend in saltmarsh vegetation extent and distribution since the Tidal Replicate Method was implemented onsite (Fig. 2a). Repeated quadrat vegetation field sampling indicated that the desired saltmarsh species were recruited, namely Sarcocornia quinqueflora, Sporobolus virginicus, and Suaeda australis. Sarcocornia quinqueflora had the highest recruitment, with a 50% increase in cover (m²) since the Tidal Replicate Method was implemented (from November 2017 to December 2020). Total saltmarsh vegetation cover increased from ~0.2% in November 2017 to 45% in December 2020 (Fig. 2b) based on field sampling, indicating the feasibility of the method.
Discussion
There is limited guidance relevant to the conservation of high value intertidal wetland communities threatened by accelerating SLR 26. In this study, we applied an eco-engineering solution to a threatened intertidal ecosystem and demonstrated its outcomes 3 years post-rehabilitation. As desired, the site, which would have been inundated under natural tidal conditions, has re-established saltmarsh vegetation following the implementation of the Tidal Replicate Method. This indicates that the method is feasible and should be applicable at intertidal wetlands with similar geometry (e.g. one main entrance/exit channel) and shallow water levels.
The concept of controlling the tidal regime through a SmartGate hydraulic structure can be applied to tidal wetlands regardless of their size, as long as they meet the required geometry and boundary conditions. For example, this concept was applied at the Ramsar-listed Tomago Wetlands site in eastern Australia, spanning over 400 ha, with similar outcomes of saltmarsh growth and the return of migratory shorebirds 34. Additionally, a range of different physical methods delivering the same concept can be used depending on the value of the ecosystem (i.e. Ramsar wetlands have high value). For example, advanced electrical gates with a larger upfront investment can be used in some locations, whereas low-cost buoyant lifting gates can be used to control the hydrology onsite in other locations. In many circumstances, larger upfront costs are warranted where various risks are identified and less ongoing maintenance is desired.
Retreating landwards and sediment supply are alternative methods that could potentially achieve the same outcomes. However, the method proposed here has several advantages: connectivity with the main tidal channel is preserved and no permanent (fish) barriers are installed (e.g. the system is open to flushing ~90% of the time), it preserves onsite soil/sediment characteristics, it can be implemented onsite and modified based on accretion rates, and it typically only requires one piece of infrastructure for its functionality (depending on the site geometry). Further, there is limited ongoing maintenance, it does not require large volumes of exotic foreign sediment to be brought in (which could negatively affect other areas), and it does not impact the existing onsite seedbank (Supplementary Table 2). However, the main benefit of this method is that the synthetic tidal regime can be designed to maintain or create saltmarsh, mangrove, or mudflat ecosystems, as well as a specific combination of these, as desired. Additionally, it has the flexibility to adaptively manage the tidal inundation regime over time (e.g. as rehabilitation progresses), with varying land accretion and SLR rates. It is also noteworthy that the ecosystem services (e.g. stormwater retention, protection from tidal surge) provided by saltmarsh vegetation developed using the Tidal Replicate Method should be the same as for saltmarsh developed under natural conditions. The main limitation, however, is the method's restricted applicability to intertidal ecosystems located along the open coast or in large oceanic embayments, as a channelised entrance (i.e. a hydraulic control point) to the site is required to control the site's hydrology.

A comparison with retreating landwards and sediment supply methods. A comparison of the proposed method to the sediment supply and landward migration strategies highlights the value of the Tidal Replicate Method. For instance, the sediment supply method involves landform building via sediment deposition and vertical accretion on areas that are under threat from SLR 35. The sediment supply method requires the sediment material to be similar to the material naturally found onsite and, hence, it may need to be transported from remote locations. In some cases, it requires large quantities of sediment, and the process may be prolonged and ongoing 36. Additionally, sourcing sediment may be challenging and, if dredging is required, significant pumping costs may make this process prohibitive. In contrast, the Tidal Replicate Method overcomes such problems by adjusting the tidal regime to promote the desired conditions onsite for a range of sea level and sediment accretion changes over time.
An alternative option for protecting vegetated intertidal ecosystems is to foster landward (or upslope) retreat with SLR. Recent research suggests that, in the face of SLR, the provision of upslope accommodation space is more critical for the future global extent of vegetated intertidal ecosystems than vertical accretion 6,37,38. However, this may not always be a feasible option and depends on, firstly, the availability of surrounding low-lying land with suitable elevation, which may be limited by urbanisation, natural geographic boundaries, existing infrastructure, and private land ownership 39,40, and secondly, the political decision-making process regarding the management of these coastal areas (e.g. sediments may not be appropriate for rehabilitation, and the timeframes for rehabilitation could be beyond the timing for the wetland retreat).
The landward retreat option is a less desirable approach, as it can affect global organic soil carbon accumulation 41. Existing vegetated intertidal ecosystems may be holding millennia-old blue carbon stocks that can be released if such ecosystems are degraded or lost 42. Additionally, other ecosystems that provide different but specific functions may already exist on the landward side. Landward retreat can place these ecosystems under threat, and their conservation may need to be considered at some locations. Some sites, like Ramsar-listed wetlands, are geographically tied to a location and cannot be moved, as their boundaries are set by law. Many of these sites may have high cultural value and provide services for regional communities 43 and may need to be preserved.
Global sea level rise vs accretion rate. Where upland slope retreat is not an option, the ability of any vegetated intertidal ecosystem to adapt to SLR will be largely reliant on the site's ability to maintain accretion rates in line with SLR. The global mean SLR during the satellite altimetry period (1993-2014) has increased at a rate of 3.3 ± 0.4 mm/year 44, and SLR has been shown to accelerate at a rate of 0.084 ± 0.025 mm/year² in the 25 years leading up to 2017 21. Based on the IPCC's projected lower- and upper-end scenarios, global SLR is expected to increase at a rate of 4-9 mm/year (RCP2.6) or 10-20 mm/year (RCP8.5) by the year 2100 17. However, the potential impacts of SLR on intertidal ecosystems may be minimal if the rate of vertical accretion exceeds or keeps pace with the projected rates of SLR. There is currently very limited information on the maximum SLR rate at which intertidal ecosystems can adjust via accretion without being permanently submerged.
Sediment accretion in intertidal systems is mostly associated with sediment supply, tidal inundation and frequency, plant productivity, and porewater salinity 45. Sediment accretion rates for intertidal saltmarsh ecosystems are reported to range from 0.3 to 0.8 mm/year for Europe, the USA, and Australia 46, while some studies have reported up to 1.3 mm/year for the USA 47. Accretion rates are highly variable in different geomorphic settings, and large discrepancies exist in the literature. For example, studies have shown that saltmarsh accretion rates have not been sufficient to keep pace with SLR over the last century, and accretion rates may not be able to keep pace with future SLR even under the most optimistic IPCC SLR scenario 48. A recent study suggests that mangroves may not be able to sustain sufficient accretion when relative SLR exceeds 6.1 mm/year (with rates of rise expected to exceed 7 mm/year by 2050 under high emissions) 49. In summary, based on our understanding of current accretion rates and limited sediment supply (partly due to anthropogenic flow attenuation via upstream structures), vegetated intertidal ecosystems are unlikely to maintain accretion in line with future SLR (i.e. resulting in widespread submergence of wetlands) 7,50. In these circumstances, the Tidal Replicate Method could be utilised to adaptively manage the tidal regime in line with accretion and SLR rates.
Global implications. Ramsar Convention listed coastal wetlands provide many valuable ecosystem services; however, their value and benefits are usually underestimated 51. A Ramsar wetland provides ecosystem services estimated at US$194,000 per hectare per year 6,52. Millions of hectares of Ramsar wetlands are currently under threat from SLR, and no long-term solution has been proposed or action taken to protect these high priority wetlands from being lost. The Tidal Replicate Method, where applicable, is a feasible solution for protecting or preserving these ecosystems. Here, Ramsar-listed wetlands worldwide were examined to determine whether the Tidal Replicate Method is broadly transferrable to these wetlands. The Center for International Earth Science Information Network (CIESIN, Columbia University, 2013) and the Ramsar Convention data repository (https://ramsar.org/) were used to identify Ramsar wetlands worldwide. Coastal and intertidal wetlands with a minimum elevation of 3 m (approximately equal to the higher-end SLR scenario) were filtered, resulting in 480 Ramsar wetlands (from the initial 1800) across all continents. Thereafter, the geometry and geographical location of the short-listed sites were investigated to determine whether the Tidal Replicate Method is applicable (e.g. each Ramsar wetland site was assessed to ensure that a single channel was available to control the hydraulics). This comprehensive survey identified 32 Ramsar-listed sites across six continents that could potentially utilise the Tidal Replicate Method to adapt to SLR. If an automated tidal control system (e.g. a SmartGate) is implemented at these sites, over 1,184,000 ha of wetlands of international significance can be preserved from partial or full permanent inundation in response to accelerating SLR (Fig. 3). This equates to an ecosystem service saving of US$230 billion per year versus the status quo or no-action strategy.
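The headline savings figure follows directly from the per-hectare valuation and the preserved area; a one-line check in Python:

```python
area_ha = 1_184_000            # Ramsar wetland area identified as suitable
usd_per_ha_per_year = 194_000  # ecosystem service value (ref. 52)
print(f"US${area_ha * usd_per_ha_per_year / 1e9:.0f} billion per year")
# -> US$230 billion per year
```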
Conclusion
SLR is threatening high priority vegetated intertidal ecosystems, and unless widespread action is taken, thousands of hectares of wetland ecosystems may be lost. Currently, there is no global strategy in place to conserve or adaptively manage high value vegetated intertidal ecosystems. As these threats are focused on the hydrologic regime, a reasonable solution is to actively manage a site's hydrology to ensure it can adaptively replicate the desired onsite conditions. Here, we present an eco-engineering solution, the Tidal Replicate Method, that can protect vegetated intertidal ecosystems by mimicking natural tidal conditions. The method is based on the inundation depth and frequency requirements of the desired vegetation type and establishes a synthetic tidal regime, implemented via an automated tidal control system (SmartGate). This novel method was implemented at a test site and demonstrated positive results. The method allows the site to be adaptively managed as sea levels or net accretion rates change with time. Worldwide, we estimate that over 1,184,000 ha of high priority coastal wetlands could be preserved if the Tidal Replicate Method were adopted in other locations with similar settings.
Materials and methods
Study site. An intertidal temperate coastal wetland located at Kooragang Island (Hunter Wetlands National Park; −32.866707°S, 151.715561°E), approximately 7 km upstream of the oceanic entrance of the Hunter River estuary, Newcastle, Australia, was chosen as the study site to implement the method. The Hunter River estuary is a wave-dominated barrier estuary with a trained and continuously dredged entrance, subject to a semidiurnal tidal regime with a maximum amplitude of approximately 2 m 53. The site is recognised as a Ramsar site of international importance. The location and characteristics of the site ensure that it can be used as an example to replicate SLR impacts. The site's (wetland) catchment is 24 hectares, is low-lying (median elevation 1.2 m), and has no upstream freshwater inputs. The wetland has a single estuarine channel (known as Fish Fry Creek) that is 170 m long, 10 m wide, and 1.0 m deep at low tide level 7,54, and connects to the south arm of the Hunter River estuary. The channel connects the estuary to the intertidal wetland, which covers an area of 112,450 m². In the twentieth century, levees and internal drainage were implemented in this region to create a flood detention system, which resulted in tidal waters being excluded from the wetland 55. Following coastal wetland rehabilitation works in the area in the early 2000s, tidal flow was reintroduced to the site. However, changes in the site's hydrology and topography favoured the expansion of mangroves, resulting in extensive loss of saltmarsh vegetation 56. This change also affected wetland ecosystem function, including species habitats (a decline in migratory shorebirds and frogs) 57. In all, these actions resulted in a site that, under natural conditions (e.g. the existing tidal regime), encouraged non-saltmarsh vegetation expansion and was not suitable for saltmarsh growth, despite it historically being an important saltmarsh location for migratory shorebirds 40,58. As such, the site was experiencing deeper tidal inundation patterns than desired, similar to those expected with SLR, hence making it an ideal location to trial the Tidal Replicate Method.
Vegetation elevation and tidal planes. Field campaigns were carried out between 3 and 9 October 2016 to survey saltmarsh and mangrove tidal range and land surface elevations at the study site. In addition, other reference sites, where hydrological processes were unaffected by human activity, were also sampled across the lower Hunter River estuary. Seven nearby sites were investigated across the lower estuary, including areas on Hexham Island, Kooragang Island, and Tomago Wetlands. The results from the survey were then used to determine the tidal range and topography that promote saltmarsh vegetation growth (Supplementary Figure 4). The sediment supply rates at the study and reference sites were known to be similar (i.e. differences were not statistically significant) 59. Survey points taken at each site were identified by a tagging system and grouped into three categories: (i) saltmarsh and the (ii) upper and (iii) lower bounds of mangrove stands. Over 500 points of saltmarsh and mangrove populations were surveyed at the seven sites during the field investigation. All points were surveyed to AHD using a Trimble 5800 RTK-GPS (real-time kinematic global positioning system), accurate to within ±20 mm. To generate near-future time series of tidal water elevations for the study site to develop the synthetic tidal regime, a calibrated hydrodynamic model of the Hunter River estuary developed by the Water Research Laboratory, UNSW Sydney, was utilised 60.
Digital elevation model and vegetation ground-truthing. A total of seven drone surveys over a 3-year period were conducted at the site to determine surface elevation through photogrammetry and vegetation development via multispectral data. Drone surveys were conducted in February and October 2017, April and August 2018, April and December 2019, and August 2020. For each drone survey, an eBee RTK survey-grade aerial drone was flown over the site and the data were processed using the Pix4D advanced photogrammetry software to create a digital elevation model. A total of six ground control points were distributed around the site during each survey to increase the accuracy of the drone survey. Using the same software, a high-resolution, geo-rectified ortho-mosaic was produced.
On-ground vegetation sampling was carried out to ground-truth the drone surveys for the presence/absence of saltmarsh vegetation. There was no saltmarsh at the start of the rehabilitation process. Nine field sampling events were undertaken over a 3-year period, in November 2017; February, June, and November 2018; March, July, and November 2019; and February and December 2020. Sampling was completed in the low, middle, and high marsh zones based on tidal inundation depth and frequency. In each zone, 25 random 1 m² quadrats were placed to measure vegetation species and cover, giving 75 quadrats for the entire site. Each quadrat location was marked with GPS coordinates and marker pegs for consecutive sampling events.
Synthetic tidal regime. The synthetic tidal regime was based on local estuary data and reference sites with unimpeded tidal flushing and known flushing conditions. The number of tides and inundation levels for the study site were estimated based on the relationship between tidal planes, topography, and vegetation hydrological requirements. Based on site-specific topographic conditions, the base water level (the first trigger level, i.e., the deepest water level that stays in the channel before flowing overbank) was determined. This base water level corresponded to the desired water level to be imposed at the site (e.g. the median level of saltmarsh determined from the vegetation elevation survey). The hydroperiod at the site under the synthetic tidal regime (exceedance probability) was set equal to the proportion of time water levels were higher than the equivalent level under the natural tidal regime at the reference sites. The number of tides per year reaching a certain peak (trigger) was estimated as the number of times water passes the trigger over the total number of tides in a year (~700).
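The exceedance statistics described above can be computed directly from a reference-site water-level record. The Python sketch below returns the fraction of time above a trigger and the number of tides per year exceeding it, counting each upward crossing of the trigger as one exceeding tide. The tidal amplitudes and spring-neap modulation are invented for illustration and do not reproduce the site's reported 2.8% figure.

```python
import numpy as np

def hydroperiod_stats(levels_m_ahd, trigger_m_ahd, record_years=1.0):
    """Fraction of time above the trigger, and tides per year exceeding
    it (one upward crossing of the trigger = one exceeding tide)."""
    above = levels_m_ahd > trigger_m_ahd
    frac_time = above.mean()
    upward_crossings = np.count_nonzero(~above[:-1] & above[1:])
    return frac_time, upward_crossings / record_years

# Synthetic one-year record at 15-min resolution: a semidiurnal tide
# (~705 cycles/year) modulated by a spring-neap envelope.
t = np.arange(0, 365 * 24, 0.25)                              # hours
envelope = 0.7 + 0.3 * np.sin(2 * np.pi * t / (14.77 * 24))   # spring-neap
levels = 0.6 * envelope * np.sin(2 * np.pi * t / 12.42)
frac, tides = hydroperiod_stats(levels, 0.3)
print(f"{100 * frac:.1f}% of time above trigger; ~{tides:.0f} exceeding tides/yr")
```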
Additional trigger levels can be created to increase control over inundation depths over time and can be used to adaptively manage the tidal signal at the site (avoiding the establishment of non-saltmarsh species that would otherwise occur naturally). Thereby, the tidal signal is artificially lowered/manipulated to generate a site-specific tidal regime within the wetland that matches the natural tidal hydroperiod observed at nearby reference sites (i.e. it excludes the tidal regime that would naturally establish onsite). The developed synthetic tidal regime is designed such that trigger levels are as close as possible to natural tide levels. Water levels immediately before and after the SmartGate hydraulic structure were measured using Solinst Levelogger Edge Model 3001 (Solinst Canada Ltd., Georgetown, Canada) data loggers, with an accuracy of ± 5 mm, to ensure desired trigger levels were achieved.
Animal-Assisted and Pet-Robot Interventions for Ameliorating Behavioral and Psychological Symptoms of Dementia: A Systematic Review and Meta-Analysis
Patients with dementia suffer from psychological symptoms such as depression, agitation, and aggression. One purpose of dementia intervention is to manage patients’ inappropriate behaviors and psychological symptoms while taking into consideration their quality of life (QOL). Animal-assisted intervention (AAI) and pet-robot intervention (PRI) are effective intervention strategies for older people with cognitive impairment and dementia. In addition, AAI and PRI have been shown to have positive effects on behavioral and psychological symptoms of dementia (BPSD). However, studies into the association between AAI/PRI and BPSD have elicited inconsistent results. Thus, we performed a meta-analysis to investigate this association. We analyzed nine randomized controlled trials on AAI and PRI for dementia patients published between January 2000 and August 2019 and evaluated the impact of AAI/PRI on agitation, depression, and QOL. We found that AAI and PRI significantly reduce depression in patients with dementia. Subsequent studies should investigate the impact of AAI and PRI on the physical ability and cognitive function of dementia patients and conduct a follow-up to investigate their effects on the rate of progression and reduction of symptoms of dementia. Our research will help with neuropsychological and environmental intervention to delay or improve the development and progression of BPSD.
Introduction
In 2016, it was estimated that 47 million individuals were living with dementia worldwide, and this figure is projected to increase to 113 million in 30 years. As a result, the public health burden of dementia is anticipated to significantly increase in the coming years [1]. Currently, the World Health Organization is striving to promote dementia prevention and increase dementia awareness by significantly investing in health and welfare and in active research into dementia [2].
Characteristics of the Included Studies
Nine studies met the inclusion criteria for this study, and their general characteristics are presented in Table 1. Only studies with a PEDro score of 4-7 and thus deemed to be of "fair" or "good" quality were included [34]. A total of 507 participants were included in the meta-analysis. In the included studies, dementia patients were subjected to various interventions involving living or robotic animals. Each study was systematically analyzed and compared with the rest of the studies. The control group was typically subjected to the conventional treatment program provided at the hospital or facility at which the study was conducted.
Meta-Analysis of the Effects of AAI and PRI on Agitation in Dementia Patients
In the meta-analysis of the effects of AAI and PRI on agitation in dementia patients, the effect size was 0.70 (95% confidence interval: p = 0.12, I² = 89%), which was considered a large effect size. Overall, AAI and PRI did not significantly affect agitation in dementia patients (Figure 1).
Meta-Analysis of the Effects of AAI and PRI on Depression in Dementia Patients
In the meta-analysis of the effects of AAI and PRI on depression in dementia patients, the effect size was −0.47 (95% confidence interval: p < 0.001, I² = 0%). Overall, AAI and PRI significantly reduced depression in dementia patients (Figure 2).
Meta-Analysis of the Effects of AAI and PRI on the QOL of Dementia Patients
In the meta-analysis of the effects of AAI and PRI on the QOL of dementia patients, the effect size was 0.13 (95% confidence interval: p = 0.34, I² = 0%), which was considered a small effect size. Overall, AAI and PRI did not significantly affect the QOL of dementia patients (Figure 3).
Publication Bias
When publication bias with respect to agitation was assessed, four studies were within the 95% confidence interval and were plotted to the left of the overall effect estimate (Figure 4A). When publication bias with respect to depression and QOL was assessed (Figure 4B,C), all plotted dots were within the 95% confidence interval.
Discussion
Currently, more than 90% of dementia patients suffer from BPSD [39], which poses major difficulties to both dementia patients and their caregivers. The type of BPSD varies according to dementia type, stage of the illness and various other factors. In particular, patients with frontotemporal lobar degeneration (FTLD) show more prominent behavioral variants such as disinhibition, impulsivity, aggression, and personality change than those with other types of dementia [40][41][42]. Another study demonstrated that patients with dementia with Lewy bodies (DLB) present hallucinations and aberrant motor behavior (AMB) more so than Alzheimer's disease (AD) patients [43,44]. An increased rate of anxiety, depression, and psychosis may occur in vascular dementia (VD) [40,43,45]. Depression and agitation are the most common symptoms affecting various dementia patients. Furthermore, it is known that agitation, apathy, disinhibition, irritability, and motor dysfunction become serious as dementia progresses. In particular, depression and anxiety become more severe in the moderate stage of dementia [46][47][48]. In the early stages of dementia, apathy mainly appears, and it is one of the first symptoms of the various forms of dementia. Apathy is a dangerous barrier that affects social interaction and activities of daily living owing to lack of interest, lack of enthusiasm, and apathetic responses to interpersonal communication [49]. These psychological and behavioral changes from the early stages of dementia can affect aspects of BPSD such as depression and anxiety more seriously as dementia progresses. Although BPSD, which varies depending on the type and progression of dementia, comprises a range of important symptoms that affect the quality of life, stress, and prognosis of dementia patients and their caregivers, there has been little interest in, and few studies of, nonpharmacological interventions to treat BPSD. Thus, we performed a meta-analysis to investigate the effect of AAI and PRI (nonpharmacological interventions using animals) on agitation, depression, and QOL in dementia patients [15,26,27].
The meta-analysis of the effects of AAI and PRI on agitation showed a medium effect size of 0.70 (Figure 1). Three studies that utilized AAI and two studies that utilized PRI were included in the meta-analysis. The studies that used AAI reported larger effect sizes than those that used PRI, but AAI and PRI were not found to significantly affect agitation overall [23,24,35]. Our result contrasts with the results of a previous study, which showed an alleviation of agitation. However, since the level of evidence of the randomized controlled trials (RCTs) in previous studies was very low, this may explain why opposite results were obtained. Accordingly, our results support the suggestion of previous studies that the level of evidence is low [32].
The meta-analysis of the effects of AAI and PRI on depression showed a medium effect size of −0.47 (Figure 2). Three studies that used AAI were included, and two reported that this intervention strategy reduced depression [23,24,35]. Two studies that used PRI were included, and these showed a medium effect size [36,37]. AAI and PRI were found to significantly reduce depression, which serves as evidence that AAI and PRI are effective at reducing depression in dementia patients (p < 0.001).
The meta-analysis of the effects of AAI and PRI on QOL showed a small effect size of 0.13, but the results were not statistically significant (p > 0.05) (Figure 3). Two studies used AAI, and both reported that these interventions improved QOL [11,24]. One study used PRI, and reported that this intervention did not significantly affect QOL [36]. The meta-analysis results showed that AAI and PRI did not significantly affect QOL, which supports previous findings [32].
The present study analyzed the effects of AAI and PRI on BPSD and found that these interventions did not affect agitation or QOL but significantly reduced depression. It is well known that, in the depressed brain in dementia, connectivity between the amygdala and emotion-control regions is reduced [50,51]. AAI and PRI provide an emotional effect and a sense of closeness to dementia patients [52], which may compensate for the reduced amygdala connectivity in dementia patients. In addition, AAI and PRI could have a positive effect on the hippocampus in the depressed brain through activities that require memory, such as checking the health of animals, walking, and feeding. On the other hand, agitation-related connectivity involves the orbital frontal cortex and anterior cingulate cortex, regions that have little association with the emotional support obtained through activities with animals. Thus, AAI and PRI did not show a significant effect on agitation. Although AAI and PRI have been effective in improving depression, it is difficult to dramatically relieve all BPSD symptoms. Moreover, it is known that BPSD is specifically related to patients' low QOL [53]. Therefore, in this study, it is considered that it was difficult for AAI and PRI to significantly influence QOL. A previous meta-analysis reported that AAI do not affect activities of daily living, depression, agitation, QOL, or cognitive function. In addition, a number of limitations are associated with interventions involving the use of living animals: patients may be fearful of or allergic to animals, animals may provoke falls in vulnerable patients, and animals may pose an infection risk to patients [32]. Moreover, there are a number of difficulties associated with managing animals: they need to be fed, produce feces, and may smell. However, it is clear that AAI can enhance the emotional wellbeing and QOL of dementia patients. Although robotic animals cannot evoke the same variety of emotions and sensations as living animals, they are easier to manage and could aid patients wherever needed. Subsequent studies should additionally examine the impact of living animals and robotic animals on the emotional wellbeing, cognitive function, and physical ability of dementia patients. Furthermore, patients should be followed up to investigate the efficacy of these interventions in slowing the progression of dementia.
Several studies have suggested that psychiatric symptoms such as depression and anxiety are associated with dementia and cognitive impairment [54][55][56]. Indeed, patients with dementia have an increased risk of major depression, and many suffer from anxiety [57,58]. Interestingly, amyloid-beta (Aβ) burden and tau-related pathology are known to worsen in Alzheimer-type dementia with depression [55,59]. In addition, depression and agitation are causative factors of sleep disorders, and they can promote the development of dementia by inhibiting Aβ clearance and inducing systemic inflammation [60][61][62][63]. Therefore, it is important to alleviate the psychological symptoms of dementia patients. In this study, we confirmed that AAI and PRI can relieve the psychological symptoms of dementia patients. Several mechanisms by which AAI and PRI may affect BPSD have been proposed. First, AAI and PRI affect hormone levels. Previous studies consistently reported that dog-raising people exhibit higher levels of oxytocin, a hypothalamic neuropeptide [64,65]. Oxytocin is closely related to cognitive function, depression, agitation, and social communication and has been proposed as a pharmacological intervention for neurobehavioral disorders in patients with prefrontal dementia [66,67]. In addition, it has been reported that animal owners exhibit reduced cortisol levels [68]. In AD, cortisol levels substantially increase and this steroid hormone elicits neurotoxic effects in the hippocampus and thus exacerbates Aβ pathology and contributes to cognitive impairment [69]. Therefore, AAI may improve BPSD by increasing oxytocin levels and reducing cortisol levels. Furthermore, the relationship between loneliness and depression is well established, and loneliness has been reported to promote Aβ deposition in the brain of AD patients [70,71]. In addition, loneliness is known to contribute to cognitive decline by lowering cognitive reserve [72]. Surprisingly, AAI is known to reduce the loneliness of residents in long-term care facilities [73]. Therefore, AAI and PRI may effectively reduce loneliness and depression in dementia patients.
Second, it is possible that AAI and PRI modulate brain structure and functional connectivity. Patients with dementia exhibit atrophy of the hippocampus and entorhinal cortex, areas of the brain associated with emotional and spatial memory [74]. In addition, late-stage dementia is associated with dysfunction of the amygdala and cerebral cortex [75,76]. Accordingly, patients with dementia have problems with language, reasoning, emotions, and social behavior. Furthermore, atrophy of the hippocampus and cerebral cortex affects the functional connectivity of frontotemporal and limbic circuits involved in depression and mood regulation [77]. Strikingly, emotion-related brain areas may be influenced by dementia patients' relationships and emotional stability. Indeed, improvements in executive function, social skills, mood regulation, learning, memory, and attention were noted in patients receiving cognitive rehabilitation therapy through various AAI [52]. In addition, in children with ADHD, AAI had a calming effect, increased motivation, improved cognitive function, and promoted socialization [78]. It is thought that interaction with a therapy animal enhances functional connectivity between the frontotemporal and limbic systems. Moreover, having to look after an animal and remember to perform tasks such as feeding it is thought to improve memory and learning ability and attenuate hippocampal and cortical atrophy. Social interaction is made possible through relationships and walking with animals, and depression may be alleviated through group meetings. Although the neurological mechanisms underlying the effects of AAI and PRI have not been fully elucidated, accumulating evidence suggests that AAI and PRI can effectively improve BPSD.
Although a number of previous studies have also investigated living-and robotic-animal-assisted interventions for patients with dementia, our study has a number of strengths [31][32][33]. First, we comprehensively investigated the effects of interventions involving living and robotic animals and, for the first time, compared the effects of AAI and PRI on BPSD. Second, we demonstrated trends in research in this field and confirmed that more research is now being conducted into interventions involving robotic animals for dementia patients. Third, two reviewers independently identified articles that met the inclusion criteria, and a high level of inter-rater agreement was noted. Fourth, we focused on BPSD and dementia. Although AAI and PRI are known to affect various symptoms of dementia patients, we conducted a literature search and meta-analysis focusing on BPSD. Finally, it is difficult to distinguish between mild cognitive impairment (MCI) and dementia patients unless a neurological examination is performed to definitively diagnose dementia. In this study, we aimed to confirm the effect of AAI and PRI in individuals who had been diagnosed with dementia, not MCI.
Nevertheless, our study has a number of limitations. One limitation of the meta-analysis is the small number of included studies, which shows that there is a lack of literature relating to AAI and PRI for dementia patients. In addition, we only selected studies published in peer-reviewed journals and did not include any grey literature, which may have introduced publication bias. Third, we were unable to identify specific subgroups of dementia patients who may benefit most from AAI and PRI. Finally, we searched only a few English language databases, so some relevant studies may have been missed.
Materials and Methods
A meta-analysis was performed to analyze and validate studies that investigated the effects of AAI and PRI on dementia patients.
Search Strategy
Studies into the effect of AAI and PRI on dementia patients published between January 2000 and August 2019 were analyzed. Data were collected from three electronic databases: the Cochrane Library, Embase, and PubMed (Figure 5). The search terms used were "Dementia" AND "animal-assisted therapy OR animal-assisted activity OR service animal programs OR animal OR robot". A total of 5364 studies were initially identified, and, after the exclusion of 4858 nonclinical trials, 506 studies underwent further analysis. Of these 506 studies, further exclusions were made: 1 because the original text was unavailable, 9 because they were written in a language other than English, 173 because they were not RCTs, 216 because they were duplicates, 92 because they were inappropriate for the purpose of our study or unsuitable based on a review of their titles and abstracts, and 7 because data were missing or disorganized. Ultimately, nine studies were included in the systematic review and meta-analysis.
Selection Criteria
Studies were included if they met all of the following criteria: (i) the study population comprised dementia patients, (ii) the experimental intervention was an AAI or PRI, (iii) the participants were randomized into groups, (iv) standardized evaluations were conducted to compare the effects of the intervention and control treatment, and (v) sufficient data were available to compute the effect size.
Study Selection and Data Extraction
Two reviewers (S.P. and A.B.) independently identified studies that met the inclusion criteria and performed data extraction. Disagreements between the reviewers were resolved by discussion. From each selected study, the following data were extracted: author, year of publication, mean age of the participants, study design (sample size, intervention type, follow-up duration, and frequency of intervention), and outcome measurement tools.
Qualitative Assessment of Study Methodology
One reviewer (S.P.) assessed the quality of the nine selected studies by assigning each a PEDro score (OTseeker, 2003), and the results were verified by the other reviewer (A.B.). The PEDro score ranges from 0 to 10, and the quality of a study is classified as "poor" (≤3), "fair" (4-5), "good" (6-8), or "excellent" (9-10) [34]. Studies deemed to be of "fair" to "good" quality (scores 4-7) were included in this analysis. Any disagreements between the investigators with respect to the qualitative assessment of the studies were resolved by discussion.
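For concreteness, a minimal sketch of the scoring bands and the inclusion window used here follows; the function names are ours and the snippet is purely illustrative.

```python
def pedro_quality(score: int) -> str:
    """Map a PEDro score (0-10) to the quality bands used in this review."""
    if not 0 <= score <= 10:
        raise ValueError("PEDro scores range from 0 to 10")
    if score <= 3:
        return "poor"
    if score <= 5:
        return "fair"
    if score <= 8:
        return "good"
    return "excellent"

def eligible(score: int) -> bool:
    # The review retained studies scoring 4-7 ("fair" to "good"); note that
    # a score of 8 is also "good" but falls outside the stated window.
    return 4 <= score <= 7
```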
Data Analysis
For each of the included studies, the following data were presented: name of first author/names of all authors, year of publication, age of participants, sample size, type of intervention/intervention method, duration and frequency of intervention, instruments used to assess primary outcomes, and PEDro score. To analyze the effects of AAI on dementia patients based on these characteristics, the mean, standard deviation, and sample size of the intervention and control groups were computed (Table 1). We examined whether the direction of the effect size was identical across studies and if not, made them equal by multiplying the mean by −1 [79].
Statistical Analysis
It is inappropriate to determine whether a fixed-effect or random-effects model should be employed using the heterogeneity statistic I² alone. In order to select an appropriate model and to assess statistical heterogeneity, the characteristics of the individual studies, the study design, the study subjects, the intervention methods, and the mean values of the intervention effects were examined [80].
Effect sizes were calculated to determine and compare the effects of AAI, PRI and the different interventions on activities of daily living, stress, depression, and mental health, using the sample size, mean, standard deviation, and tests of statistical significance for the experimental and control groups. According to the analysis criteria suggested by Cohen [81], 0.2 or less was considered a small effect size, 0.5 a medium effect size, and 0.8 or more a large effect size. The quantitative results of the meta-analysis were presented using forest plots. Publication bias was assessed by creating funnel plots. These were assessed by two reviewers, and any disagreements were resolved by discussion. The chi-squared test was performed to determine the significance of the Q statistic [82,83]. If the p-value of Q was less than 0.10, there was deemed to be significant statistical heterogeneity between the studies. A higher significance level was used since the Q statistic has low statistical power when only a small number of studies are included in a meta-analysis [84]. All statistical analyses were performed using Review Manager 5.3 software (RevMan; the Cochrane Collaboration, Oxford, UK).
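The analyses were run in RevMan 5.3; purely as an illustration of the quantities involved (standardized mean difference, inverse-variance pooling, Cochran's Q and I²), a minimal sketch follows. The function names, the fixed-effect pooling shown, and the example numbers are our simplification, not the exact RevMan procedure or data from the included studies.

```python
import numpy as np

def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) and its sampling variance."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var

def pool_fixed(effects, variances):
    """Inverse-variance pooled effect with Cochran's Q and I-squared (%)."""
    e, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    pooled = np.sum(w * e) / np.sum(w)
    q = float(np.sum(w * (e - pooled) ** 2))   # Cochran's Q
    df = len(e) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, q, i2

# Example with three hypothetical studies (means, SDs, group sizes).
d1, v1 = smd(10.2, 4.1, 30, 12.5, 4.3, 28)
d2, v2 = smd(9.8, 3.9, 25, 11.9, 4.0, 27)
d3, v3 = smd(11.0, 4.5, 20, 11.6, 4.2, 22)
print(pool_fixed([d1, d2, d3], [v1, v2, v3]))
```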
Conclusions
This study systematically reviewed, compared, and meta-analyzed the impact of AAI and PRI on agitation, depression, and QOL in dementia patients. Interventions involving both living and robotic animals were investigated. The meta-analysis revealed that AAI and PRI significantly reduced depression but did not affect agitation or QOL. Comparison of AAI and PRI showed that each method has its benefits and shortcomings and indicated that the two methods could potentially complement each other. Interventions involving living animals had a more beneficial effect on the emotional wellbeing of dementia patients than PRI. Although robotic animals overcome some limitations of living animals, they were not shown to alleviate BPSD in this study. In the future, more research should be conducted on the impact of living and robotic animals on the emotional wellbeing, cognitive function, and physical ability of dementia patients. Furthermore, we hope that AAI and PRI, which were found to effectively reduce depression in dementia patients, will be more commonly utilized in clinical practice, with follow-ups to confirm their effects.
Conflicts of Interest:
The authors declare no conflicts of interest.
Visualization of working fluid flow in gravity assisted heat pipe
A heat pipe is a device working with phase changes of a working fluid inside a hermetically closed pipe at a specific pressure. The phase changes of the working fluid from liquid to vapor and vice versa enable the heat pipe to transport a high heat flux. The article deals with the construction of, and the processes occurring in, a heat pipe during operation. An experimental visualization of the working fluid flow is performed with a glass heat pipe filled with ethanol. The visualization of the working fluid flow explains the phenomena occurring during heat pipe operation, such as working fluid boiling, nucleation of bubbles, vapor flow, vapor condensation on the wall, vapor and condensate flow interaction, and the thickness of the condensate film flowing down the wall.
Introduction
The thermosyphon (gravity assisted heat pipe) is shown in figure 1. The bottom is a heating (evaporation) section, the middle an adiabatic section, and the upper part a condensation section. A small quantity of liquid is placed in a tube from which the air is then evacuated and the tube sealed.
The lower end of the pipe is heated, causing the vapour to flow from the bottom to the condensation section, where it is cooled and condensed into liquid. The liquid then flows downwards along the wall as a very thin film. However, liquid plugs may form at high heat input. This entire cycle repeats in the heat pipe again and again.
The thermosyphon works by both conduction and convection mechanisms. Heat from the heat source is transferred to the thermosyphon through conduction, and inside the thermosyphon heat convection occurs [1]. Since the latent heat of evaporation is large, considerable quantities of heat can be transported with a very small temperature difference from end to end. Thus, the structure will also have a high effective thermal conductance.
One limitation of the basic thermosyphon is that in order for the condensate to be returned to the evaporator region by gravitational force, the evaporation section must be situated at the lowest point [2].
The amount of heat that can be transported by these systems is normally several orders of magnitude greater than pure conduction through a solid metal [3].
A thermosyphon needs only a temperature difference to transfer large amounts of heat, and thermosyphons are widely used in various areas of industry, such as chemical engineering, thermal engineering, thermal management systems for applications with limited space, and other heat recovery systems.
Flow regimes in heat pipe
In a heat pipe, a two-phase flow regime occurs. The two-phase regime is the simplest case of multiphase flow, which means the simultaneous flow of several phases. In the case of a heat pipe, it is a liquid-vapour flow.
The subject of two-phase flow has become increasingly important in a wide variety of engineering systems, including heat pipes, for their optimum design and safe operation. It is, however, by no means limited to today's modern industrial technology, and many multiphase phenomena still require better understanding. Two-phase flows obey all of the basic laws of fluid mechanics; the equations are merely more complicated or more numerous than those of single-phase flows. The techniques for analyzing 1-D flows fall into several classes, which can conveniently be arranged in ascending order of sophistication, depending on the amount of information needed to describe the flow. Perhaps the first step in treating the problem of flow regimes in a heat pipe is to break it up into various regimes, each governed by certain dominant geometrical or dynamical parameters. Part of the definition of a flow regime is a description of the morphological arrangement of the components, or flow pattern. An example of the complexity of two-phase flows is depicted in figure 2, which shows a sequence of flow patterns taking place in the evaporator of a heat pipe as more and more liquid is converted to vapour. The complexity of the problem arises because different parts of the evaporator require different methods of analysis, and the problem of how one regime develops from another has to be considered too [5]. Flow patterns and flow regimes have been mapped out by numerous authors for given apparatus and specific components. For example, Inoue [6] dealt with heated surface temperature fluctuations and flow patterns, while Liu [4], on the other hand, focused on the thickness of the liquid film in heat pipes.
The flow inside a heat pipe is more complex than the Nusselt model [7]. This can be demonstrated by examining a more realistic description of a liquid film in a heat pipe, as in figure 3. First, a temperature gradient exists, i.e. T_2 > T_1, and a thermocapillary force τ_T, expressed by Eq. (1), occurs due to the surface tension σ, thus increasing the film thickness [8]:

τ_T = ∇σ = (∂σ/∂T)∇T. (1)

Second, the counter-current flow of the vapour exerts a shear force τ_s on the film surface, where u_v and u_l are the vapour and liquid velocities (m s⁻¹), respectively. This shear may deter the downward flow of the film, resulting in a thicker film.
Last, under certain conditions, waves tend to appear on the film surface, causing thickness fluctuations above the average thickness δ_avg, which can be modelled by a sinusoidal function superimposed on the average film thickness.
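To give a feel for the magnitude of the thermocapillary term in Eq. (1), a short back-of-the-envelope sketch follows. The surface-tension coefficient for ethanol is a typical handbook value, and the axial temperature gradient is our rough estimate over the 150 mm pipe, not a measurement from this experiment.

```python
# Thermocapillary (Marangoni) shear stress, tau_T = (d sigma / dT) * dT/dx.
dsigma_dT = -8.3e-5           # ethanol, N m^-1 K^-1 (approximate handbook value)
dT_dx = (90.0 - 25.0) / 0.15  # assumed axial gradient over the 150 mm pipe, K/m
tau_T = dsigma_dT * dT_dx     # N m^-2 (Pa); the stress acts toward the cold end
print(f"thermocapillary stress ~ {tau_T:.3f} Pa")   # ~ -0.036 Pa
```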
Results
The pictures in figures 5 and 6 show the two-phase flow of the working fluid in the gravity assisted heat pipe at 28-31 seconds after start-up. The pictures in figure 5 show one complete cycle: from the moment only the vapour phase occurs in the heat pipe, through the creation of the condensate layer at the top of the heat pipe and the condensate flowing down from the condenser to the evaporator, until the start of the second cycle, when only the vapour phase is again present in the heat pipe.

The first picture shows the vapour phase in the heat pipe. The second picture shows the creation of the condensate layer of the working fluid in the top section of the heat pipe and the interaction of the vapour and liquid phases in the middle. In the third picture, the rising vapour condenses below the condensate layer, because the condensate layer has choked the heat pipe. Part of the condensed vapour is added to the condensate layer and part drops down the wall to the bottom of the heat pipe. At the same time, vapour is generated in the evaporator part of the heat pipe, and vapour can be seen flowing off the drop of condensate at the bottom. In the fourth picture, the vapour rises to the top of the heat pipe. The fifth and sixth pictures show the vapour breaking through the condensate layer at the top. All the collected condensate drops down the pipe wall. The fifth picture shows a thin film of condensate flowing down at the top and vapour flowing off the drop of condensate at the bottom. The sixth picture shows a thin film of condensate flowing down at the bottom. The seventh picture shows the vapour phase along the whole pipe. All the liquid phase is down in the evaporator, and part of the liquid phase has condensed under the top of the pipe. Figure 6 shows pictures of the second cycle of evaporation and condensation of the working fluid in the heat pipe. The first picture shows the vapour phase rising in the heat pipe. In the middle of the heat pipe, a small nebulosity can be seen.

The second picture shows the condensate layer in the middle. In the third picture, the collected condensate is pressed up by the vapour towards the top. The fourth and fifth pictures show that part of the condensate is pressed to the top of the heat pipe while part of the condensate flows down to the evaporator. The sixth picture shows that all the condensate collected in the condenser flows down to the evaporator. At the bottom, the interaction of the vapour and liquid phases can be seen. At the top, the next condensate layer is collecting. The last, seventh picture shows the start of a new cycle. Two condensate layers can be seen at the top of the heat pipe: the first, upper layer is created from all the condensate collected in the previous picture, and the second, lower layer is created from the next vapour phase that flowed up.
Conclusions
The results from the experiment illustrate what occurs inside the heat pipe during its operation. The pictures showed evaporation and condensation of the working fluid at the same time. This experiment is just the start of the work that we intend to carry out in the future. For better visualization, a high-speed camera will be used to record the flow regime in future work. Future work will also include pictures of the flow regime for various working fluids and various diameters of heat pipes.
Experiment
Visualization of the flow regime in the heat pipe was performed with a gravity assisted heat pipe made from glass. The parameters of the heat pipe are: inner diameter 10 mm, outer diameter 12 mm and total length 150 mm. The working fluid of the heat pipe was ethanol. The heat pipe was put into a hot water bath with a temperature of 90 °C, and the two-phase flow regime of the working fluid was recorded by an HD video camera. The visualization of the working fluid flow in heat pipes can be helpful in the solution or CFD simulation of the flow dynamics or heat and mass transfer of heat pipes [9,10].
Figure 5. First cycle of the evaporation and condensation.
Figure 6. Second cycle of the evaporation and condensation.
Graphene-maleic anhydride-grafted-carboxylated acrylonitrile butadiene-rubber nanocomposites
Ethylene-propylene grafted-maleic anhydride (EPR-g-MA) and pure maleic anhydride (MA) were separately used to compound carboxylated acrylonitrile butadiene-rubber (XNBR) together with reduced graphene oxide (G) to form nanocomposites, using a melt compounding technique. The G-sheets in the presence of MA (GA samples) or EPR-g-MA (GB samples) generally increased the physico-mechanical properties, including crosslinking density, tensile strength and thermal degradation resistance, when compared with the sample without MA or EPR-g-MA (GAO) and the virgin matrix. For the thermal degradation resistance measured by the char residue (%) using the thermal gravimetric analysis technique, GA1 (0.1 phr G and 0.5 phr MA) was 106.4% > XNBR and 58% > GAO (0.1 phr G), while GB1 (0.1 phr G and 0.5 phr EPR-g-MA) was 60% > XNBR and 22.2% > GAO, respectively. Although homogeneous dispersion of the G-sheets assisted by MA or EPR-g-MA was a factor, the strong bonding (covalent, hydrogen and physical entanglements) occurring in GA and GB was observed to be the main contributing factor for these property enhancements. Thus, these nanostructured materials have exhibited multifunctional capabilities and could be used for advanced applications, including high temperature (heat sinks), flame retardant, and structural applications.
Attempts to address concerns in elastomeric-GSD research have driven research interest in engineering functionalization of the elastomeric matrices and the GSD reinforcements [17,20,21,28]. For example, modification of the chemical structures of elastomers like NR by introduction of epoxide groups along the backbone to form epoxidated natural rubber (ENR) [17,29], and the conversion of NBR into carboxylated NBR (XNBR) or hydrogenated NBR [30,31,32] structures, have already been investigated. These tailored matrices are intended to enhance the compatibility of GSD and related fillers, in order to promote strong interfacial interactions, and to enhance curing and strength so as to yield a high quality final product [17,21,22]. Though functionalization of GSD via covalent and non-covalent techniques for homogeneous dispersion and effective bonding of the individual sheets to a suitable matrix has already been resolved successfully [21,28,33,34,35], several issues still remain due to complex process steps leading to high cost. A cost-effective and simple route of preparing polymer composites using traditional fillers such as silica, nanoclays and carbon blacks with suitable coupling agents, which resulted in improved filler dispersions and promoted good filler-matrix interactions, has been explored in the past [36,37,38]. Among these processing aids, the maleic anhydride (MA) coupling agent is very popular in preparing such vulcanizates. MA (C2H2(CO)2O) is an organic compound formed by the dehydration of maleic acid [36,37,38]. Earlier, Lopez et al. [36] successfully used MA as a compatibilizer to improve the interfacial adhesion between hydrophilic flax fibres and hydrophobic polymeric matrices. Chow et al. [37] also observed a significant increase in the mechanical properties of Polyamide (PA6)/Polypropylene (PP)-organoclay composites when an ethylene-propylene rubber (EPR)-g-MA compatibilizer was incorporated into the mixture. Recently, Azizli et al. [39] studied the compatibilizer effects of EPDM-g-MA and ENR50 in XNBR/EPDM blends containing different types of nanoparticles (cloisite clays, silica and carbon blacks), and the composites showed improved physico-mechanical properties compared to the ones without compatibilizer or those void of nanoparticles.
The emergence of GSD has also resulted in great research interest in elastomer-GSD composites involving MA [20,40,41]. In spite of these efforts, elastomer-GSD research involving MA is still new, and more work needs to be done to address issues relating to the coupling effect in the reinforcing and curing mechanism of GSD in elastomeric matrices, so as to achieve their full potential for advanced applications.
Earlier, the effects of G and GO in a polar NBR and a non-polar EPDM matrix were systematically and extensively explored. Weak chemical interactions between GO/G-sheets and rubber matrices were among the findings reported by Mensah et al. [14,22,26,27]. Presently, Mensah's group seeks to study the effect of MA (both pure MA and EPR-g-MA) on the physico-mechanical properties of XNBR in the presence of reduced graphene oxide (G). Various compositions of XNBR-MA-G (GA samples) and XNBR-EPR-g-MA-G (GB samples) were prepared using a melt mixing technique. The characterizations include the state of filler dispersion in the matrix, rheological studies by rheometer, crosslinking density analysis, bound rubber tests, tensile properties and thermal degradation analysis. The results obtained from these tests are clearly presented. Thus, this study gives an insight into how to improve interactions between GSD or related nanoparticles and elastomeric matrices with compatibilizers.
Materials

The ethylene-propylene-grafted-maleic anhydride rubber (EPR-g-MA) was supplied by the Intelligent Polymer Nano Lab, Polymer Nanotechnology Department, Jonbuk University, South Korea. The rest of the curatives, zinc oxide (ZnO), stearic acid (SA), sulfur (S), tetramethyl thiuram disulfide (TMTD), and N-cyclohexyl 2-benzothiazole sulfonamide (CZ), were all obtained from Infochems Company Ltd (South Korea). The compound formulations, expressed as parts per hundred of rubber (phr), with their corresponding codes, are listed in Table 1.
Rubber compounding
The rubber compounding was done using a kneader (model: QPBV-300, QMESYST, South Korea) at 90 °C and 30 rpm. Initially, the rubber was masticated in the kneader for 1 min; with the exception of sulphur, the other processing ingredients were simultaneously added and mixed for about 2 min. The MA or EPR-g-MA was added and mixed for about 1 min. Later, the G-sheets were incorporated and mixed for an additional 1 min. The compound was removed and passed over a two-roll mill (QM300, QMESYSTEM) repeatedly for about 10 min, with the sulphur added, and then sheeted out. Rectangular sheets of samples (15 cm × 15 cm × 2 mm) were moulded using an electrical hot press machine (model: TO-200, TESTONE, South Korea) at a pressure of 25 tons at 160 °C. The cured composites were allowed to cool overnight and then cut into standard shapes and sizes for characterization. The tests done include cure rheology, bound rubber content, crosslinking density studies, tensile properties, and thermal degradation analysis.
Particle size distribution of G using DLS
20 mg of the as-prepared, large G-sheets were weighed into a falcon tube and 40 mL of distilled water (DW) was added to the tube. The mixture was sonicated for half an hour at room temperature. After the G-sheets were dispersed completely in the DW, the resulting mixtures were probe sonicated (power = 450 W, frequency = 20 kHz) for 3 h at 40% amplitude. The solution (probe sonicated for 4 h) was then centrifuged at 12,000 rpm several times (for ~1 h) until no precipitate settled down. The average sheet size for the second solution (4 h probe sonication) was observed to be 113 nm. The temperature of the G solution was maintained using an ice bath during sonication.
Transmission electron microscopy (TEM) analysis of G
A solution of G in N,N-dimethylformamide (DMF) was gently dropped onto TEM 200-mesh copper grids and allowed to dry. TEM images of the nanoparticles were taken using a JEOL JEM-2100 instrument.
Morphological studies by high optical microscopy (HOM)
The structure, state of dispersion and topological information of the XNBR-G compositions prepared with the two different MA types were observed using a high optical microscopy (HOM) technique at the Material Science Lab, University of Ghana. Representative samples of the compositions were cut into dimensions of about 1 cm × 1 cm × 0.1 cm. The results obtained are discussed below.
Scanning electron microscopy analysis of G and rubber compounds
The G powders were coated with platinum via sputtering and their morphologies were then observed with a field emission SEM (JEOL, JSM 599, Japan) at CBNU. The neat XNBR and the composites (XNBR filled with G-sheets assisted by the different types of MA and EPR-g-MA) were likewise coated with platinum via sputtering and their morphologies observed with the field emission SEM.
Analysis of bound rubber content
The bound rubber content in the unvulcanized nanocomposites was studied by extracting the unbound rubber in toluene. For the extraction of unbound rubber, ~1.0 g of each uncured composition was cut into small pieces, wrapped in a cotton bag and immersed in about 300 mL of toluene at room temperature for ~7 days. The solvent for each composition was changed every two days over the 7 days. On the 7th day, the weights of the gel compositions together with the cotton were noted. The contents of the cotton bags were then dried in an oven at about 80 °C for ~6 h, air-dried for ~3 h and reweighed. The bound rubber content R_b (%) in each nanocomposite was calculated using Eq. (1), as established in the literature [42,43]:

R_b (%) = [W_fg − W_t m_f/(m_f + m_r)] / [W_t m_r/(m_f + m_r)] × 100, (1)

where R_b (%) is the content of bound rubber, W_fg is the weight of filler and gel, and W_t is the weight of the nanocomposite specimen. The quantities m_f and m_r are the phr of filler (G) and XNBR rubber in each composition.
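As a convenience, Eq. (1) can be written as a one-line computation; the sketch below simply restates the equation, with function and argument names of our choosing.

```python
def bound_rubber_percent(w_fg, w_t, m_f, m_r):
    """Bound rubber content R_b (%) from Eq. (1).

    w_fg : weight of filler plus gel after extraction (g)
    w_t  : initial weight of the uncured compound (g)
    m_f, m_r : phr of filler and rubber in the formulation
    """
    filler_in_sample = w_t * m_f / (m_f + m_r)   # filler mass in the specimen
    rubber_in_sample = w_t * m_r / (m_f + m_r)   # rubber mass in the specimen
    return 100.0 * (w_fg - filler_in_sample) / rubber_in_sample
```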
Vulcanization properties of compounds by MDR
The curing properties of the separate vulcanizates prepared with the two different MA types in the presence of G within the XNBR matrix were studied using an oscillating-die rheometer (MDR, model: PDR2030, TESTONE Ltd., South Korea) operating at 160 °C. The various curing parameters, including maximum torque (M_H), minimum torque (M_L), change in torque (ΔM = M_H − M_L), onset of cure time (t_s2), optimum cure time (t_90), and cure rate index (CRI = 100/(t_90 − t_s2)), were extracted from the rheo-curves of the various compounds, analysed and presented.
Crosslinking density by equilibrium swelling test
To estimate the network density of the vulcanizates, representative samples were equilibrated in toluene at room temperature for about 48 h. The swelling degree of the samples was calculated using Eq. (2),

Swelling degree (%) = (W_sw − W_dr)/W_i × 100, (2)

where W_i is the weight of the rubber sample before immersion in the solvent, and W_sw and W_dr are the weights of the sample in the swollen state and after drying in an oven at about 80 °C for 2 h, respectively. The crosslinking density (N) was calculated using the Flory-Rehner equation, Eq. (3),

N = −[ln(1 − V_2) + V_2 + χ_1 V_2²] / [V_1 (V_2^(1/3) − V_2/2)], (3)

where V_2 is the volume fraction of polymer in the swollen gel at equilibrium, V_1 is the molar volume of the toluene used (106.3 mL/mol) and χ_1 (0.374) is the polymer-solvent interaction parameter determined from the Bristow-Watson equation, Eq. (4) [44],

χ_1 = β + V_1 (δ_s − δ_p)² / (RT), (4)

where β is the lattice constant, usually taken as 0.34, V_1 is the molar volume of the solvent (106.3 mL/mol), R is the universal gas constant, T is the absolute temperature, and δ_s and δ_p are the solubility parameters of the solvent and polymer matrix, respectively. The solubility parameters of the elastomer and the solvent toluene were 8.4 and 9.29 (cal/cc)^1/2 [45], respectively.
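For reference, Eqs. (3) and (4) can be combined in a short sketch (our naming throughout). Note that the unit conventions for R and the solubility parameters must match; depending on the convention used, the computed χ_1 may differ somewhat from the 0.374 quoted above, so treat the snippet as illustrative.

```python
import math

def chi_bristow_watson(v1, delta_s, delta_p, temp_k, beta=0.34):
    """Polymer-solvent interaction parameter chi_1, Eq. (4).
    v1 in mL/mol; solubility parameters in (cal/cc)^0.5; temp_k in K."""
    R = 1.987  # cal mol^-1 K^-1, consistent with (cal/cc) parameters
    return beta + v1 * (delta_s - delta_p) ** 2 / (R * temp_k)

def crosslink_density(v2, v1, chi):
    """Flory-Rehner crosslink density N (mol cm^-3), Eq. (3)."""
    return -(math.log(1.0 - v2) + v2 + chi * v2**2) / (
        v1 * (v2 ** (1.0 / 3.0) - v2 / 2.0))

# Example with the solvent/polymer values quoted in the text and an
# assumed equilibrium polymer volume fraction V_2 = 0.25.
chi = chi_bristow_watson(106.3, 9.29, 8.4, 298.0)
print(chi, crosslink_density(v2=0.25, v1=106.3, chi=chi))
```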
Tensile test
The tensile properties were measured for the vulcanizates based on the ASTM D412 standard using a QM100S machine (QMESYSTEM, South Korea) at a cross-head speed of 500 mm/min and at 25 °C. Three samples were tested for each composition and the results averaged.
Thermal gravimetric analysis (TGA)
TGA was done to test the thermal degradation resistance behaviour of representative samples of XNBR, GAO, GA and GB, using an SDT Q600 (TA Instruments). The conditions used for this test include a nitrogen atmosphere, an equilibration temperature of ~25 °C and a heating rate of 10 °C/min up to a maximum temperature of 800 °C.
Results and discussion
4.1. Morphology, structure and particle size of G-sheets

Detailed characterizations of graphene oxide (GO) and G using Fourier transform infrared spectroscopy, wide-angle X-ray diffraction (WAXD), Raman, and UV spectra have been reported in our previous work [22,26]. For the purposes of the current work, we present SEM, TEM and DLS analyses of the prepared G-sheets, as shown in Figure 1(a-c), respectively.
The extensive oxidation, exfoliation and further reduction of GO into G-sheets by hydrazine leave the nanostructured materials amorphous with imperfections, characterized by the wrinkled sheets in the SEM image in Figure 1a. The TEM image in Figure 1b also shows a similar structural deformation of wrinkled and folded transparent sheets. These amorphous and wrinkled structures occur so that the 2D-nanomaterial can attain thermodynamic stability [27], while they may offer advantages in confining and restricting the mobility of the polymer chains, thereby improving the physico-mechanical properties of the resulting composites [21,46]. Using the TEM scale bar, the G-sheets show an estimated thickness between 0.83 and 2 nm. Also, Figure 1c demonstrates the particle size distribution of the G-sheets measured by DLS. As shown in Figure 1c, the hydrodynamic diameter of the G-sheets is about 113 nm. The G nanoparticles were generally observed to be suitable as reinforcements to form composites with the XNBR matrix for further studies.
Morphology and state of dispersion of fillers
To understand the state of dispersion of the G-sheets within the XNBR matrix in the presence of MA or EPR-g-MA, high optical microscopy (HOM) and SEM techniques were used, as shown in Figure 2(a-h). Figure 2(a-d) presents HOM images of the virgin XNBR, GAO, GA1 and GB1. The XNBR (Figure 2a) shows a very smooth surface, while the composites (Figure 2(b-d)) show rough and non-uniform surfaces. The dark phase regions are dispersed G-sheets in the XNBR matrix, with particle sizes ranging from a few nanometres to above 100 μm.
To further evaluate the dispersion of the G-sheets in the XNBR composites, SEM images of XNBR, GAO, GA1 and GB1 are depicted in Figure 2(e-h). It can be seen that the cryo-fractured XNBR matrix shows a smooth surface structure. From Figure 2f-h, the fractured surfaces become rough and uneven due to the G-sheets added. The presence and strong bonding of MA or EPR-g-MA together with the G-sheets in XNBR make it difficult to break pieces of these representative samples by the cryogenic fracturing process for SEM observation. This difficulty induces rough features at the observed surfaces. Such a rough morphological nature of rubber composites is reported to be an indication of effective load transfer at the filler-rubber matrix interfaces, leading to improvement in mechanical properties [47,48,49]. It is seen that there are no observable agglomerates across the whole XNBR surface, and this indicates a good dispersion and distribution of the G-sheets within the XNBR matrix.
Vulcanization mechanism
The onset of cure time and optimum curing time (t_s2 and t_90) and the cure rate index (CRI) of the various samples are compared in Figure 3(a-c), respectively. Clearly, the virgin matrix (XNBR) showed the fastest t_s2 when compared to the remaining samples. The GAO sample experienced a slight delay in t_s2 due to the incorporation of the G-sheets. Delays in t_s2 may be due to the increase in initial viscosity of the compounds, which delays melting of the curatives needed to start the crosslinking reaction [50]. Upon addition of MA or EPR-g-MA in the presence of the G-sheets, the t_s2 values increased further for the corresponding samples when compared with XNBR and GAO. Azizli et al. [51] recently observed that increasing the content of GO in a silicone rubber (PVMQ)/XNBR blend in the presence of XNBR-g-GMA as a compatibilizer decreased the scorch time (t_s2) by 24%. Clearly, there are inconsistent reports on the reasons for delays in scorch time (t_s2) for rubber vulcanizates, which may be the result of different factors such as the type of matrix, filler, processing aids and conditions used. However, delays in t_s2 may be useful, as they may allow enough time for both the matrix and the vulcanization ingredients to melt, in order to ensure stable crosslinking reactions that yield the desired final products [22,26,50].
The t_90 and CRI in Figure 3(b and c) for the pure matrix (XNBR) outperformed those of the composites. Thus, it can be speculated that the faster curing reactions (t_s2, t_90 and CRI) for XNBR could be linked to the presence of fewer interactions, which include the physical polar-polar chain entanglements (XNBR-XNBR) and the main chain or primary crosslinking reactions between the unsaturated groups (C=C) of XNBR and the monomeric polysulfide structures (Bt-S-S_x-S-Bt). These Bt-S-S_x-S-Bt structures are formed by the curatives: sulphur (S), accelerator and activators [14]. Bt is an organic radical that is derived from the accelerator (benzothiazyl) during the crosslinking reaction [50,52]. Another reason for the faster curing of XNBR is the absence of G, which is reported to be a scavenger for curing aids [25]. When compared, the G-sheets delayed the crosslinking reaction of GA1-GA3 more than those of GB1 and GB2. This could be explained in terms of the high melting point of MA, the higher number of interactions (mostly polar-polar interactions) and the higher bulk viscosity of the GA nanocomposites.
During the vulcanization process, the pure MA, with its high melting point, requires enough time to melt before engaging in crosslinking reactions. Also, MA and G-sheets react through grafting (MA-g-G) and are later grafted to the main chain (XNBR-g-G-MA-g-G-XNBR); other interactions, such as hydrogen bonding (polar-polar interactions) between the nitrile groups of XNBR and the OH of G (CN(δ−)-H(δ+)-O), physical entanglement of the polar-polar chains of XNBR (XNBR-XNBR) and the reaction between the carboxylic groups (HO-C=O) of XNBR and those decorating the G-sheets, are all possible [14,27]. These interactions are illustrated in Figure 4(a&b). The contribution from these secondary interactions, in addition to the primary crosslinking reactions of the main chain (XNBR-S-S_x-XNBR), could be the main reason for the delays seen in the curing times (t_s2 and t_90) and the CRI (Figure 3c) when compared with the GB vulcanizates. This observation contradicts the idea that graphene sheets act as scavengers of cure accelerators [25]. Meanwhile, an advantage of these numerous interactions is the creation of tight network structures with high viscosities. Similarly, as depicted in Figure 5(a&b), many interactions, both primary and secondary, are all possible in the GB vulcanizates; however, on heating, EPR-g-MA as a rubber may exhibit relatively faster melting behaviour (low viscosity) and engage in the crosslinking reactions sooner than its counterpart MA. Besides, interactions in the GB vulcanizates generally include a mixture of polar interactions such as XNBR-S-S_x-XNBR and CN(δ−)-H(δ+)-O, physical chain entanglements among the polar matrix (XNBR-XNBR) and chain entanglements among the saturated (C-C) non-polar matrix (EPR-EPR and EPR-g-MA-EPR). There may also be physical and chemical interactions among their blends: polar-non-polar interactions (EPR-g-MA-g-G-XNBR) and their physical chain entanglements (EPR-XNBR). These heterogeneous interactions (mainly physical) within the GB nanocomposites may not promote the effective, tighter structures associated with high viscosities found in the GA samples. Hence, it was easier for the crosslinking reaction to ensue in the GB samples. This could account for their faster crosslinking reaction times (t_s2 and t_90) and cure rate index (CRI). It was observed that EPR-g-MA is the better additive for promoting faster curing of the XNBR matrix, especially in the presence of G-sheets, as compared to the pure MA [41,53].
Curing viscosity, density and mechanical strength indices
The effects of MA, EPR-g-MA and G-sheets on the minimum torque or viscosity index (M_L) of XNBR are compared in Figure 6(a). The addition of 0.1 ph G-sheets to XNBR (sample GAO) raised M_L by more than 7% relative to pure XNBR. When MA was also added, further increments in M_L of over 41% for GA1 and 44% for GA2 were recorded relative to the pure XNBR matrix, corresponding to M_L values 58% and 55% higher than those of the EPR-g-MA-containing samples (GB1 and GB2). The increase in M_L can be linked to the rise in viscosity caused by the numerous interactions (a higher crosslinking density effect) restricting the mobility of the XNBR chains [22,26,50]. Interestingly, at the higher MA loading of 1 ph (sample GA3), M_L declined, which could indicate over-curing resulting in degradation of the network structures [54]. The high viscosity index (M_L) results are consistent with the slow curing observed for the GA vulcanizates.
The crosslinking density and mechanical strength indicators (M_H and ΔM) of the vulcanizates are compared in Figure 6(b) and Figure 6(c). The polar nature of XNBR is known to promote effective crosslinking, and adding the G-sheets and MA significantly increases the total network density. At high MA loading this might have resulted in over-curing and reversion, where the network begins to break down, as earlier observed in ENR-SBR containing 3 ph of MA [55]. An opposite trend can be seen when MA was substituted with EPR-g-MA: increasing the EPR-g-MA content tends to increase M_H and ΔM slightly, although these properties remained comparable to those of the XNBR and GAO compounds. It should be noted that the torque values (M_L, M_H and ΔM = M_H − M_L) generated from cure rheometry depend on several factors, including the processing conditions, filler-filler and polymer-filler networks, and chain-chain interactions [56,57]. Nevertheless, well-developed network structures can generally boost the physico-mechanical properties of a rubber composite [21,26]. To understand the reinforcing mechanisms that caused the differences in M_L, M_H and ΔM = M_H − M_L between the MA and EPR-g-MA compositions, further analyses, namely bound rubber content and chemical crosslinking density by equilibrium swelling, were carefully examined and are reported below.
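As a quick illustration of how these torque indicators are derived, the sketch below computes ΔM = M_H − M_L and the percent change of M_L against the neat matrix, mirroring the comparisons made above; all torque values are hypothetical placeholders, not the measured data:

```python
# Torque-based cure indicators: Delta M = M_H - M_L, and percent change of
# the viscosity index M_L relative to the neat matrix (hypothetical values).

def delta_m(m_h: float, m_l: float) -> float:
    """Crosslinking/strength indicator: maximum minus minimum torque."""
    return m_h - m_l

def pct_increase(sample: float, reference: float) -> float:
    """Percent increase of a sample value over a reference value."""
    return 100.0 * (sample - reference) / reference

m_l_xnbr, m_l_composite = 1.8, 2.6            # hypothetical M_L values (dN.m)
print(pct_increase(m_l_composite, m_l_xnbr))  # ~44.4 % higher M_L
print(delta_m(m_h=9.5, m_l=m_l_composite))    # Delta M = 6.9 dN.m
```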
Bound rubber content and crosslinking density
Bound rubber content helps in understanding the initial formation of network structure (gel) in a nanocomposite. The bound rubber content R_b (%) depends primarily on rubber-filler interactions in the uncured state of the composites [14]. The bound rubber contents of the various compositions (filled and unfilled) are shown in Figure 7a. The R_b (%) of the gum was significantly lower owing to the absence of G-sheets or a coupler (MA or EPR-g-MA), but on addition of the G-sheets (sample GAO) it rose by more than 50% relative to XNBR. In the presence of MA or EPR-g-MA together with G-sheets, a general increase in R_b (%) was observed from GA1 to GA3 and from GB1 to GB2 in comparison with pure XNBR. This increase may be due to strong interactions, such as the hydrogen bonding in XNBR-G assisted by MA or EPR-g-MA [20,51,58]. However, the GB samples showed slightly higher R_b (%) than their counterparts (GA1-GA3). In the uncured state, the GB samples (a dual-matrix phase of EPR and XNBR chains) may have a higher tendency to physically entrap or adhere to the G-sheets and the curatives within their structures than the GA (single-matrix) samples.
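For readers unfamiliar with the measurement, bound rubber content is commonly evaluated from solvent extraction of the uncured compound. A minimal sketch using the expression often cited in the filled-rubber literature; the authors' exact protocol may differ, and all inputs are hypothetical:

```python
# Bound rubber content from solvent extraction of the uncured compound,
# using the expression commonly applied to filled rubbers:
#   Rb(%) = (W_fg - W * mf/(mf+mr)) / (W * mr/(mf+mr)) * 100
# (hypothetical inputs; the authors' exact procedure may differ)

def bound_rubber_pct(w_fg: float, w: float, m_f: float, m_r: float) -> float:
    """w_fg: weight of filler + gel after extraction; w: specimen weight;
    m_f, m_r: masses of filler and rubber in the compound recipe."""
    filler_frac = m_f / (m_f + m_r)
    rubber_frac = m_r / (m_f + m_r)
    return 100.0 * (w_fg - w * filler_frac) / (w * rubber_frac)

print(bound_rubber_pct(w_fg=0.12, w=0.50, m_f=0.1, m_r=100.0))  # ~23.9 %
```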
After vulcanization, it was clear that when MA was added to the G-sheets mixture, the crosslinking density N (mol cm^-3) increased significantly compared to the composites containing EPR-g-MA, as shown in Figure 7b. Two mechanisms are known to increase N: better dispersion of the fillers within the matrix and/or strong interfacial filler-matrix interactions [22,26,50]. Here, the numerous and effective matrix-G-sheet bonds assisted by MA were responsible for the higher N (mol cm^-3) in the GA samples than in the GB samples, whose interactions were observed to be mostly physical, as depicted earlier in Figure 4(a&b) and Figure 5(a&b) respectively. Previously, G and GO were found to form denser network structures in polar NBR than in its non-polar EPDM counterpart [26]. The N (mol cm^-3) test confirms that incorporating G-sheets into XNBR in the presence of MA or EPR-g-MA generally enhances the dispersion of the G-sheets and improves the interfacial interactions between XNBR and the G-sheets.
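Equilibrium-swelling crosslink densities of this kind are typically computed with the Flory-Rehner equation; whether the authors used exactly this form is an assumption on our part, and the inputs below are illustrative only:

```python
import math

# Crosslink density N (mol cm^-3) from equilibrium swelling via the
# Flory-Rehner equation (assumed form; illustrative inputs):
#   N = -[ln(1 - Vr) + Vr + chi*Vr^2] / [Vs * (Vr^(1/3) - Vr/2)]

def flory_rehner(v_r: float, chi: float, v_s: float) -> float:
    """v_r: rubber volume fraction in the swollen gel; chi: polymer-solvent
    interaction parameter; v_s: molar volume of the solvent (cm^3 mol^-1)."""
    numerator = -(math.log(1.0 - v_r) + v_r + chi * v_r ** 2)
    denominator = v_s * (v_r ** (1.0 / 3.0) - v_r / 2.0)
    return numerator / denominator

# Toluene (Vs ~ 106.3 cm^3/mol) with an illustrative chi and swelling ratio:
print(flory_rehner(v_r=0.25, chi=0.39, v_s=106.3))  # ~2.5e-4 mol cm^-3
```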
Tensile properties
The tensile strength of the composites depends mainly on several factors: (i) the surface chemistry of the G-sheets, (ii) the grafting efficiency of MA or EPR-g-MA, (iii) the interfacial interactions between the G-sheets and the XNBR chains (such as XNBR-S_x-G-g-MA-g-XNBR, XNBR-G, and XNBR-S_x-EPR-g-MA-g-G), and (iv) several other interactions among the individual G-sheets (such as G-S_x-G or G-G) [20,51,58]. Figure 8(a-d) shows the tensile properties of XNBR and its nanocomposites. Generally, addition of the G-sheets to the XNBR matrix improved the tensile strength compared to the virgin matrix. When MA was incorporated, the strength increased further for GA1 and GA2 before declining at GA3 (1 ph MA). At this high MA loading, it is suspected that an extreme network density was created in the GA3 matrix, which reduced its viscoelasticity, making it brittle-like and easy to fracture, hence the lower tensile strength recorded.
The GA2 composite (1 ph G and 0.5 ph MA) exhibits the highest tensile strength, indicating that it contained the desired amount of network structure to transfer stress across the interface between the G-sheets and the matrix [21,23,48]. In comparison, GB1 showed over 8% higher tensile strength than its counterpart GA1, while GA2 attained over 52% higher strength than GB2. The GA samples therefore generally showed higher tensile strength than the GB samples. On the other hand, the GB samples broadly exhibited higher elongation at break (%) than the GA samples, particularly at the low G-sheet loading level of 0.1 ph (Figure 8b). For example, while GA2 showed about 6% higher elongation at break (%) than GB1, GB1 obtained ~31% higher elongation at break (%) than GA1. This increase may predominantly be due to weaker filler-matrix interactions (physical interactions), giving high chain mobility and lower stiffness, rather than chemical links (Figure 7c). On computing the reinforcing factor (M300/M100), as shown in Figure 7d, it was interesting to see that GAO exhibited the highest M300/M100, which may mostly be related to the high number of physical interactions (G-S_x-G or G-O^δ−···H^δ+-G) within GAO. These weak interactions are easily broken at higher strain via the Payne effect [59], hence the lower ultimate tensile strength (UTS) recorded, as compared to the compounds with high UTS (GA1, GA2 and GB1), whose reinforcing mechanisms were mainly controlled by chemical networks. Clearly, the addition of MA or EPR-g-MA in the presence of G-sheets benefited the pure matrix by enhancing the filler-matrix networks of the GA and GB samples. The present work therefore yields samples with improved tensile properties compared to matrices filled with even higher contents of functionalized GDS, as summarised in the rubber-GDS reviews [21,23].
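The reinforcing factor used here is simply the ratio of the tensile stresses at 300% and 100% elongation; a short sketch with hypothetical moduli:

```python
# Reinforcing factor M300/M100: the ratio of stress at 300 % elongation to
# stress at 100 % elongation; a higher ratio signals stronger strain-
# dependent reinforcement (hypothetical moduli in MPa).

def reinforcing_factor(m300_mpa: float, m100_mpa: float) -> float:
    return m300_mpa / m100_mpa

print(reinforcing_factor(m300_mpa=4.2, m100_mpa=1.4))  # 3.0
```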
Thermal degradation properties
The weight residue (%) as a function of temperature for XNBR, GAO and the GA and GB compositions is presented in Figure 9(a-d); Figure 9(b) and Figure 9(d) show magnified views for clarity. The weight residues (%) and the corresponding decomposition temperatures T_i and T_max, representing the initial (10% degradation) and maximum (90% degradation) decomposition of a composition, were used to characterize the extent of thermal degradation of the various compounds, as summarised in Table 2. In Figure 9a (Figure 9b for clarity) and Figure 9c (Figure 9d for clarity), the GA and GB samples broadly shielded the XNBR matrix from decomposition, as reflected in higher weight residues (%) compared with neat XNBR and GAO. Interestingly, a higher content of maleic anhydride further increased the weight residue (%) of the composition (see GA3); however, increasing the content of the G-sheets did not have the same effect (see samples GA2 and GB2 in Table 2). A trend can also be observed in Table 2: degradation shifts from lower (T_i) to higher temperatures (T_max). The minimum and maximum decomposition of XNBR occurred at 382 and 532 °C respectively, and upon addition of G-sheets or MA and EPR-g-MA, the T_i and T_max of the GAO, GA and GB samples increased. Thus, more heat was required to decompose these samples compared to XNBR.
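Extracting T_i and T_max as defined above (temperatures at 10% and 90% weight loss) amounts to interpolating the TGA residue curve. A sketch on a synthetic curve; the sigmoid below is illustrative, not the measured data:

```python
import numpy as np

# Ti (10 % weight loss) and Tmax (90 % weight loss) from a TGA residue
# curve by linear interpolation; the sigmoid below is synthetic.

def decomposition_temperature(temps_c, residue_pct, loss_pct):
    """Temperature at which the sample has lost `loss_pct` % of its weight."""
    target_residue = 100.0 - loss_pct
    # residue decreases with temperature, so reverse both arrays for np.interp
    return float(np.interp(target_residue, residue_pct[::-1], temps_c[::-1]))

temps = np.linspace(100, 700, 601)                    # deg C
residue = 100.0 / (1.0 + np.exp((temps - 450) / 40))  # synthetic decay curve
t_i = decomposition_temperature(temps, residue, loss_pct=10)
t_max = decomposition_temperature(temps, residue, loss_pct=90)
print(f"Ti ~ {t_i:.0f} C, Tmax ~ {t_max:.0f} C")      # ~362 C and ~538 C
```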
Although some scatter can be observed in the weight residue (%), T_i and T_max of the filled compositions, the GA samples generally showed higher decomposition resistance than the GB samples. The much tighter structures, associated with the higher crosslinking density N (mol cm^-3) and viscosity introduced into the GA samples by MA-g-G-sheets, might be the controlling factor for this enhancement. The current results outperform those obtained in our previous work [22], included in Table 2, where NBR was reinforced with 1 ph of GO in the absence of MA or EPR-g-MA. They also outperform the thermal degradation resistance of the rubber-GDS composites reported by other researchers [60,62,64], as presented in Table 2. It can therefore be concluded that the combined effect of the physical presence of the G-sheets, their enhanced dispersion and, in particular, their grafting to the XNBR matrix by MA or EPR-g-MA, creating numerous tight network structures, might be the controlling factor for the enhanced thermal degradation resistance. The matrix was thus protected by the combined effect of these factors, which delay the escape of pyrolysis products that would otherwise cause further degradation of the main matrix [21,22].
Conclusion
Nanoparticles of reduced graphene oxide (G), assisted by two different kinds of maleic anhydride, ethylene-propylene-grafted maleic anhydride (EPR-g-MA) and pure maleic anhydride (MA), were used to reinforce carboxylated acrylonitrile butadiene rubber (XNBR) to form nanocomposites by melt compounding. It was observed that MA in the presence of G-sheets delayed the curing of XNBR (GA samples) more than EPR-g-MA did (GB samples), supposedly owing to the high melting point of MA and the tighter network structures created in the matrices. The tighter structures in the GA nanocomposites were due to a combination of chemical interactions (XNBR-g-G-MA-XNBR and XNBR-S-S_x-XNBR) and physical interactions (CN^δ−···H^δ+-O and XNBR-XNBR), whilst those in the GB nanocomposites were observed to be mainly mixtures of polar and non-polar (XNBR-g-G-EPR-g-MA) interactions. EPR-g-MA is a rubber with MA grafted onto an EPR matrix; hence chain mobility in EPR-g-MA sets in quickly above the glass transition temperature, allowing both primary and secondary crosslinking reactions to ensue. This is why the GA samples cured more slowly (longer t_s2 and T_90) than their GB counterparts. Consequently, the tighter network structures in GA resulted in higher crosslinking density N (mol cm^-3), higher viscosity index (M_L), and higher strength and modulus than in the GB samples. Interestingly, the GB samples, noted for high ductility, obtained higher elongation at break (%) than the GA samples, as a result of the physical entanglement between EPR-g-MA and XNBR. In the thermal degradation study by TGA, the GA samples outperformed the GB samples when the differences in char residue (%) are considered. Sample GA1 (0.1 ph G-sheets and 0.5 ph MA) exhibited a weight residue (%) 106.4% higher than XNBR and 58% higher than GAO (0.1 ph G-sheets), whereas its counterpart GB1 (0.1 ph G-sheets and 0.5 ph EPR-g-MA) was 60% and 22.2% higher than XNBR and GAO respectively. In summary, the presence of MA or EPR-g-MA together with G-sheets improved the physico-mechanical properties relative to the reference samples (GAO and XNBR) and to composites already reported by other researchers. The present work has therefore demonstrated a simple way of enhancing the physico-mechanical properties of a rubber matrix by controlling its microstructure with G-sheets assisted by a suitable coupler such as maleic anhydride (MA or EPR-g-MA). Such nanocomposite materials could, upon further optimization, have multifunctional capabilities such as high-temperature applications (heat sinks), flame retardancy, and structural materials.
Declarations
Author contribution statement

Bismark Mensah: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Johnson Kwame Efavi, David Sasu Konadu: Analyzed and interpreted the data.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data availability statement
Data will be made available on request.
Declaration of interests statement
The authors declare no competing interests.
Additional information
No additional information is available for this paper.
Cognitive bias and how to improve sustainable decision making
The rapid advances of science and technology have provided a large part of the world with all conceivable needs and comfort. However, this welfare comes with serious threats to the planet and many of its inhabitants. An enormous amount of scientific evidence points to global warming, mass destruction of biodiversity, scarce resources, health risks, and pollution all over the world. These facts are generally acknowledged nowadays, not only by scientists, but also by the majority of politicians and citizens. Nevertheless, this understanding has caused insufficient changes in our decision making and behavior to preserve our natural resources and to prevent upcoming (natural) disasters. In the present study, we try to explain how systematic tendencies or distortions in human judgment and decision making, known as "cognitive biases," contribute to this situation. A large body of literature shows how cognitive biases affect the outcome of our deliberations. In natural and primordial situations, they may lead to quick, practical, and satisfying decisions, but these decisions may be poor and risky in a broad range of modern, complex, and long-term challenges, like climate change or pandemic prevention. We first briefly present the social-psychological characteristics that are inherent to (or typical for) most sustainability issues. These are: experiential vagueness, long-term effects, complexity and uncertainty, threat of the status quo, threat of social status, personal vs. community interest, and group pressure. For each of these characteristics, we describe how it relates to cognitive biases, from a neuro-evolutionary point of view, and how these evolved biases may affect sustainable choices or behaviors of people. Finally, based on this knowledge, we describe influence techniques (interventions, nudges, incentives) to mitigate or capitalize on these biases in order to foster more sustainable choices and behaviors.
Introduction: The challenges of human welfare
Supported by science and technology, the world has undergone an explosively rapid change in only a few centuries, offering humanity enormous practical advantages in a large number of areas. Misery and misfortune as a result of food shortages, diseases, and conflicts that were previously considered unsolvable have been adequately tackled (Pinker, 2018). A large part of the world has achieved unprecedented economic growth, and on the waves of globalization, it is assumed that the less developed countries can in principle also benefit from this development (Harari, 2017). However, the technologies we use to increase our welfare today have effects, not only across the whole planet, but also stretching far into the future. In the wake of our pursuit of welfare, examples of flawed judgment abound: prejudice and unjust sentencing (Benforado, 2015), and the acceptance of superstitions or conspiracy theories while rejecting scientific findings that contradict these beliefs (Yasynska, 2019).
In this article, we will focus on how the human brain and its evolved psychological characteristics affect people's decision making. Effects of the workings of our brain and of our evolutionary heritage on decision making manifest most prominently in cognitive biases (Kahneman et al., 1982; Hastie and Dawes, 2001; Shafir and LeBoeuf, 2002; Haselton et al., 2005; van Vugt et al., 2014). Cognitive biases can be generally described as systematic, universally occurring tendencies, inclinations, or dispositions in human decision making that may make it vulnerable to inaccurate, suboptimal, or wrong outcomes (e.g., Tversky and Kahneman, 1974; Kahneman, 2011; Korteling and Toet, 2022). Well-known examples of biases are hindsight bias (once we know the outcome, we tend to think we knew that all along), tunnel vision (when we are under pressure, we tend to overfocus on our goal and ignore all other things that are happening), and confirmation bias (we tend to only see information that confirms our existing ideas and expectations). People typically tend to pursue self-interest at the expense of the community (Tragedy of the commons). We tend to over-value items we possess (Endowment effect) and we have a strong urge to persist in courses of action with negative outcomes (Sunk-cost fallacy). What is more, biased decision making feels quite natural and self-evident, such that we are quite blind to our own biases (Pronin et al., 2002). This means we often do not recognize it, and therefore do not realize how our biases influence our decision making.
Cognitive biases are robust and universal psychological phenomena, extensively demonstrated, described, and analyzed in the scientific literature. In a wide range of different conditions, people show the same, typical tendencies in the way they pick up and process information to judge and decide. In line with their systematic and universal character, cognitive biases are also prominent in societal issues and policymaking (e.g., Levy, 2003; McDermott, 2004; Mercer, 2005; Baron, 2009; Flyvbjerg, 2009; Vis, 2011; Arceneaux, 2012; Shiller, 2015; Bellé et al., 2018). For example, Arceneaux (2012) has shown that in discussing political arguments, individuals are more likely to be persuaded by arguments that evoke loss aversion, even in the face of a strong counterargument. And it has been demonstrated in many instances that policy makers tend to make risk-averse decisions when they expect gains, whereas when facing losses they accept taking more risk (e.g., McDermott, 2004; Vis, 2011).
There are already many publications on cognitive biases showing how human psychological tendencies underlie the choices and behaviors of people (e.g., Kahneman et al., 1982; Shafir and LeBoeuf, 2002; Kahneman, 2011). There is also some literature on which biases and human mechanisms play a role in our difficulties with preventing climate change (e.g., Gifford, 2011; van Vugt et al., 2014; Marshall, 2015; Stoknes, 2015). However, there is still a lack of insight into how biases play a role in the process of environmental policymaking and how this knowledge may be used to deal with the major systemic challenges that the modern world is confronted with. Despite their possible substantial effects on society and human wellbeing, cognitive biases have never been a serious matter of concern in the social and political domain (Eigenauer, 2018). In this paper, we will therefore analyze the constellation of psychological biases that may hinder behavioral and policy practices addressing sustainability challenges. We will also look for ways to mitigate the potential negative effects of biases through influence techniques, like nudging (e.g., Thaler and Sunstein, 2008).
The rationale and drawback of biases
Given the inherent constraints of our information processing system (i.e., the limited cognitive capacities of the human brain), our intuitive inclinations, or heuristics, may be considered effective, efficient, and pragmatic. And indeed, intuitive or heuristic decision making may typically be effective in natural (primal) conditions with time constraints, a lack (or overload) of relevant information, no evident optimal solution, or when we have built up sufficient expertise and experience with the problem (Simon, 1955; Kahneman and Klein, 2009; Gigerenzer and Gaissmaier, 2011). In these cases, the outcomes of heuristic decision making may be quite acceptable given the invested time, effort, and resources (e.g., Gigerenzer et al., 1999).
The fact that heuristic thinking deals with information processing limitations and/or data limitations (Simon, 1955) does not alter the fact that many of our judgments and decisions may systematically deviate from what may be considered optimal, advisable, or utile given the available information and potential gain or risk (Shafir and LeBoeuf, 2002). This has been demonstrated by a large body of literature, showing how cognitive heuristics or biases may lead to poor decisions in a broad range of situations, even including those without complexity, uncertainty, or time constraints. Imagine, for instance, a board of directors that has to decide about the continuation of a big project. Typically, the more they have invested so far, the less likely they are to pull the plug. This is not rational (and is therefore called the sunk cost fallacy), because what should matter is what the costs and benefits will be from this point forward, not what has already been spent. The Sunk-cost fallacy, like various other psychological biases affecting decision making, may continuously pop up in the world we live in. Examples are the Anchoring bias (Tversky and Kahneman, 1974; Furnham and Boo, 2011), Authority bias (Milgram, 1963), Availability bias (Tversky and Kahneman, 1973, 1974), and Conformity bias (Cialdini and Goldstein, 2004).
A large number of different biases have been identified so far, and specific biases are also likely to occur in the domain of public decision making. By public decision making, we mean not only collective and democratic decision making, but also individual decision making. For different kinds and domains of decision making, different biases may occur. It may be expected that in decision making within the sustainability domain, certain (categories of) biases occur more often than others. In this paper, we try to present the most relevant biases and the associated nudges, focusing on public decision making with regard to sustainability challenges.
Methods
Decision making in our modern society may be done on an individual basis, but may also involve many participants or stakeholders with their own perspectives and background, i.e., citizens, policy makers, company representatives, and interest groups (e.g., Steg and Vlek, 2009). To come to a comprehensive understanding of which psychological biases are likely to pop up in this context, we selected those biases that would likely be most prominent, given the typical (psychological) characteristics of sustainability issues. Next, we described interventions or influence techniques (incentives, nudges) to overcome, mitigate, or capitalize on these biases. This was done in three steps.
Step 1: Defining psychological characteristics of sustainability problems

Sustainability issues have characteristics that may evoke certain biases. Here, we define "sustainability" as: a balanced development in which the exploitation of resources, the direction of investments, the orientation of technological development, and institutional change are all in harmony and enhance both current and future potential to meet long-term wellbeing. First, on the basis of the literature (e.g., Schultz, 2002; Steg and Vlek, 2009; van Vugt, 2009; van Vugt et al., 2014; Engler et al., 2018; Toomey, 2023) and a workshop with experts, we defined a set of general social-psychologically relevant characteristics or factors, like "experiential vagueness," "long-term effects," or "threat of the status quo," that are associated with most sustainability issues.
Step 2: Biases per sustainability characteristic

Each characteristic of sustainability issues may relate to a few specific biases that may hamper sustainable choices and behaviors of people. For example, the long-term character of sustainability may conflict with our tendency toward short-term thinking (Hyperbolic time discounting) or with the tendency to underestimate both the likelihood of a disaster and its possible consequences, and to believe that things will always function the way they normally function (Normalcy bias). The subsequent identification of thinking tendencies and biases related to these characteristics was based on the literature containing overviews of multiple biases (e.g., Korteling et al., 2020a), a Neuro-Evolutionary Bias Framework (Korteling et al., 2020a,b; Korteling and Toet, 2022), and the literature on cognitive biases and sustainability challenges (e.g., Gardner and Stern, 2002; Penn, 2003; Fiske, 2004; Wilson, 2006; Steg and Vlek, 2009; van Vugt, 2009; van Vugt et al., 2014; Marshall, 2015; Engler et al., 2018).
Step 3: Influence techniques per sustainability characteristic

Also, for each group of biases, some relevant intervention techniques that can be used by, for example, government or policy makers, were briefly described. These interventions, incentives, or nudges may be applied to mitigate the relevant biases or to capitalize on them for the purpose of stimulating decision making that is more in line with sustainability goals in the context of the current world. On the basis of a previous literature review (Korteling et al., 2021), we have chosen not to advocate specific educational approaches aiming at bias mitigation training in order to foster sustainable decision making. Instead, our approach aims at interventions with regard to the context or environment in which people live, in order to promote more sustainable choices.
Example of the approach
Finally, we will illustrate our approach with the help of an example: A conflict between personal versus community interest is a typical characteristic that is associated with sustainability issues. Natural selection has favored individuals who prioritize personal benefits over those of unrelated others (Hardin, 1968; van Vugt et al., 2014). This means that making choices in the public interest is often hindered by our personal interests (Step 1). Sustainability also often involves a trade-off between personal interests, such as driving a car or flying, and collective interests, such as fresh air and a peaceful environment. This conflict relates to the bias called the Tragedy of the commons, i.e., the tendency to prioritize one's own interests over the common good of the community (Step 2). Because we share our genes with our relatives, this tendency may be countered by invoking kinship as a nudge. Pro-environmental actions or appeals may thus be more effective if they emphasize the interests of our ingroup, children, siblings, and grandchildren (Step 3).
Most relevant psychological characteristics of sustainability challenges
Below, we list a set of prominent psychological characteristics that we consider relevant for sustainability issues. Although biases are inherent to the thinking and decision making of all people, it may be supposed that biases differ depending on people's places, functions, and roles in decision situations. On the other hand, there are many mutual influences and dependencies in the policymaking arena. Therefore, we have decided not to make clear distinctions between the specific roles people play in this arena. So, we do not distinguish between biases for citizens, politicians, or policy makers.
• Experiential vagueness: Sustainability problems evolve slowly and gradually. Therefore, the impact of the issue is difficult, if not impossible, to perceive or experience directly with our body and senses. Our knowledge of the issue is largely built on indirect and abstract cognitive information, i.e., on conceptual reasoning, abstract figures, written papers, and quantitative models.
• Long-term effects and future risk: The negative consequences of green practices follow directly, whereas their positive aspects may emerge only after many years in the (far) future. The same holds for the positive consequences of not taking green action. In addition, sustainability concerns an unknown future with an abundance of possibilities that easily go beyond our imagination.
• Complexity and uncertainty: The sustainability issue is very complicated (socially, technically, logistically, economically) and even "wicked". Being able to judge and reason over most topics within the field requires multi- and transdisciplinary knowledge. Sustainability challenges are therefore accompanied by a high degree of uncertainty about their future progression and about how they should be tackled and addressed.
• Threat to the status quo: Many sustainability measures have an impact on (and sometimes even threaten) our established way of living and basic societal infrastructure. When new measures affect our "normal," established way of living and basic societal infrastructure, this may be experienced as a threat that will result in losing our freedom and/or comfort ("fear of falling").
• Threat of social status: Many environmental problems result from a desire to possess or consume as much as possible, instead of consuming "enough" for a good life. Consumptive behavior and high energy consumption are intrinsically related to high social status, which is something most people do not want to lose.
• Social dilemmas: The sacrifices that have to be made in order to foster sustainability are mainly beneficial for the collective, whereas direct individual gains are often limited. In this "social dilemma," humans tend to prioritize direct personal interests over more sustainable ones that benefit the planet.
• Group pressure: Norms, values, and standards for what is considered "normal" or "desirable" are determined and reinforced by group pressure. Also with regard to green choices, we are often more strongly influenced by the behaviors and opinions of our peers than by our personal views and attitudes toward conservation.
Biases and interventions per psychological sustainability characteristic
For each of the above-mentioned general psychological characteristics of sustainability issues, the next subsections will provide an analysis and inventory of the (kinds of) cognitive biases that are probably most relevant and critically involved in the associated public and political decision making processes. Finally, for each general characteristic, influence techniques (interventions) to mitigate or capitalize on the relevant/critical biases will be briefly described. These interventions are based on the literature concerning "psychological influence" (e.g., Jowett and O'Donnell, 1992; Cialdini, 2006; Adams et al., 2007; Cialdini, 2009; Hansen, 2013; Heuer, 2013; Korteling and Duistermaat, 2018; Toomey, 2023). The influence techniques have an informational nature. They can be utilized in public communication, education, and policy making, especially in communication to the public in different forms of media. Because the biases mentioned show a great deal of overlap and similarity (it is more a matter of groups or types of similar biases), we chose not to make explicit links between specific biases and the associated nudges.
Experiential vagueness
Social scientists have long been puzzled as to why people are so poor at recognizing environmental risks and ignore global environmental hazards (Slovic, 1987; Hardin, 1995). Such apathy is probably a product of our evolutionary heritage, which produced a brain that is optimized to perform biological and perceptual-motor functions (Haselton and Nettle, 2006; Korteling and Toet, 2022). For example, the vertebrate eye evolved some 500 million years ago, compared to 50,000 years ago for human speech; and the first cave drawings are dated at 30,000 years, compared to the earliest writing system approximately 5,000 years ago (Parker, 2003; see also Grabe and Bucy, 2009). This comparatively more ancient visual perceptual and communicative apparatus enables us to quickly extract meaning from eye-catching images (Powel, 2017). In addition, there was always a tangible link between behavior and the environment. That is: if you do not eat, you will become hungry and search for food. If it starts raining, you may look for shelter to avoid getting wet. A critical difference between the modern world and our ancestral environment is that we rarely see, feel, touch, hear, or smell how our behaviors gradually impact the environment (Uzzell, 2000; Gifford, 2011). Because our ancestors were not confronted with relatively remote, slowly evolving, or abstract problems (Toomey, 2023), we probably are not well-evolved to be alarmed when confronted with potential or novel dangers that we cannot directly see, hear, or feel with our perceptual systems (van Vugt et al., 2014). The human senses and nervous system show a gradual decrease in responsiveness to constant situations. In general, we are more sensitive to, and more easily triggered by, sudden changes and differences in the stimulus (contrasts). Because of this neural adaptation, we may often have difficulty perceiving and appreciating slow and gradual processes of change. Therefore, the gradual changes taking place in our environment, like global warming, are not very easily noticed. So, most people are generally not really alarmed by the gradually evolving and remote environmental challenges that the world is facing. This may contribute to the relatively low public interest in the issue of environmental threats such as global climate change, pollution of the oceans, extinction of species, the negative health effects of particulate matter, and decreasing biodiversity (Swim et al., 2011).
Most relevant biases with regard to experiential vagueness
• Experience effect: the tendency to believe and remember things more easily when they are experienced directly with our physical body and senses instead of via abstract representations, like graphs and statistics, or text about scientific data (van Vugt et al., 2014).
• Contrast effect: having difficulty with perceiving and appreciating gradual changes or differences (as opposed to contrasting ones), such as gradually decreasing biodiversity and climate change (Plous, 1993).
• Story bias: the tendency to accept and remember stories more easily than simple or basic facts (Alexander and Brown, 2010).
Interventions to mitigate these biases
Key: Make the consequences of possible ecological breakdown tangible

• To increase awareness of environmental threats, people should experience with their senses (e.g., vision, sound, proprioception, and smell) how future situations will look and feel, e.g., by gaming, simulation, or "experience tanks". In upbringing and education, positive "nature experiences" can be used to promote a pro-environmental perspective on the world.
• People have difficulty correctly perceiving and judging abstract figures. Quantitative data, tables, and numbers do not really make an impression and are thus easily ignored or forgotten. Therefore, make people aware of environmental challenges using concrete examples and narratives that are related to real individuals with whom they can empathize, and reinforce messages with vivid and appealing images, frames, and metaphors.
• Use pictures, animations, artist impressions, podcasts, and videos instead of (or to support) written information.
• Focus on the concrete consequences of severe threats.
• Humans have evolved to love nature. So, increase the availability and number of opportunities (especially for city dwellers) to appreciate, experience, and protect the healing value of real nature, i.e., the fields, the woods, the waters, and the mountains (Schultz, 2002).
• Sustainability interventions that imply the loss of assets or privileges should proceed slowly, gradually, and in small steps. The more positive and rewarding aspects of transitions can be presented as more contrasting, sudden, and discrete events.
• Narratives and stories consisting of coherent events and elements, real or imaginary, are more easily accepted and remembered than plain facts, which may be useful for creating or enhancing feelings of connectedness and commitment to pro-environmental initiatives.
• From a psycho-social perspective, face-to-face communication is probably the richest (and most natural) form of communication and interaction. Therefore, use face-to-face communication to promote pro-environmental behavior.
Long-term effects and future risk
Sustainable choices are often only rewarded in the long-term future, while the costs and sacrifices have to be made in the present. Given two similar rewards, humans show a preference for the one that arrives sooner rather than later. Humans (and other animals) are thus said to discount the value of the later reward and/or delayed feedback (Alexander and Brown, 2010), and this effect increases with the length of the delay. According to van Vugt et al. (2014), our tendency to discount future outcomes may have had substantial benefits in primitive ancestral environments, suggesting it is an evolved psychological trait (Wilson and Daly, 2005). If our ancestors had put too much effort into meeting future needs rather than their immediate needs, they would have been less likely to survive and pass on their genes in the harsh and unpredictable natural environment in which they lived (Boehm, 2012). Human psychology is thus naturally formed to maximize outcomes in the here and now, rather than in the uncertain future (van Vugt et al., 2014). People in modern societies may thus still weigh immediate outcomes much more heavily than distant ones (Green and Myerson, 2004). This preference for today's desires over tomorrow's needs, and the resulting conflict between people's desire for immediate rather than delayed rewards, may be the cause of the persistence of many environmental problems.
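This discounting tendency has a standard formalization that also explains preference reversals: under the common one-parameter hyperbolic form V = A/(1 + kD), a smaller-sooner reward can beat a larger-later one when both are near, yet lose when both recede into the future, whereas exponential discounting V = A·e^(−rD) never reverses. A minimal Python sketch with purely illustrative parameters:

```python
import math

# Hyperbolic vs. exponential time discounting (illustrative parameters).
# Hyperbolic: V = A / (1 + k*D); exponential: V = A * exp(-r*D).

def hyperbolic(amount: float, delay: float, k: float = 0.1) -> float:
    return amount / (1.0 + k * delay)

def exponential(amount: float, delay: float, r: float = 0.05) -> float:
    return amount * math.exp(-r * delay)

# Preference reversal under hyperbolic discounting: the smaller-sooner
# reward wins when both are near, but loses when both are far away.
print(hyperbolic(100, 0) > hyperbolic(110, 5))    # True: take 100 now
print(hyperbolic(100, 50) > hyperbolic(110, 55))  # False: now wait for 110

# Exponential discounting never reverses: the ratio of the two values,
# 1.1 * exp(-5r), is constant no matter how far both are shifted.
print(exponential(100, 0) > exponential(110, 5))  # True at every shift
```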
Our brain tends to build general conclusions and predictions on the basis of a (small) number of consistent previous observations (inductive thinking). A typical and flawed inductive statement is: "Of course humanity will survive. Up to now, we have always survived our major threats and disasters." Even in highly educated and experienced people, inductive reasoning may lead to poor intuitive predictions concerning risks in the (long-term) future (Taleb, 2007). We tend to focus on risks that we clearly see, but whose consequences are often relatively small, while ignoring the less obvious, but perhaps more serious ones. Next to such poor statistical intuitions, we have a preference for optimistic perspectives. This leads us to ignore unwelcome information and to underestimate the severity and probability of future (environmental) challenges and hazards (Ornstein and Ehrlich, 1989). This may be especially devastating when considering rare and unpredictable outlier events with high impact ("black swans"). Examples of black swans from the past are the discovery of America (for the native population), World War I, the demise of the Titanic, the rise of the Internet, the personal computer, the dissolution of the Soviet Union, and the 9/11 attacks. Many people ignore possible rare events at the edges of a statistical distribution that may carry the greatest consequences. According to Taleb (2007), black swans (or "unknown-unknowns") rarely factor into our planning, our economics, our politics, our business models, and our lives. Although these black swans have never happened before and cannot be precisely predicted, they nevertheless need much more attention than we give them. Global warming, too, may trigger currently unknown climate tipping points, when change in a part of the climate system becomes self-perpetuating beyond a warming threshold, leading to unstoppable earth system impact (IPCC, 2021, 2022).
Most relevant biases related to long-term effects
• Hyperbolic time discounting: the tendency to prefer a smaller reward that arrives sooner over a larger reward that arrives later. We therefore have a preference for immediate remuneration or payment over later payment, which makes it hard to resist the temptation of a direct reward (Alexander and Brown, 2010).
• Normalcy bias: the tendency to underestimate both the likelihood of a disaster and its possible consequences, and to believe that things will always function the way they normally function (Drabek, 2012). By inductive reasoning, we fail to imagine or recognize possible rare events at the edges of a statistical distribution that often carry the greatest consequences, i.e., black swans (Taleb, 2007).
Interventions to deal with these biases
Key: Bring the rewards of more sustainable choices to the present

• In general, immediate reinforcements are better recognized or appreciated and have more effect. Thus, provide immediate rewards for green choices, e.g., through subsidy and tax policy, so that it pays more directly to make them.
• Bring long-term benefits in line with short-term ones. For example: investing in solar panels with a quick payback period, subsidizing the purchase of pro-environmental goods, or taxing the use of fossil fuels.
• Make people aware that we live in a world that inherently involves unpredictable (systemic) risks with high impact, such as the corona pandemic. These risks may have severe negative consequences, perhaps not yet for people themselves in the short term, but much more so for their children and grandchildren.
• Present required changes as much as possible in terms of positive challenges, that is, in terms of potential benefits rather than in negative terms: a more "relaxed and natural way of life" instead of "costs of energy transition". Green policy will deliver a stable and predictable future that makes prosperity and well-being possible.
Complexity and uncertainty
The modern global world we live in is very complex, with many intricate causal relationships. Everything is connected to everything, making it very difficult to see what exactly is going on in this dense network and how the interplay of societal, technological, economic, environmental, and (geo)political forces develops. Our wealth and comfort are made possible by many "hidden" enablers, such as child labor in third-world sweatshops and animal suffering out of sight in the bio-industry. The complexity of interrelated and hidden causes, consequences, and remedies is also very prominent in sustainability issues. Sustainability issues are bound up with a fine-grained logistic infrastructure and sophisticated technological inventions and their massive application. For example, the energy transition involves complex socio-technical systems that usually entail a high degree of uncertainty about how things will ultimately work out. Our cognitive capacities to pick up and understand all this technical, statistical, and scientific information are inherently limited (e.g., Engler et al., 2018). How can we intuitively calculate how much CO2 emission reduction is required and how much (or how little) certain technical or economic interventions contribute to the reduction of greenhouse gases? Many people also have poor capacities for calculation and logical reasoning, and a poor intuitive sense for coincidence, randomness, statistics, and probability reasoning (e.g., Monat et al., 1972; Sunstein, 2002; Engler et al., 2018). For instance, concepts like "exponential growth", i.e., growth in which the instantaneous rate of change of a quantity in time is proportional to the quantity itself, are generally poorly understood.
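The counterintuitive nature of exponential growth is easy to make concrete: at a constant percentage growth rate, a quantity doubles on a fixed schedule (roughly 70 divided by the growth rate in percent). A small sketch with a purely illustrative 3% annual rate:

```python
import math

# Exponential growth intuition: constant-percentage growth doubles a
# quantity on a fixed schedule (doubling time = ln(2)/rate, ~70/rate%).
# The 3 %/year rate is purely illustrative.

rate = 0.03
print(f"doubling every ~{math.log(2) / rate:.0f} years")  # ~23 years

for years in (10, 50, 100):
    print(years, round((1 + rate) ** years, 1))  # 1.3, 4.4, 19.2
```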
The inherent constraints of our cognitive system in collecting and weighing all this information in a proper and balanced way may result in various biases preventing good judgment and decision making on the basis of the most relevant evidence. Our brain tends to selectively focus on specific pieces of information that 'resonate' with what we already know or expect and/or that associatively pop up most easily in the forming of judgments, ideas, and decisions (Tversky and Kahneman, 1974; Toomey, 2023). The fact that other (possibly relevant or disconfirming) information may exist beyond what comes up in our mind may be insufficiently recognized or ignored (Kahneman, 2011). This often leads to a rather simplistic view of the world (e.g., populism). We trust and focus on what is clearly visible or (emotionally) charged, what we (accidentally) know, what we happened to see or hear, what we understand, what intuitively feels true, or what associatively comes to mind (the known-knowns). In contrast, we are rather insensitive to the fact that much information does not easily come to us, is not easily comprehensible, or is simply unknown to us. So we may easily ignore the fact that there usually is a lot that we do not know (the unknowns). This characteristic of neural information processing has been termed the Focus principle or "What You See Is All There Is" (WYSIATI; Kahneman, 2011). An important consequence of this principle is that we tend to overestimate our knowledge with regard to complex issues about which we lack experience or expertise (Kruger and Dunning, 1999). A situation may also be deemed too uncertain or complicated, so that a decision is never made for fear that a new approach may be wrong or even worse. An abundance of possible options may aggravate this situation, rendering one unable to come to a conclusion. In sustainability challenges, people may thus be very motivated to improve the situation, yet still be hampered by uncertainty and lack of understanding in taking action.
Most relevant biases related to complexity and uncertainty
• Confirmation bias: the tendency to select, interpret, focus on, and remember information in a way that confirms one's preconceptions, views, and expectations (Nickerson, 1998).
• Neglect of probability: the tendency to completely disregard probability when making a decision under uncertainty (Sunstein, 2002).
• Zero-risk bias: the tendency to overvalue choice options that promise zero risk compared to options with non-zero risk (Viscusi et al., 1987; Baron et al., 1993).
• Anchoring bias: biasing decisions toward previously acquired information. In this way, the early arrival of irrelevant information can seriously affect the outcome (Tversky and Kahneman, 1974; Furnham and Boo, 2011).
• Availability bias: the tendency to judge the frequency, importance, or likelihood of an event (or information) by the ease with which relevant instances just happen to pop up in our minds (Tversky and Kahneman, 1973; Tversky and Kahneman, 1974).
• Focusing illusion: the tendency to place too much emphasis on one or a limited number of aspects of an event or situation when estimating the utility of a future outcome (Kahneman et al., 2006).
• Affect heuristic: basing decisions on what intuitively or emotionally feels right (Kahneman, 2011).
• Framing bias: the tendency to base decisions on the way the information is presented (with positive or negative connotations), as opposed to just on the facts themselves (Tversky and Kahneman, 1981; Plous, 1993).
• Knowledge illusion (Dunning-Kruger effect): the tendency of laymen to overestimate their own competence (Kruger and Dunning, 1999).
• Surrogation (means-goal): the tendency to concentrate on an intervening process instead of on the final objective or result, e.g., concentrating on means vs. goals or on measures vs. intended objectives (Choi et al., 2012).
• Ambiguity effect: the tendency to avoid options or actions for which the probability of a favorable outcome is unknown (Baron, 1994).
Interventions to deal with these biases
Key: Provide more information and education, especially to better understand the environmental consequences of human decisions and actions

• Consistency is more convincing than quantity. We believe that our judgments are accurate, especially when available information is consistent and representative of a known situation. Therefore, conclusions based on a very small body of consistent information are more convincing for most people than much larger bodies of (less consistent) data (i.e., "the law of small numbers").
• Repetition of a pro-environmental message has more impact than a single attempt. This exposure effect can be enhanced by using all possible communication channels and media.
• Start by providing information in the positive way you want it to be taken by the target audience. Later, the message may be extended with the less favorable nuances and details.
• Provide better statistical education and training, and improve communication about uncertainty and risk. When it comes to numbers, quantities, and changes therein, focus on total amounts rather than on proportions.
• Make pro-environmental information (e.g., about actions, initiatives, techniques, etc.) salient and conspicuous. Focus (in a simple, visual way) on the severe consequences of global warming and biodiversity loss (desertification, crop failure and famine, millions of homeless and displaced people, risk of wars) instead of on the complex underlying mechanisms and processes.
• Influence is unlikely to fail due to information that is not provided. Therefore, in setting up an information campaign, it is generally not necessary to invest all efforts in providing the maximum possible "evidence" intended to confirm the message. Consistency is dominant. In general, clear, recognizable, and simple information will be most easily picked up and accepted.
• Influence and persuasion are determined not only by what is, or is not, communicated (i.e., the content), but also by how it is communicated or presented (i.e., the frame or form). These latter, superficial aspects are more easily, intuitively, and quickly processed than the deeper content of the message. This "framing" can thus be exploited very well for influencing people's choices. Each message can be framed in numerous ways, so it may be very effective to analyze how to wrap up a message in the way you want it to be taken.
• Different people value, and pick up, different information at different levels. Therefore, communicate messages at different levels of understanding, from the direct, immediate consequences for the individual (micro) to the overarching long-term consequences for the world of the future and for future generations (macro).
• Present and facilitate as much as possible "total solutions" that are tailor-made to the target audiences.
Threat of the status quo
A basic premise of evolution is that all organisms strive for the continuation of their existence. This not only concerns existence per se, but also the maintenance of stable living conditions (which are instrumental to this ultimate goal). For this reason (under normal circumstances and to prevent unexpected risk), we tend to strive to maintain the present situation and to remain consistent with previous patterns (default effect). So, we easily accept, or prefer, to continue on the path taken and to maintain the status quo (default options), and we are afraid of choosing alternative options that may turn out to be suboptimal (Kahneman and Tversky, 1979; Johnson and Goldstein, 2003; Chorus, 2010). The energy transition, as a possible solution to a future problem, is experienced by many people as threatening, not only to our established, comfortable way of living, but to our individual and social basic needs as well. A transition to more sustainable practices may thus cause bad feelings of losing security and possessions, sometimes termed "fear of falling". In line with this, people have an overall tendency to experience the disutility of giving up an object as greater than the utility associated with acquiring it (i.e., Loss aversion). Thaler (1980) recognized this pattern and articulated it as such: people often demand much more to give up an object than they would be willing to pay to acquire it. This is called the Endowment effect. In contrast to what most authors on cognitive biases suppose, we here speculate that the emotions we feel when we anticipate the possible loss of our assets are not the cause of our bias to avoid loss. Instead, they are the result of our pervasive bias for self-preservation and for maintaining our (neurobiological) integrity. So, in brief: we often prefer to hold on to the current situation and to continue with previous choices. As such, we default to the current situation or status quo.
Most relevant biases related to threat of the status quo
• Status quo bias: the tendency to maintain the current state of affairs (Samuelson and Zeckhauser, 1988).
• Default effect: the tendency to favor the option that would be obtained if the actor does nothing, when given a choice between several options (Johnson and Goldstein, 2003).
• Sunk cost fallacy (also known as Irrational escalation or the Concorde effect): the tendency to consistently continue a chosen course with negative outcomes rather than alter it. The effort previously invested is the main motive to continue (Arkes and Ayton, 1999).
• System justification: the tendency to believe that the current or prevailing systems are fair and just, justifying the existing inaccuracies or inequalities within them (social, political, legal, organizational, and economical) (Jost and Banaji, 1994; Jost et al., 2004).
• Cognitive dissonance: the tendency to search for and select consistent information in order to try to reduce discomfort when confronted with facts that contradict own choices, beliefs, and values (Festinger, 1957).
• Fear of regret: feeling extra regret for a wrong decision if it deviates from the default (Dobelli, 2011; Kahneman, 2011).
• Loss aversion: the tendency to prefer avoiding losses to acquiring equivalent gains. A loss takes an (emotionally) heavier toll than a profit of the same size does (Kahneman and Tversky, 1984).
• Endowment effect: the tendency to value or prefer objects that you already own over those that you do not (Thaler, 1980).
Interventions to deal with these biases
Key: Make sustainable options the default or easiest choice and present them as gains rather than losses

• Make desired pro-environmental choices and behavior the default (the normal standard) or easiest choice. For example, providing only reusable bags unless a single-use plastic shopping bag is specifically requested, or designing buildings and cities to make walking and biking more convenient.
• Encouraging active participation can be a major tool for triggering cognitive consistency pressures to build more sustainable habits. In general, active participation signals commitment to subjects, increasing their likely identification with the message or goal of the persuasion. Subsequently, they will tend to make choices that are consistent with their previous (in this case pro-environmental) actions.
• Based on cognitive dissonance theory (Festinger, 1957), the expression of self-criticism in peer (discussion) groups is a major influence technique. Making people vocalize promises (or sins) in public drives them to remain consistent with their words and actions.
• We believe that our judgments are accurate, especially when available information is consistent and representative of a known situation. It is therefore always important to provide consistent information.
• People tend to focus on, interpret, and remember information in ways that confirm their existing ideas, expectations, or preconceptions. Therefore, in order to create an open mind, it is better to start with undeniable, true evidence and take care not to start with highly disputable evidence. The more complicated and contradictory aspects can be tackled later.
• The first goal in any effort to change another person's mind must be to ensure that the subject is at least seriously considering the desired alternative. This requires starting with strong and obvious evidence that fits into the target's existing conceptions of the world. In contrast, starting with less dramatic evidence tends to be unsuccessful, since the information will be ignored, unnoticed, forgotten, or misperceived.
• Present changes in terms of gains instead of losses, and circumvent the loss felt by people when they are asked to invest funds and provide support for the transition.
• Create a story different from loss: what are we gaining? For example: more rest, less rat race. Do not address people as consumers, but as citizens, changemakers, parents, etc.
Threat of social status
People are more focused on relative status than absolute status. This is, for example, demonstrated by the fact that people find an increase in wealth relative to their peers more important than their absolute wealth (Diener and Suh, 2000). In an experimental setting, researchers found that when presented with financial options, most people chose to earn less in absolute terms, as long as they earned more relative to their peers (Frank, 1985). Not unrelated to our status-seeking tendency, humans tend to consume more than they need. In many historical civilizations, we find a penchant for (excessive) consumption and the display of materials and riches (Bird and Smith, 2005; Godoy et al., 2007). From an evolutionary point of view, such displays of status may be rooted in a social advantage (Penn, 2003; Saad, 2007; Miller, 2009). Ancestors who strived for improvement of their situation and who tried to do better than their peers probably passed on their genes more successfully than those who had a more complacent attitude. The wry side effect, however, is that the tendency to seek status through material goods, nowadays more than ever, may contribute substantially to the production of waste and the depletion of nonrenewable resources. Because we seek relative wealth, as opposed to seeking an absolute point of satisfaction, we are not easily satisfied and we tend to persistently strive for ever more status and wealth. Whether it be our smartphone, our sense of fashion, or our household appliances, they all rapidly become outdated as soon as newer or more fashionable versions appear on the horizon. As economists say: we compare ourselves continuously with our neighbors; we want to "keep up with the Joneses." Finally, items that are scarce or hard to obtain typically have more perceived quality and status than those that are easy to acquire. Many environmental problems can therefore be the result of a conflict between status-enhancing overconsumption and having enough for a good life. This 'hedonic treadmill' is encouraged by commercials offering us a never-ending stream of new products that should make us, in one way or another, happy and thus hungry to buy more.
Most relevant biases related to threat of social status
• Affective forecasting (hedonic forecasting, impact bias): the tendency to overestimate the duration and intensity of our future emotions and feelings regarding events, encouraging putting effort into favorable results (greed) and into avoiding threats (Wilson and Gilbert, 2005).
• Hedonic adaptation (hedonic treadmill): the tendency to quickly return to a relatively stable level of happiness despite major positive or negative life events (Brickman and Campbell, 1971).
• Social comparison bias: the tendency, when making decisions, to favor individuals who do not compete with one's own particular strengths (Garcia et al., 2010).
• Scarcity bias: the tendency to attribute greater subjective value to items that are more difficult to acquire or in greater demand (Mittone and Savadori, 2009). For a more in-depth study of this, please read, e.g., van Vugt (2009) and Raihani (2013).
Interventions to deal with these biases
• Use high-status and admired or popular influencers and celebrities to promote pro-environmental options, e.g., in social media campaigns.
• Educate people to assess their quality of life in absolute terms of health, freedom, and comfort instead of in relative terms toward 'the Joneses'.
• Present the benefits of pro-environmental options as scarce. This can be done, for example, by pointing out others (competitors) who want the same goods or by drawing attention to possible future supply problems.
Personal versus community interest
Individual self-interest is often in conflict with the interest of the whole group. This is generally conceptualized as a social dilemma and is usually illustrated by the Tragedy of the Commons story (Hardin, 1968). This hypothetical example demonstrates the effects of unregulated grazing (of cattle) on a common piece of land, also known as "the commons." In modern economic terms, 'commons' are any shared or unregulated resources to which all individuals have equal and open access, like the atmosphere, roads, or even the office fridge. Searching for direct individual profit, most individuals increase their use or exploitation of these common resources, thereby unintentionally causing them to collapse (Hawkes, 1992; Dietz et al., 2003). According to Hardin (1968) and van Vugt et al. (2014), the human mind is shaped to prioritize personal interests over collective interests because natural selection favors individuals who can gain a personal benefit at the expense of unrelated others. Of course, there are situations in which the collective benefit will be prioritized over that of the individual. But the conditions under which the human mind is triggered to prioritize the collective good over its own are generally less prevalent (Hardin, 1968). A minimal numerical sketch of this payoff structure is given below.
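To make the diverging calculi concrete, the following minimal sketch (our own illustration with hypothetical numbers, not taken from Hardin or the other cited studies) contrasts what one herder gains from adding a cow with what the community as a whole loses:

```python
# Minimal illustration of the payoff logic behind the Tragedy of the Commons:
# each herder pockets the full benefit of an extra cow but shares its grazing
# cost with everyone, so individually rational choices degrade the commons.

N_HERDERS = 10
GAIN_PER_COW = 1.0   # private benefit of adding one cow (hypothetical units)
COST_PER_COW = 3.0   # total grazing damage per cow, shared by all herders

def individual_net_gain() -> float:
    """Net payoff one herder sees from adding a cow: full gain, 1/N of the cost."""
    return GAIN_PER_COW - COST_PER_COW / N_HERDERS

def collective_net_gain() -> float:
    """Net payoff the whole community receives from that same cow."""
    return GAIN_PER_COW - COST_PER_COW

if __name__ == "__main__":
    print(f"One herder's calculus per extra cow: {individual_net_gain():+.2f}")
    print(f"Community's calculus per extra cow:  {collective_net_gain():+.2f}")
    # Output: +0.70 for the herder, -2.00 for the community. Each herder keeps
    # adding cows, and the shared pasture collapses.
```

Changing the numbers so that the individual bears the full shared cost (e.g., via taxes or quotas) flips the sign of the private calculus, which is the logic behind the incentive-based interventions discussed later.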
According to Dawkins (1976), natural selection favors the replication of one's genes, which often comes at the expense of the survival of others' genes. Power is thereby often instrumentally used for self-interest at the cost of others. So, survival of the species is not what primarily matters. However, this prioritizing of self-interest depends on the relationship of the individual to the group. In tight-knit communities where the individual knows himself to be dependent on the community, his behavior will be in line with this dependency and more likely be in favor of the in-group's interests. When the individual does not feel this connection to an in-group (community), he is probably more likely to prioritize self-interest. Evidence for this strategy is seen in social dilemma research showing that most individuals tend to make selfish choices when they interact with other people in one-shot encounters (Komorita and Parks, 1994; Fehr and Gächter, 2002; van Lange et al., 2013). The evolutionary tendency to let self-interest prevail at the expense of others has direct implications for environmental practice, which often concerns the overexploitation of limited resources, such as the oceans, natural areas, fish stocks, clean air, etc. Consequently, many sustainability problems result from this conflict between personal and collective interests.
Most relevant biases related to personal versus community interest
• Tragedy of the commons (Selfishness and self-interest): the tendency to prioritize one's own interests over the common good of the community (Hardin, 1968).
• Perverse incentive effect (cobra effect): the tendency to respond to incentives in a way that best serves our own interests and that does not align with the beneficial goal or idea behind the incentives, which may lead to "perverse behaviors" (Siebert, 2001).
• Anthropocentrism: the tendency to take one's own, human perspective as the starting point for interpreting and reasoning about all sorts of things, such as nature and other living animals (Coley and Tanner, 2012).
Interventions to deal with these biases
Key: Introduce and present sustainable options as the most favorable and profitable
• Because we share our genes with our relatives, kinship may be a good motivator of pro-environmental behavior. Pro-environmental appeals may be more effective if they emphasize the interests of our in-group, children, siblings, and grandchildren.
• Create programs where pro-environmental choices result in direct personal (or business) gain, e.g., by proper incentives or rewards, like tax exemptions.
• Create close-knit, stable, and small communities to foster pro-collective behavior and cooperation.
• In all species, behaviors reinforced by rewards or positive feedback tend to be repeated (Thorndike, 1927, 1933), and the more reinforcement, the greater the effect. Therefore, multiple reinforcements of desired social choices increase the chance that they will be repeated in the future.
Group pressure
Social psychologists have long known that people tend to adapt to the choices and behavior of others (Asch, 1956). Our tendency to follow the majority is adaptive, since for most species the costs of individual learning, through trial and error, are substantial (Simon, 1990; Richerson and Boyd, 2006; Sundie et al., 2006; Sloman and Fernbach, 2018). For our ancestors, living in uncertain environments, it would probably have been better to follow and copy others' behavior than to figure things out for themselves (Kameda et al., 2003; Gorman and Gorman, 2016). This is therefore probably an ancient and natural adaptive tendency, which may also help maintain or strengthen a position within the social group (Korteling et al., 2020a). We thus easily follow leaders or people with high status and authority in groups. We adapt to people around us with whom we feel connected, but have an aversion against strangers. We have difficulty being indebted to others, and we like and support kind, attractive, and agreeable people. This can lead, for example, to after-talk and blind copying of the behavior of others and the faithful following of persuasive and charismatic persons. In line with this, it has been found that green practices are more strongly influenced by the behaviors of our peers than by our personal attitudes toward conservation. For example, when people see that their neighbors are not conserving, they tend to increase their own energy consumption as well, even when they had been conserving energy in the past (Schultz et al., 2007). This herd behavior is unconscious and is mediated by mirror neurons in the brain (Chartrand and Van Baaren, 2009). However, the unconscious nature of this herd behavior is often not acknowledged or is even denied by the conformers themselves (Nolan et al., 2008) and is thus hard to battle. Our modern world is built on the basis of an enormous number of unsustainable methods, tools, practices, and applications, so there is still a long way to go to achieve a sustainable world. Hence, the human tendency to copy the behavior of others and to regard other people's behaviors as the norm and justification of undesirable behavioral choices can be very detrimental to the achievement of sustainable goals.

4.7.1. Most relevant biases related to group pressure
• Bandwagon effect: the tendency to adopt beliefs and behaviors more easily when they have already been adopted by others (Colman, 2003).
• Conformity bias: the tendency to adjust one's thinking and behavior to that of a group standard.
• Ingroup (-outgroup) bias: the tendency to favor one's own group above that of others (Cialdini and Goldstein, 2004).
• Authority bias: the tendency to attribute greater accuracy to the opinion of authority figures (unrelated to its content) and to be more influenced by their opinions (Milgram, 1963).
• Liking bias: the tendency to help or support other people the more sympathetic we find them, which is largely determined by kindness, attractiveness, and affinity (Cialdini, 2006).
• Reciprocity: the tendency to respond to a positive action with another positive action ("You help me, then I help you") and having difficulty being indebted to the other person (Fehr and Gächter, 2002).
• Social proof: the tendency to mirror or copy the actions and opinions of others, causing (groups of) people to converge too quickly upon a single distinct choice (Cialdini, 2006).
Interventions to deal with these biases
Key: Use social norms and peer pressure to encourage sustainable choices and behaviors
• When a behavioral change is requested, it will probably be better to focus people's attention on others who already show the desired pro-environmental behavior instead of educating people about the bad behavior of others.
• People can be seduced to choose a certain option if they see it in many other people. So, present desirable pro-environmental behaviors as behaviors of the majority of people (or at least of large groups of people). Foster, for example, the desired behavioral choices with advertisements suggesting this behavior is already adopted by groups of people.
• Use people with authority, powerful people, and/or attractive people to promote pro-environmental behavior.
• Create feelings of commitment and indebtedness for people who make sacrifices for the community in order to foster sustainability.
5. Discussion and conclusion
Biases and nudges
In the present paper we have described how ingrained cognitive biases in human thinking may counter the development of green policy practices aimed at fostering a more sustainable and livable world. We have focused our study on how the form, content, and communication of information affect our decisions and behavior with regard to sustainability. The influence techniques advocated in this paper are informational and psychological interventions, incentives, and/or nudges that could be effective with regard to biased thinking in the context of the current modern world. In general, biased information processing has served us for almost our entire existence (e.g., Haselton et al., 2005). However, these natural and intuitive thinking patterns may be very counterproductive for coping with the global and complex problems the world is facing today. The many possible incentives and nudges presented show that there are many ways to deliberately capitalize on biased thinking in people in order to promote more sustainable behavioral choices. In previous publications we have explained how biases originate from ingrained neuro-evolutionary characteristics of our evolved brain (e.g., Korteling and Toet, 2022). This neuro-evolutionary framework provides more fundamental explanations for human decision making than the 'explanations' provided by most social or psychological studies. These latter (social-)psychological explanations are more 'proximate', in terms of "limitations of information processing capacity" (Simon, 1955; Broadbent, 1958; Kahneman, 1973; Norman and Bobrow, 1975; Morewedge and Kahneman, 2010), two metaphorical "systems of information processing" (Stanovich and West, 2000; Kahneman, 2003; Evans, 2008; Kahneman, 2011), "emotions" (Kahneman and Tversky, 1984; Damasio, 1994), "prospects" (e.g., Kahneman and Tversky, 1979; Mercer, 2005), and "lack of training and experience" (Simon, 1992; Klein, 1997, 1998). Our neuro-evolutionary bias framework explains, in terms of structural (neural network) and functional (evolutionary) mechanisms, the origin of cognitive biases, why they are so systematic, persistent, and pervasive, and why biased thinking feels so normal, natural, and self-evident. Given the inherent/structural ("neural") and ingrained/functional ("evolutionary") character of biases, it seems unlikely that simple education or training interventions would be effective in improving human decision making beyond the specific educational context (transfer) and/or for a prolonged period of time (retention). On the basis of a systematic review of the literature, this indeed appears to be the case (Korteling et al., 2021). When it comes to solving the problems of the modern world, it will probably be impossible to defeat or eliminate biases in human thinking. Thus, we should always be aware of the pervasive effects of cognitive biases and be modest about our cognitive abilities to solve complex long-term problems in an easy way.
So, bias-mitigation training interventions are likely to have rather little effect on decision making, in the same way that it is difficult to get people to change their eating habits by persuading them that chocolate or meat does not taste good. What is more, denying the ultimate and deep-seated neuro-evolutionary causes of the particularities and limitations of human thinking may hamper the adequate development and usage of effective interventions. For example, if governments strive to decrease the demand for energy-inefficient jacuzzi baths but ignore the influence of human evolutionary biases, this might lead to an intervention strategy that fails. Perhaps the government would try to persuade people that buying energy-consuming baths is unwise for the future. But in the context of our tendency to discount the value of future consequences, such a strategy on its own is likely to be rather ineffective (the discounting curve sketched below makes this concrete). It would probably be more effective to use our knowledge of cognitive biases to our advantage. For example, the fact that we compare ourselves to our peers (social comparison) might lead to a campaign in which the purchase of sustainable solar panels, a sustainable heat pump, or a fancy e-bike is related to status and prestige. Likewise, it is better to convey pro-environmental messages in a simple, consistent, repetitive, and tangible way and to focus on the consequences (bad or good) of one's choices, rather than on complex intervening processes. Finally, it is better to communicate information about the many aspects of sustainability at different levels of understanding at the same time, i.e., from the immediate aspects for the individual to the global consequences for the world of the future.
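The tendency to discount future consequences invoked above is commonly modeled with a hyperbolic discount function. The standard form (Mazur, 1987), added here for illustration, is:

```latex
% Hyperbolic discounting: V is the present (subjective) value of a reward of
% amount A delivered after delay D; k is the individual's discount rate.
V = \frac{A}{1 + kD}
```

Even a modest discount rate k makes consequences that lie decades away, such as climate damage, weigh almost nothing today, which is why pairing sustainable choices with immediate, tangible benefits tends to work better than appeals to the distant future.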
The ethics of nudging
Above we have listed tips and tricks to provoke "sustainable decision making." But as we write this, we realize all the more that this knowledge of how biases work can be used for all kinds of purposes. In the 'wrong' hands, this knowledge about biases can be used to manipulate or incite the population to destructive ends. That is not even speculative; history has already shown this over and over again. Fossil fuel industries that succeeded in holding back measures against global warming, doctors recommending brands of cigarettes, smear campaigns that led to witch-hunts, and anti-Semitic propaganda during World War II are just a few examples.
There is a serious ethical issue with using our knowledge of biases to our advantage (e.g., Bovens, 2009; Raihani, 2013). Who decides whether it is ethical to nudge citizens and use our knowledge of evolutionary biases to steer the choices and behavior of people? It sometimes may seem obvious that it is a good thing if you want to prevent incitement to hatred and violence, genocide, or destructive behaviors such as smoking. But there is also a gray area. In the current pandemic, for example, we see that governments are doing their best to silence dissenting voices "for a good cause." But counter voices also represent the basis of a democratic constitutional state, where they must always be welcomed. Can we afford to go beyond our democratic boundaries, by nudging our citizens, for the sake of the climate? Our thought on this is as follows: democracy means that everyone is allowed to make their voice heard about the goals that you want to achieve as a society. This paper is about how to make your voice heard more effectively. It provides tools that everyone (not just politicians and policy makers) can use, for better or for worse. This applies to any instrument: AI, weapons, robots, ICT, etc. The evil is not in the instrument, but in the purpose for which it is used. If we democratically choose to achieve certain goals, then it can be deemed defendable that governments use those instruments as effectively as possible to achieve those goals. It leaves people still free to choose their own path and goals.
A vision-based agenda
Politics can ensure that we as humanity behave more sustainably. In that case, our societal and physical environment will have to be organized differently, for example with far-reaching legislation (e.g., a CO2 tax), a different market-oriented economy, and a different transport system. However, these changes are held back by our ingrained preferences for short-term thinking, maintaining the status quo, personal interest, or herd behavior, which may result in fears like losing jobs or losing freedom. These thinking tendencies and fears are exploited by the lobbies of many powerful (e.g., fossil fuel) parties with vested interests. That is why we have to search for ways to get moving as a society. An important part of this is managing well-being, and thereby discovering that there are ways to live sustainably and also to be happy. This means that, more than ever, there is a need for knowledge and a substantiated vision about the core values that represent us, as humans, and our world; about who we are, how we want to live, and where we want to go. This is not just a vision with long-term goals for human well-being, but also one that builds on our natural needs and takes into account the hidden and inherent systemic risks of the modern, globalized world. This is essential in determining the course and the agenda for the future of humanity.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
Cholesterol-Based Compounds: Recent Advances in Synthesis and Applications
This review reports on the latest developments (since 2014) in the chemistry of cholesterol and its applications in different research fields. These applications range from drug delivery or bioimaging applications to cholesterol-based liquid crystals and gelators. A brief overview of the most recent synthetic procedures to obtain new cholesterol derivatives is also provided, as well as the latest anticancer, antimicrobial, and antioxidant new cholesterol-based derivatives. This review discusses not only the synthetic details of the preparation of new cholesterol derivatives or conjugates, but also gives a short summary concerning the specific application of such compounds.
Introduction to Cholesterol-Based Compounds
Cholesterol (cholest-5-en-3β-ol) is considered to be a lipid-type molecule, being one of the most important structural components of cell membranes. Chemically, cholesterol is a rigid and almost planar molecule with a steroid skeleton of four fused rings, three six-membered and one five-membered, conventionally lettered from A to D (1,2-cyclopentanoperhydrophenanthrene ring system) (Figure 1A). Therefore, the cholesterol molecule comprises four essential domains (Figure 1B). In domain I, the polarity of the 3-hydroxy group constitutes an active site for hydrogen bond interactions with a myriad of biological molecules (e.g., phospholipids in membranes) [1].

Cholesterol plays a vital role in life, particularly in cell membranes and as a precursor to the biosynthesis of several steroid hormones. In cell membranes, which are essentially constituted by a double layer of phospholipids, cholesterol has great influence on membrane fluidity, microdomain structure (lipid rafts), and permeability by interacting with both the hydrophilic headgroups and the hydrophobic tails of phospholipids. In addition, modifications of the stereochemistry and oxidation states of the fused rings, the side chain, as well as the functional groups of cholesterol, lead to a wide variety of biologically important molecules, such as bile acids, vitamin D, and several steroid hormones [1,2]. Interestingly, 13 Nobel Prizes have been awarded to scientists who studied the structure of cholesterol, its biosynthetic pathway, and metabolic regulation. Unfortunately, cholesterol has gained a bad reputation because it is increasingly associated with several cardiovascular and neurodegenerative diseases, among others [1,3].

Over the years, cholesterol has risen as an attractive starting material or a model system for organic synthesis due to its easily derivatized functional groups, availability, and low cost. Many useful chemical and enzymatic reactions are now widely used for multistep steroid transformations, leading to products of practical importance. The chemical transformations range from simple ones, such as manipulations of functional groups, to more complex ones, such as C-H activation or C-C bond formation with organometallic reagents. In 2014, a purely synthetic chemistry review was published, dealing only with the advances in cholesterol chemistry since 2000, focusing on cholesterol oxidation reactions, substitution of the 3β-hydroxy group, addition to the C5=C6 double bond, C-H functionalization, and C-C bond forming reactions. However, this review paper excluded simple derivatization reactions of cholesterol such as the preparation of carboxylic and inorganic acid esters, aliphatic and aromatic ethers, simple acetals, or glycosides [4]. From our perspective, the simpler chemical transformations very often lead to the preparation of new cholesterol-based molecules with potential applications in several important research fields. Therefore, in this review, we focused our attention on publications from 2014 to date and described not only the synthesis of new cholesterol-based molecules, but also the application of these molecules in different fields, such as drug delivery; bioimaging; liquid crystals; gelators; anticancer, antimicrobial, and antioxidant applications; as well as purely synthetic applications. However, some interesting papers published before 2014 were included to fill some of the gaps left by the 2014 review paper. Throughout the text, several reaction schemes will be depicted to describe the chemical reactions involved in the preparation of the cholesterol-based compounds. For simplification purposes, the structures of cholesterol will consistently be represented using the abbreviations depicted in Figure 2.
Drug Delivery Applications
Drug delivery is a method or process of administering a pharmaceutical compound to achieve a therapeutic effect in humans or animals. Drug delivery systems can in principle provide enhanced efficacy, reduced toxicity, or both for various types of drugs. Liposomes are the most common and well-investigated nanocarriers for targeted drug delivery because they have demonstrated efficiency in several biomedical applications by stabilizing therapeutic compounds, overcoming obstacles to cellular and tissue uptake, and improving the biodistribution of compounds to target sites in vivo [5].
In 2014, Vabbilisetty and Sun reported a study of the effects of a terminal triphenylphosphine-carrying anchor lipid on a liposome surface by postchemically selective functionalization via Staudinger ligation, using lactosyl azide as a model ligand. They synthesized two different anchor lipids, one of them based on the cholesterol molecule (Chol-PEG2000-triphenylphosphine 3), which was synthesized through an amidation reaction of synthetic Chol-PEG2000-NH2 1 with 3-diphenylphosphino-4-methoxycarbonylbenzoic acid N-hydroxysuccinimide (NHS) active ester 2 (Scheme 1) [6].

The authors verified that the Staudinger ligation could be carried out under mild reaction conditions in aqueous buffers without a catalyst and in high yields. The encapsulation and releasing capacity of the glycosylated liposome based on cholesterol were evaluated, respectively, by entrapping 5,6-carboxyfluorescein (CF) dye and monitoring the fluorescence leakage. It was concluded that Chol-PEG2000-triphenylphosphine 3 is particularly suitable for the ligation of water-soluble molecules and can accommodate many chemical functions, being potentially useful in the coupling of many other ligands onto liposomes for drug delivery purposes [6].

In 2015, a new method was reported for the deposition of a single lipid bilayer onto a hard polymer bead starting from discoidal bicelles and using chemoselective chemistry to hydrophobically anchor the lipid assemblies, using cholesterol bearing an oxyamine linker. The synthesis of oxyamine-terminated cholesterol 6 involved two steps, starting with a Mitsunobu reaction of compound 4, followed by a reaction of 5 with hydrazine hydrate (Scheme 2) [7]. The discoidal bicelles were prepared in water media upon mixing dimyristoylphosphatidylcholine (DMPC), dihexanoylphosphatidylcholine (DHPC), dimyristoyltrimethylammonium propane (DMTAP), and the oxyamine-terminated cholesterol derivative 6, in a specific molar ratio. These bicelles were exposed to aldehyde-bearing polystyrene (PS) beads and readily underwent a change to a stable single lipid bilayer coating at the bead surface.
Ligand 9 was used to prepare conventional liposomes (CLs) and surface-modified liposomes (SMLs) through the reverse phase evaporation technique. These new liposomes were characterized by different techniques exhibiting the required particle size for targeting tumor and infectious cells. In vitro biological studies showed an enhanced binding affinity and cellular uptake of SMLs compared to CLs by HepG2 cells, making SMLs an interesting new approach for targeted drug delivery in liver cancer therapeutics [8].
Recently, Lin et al. reported the synthesis of a fluorescent triple-responsive block-graft copolymer 27, bearing cholesteryl and pyrenyl side groups, with a disulfide (S-S) bridging point joining the hydrophilic and hydrophobic chains. The synthesis of such polymers relied on a typical click reaction between PNiPAAm10-S-S-P(αN3CL)10 26, pyrenylmethyl 4-pentynoate 25, and cholesteryl 4-pentynoate 24, affording PNiPAAm10-S-S-P(αN3CL10-g-PyrePA3/-CholPA7) 27 (Scheme 7) [12]. Experimental results indicated that copolymer 27 could undergo self-assembly into polymeric micelles with excellent fluorescence performance in aqueous solution. The drug-loading capacity of the cholesteryl-grafted copolymer 27 was evaluated using doxorubicin (DOX) as a template drug, and the results showed reasonable DOX-loading capacity. The authors also demonstrated that DOX-loaded micelles enter the cells at a substantially faster rate than their free-form counterparts, effectively inhibiting HeLa cell proliferation [12].

In 2014, the synthesis of a new dual-imaging and therapeutic agent for improved efficacy in Boron Neutron Capture Therapy (BNCT) in cancer treatment was reported [13]. The compound consists of a carborane unit (ten boron atoms) bearing a cholesterol unit on one side (to pursue incorporation into the liposome bilayer) and a Gd(III)/1,4,7,10-tetraazacyclododecane monoamide complex on the other side (as a magnetic resonance imaging (MRI) reporter to attain the quantification of the B/Gd concentration). The synthesis of the target compound Gd-B-AC01 (37) relied on an eight-step synthetic strategy, which ended with the complexation of 36 with Gd(III) in aqueous solution at pH 6.5 (Scheme 8). This dual probe 37 was functionalized with a polyethylene glycol (PEG)ylated phospholipid containing a folic acid residue at the end of the PEG chain. These liposomes presented interesting features such as the ability to selectively concentrate high amounts of boron in human ovarian cancer cells (IGROV-1), enough to perform efficient BNCT treatment with significantly reduced uptake by healthy cells in the surrounding regions. Furthermore, these liposomes, which can be used as nanoplatforms to deliver both Gd and B agents, can, in principle, be used for the simultaneous delivery of antitumor drugs such as DOX [13].

Zhang and coworkers studied the behavior of nanoparticles (NPs) formed by self-assembly of amphiphilic poly[N-(2-hydroxypropyl)methacrylamide] (pHPMA) copolymers bearing cholesterol side groups (39) as potential drug carriers for solid tumor treatment (Figure 3) [14]. The behavior of such NPs in a human serum albumin (HSA) protein environment was evaluated using mixed solutions of NPs from polymer conjugates with or without the anticancer drug doxorubicin bound to them, 39 and 40, respectively. The authors found that in the absence of DOX, a small amount of HSA molecules bind to the cholesterol groups of the NPs by diffusing through the loose pHPMA shell or get caught in meshes formed by the pHPMA chains. On the other hand, the presence of DOX strongly hinders these interactions, and for that reason the delivery of DOX by these NPs in the human body is not affected by the presence of HSA [14].

Recently, Singh and coworkers reported the biofunctionalization of the surface of β-cyclodextrin nanosponge 41 (β-CD-NSP) with cholesterol, expecting to improve its cellular binding ability. The β-CD-NSP was functionalized by grafting cholesterol hydrogen succinate (CHS) through a coupling reaction, affording β-CD-NSP-CHS 42 (Scheme 9) [15]. The cytotoxicity assays showed that β-CD-NSP 41 was nontoxic and that the surface biofunctionalized with CHS 42 improved both the therapeutic and drug delivery efficacy of DOX. The experimental results also demonstrated that CHS grafting may enhance DOX adsorption due to the hydrophobic charge on the surface. Therefore, the surface-engineered CD-NSP could be used as a carrier for poorly water-soluble small drug molecules to improve solubility and bioavailability in site-specific drug delivery systems [15].

In attempting to develop an intelligent drug delivery system for cancer chemotherapy, Li et al. synthesized the dual redox/pH-sensitive amphiphilic copolymer 44, a cholesterol-modified poly(β-amino ester)-grafted disulfide poly(ethylene glycol) methyl ether [PAE(-SS-mPEG)-g-Chol]. The precursor PAE-SS-mPEG 43 was successfully synthesized via Michael-type step polymerization using a disulfide linkage-containing PEG segment. Finally, cholesterol was incorporated at the hydroxy-pendant group through an esterification reaction, affording the copolymer PAE(-SS-mPEG)-g-Chol 44 (Scheme 10) [16].
The authors verified the interesting physicochemical properties of copolymer 44, namely redox and pH sensitivity. Doxorubicin-loaded hybrid polymer-lipid NPs (DOX-HDPLNPs) were prepared, and the drug-loading capacity, delivery efficacy, and redox- and pH-triggered drug release behavior in vitro were studied. The results showed that DOX-HDPLNPs enhanced loading capacity and improved cellular uptake ability, as well as serum stability. The anticancer potential in tumor-bearing mice was addressed, indicating that the DOX-HDPLNPs prepared with the redox- and pH-sensitive copolymer with disulfides and PEGylated lipid could efficiently enhance therapeutic efficacy with low cytotoxicity and side effects. Both in vitro and in vivo experiments indicated that DOX-HDPLNPs enhanced therapeutic efficacy with high cellular uptake and negligible cytotoxicity compared to the free drug DOX. Therefore, HDPLNPs can be considered to be smart delivery systems for hydrophobic anticancer drug delivery [16].
Tran et al. developed a copolymer in 2014, constituted of polynorbornene-cholesterol/poly(ethylene glycol) [P(NBCh9-b-NBPEG)] 45, that undergoes self-assembly to form a long-circulating nanostructure capable of encapsulating the anticancer drug DOX with high drug loading (Figure 4) [17]. The authors found that the doxorubicin-loaded nanoparticles (DOX-NPs) were effectively internalized by human cervical cancer cells (HeLa) and that they showed dose-dependent cytotoxicity. Moreover, the DOX-NPs showed good in vivo circulation time and preferential accumulation in tumor tissue with reduced accumulation in the heart and other vital organs, and significantly inhibited tumor growth in tumor-bearing severe combined immunodeficient (SCID) mice. Based on these results, DOX-NPs can become useful carriers in improving tumor delivery of hydrophobic anticancer drugs [17].

A new series of amphiphilic diblock terpolymers, poly(6-O-methacryloyl-D-galactopyranose)-b-poly(methacrylic acid-co-6-cholesteryloxyhexyl methacrylate), bearing attached galactose and cholesterol grafts [PMAgala-b-P(MAA-co-MAChol)s] 49, were prepared via Reversible Addition Fragmentation chain Transfer (RAFT) copolymerization followed by deprotection of galactose in the presence of trifluoroacetic acid (TFA) (Scheme 11) [18]. The new terpolymers (49) were studied for in vitro DOX release, and the results revealed high stability of the DOX-loaded terpolymer micelles under neutral conditions and significantly fast responsive DOX release. In addition, the results of fluorescence microscopy revealed that the DOX encapsulated in the synthesized diblock terpolymer PMAgala18-b-P(MAA26-co-MAChol9)/DOX micelles could be taken up and delivered into cell nuclei in an efficient way, and their intracellular trafficking pathway could be altered compared to the free DOX control. The new terpolymers (49) could therefore be strongly considered for future smart nanoplatforms toward efficient antitumor drug delivery [18].
In 2014, a reduction-responsive polymersome based on the amphiphilic block copolymer PEG-SS-PAChol 52 was developed. The synthesis of 52 was achieved using PEG-SS-Br 50, a versatile atom transfer radical polymerization (ATRP) macroinitiator, and a cholesterol-containing acrylate 51, using CuBr as a catalyst and N,N,N ,N",N"-pentamethyldiethylenetriamine (PMDETA) as a ligand (Scheme 12) [19]. In 2014, a reduction-responsive polymersome based on the amphiphilic block copolymer PEG-SS-PAChol 52 was developed. The synthesis of 52 was achieved using PEG-SS-Br 50, a versatile atom transfer radical polymerization (ATRP) macroinitiator, and a cholesterol-containing acrylate 51, using CuBr as a catalyst and N,N,N′,N′′,N′′-pentamethyldiethylenetriamine (PMDETA) as a ligand (Scheme 12) [19]. The polymersome 52 was studied to come up with robust nanocarriers able to release their content inside the cells upon contact with the intracellular reducing environment. The physical crosslinking by a smectic phase of 52 in the hydrophobic sublayer, as well as the introduction of a disulfide bridge that links the hydrophilic PEG and hydrophobic blocks present in 52, were key features that gave stability, robustness, and reduction sensitivity to the polymersome. The results showed sensitivity of the block copolymer 52 to reduction, and the fluorescence dequenching of calcein both in glutathione (GSH) solution and in vitro with the mouse macrophage cells pretreated with GSH-OEt demonstrated the breakdown of polymersome under reduction conditions. To achieve significant calcein release, high concentrations of GSH and long incubation times were necessary. These reduction-responsive polymersomes (52) could be used as drug carriers with very long circulation profiles and slow release kinetics [19].
Recently, two new sterol-anchored polyethylene glycols, 55 and 58, were reported as potential alternatives to conventional phosphatidylethanolamine-PEGs. Their synthesis relied on the esterification reaction of cholesterol derivatives 53 and 56 with PEGs 54 and 57, as depicted in Scheme 13 [20]. The authors studied the biophysical properties of liposomes containing these two sterolanchored PEGs, 55 and 58, which exhibited an array of canonical PEGgylated-liposome behaviors The polymersome 52 was studied to come up with robust nanocarriers able to release their content inside the cells upon contact with the intracellular reducing environment. The physical crosslinking by a smectic phase of 52 in the hydrophobic sublayer, as well as the introduction of a disulfide bridge that links the hydrophilic PEG and hydrophobic blocks present in 52, were key features that gave stability, robustness, and reduction sensitivity to the polymersome. The results showed sensitivity of the block copolymer 52 to reduction, and the fluorescence dequenching of calcein both in glutathione (GSH) solution and in vitro with the mouse macrophage cells pretreated with GSH-OEt demonstrated the breakdown of polymersome under reduction conditions. To achieve significant calcein release, high concentrations of GSH and long incubation times were necessary. These reduction-responsive polymersomes (52) could be used as drug carriers with very long circulation profiles and slow release kinetics [19].
Recently, two new sterol-anchored polyethylene glycols, 55 and 58, were reported as potential alternatives to conventional phosphatidylethanolamine-PEGs. Their synthesis relied on the esterification reaction of cholesterol derivatives 53 and 56 with PEGs 54 and 57, as depicted in Scheme 13 [20].
The authors studied the biophysical properties of liposomes containing these two sterol-anchored PEGs, 55 and 58, which exhibited an array of canonical PEGgylated-liposome behaviors including retention of encapsulated small molecules, low serum protein adsorption, and reduced cellular uptake, yet they did not exhibit long circulation [20].
Polymeric micelles are known for their variety of therapeutic applications. In this field, two amphiphilic polymers were successfully synthesized using hyaluronic acid (HA), cholesterol, and octadecanoic acid as hydrophobic groups. Only the synthesis of cholesterol containing polymer HA-SA-CYS-Chol 60 is depicted in Scheme 14, since the other hydrophobic groups do not fit in the scope of this paper. Nevertheless, the authors concluded that different properties of hydrophobic groups of the amphiphilic carrier are closely implicated in the stability and drug-loading capacity of the amphiphilic carrier and micelles. HA-SA-CYS-Chol 60 presented a lower critical micellar concentration, producing docetaxel (DTX)-loaded micelles of a smaller particle size, higher encapsulation efficiency, and drug loading, when compared to the other hydrophobic tails [21]. Furthermore, in vivo animal studies revealed very good tumor-targeting properties and efficient antitumor effects at very low concentrations, with low systemic toxicity of HA-SA-CYS-Chol 60 micelles [21]. disulfide bridge that links the hydrophilic PEG and hydrophobic blocks present in 52, were key features that gave stability, robustness, and reduction sensitivity to the polymersome. The results showed sensitivity of the block copolymer 52 to reduction, and the fluorescence dequenching of calcein both in glutathione (GSH) solution and in vitro with the mouse macrophage cells pretreated with GSH-OEt demonstrated the breakdown of polymersome under reduction conditions. To achieve significant calcein release, high concentrations of GSH and long incubation times were necessary. These reduction-responsive polymersomes (52) could be used as drug carriers with very long circulation profiles and slow release kinetics [19].
A new liposomal formulation for drug delivery purposes was recently developed, based on the N-terminal cholesterol conjugation with a mitochondria-penetrating peptide (MPP) sequence, consisting of four amino acids [phenylalanine-arginine-phenylalanine-lysine (FRFK)]. More specifically, the synthesis of cholesterol-phenylalanine-arginine-phenylalanine-lysine (Chol-FRFK) 64 was achieved by coupling cholesteryl chloroformate 7 with amino acid-bound resins (62), followed by resin cleavage using TFA and the removal of protecting groups (Scheme 15) [22]. The authors developed the liposomes using dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE) and Chol-FRFK 64 for delivery of the hydrophobic drug antimycin A specifically targeted toward mitochondria and lung cancer A549 cells. The results indicated that this formulation can effectively deliver the encapsulated drug to the mitochondria because of the small size and moderately cationic charge of the liposomes, enabling cellular uptake with low toxicity. The liposomes were found to be stable for long periods at room temperature, and they acted synergistically with antimycin A, leading to the complete disruption of inner membrane potential [22].
In 2016, six new cholesterol-derived cationic lipids, 68-73, bearing different head groups attached via ether or ester linkages, were synthesized (Scheme 16) and used to prepare cationic liposomes as nonviral gene delivery vectors [23]. The authors studied the relationship between the structure of the synthesized lipids and their transfection efficiency and optimized the gene transfection conditions of the liposomes. They found that the chemical structure of the head groups and the linkage between cholesterol and the head groups play important roles in gene delivery efficiency. Furthermore, lipids 69 and 73 exhibited higher transfection efficiency and lower toxicity than the tested commercial liposomes DC-Chol and lipofectamine 2000, even in the presence of serum [23].
In 2015, Vulugundam and coworkers reported the design and synthesis of new redox-active monomeric (76 and 77) and dimeric (gemini) (79 and 80) cationic lipids based on ferrocenylated cholesterol derivatives for the development of gene delivery systems (Scheme 17). The cationic cholesterols 76 and 77, as well as 79 and 80, were incorporated into co-liposomes and shown to be transfection-efficient. The authors found that the redox activities of the co-liposomes and their lipoplexes could be regulated through the alkyl ferrocene moiety. Vesicles bearing ferrocene in the reduced state induced efficient gene transfection with pEGFP-C3 plasmid DNA in three cell lines, performing even better than the commercial lipofectamine 2000 (Lipo 2000). This evidence suggests that these redox-driven systems could be used in gene delivery applications requiring spatial or temporal control of transfection [24].
A series of macrocyclic polyamine (cyclen and 1,4,7-triazacyclononane (TACN))-based cationic lipids, 85 and 88, bearing cholesterol as the hydrophobic tail, were synthesized through ring-opening reactions (Scheme 18). These cationic lipids were used in combination with 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE) to prepare lipoplexes, which efficiently condensed DNA into nanoparticles with a proper size and zeta potential [25]. Lipid 85, containing cyclen as the headgroup, demonstrated lower toxicity and better transfection efficiency (TE) in vitro than the commercial reference lipofectamine 2000 in both 7402 and A549 cancer cells. Furthermore, the authors attributed the good serum tolerance of 85 to the presence of a hydroxy group in its structure. These promising results indicate that cationic lipid 85 should be considered as a nonviral gene vector for in vivo applications [25].
Aiming to extend the existing library of polycationic amphiphiles, Puchkov et al. designed and synthesized a new molecule, 92, based on triethylenetetramine and cholesterol (a spermine analogue containing the same number of amino groups but differing in the number of methylene units). The synthesis of the polycationic amphiphile 92 relied on the selective transformation of primary amines into secondary ones via nitrobenzenesulfonamides, and the cholesterol moiety was incorporated through alkylation of bis(sulfonamide) 89 with the bromo derivative of cholesterol 90 (Scheme 19) [26]. The authors used the triethylenetetramine-based amphiphile 92 to prepare cationic liposomes and concluded that their nucleic acid delivery (transfection) properties in eukaryotic cells were inferior to those of spermine-based amphiphiles. Although the two polyamines (triethylenetetramine and spermine) carry the same number of amino groups, their distribution along the chain differs significantly, which may account for the difference in transfection activity [26].
Molecular dynamics simulations and in vitro studies were conducted with PTX-loaded liposomes containing the cholesterol-arginine ester (CAE) 94. The results showed that these cationic liposomes enhanced loading efficiency and stability over conventional liposomes, which can be rationalized by the hydrogen bonding between CAE and PTX and the deeper penetration of PTX into the bilayer. Moreover, these novel liposomes demonstrated improved cytotoxicity in three different cell lines (MDA-MB-231, H5V, and HDMEC) and enhanced inhibition of endothelial cell migration compared to conventional liposomes. The absence of genotoxicity makes the cholesterol-arginine ester 94 an interesting biocompatible cationic ligand for drug delivery applications [27].
The design and synthesis of thermosensitive polymers of N-(2-hydroxypropyl)methacrylamide mono/dilactate of different molecular weights and compositions bearing a cholesterol anchor, 98 (Chol-pHPMAlac), was reported in 2014 (Scheme 21). These new cholesterol-based polymers 98 were incorporated into liposome formulations loaded with DOX. The authors concluded that the release of DOX from such formulations was effective at low temperatures and could be adjusted according to the grafting density of Chol-pHPMAlac 98. Chol-pHPMAlac 98 with a cloud point of 19.0 °C and an Mn of 10.0 kDa showed particularly interesting release features, because it was stable at body temperature, releasing its content only under hyperthermia conditions. These release features make the liposomes interesting for local drug delivery using hyperthermia [28].
Recently, Asayama and coworkers reported a byproduct-free PEGylation method for the modification of insulin. The strategy involves the reaction of cholesteryl chloroformate 7 with aminopropyl mPEG in the presence of triethylamine to afford the conjugate Chol-U-Pr-mPEG 99 (Scheme 22), complexation with insulin in aqueous solution, and subsequent freeze-drying [29]. The Chol-U-Pr-mPEG/insulin complex not only preserved the insulin conformation, but also proved effective in protecting insulin from hydrolysis by proteases and in suppressing blood glucose levels in mice [29].
Anticancer, Antimicrobial, and Antioxidant Compounds
Many new cholesterol derivatives bearing a wide range of bioactive scaffolds have been developed in the search for new anticancer, antimicrobial, or antioxidant agents with improved efficacy. In this context, Rodríguez et al. described an efficient synthesis of (6E)-hydroximinosteroid homodimers (105), linking two steroidal monomers at position 3 of the steroid scaffold via a ruthenium-catalyzed cross-metathesis reaction (Scheme 23). The precursor monomers were obtained through a five-step reaction sequence starting from cholesterol 28 (Scheme 23) [30]. The cytotoxic activity of the (6E)-hydroximinosteroid homodimers (105) was evaluated in vitro using human lung carcinoma A549, colon adenocarcinoma HCT-116, human Caucasian glioblastoma multiforme T98G, and human pancreatic adenocarcinoma PSN1 cells. Only homodimer 105 (n = 2) showed selective cytotoxicity, against HCT-116 cells, presenting no activity against the remaining cell lines. Nevertheless, the monomer counterparts 106 and 107 showed better cytotoxic activity against all cell lines when compared to homodimer 105 [30].
Richmond et al. reported the synthesis of four new (6E)-hydroximinosteroids (109), starting from the corresponding ketones (108) derived from cholesterol. The authors evaluated the cytotoxicity of all the prepared compounds (109) and compared the results to those of five polyhydroxylated sulfated analogs (110) (Scheme 24) [31]. Upon evaluation of the cytotoxic activity of the steroidal oximes 109 against two prostate carcinoma cell lines (PC-3 and LNCaP), the authors concluded that oxime 109 (R1 = R4 = OH, R2 = R3 = H) was the most active compound against PC-3, while against LNCaP the trisulfated analog 110 (R5 = H, R6 = OSO3Na) was the most active [31].
A new, greener methodology involving steroidal epoxides as intermediates for the synthesis of steroidal β-aminoalcohols was recently reported. The synthesis of β-aminoalcohol 112 involved two steps: (i) epoxidation of cholesterol 28 with m-chloroperoxybenzoic acid (m-CPBA); and (ii) solvent-free aminolysis of epoxide 111 mediated by sulfated zirconia (Scheme 25) [32].
The antiproliferative activity of the cholesterol-based β-aminoalcohol 112 was evaluated using MCF-7 cells, and the results showed better cytotoxic effects than those of cholesterol 28 itself, as determined by both crystal violet staining (CVS) and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assays. Furthermore, cell images obtained with Harris' hematoxylin and eosin staining protocol evidenced the formation of apoptotic bodies in the presence of cholesterol β-aminoalcohol 112 in a dose-dependent fashion [32].
The synthesis of new steroidal 5α,8α-endoperoxides starting from different steroids, including cholesterol, was reported via a four-step synthetic protocol: the introduction of a diene into the cholesterol 28 structure through allylic bromination followed by elimination, and finally the photoinduced formation of the cholesterol-based 5α,8α-endoperoxide 115 (Scheme 26) [33]. The authors evaluated the in vitro antiproliferative activities of the 5α,8α-endoperoxides against cell lines derived from various human cancer types, such as human hepatocellular cancer (HepG2, SK-Hep1) and human breast cancer (MDA-MB-231, MCF-7) cell lines. Some compounds exhibited potent anticancer activities by inducing apoptosis in the four tested cancer cell lines; in particular, the cholesterol-based 5α,8α-endoperoxide 115 was the most promising derivative, presenting IC50 values ranging from 8.07 to 12.25 µM [33].
A six-step synthetic route based on cholesterol 28 as the starting material was designed to prepare two new steroidal thiadiazole derivatives, 121 (R = H, Me), with an A-homo lactam and a B-norsteroidal skeleton (Scheme 27) [34]. The antiproliferative activity of compounds 118-121 against various cancer cell lines was evaluated, and the results showed that compounds 120 (R = Ph) and 121 (R = Me) displayed excellent selective inhibition of the A-549 (human lung carcinoma) cell line, with IC50 values of 7.8 and 8.0 µM, respectively [34].
The new compound 124 was evaluated as an antiproliferative agent against six human solid tumor cell lines, displaying only moderate activity against the screened cell lines [35].
D'yakonov et al. synthesized two new hybrid compounds based on cholesterol and 1,14-tetradeca-(5Z,9Z)-dienedicarboxylic acid, 127 and 129, which are synthetic analogues of natural (5Z,9Z)-dienoic acids. The synthetic methodology relied on the preparation of the cholesterol-based oximes 126 and 128 and their subsequent esterification with 1,14-tetradeca-(5Z,9Z)-dienedicarboxylic acid (Scheme 29) [36]. The authors evaluated the in vitro cytotoxic activities of the synthesized compounds 126-129 against Jurkat (leukemia), K562 (myelogenous leukemia), U937 (lung), HeLa (cervical), and Hek293 (kidney) human cell lines. The results showed that the hybrid molecules 127 and 129 efficiently induced apoptosis in the studied cell lines and were substantially more cytotoxic than their cholesterol oxime precursors 126 and 128 [36].
Cholesterol 28 was also used as a template for the synthesis of a series of 2-methoxybenzoate analogs bearing functional groups such as carbonyl (131), hydroxyl (132), and thiosemicarbazone (133), which were evaluated as potential new anticancer agents. The synthetic route involved the reaction of cholesterol 28 with 2-methoxybenzoyl chloride and the subsequent functionalization of position 7 of the steroid core with several functional groups (Scheme 30) [37]. All of the synthesized cholesterol derivatives were evaluated for their in vitro antiproliferative activities against CNE-2 (nasopharyngeal), BEL-7402 (liver), HepG2 (liver), and Skov3 (ovarian) human cancer cells, as well as HEK-293T human kidney epithelial cells. The results demonstrated that the presence of the 7-hydroxy group (compound 132) doubled the antiproliferative activity relative to the nonhydroxylated compound 130. Furthermore, none of the evaluated compounds showed inhibitory activity against normal HEK-293T cells, making them good candidates for cancer treatment [37].
The synthesis of a bis(cyclam)-capped cholesterol lipid (139) was recently reported by Peters and coworkers, who also evaluated its bioactivity using primary chronic lymphocytic leukemia (CLL) cells. The synthesis relied on a four-step methodology, as depicted in Scheme 31 [38]. The bis(cyclam)-capped cholesterol lipid 139 was found to be water-soluble and to self-assemble into micellar and nonmicellar aggregates in water. The authors also found that 139 was as effective as the commercial drug AMD3100 in reducing chemotaxis along CXCL12 gradients, showing that 139 may be effective in disrupting the migration of CLL cells into protective niches such as the bone marrow and lymphoid organs [38].
In 2015, a paper was published describing the synthesis, as well as the antimicrobial and cytotoxic activities, of ten pharmacophoric conjugates obtained through CuAAC of chloroquinoline and glucose azide substrates with propargyl compounds such as chalcones, theophylline, and cholesterol. Within the scope of this review, only the synthesis of the cholesterol-based derivatives 141 and 143 is presented (Scheme 32). Interestingly, the results of the antimicrobial evaluation showed that, among the ten synthesized conjugates, triazole 143 exhibited the highest antibacterial activity against E. coli and S. aureus, and moderate antifungal activity against A. flavus and C. albicans. Furthermore, the sugar-cholesterol conjugate 143 displayed the best in vitro cytotoxic activity against the prostate cancer PC3 cell line [39].
3β-Azidocholest-5-ene (144) and (3β)-3-(prop-2-yn-1-yloxy)cholest-5-ene (20) were used as starting materials for the preparation of three-motif pharmacophoric conjugates comprising cholesterol, a 1,2,3-triazole, and either a chalcone, a lipophilic residue, or a carbohydrate tag [40]. The first set of cholesterol conjugates was prepared through the reaction of 3β-azidocholest-5-ene 144 with propargylated chalcones or lactose derivatives under CuAAC conditions, affording the chalcone conjugates 145 and 146 and the lactose conjugates 147 and 148 (Scheme 33) [40]. A second set of cholesterol conjugates was prepared, once again through CuAAC, from (3β)-3-(prop-2-yn-1-yloxy)cholest-5-ene (20) with azido alkanols (149) and 3β-azidocholest-5-ene (144), affording cholesterol-triazole alkanols (150) and a triazole-linked cholesterol dimer (152), respectively (Scheme 34) [40]. Furthermore, compound 150 was converted into the corresponding bromoalkane 151 through a substitution reaction in the presence of CBr4 (Scheme 34) [40].
A carbohydrate-tagged set of cholesterol compounds was prepared by the CuAAC reaction of (3β)-3-(prop-2-yn-1-yloxy)cholest-5-ene (20) with the appropriate glycosyl azides 153 and 155, affording compounds 154 and 156, respectively, upon cleavage of the acetyl protecting groups (Scheme 34) [40].
Another carbohydrate-tagged compound, 159, was synthesized through the reaction of cholesterol 28 with the appropriate glycosyl donor 157 in a three-step protocol, as depicted in Scheme 35 [40]. The authors screened all the cholesterol conjugates for their in vitro antimicrobial and anticancer activities. Among all compounds, the chalcone-triazole-cholesterol derivative 145 (R = NMe2) showed the most promising antimicrobial activity, being as active as the controls against E. coli, S. aureus, and C. albicans. Concerning the cytotoxic potential of the cholesterol conjugates, the cholesterol-triazole-lactoside congener 147 displayed the best in vitro cytotoxic effect against the prostate cancer PC3 cell line, with cytotoxicity similar to that of DOX, used as a control [40].
A new methodology for the synthesis of steroidal pyrazolines (162) through the reaction of cholest-5-en-7-ones (160) with 2,4-dinitrophenylhydrazine (161) was reported by Shamsuzzaman and coworkers in 2016 (Scheme 36) [41]. The reaction proceeds by the well-known 1,4-/1,2-addition/dehydration mechanism of an α,β-unsaturated carbonyl compound. The new steroid-based pyrazolines (162) were evaluated for their in vitro antibacterial activity against three different strains (E. coli, Corynebacterium xerosis, and S. epidermidis); compound 162 (R = H) was the most active against C. xerosis and S. epidermidis, with minimum inhibitory concentrations similar to those of the positive control, gentamicin. Compound 162 (R = H) also demonstrated moderate activity against the fungal strains Mucor azygosporus, Claviceps purpurea, and A. niger, being the most effective compound tested [41].
The in vitro anticancer activity of the pyrazolines (162) against five human cancer cell lines (SW480 (colon), HeLa (cervical), A549 (lung), HepG2 (hepatic), and HL-60 (leukemia)) was also screened, with the chlorinated compound 162 (R = Cl) being the most active [41]. The same research group reported a simple, green synthesis of steroidal 2H-pyran-2-ones (163), starting from 3-substituted cholest-5-en-7-ones (160) and ethyl acetoacetate in the presence of chitosan as an ecofriendly heterogeneous catalyst (Scheme 37) [42]. The synthesized steroidal 2H-pyran-2-ones (163) were tested in vitro against two cancer cell lines, HeLa (cervical) and Jurkat (leukemia), and one normal cell line (PBMC: peripheral blood mononuclear cells). All the tested compounds (163) exhibited moderate-to-good activity against the two human cancer cell lines and were less toxic toward the noncancer cell line. Furthermore, the antioxidant potential of these new compounds (163) was also evaluated; they exhibited lower 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity than the positive control, ascorbic acid [42].
A series of new steroidal pyrimidine derivatives (167) was prepared through the multicomponent reaction of cholestan-6-ones (164) with urea (166) and benzaldehyde (165) in the presence of trimethylsilyl chloride (TMSCl) as catalyst (Scheme 38) [43]. The antitumor activity of these steroidal pyrimidine-functionalized scaffolds (167) was screened by MTT assay against three human cancer cell lines, MDA-MB231 (breast), HeLa (cervical), and HepG2 (hepatic), and one normal noncancer cell line, PBMC. All tested compounds showed cytotoxicity against the three cancer cell lines; compound 167 (R = H) exhibited the highest cytotoxicity, although in all cases lower than that of DOX, used as a positive control [43]. The authors also addressed the antioxidant activity of the pyrimidine compounds (167), concluding that these new compounds presented lower DPPH radical, hydroxyl radical, nitric oxide radical, and H2O2 scavenging potential than L-ascorbic acid, used as a control. Moreover, the IC50 values indicated that the scavenging activities of the tested compounds followed the order nitric oxide radical < hydrogen peroxide < DPPH radical < hydroxyl radical [43].
Cases in which the attachment of a heterocycle to the steroid backbone changes the biological properties of the steroid molecule are not rare, and such hybrids often constitute an interesting platform for the development of new pharmacophores. In this context, Saikia et al. reported the synthesis of steroidal heterocyclic compounds (170) through solvent-free, microwave-assisted epoxide ring opening with nitrogen nucleophiles [44]. The first series of N-heterocycles was synthesized by the reaction of nitrogen nucleophiles with the epoxide 169, which was prepared in a three-step synthetic route starting from cholesterol acetate 125 (Scheme 39) [44]. Another set of N-heterocycles, 173, was obtained using a 4:1 mixture of the epoxides 171 (α) and 172 (β) as starting materials, prepared through the direct epoxidation of cholesterol acetate 125 (Scheme 40) [44]. It is worth noting that compound 173 was obtained as a diastereomeric mixture, which upon recrystallization from ethanol provided the pure alcohol 173.
The authors also considered the dehydration of the obtained cholesterol-based N-heterocycles (170 and 173), which was successfully accomplished using a catalytic amount of sulfuric acid in acetic acid, affording compounds 174 and 175, respectively (Scheme 41) [44].
The authors also considered the dehydration of the obtained cholesterol-based N-heterocycles (170 and 173), which was successfully accomplished using a catalytic amount of sulfuric acid in acetic acid, affording compounds 174 and 175, respectively (Scheme 41) [44]. Finally, the in vitro antibacterial activity of all compounds was evaluated, and the N-heterocycles 170 (Het = 4-nitroimidazole, piperidine, morpholine, thiomorpholine, tetrahydroisoquinoline) and the dehydrated N-heterocycles 174 (Het = 4-nitroimidazole, morpholine) demonstrated moderate effects against the tested microorganisms (E. coli, P. syringae, B. subtilis, P. vulgaris, and S. aureus). Specifically, compounds 170 (Het = piperidine, morpholine, and thiomorpholine) inhibited all the tested strains, and the 170 (Het = tetrahydroisoquinoline) derivative showed inhibition against three gram-negative bacterial strains, E. coli, P. syringae, and P. vulgaris. The authors also concluded that removal of the hydroxyl group decreased the antimicrobial activity of the tested compounds [44].
Recently, Morake and coworkers synthesized a series of artemisinin-cholesterol conjugates, 177, 179, 180, 182, 184, 186, and 188, expecting that putative cholesterol transporters might enhance the activity of the parent drug (artemisinin) against malaria and tuberculosis [45]. The conjugates were designed to have different O- or N-linkers, such as ether, ester, and carbamate groups, with varying linker lengths. The first set of conjugates, 177 and 179-180, was synthesized from cholesterol 28 or cholesteryl chloroformate 7 with dihydroartemisinin 178 or artesunate 176 (Scheme 42) [45].
The antimalarial activity of the novel artemisinin-cholesterol conjugates 177, 179, 182, 184, 186, and 188 was evaluated against Plasmodium falciparum (Pf) NF54, K1, and W2 strains, with the conjugates of 186 (N-linked artemisinin-cholesterol conjugates) being the most active derivatives. However, the potency of these compounds was lower than that of the precursors artemether and artesunate. The authors rationalized these results by the low solubility in the culture medium conferred by the cholesterol moiety, which may have affected the efficacies of the artemisinin-cholesterol conjugates. On the other hand, concerning the activities against Mycobacterium tuberculosis (Mtb) H37Rv, the conjugates displayed enhanced efficacy over the parent drug artemisinin [45].
The synthesis of three new cholesterol conjugates, 190, 193, and 194, via CuAAC reaction was recently reported [46]. These conjugates were prepared to bear either a ferrocene-chalcone moiety (190) or sugar moieties (193 and 194), in both cases linked through a triazole group (Scheme 45) [46]. The antimicrobial activities of these cholesterol conjugates were evaluated in vitro against E. coli, S. aureus, A. flavus, and C. albicans. Surprisingly, the authors found that the cholesterol conjugate bearing the ferrocene-chalcone moiety, 190, was completely inactive against all the tested bacteria. On the other hand, the sugar conjugates 193 and 194 showed moderate inhibitory activity against E. coli, A. flavus, and C. albicans, albeit less potent than the control compounds ampicillin and amphotericin B [46].
Employing a one-pot multicomponent procedure using (thio)semicarbazide hydrochloride 196 and ethyl 2-chloroacetoacetate 195 allowed the preparation of a series of steroidal oxazole and thiazole derivatives (197) (Scheme 46) [47]. The antimicrobial activity of the new steroidal compounds 197 was evaluated against two gram-negative (E. coli and P. aeruginosa) and two gram-positive bacterial strains (S. aureus and L. monocytogenes). Additionally, the bioactivity against pathogenic fungi (C. albicans and C. neoformans) was also addressed. The authors found that most of the compounds exhibited good antibacterial and antifungal activity against the tested strains. In addition, the compounds also showed interesting antibiofilm activity against S. aureus biofilm. Molecular docking studies showed effective binding of the steroidal compounds 197 with amino acid residues of DNA gyrase and glucosamine-6-phosphate synthase through hydrogen-bonding interactions [47].
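Hydrogen-bonding interactions such as those highlighted in the docking study are often pre-screened on the ligand side simply by counting donor and acceptor sites. A minimal RDKit sketch follows; the SMILES string is a generic placeholder (aspirin), not the structure of compounds 197.

```python
# Minimal sketch: counting H-bond donors/acceptors on a ligand with RDKit.
# The SMILES is a generic placeholder (aspirin), not compound 197.
from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # placeholder ligand
print("H-bond donors:   ", Descriptors.NumHDonors(mol))
print("H-bond acceptors:", Descriptors.NumHAcceptors(mol))
```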
Given the increasing importance of steryl ferulates [3-O-(trans-4-feruloyl)sterols] in pharmaceutical applications, Begum and coworkers reported the microwave-assisted synthesis of steryl ferulates from several steroids [48]. The synthesis of the cholesterol-based steryl ferulate 199 is exemplified in Scheme 47, in which microwave (MW) irradiation played a crucial role in the esterification step with trans-4-O-acetylferulic acid 198 [48]. The authors evaluated the antioxidant capacity (DPPH radical scavenging, total antioxidant capacity, and reducing power) of all synthesized steryl ferulates in comparison to equimolar mixtures of steryl ferulates and γ-oryzanol (a natural mixture of steryl ferulates, abundant in cereal bran layers). The results showed that the mixture of steryl ferulates and γ-oryzanol was a better radical scavenger than most individual ferulates, including the cholesterol-based one, 199 [48].
Cholesterol-Based Liquid Crystals
A liquid crystal is a state of matter with properties between those of conventional liquids and those of solid crystals. The classification of liquid crystals was proposed in the 19th century and is based on molecular arrangement; since then, liquid crystals have been divided into smectic (from the Greek "smegma", meaning soap) and nematic (from the Greek "nema", meaning thread) phases. In smectic liquid crystals, molecules are arranged so that their long axes are parallel and their centers of mass lie in one plane. There are many different smectic phases, characterized by different types and degrees of positional and orientational order; the most common are the smectic A phase, in which the molecules are oriented along the layer normal, and the smectic C phase, in which the molecules are tilted away from it. Nematic phases are the simplest liquid crystalline phases, since they have only long-range orientational order (of, e.g., molecules or columns) and no long-range translational order [49]. There are also chiral variants of the nematic and smectic phases, formed when the molecules of the liquid crystalline substance are chiral; these are denoted N* or Sm(A/C)* (an asterisk denotes a chiral phase), respectively. Such phases are often called cholesteric phases, because they were first observed for cholesterol derivatives [49].
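The taxonomy in the preceding paragraph reduces to a handful of binary distinctions (layering, tilt, chirality, with orientational order common to all); the lookup table below encodes only what is stated above and nothing more.

```python
# Minimal lookup table for the mesophase taxonomy described above.
# Flags encode only the distinctions stated in the text.
phases = {
    "N":    dict(layered=False, tilted=False, chiral=False),
    "N*":   dict(layered=False, tilted=False, chiral=True),
    "SmA":  dict(layered=True,  tilted=False, chiral=False),
    "SmA*": dict(layered=True,  tilted=False, chiral=True),
    "SmC":  dict(layered=True,  tilted=True,  chiral=False),
    "SmC*": dict(layered=True,  tilted=True,  chiral=True),
}

def describe(name: str) -> str:
    p = phases[name]
    parts = ["long-range orientational order"]  # common to all phases here
    if p["layered"]:
        parts.append("molecules tilted from the layer normal" if p["tilted"]
                     else "molecules along the layer normal")
    if p["chiral"]:
        parts.append("chiral (cholesteric-type) variant")
    return name + ": " + ", ".join(parts)

print(describe("SmC*"))
```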
In 2014, Hiremath reported the synthesis of two new series of cholesterol-biphen-4-yl 4-(n-alkoxy)benzoate conjugates (203), linked through either odd-parity or even-parity spacers (Scheme 48) [50]. The compounds 203 are optically active, and both series of conjugates show a frustrated liquid crystalline state, with a thermodynamically stable twist grain boundary phase with a chiral smectic C structure (TGBC*) over an exceedingly wide thermal range [50]. The author explained this behavior by the combined effect of the extended geometry (conformation), strong chirality, and enantiomeric excess of the molecules. Furthermore, the conjugates 203 with odd-parity spacers show an additional phase, the blue phase. The clearing transition temperatures and the associated enthalpies alternate, with the odd members exhibiting lower values than the even members. These results clearly demonstrate that the geometry (rod-like or bent conformation) and the thermal behavior of the conjugates 203 are greatly influenced by the spacer parity [50].
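The odd-even alternation described for 203 is easy to check mechanically once clearing points are tabulated against spacer length; the sketch below uses hypothetical temperatures, not the values from [50].

```python
# Minimal sketch: detecting odd-even alternation in clearing temperatures.
# Spacer lengths and temperatures are hypothetical, not data from [50].
clearing_c = {3: 215.0, 4: 232.0, 5: 211.0, 6: 228.0, 7: 208.0, 8: 225.0}

def odd_members_lower(series):
    """True if every odd-spacer member clears below its even neighbor."""
    ns = sorted(series)
    return all((series[a] < series[b]) == (a % 2 == 1)
               for a, b in zip(ns, ns[1:]))

print("odd members clear lower:", odd_members_lower(clearing_c))
```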
A series of similar conjugates, 206, containing cholesterol, triazole, and biphenylene units, was synthesized via CuAAC chemistry (Scheme 49). Different flexible spacers were introduced into the system to evaluate their effect on mesophase formation, as well as the influence of the triazole linker [51]. The authors concluded that short (n = 5 and 6) and medium (n = 7, 8, and 9) alkyl spacers give enantiotropic SmA* and monotropic SmC* phases, whereas the conjugate possessing the longest spacer (n = 10) favors the formation of enantiotropic SmA and N* phases. A close correlation between the transition temperatures and the length of the methylene spacer was also observed, with higher clearing points for the even spacers. Further comparison studies with (S)-2MBbip-n-Chol 207 (Scheme 49) demonstrated that the triazole ring plays a crucial role in mesophase formation, wherein, apart from the molecular dipole, subtle electrostatic interactions and van der Waals forces enhance the SmC* phase [51].
A study involving the design, synthesis, and mesomorphic properties of the first examples of cholesterol-based calixarene liquid crystals was reported in 2015 by Guo and coworkers [52]. Novel cholesterol-1,3-bis-substituted calix[4]arene 209 and cholesterol-tetra-substituted calix[4]arene 210 derivatives were synthesized by reacting cholesterol-chlorinated derivatives (208) with calix[4]arene, as depicted in Scheme 50. The liquid crystalline behavior of the cholesterol-calix[4]arene compounds 209 and 210 was studied, and both showed excellent mesomorphic properties, with a columnar molecular arrangement of the calix[4]arene bowlic column and cholesterol units as ancillary lateral columns. Furthermore, the authors demonstrated that compounds with longer spacers and more cholesterol units, such as 210, display better mesomorphic properties [52].
Following this study, similar calix[4]arene-cholesterol derivatives with Schiff-base bridges (213) were synthesized (Scheme 51), and the influence of complexation on their mesomorphic properties was investigated [53]. Like the previous cholesterol-calix[4]arene compounds (210), these Schiff-base-bridged compounds (213) presented mesomorphic properties, with a molecular arrangement of the calixarene bowlic column and the Schiff-base cholesterol units as ancillary lateral columns. However, upon complexation with AgClO4, no mesophase was observed, suggesting that the mesomorphic properties of compounds 213 could be tuned by ion-complexation behavior [53]. Recently, novel columnar liquid crystals (LCs) based on symmetric hairpin-shaped cholesterol tetramers with Schiff-base spacers were prepared, and their mesomorphic behavior was investigated by different techniques. The new molecules were synthesized through the reaction between a cholesterol dimer, 214, and phenylenediamines or bis-hydrazides acting as hydrogen-bond-containing spacers, affording compounds 215 and 216 (Scheme 52) [54]. The results indicated good hexagonal columnar liquid crystalline behavior, with three molecules arranged as a disc of the columnar hexagonal state. In addition, the symmetric cholesterol tetramers with rigid cores or hydrogen-bonding cores strongly favored the formation of a columnar mesophase [54].
The preparation of a series of tetramers (218) based on azobenzene decorated with cholesterol units was also recently reported. These oligomeric compounds, bearing different alkyl spacers, were synthesized by reacting azobenzene tetracarboxylic acid (217) with cholesteryl derivatives (200) (Scheme 53) [55]. Among the synthesized compounds, the oligomers with n = 1, 5, and 8 exhibited an enantiotropic N* phase, while the other oligomers showed a monotropic N* phase upon cooling from the isotropic state. Interestingly, the oligomers with n = 1 and 8 formed spherulites in their crystalline state, dispersed over hundreds of micrometers in the case of the oligomer with n = 1. Moreover, both oligomers (n = 1 and 8) underwent photoisomerization in dilute solutions and Langmuir monolayers, in contrast to the liquid crystalline state, in which no photoisomerization was observed [55].
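The enantiotropic/monotropic labels used here have a simple operational criterion: a mesophase whose clearing point lies above the melting point is thermodynamically stable and appears on both heating and cooling (enantiotropic), whereas one whose clearing point lies below the melting point is seen only on supercooling (monotropic). A minimal sketch with hypothetical temperatures follows.

```python
# Minimal sketch: enantiotropic vs. monotropic classification from
# melting and clearing points. Temperatures are hypothetical, not data
# from [55].

def classify_mesophase(t_melt_c: float, t_clear_c: float) -> str:
    # Clearing above melting: stable window on heating and cooling.
    # Clearing below melting: phase only accessible by supercooling.
    return "enantiotropic" if t_clear_c > t_melt_c else "monotropic"

print(classify_mesophase(t_melt_c=118.0, t_clear_c=141.0))  # enantiotropic
print(classify_mesophase(t_melt_c=118.0, t_clear_c=103.0))  # monotropic
```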
Cholesterol-based nonconventional liquid crystals have been studied by Gupta and coworkers, who reported the synthesis of novel functional discotic oligomeric materials based on 3,4,9,10-tetrasubstituted perylene, one of which bore the cholesterol units of 220 (Scheme 54) [56]. The cholesterol derivative 220 was found to be a nonconventional LC at room temperature; however, a monotropic nematic (N*) phase was obtained on cooling. The authors also demonstrated that the combination of rod- and disc-like moieties sufficiently perturbed the molecular shape to yield calamitic mesophases. Additionally, this hybrid material showed interesting fluorescence emission properties, making it suitable for a range of optoelectronic applications [56].
Recently, the synthesis of perylene derivatives with two (223) or four (225) cholesterol units, at the bay positions or at both the bay and imide positions, respectively, was reported (Scheme 55). The authors addressed the influence of the number as well as the position of the cholesterol units on the mesomorphic and photophysical properties of these new liquid crystals [57]. They concluded that more cholesterol units significantly lowered the mesophase temperature, widened the mesophase temperature range, and increased the fluorescence. Furthermore, a longer spacer between the perylene and cholesterol units was found to be ideal for the mesomorphic properties as well as for enhancing the fluorescence of the compounds [57].
A year later, Chen et al. reported the synthesis of three perylene-based liquid crystals bearing different rigid bay spacers (228). These new liquid crystals were synthesized from a perylene derivative (227) carrying six alkyl chains at the imide positions by coupling two phenyl- (biphenyl- or naphthyl-) bridged cholesterol units (226) at the bay positions (Scheme 56) [58]. Investigations of the mesomorphic properties of these perylene-based compounds (228) demonstrated that all derivatives exhibited hexagonal columnar liquid crystalline behavior, despite the functionalization of the bay positions with aromatic spacers. Derivatives with larger, rigid aromatic spacers presented higher phase transition temperatures and narrower mesophase temperature ranges. The authors also concluded that the larger, rigid aromatic groups gave stronger emission and higher fluorescence quantum yields. These results suggest that, by adjusting the structures of the spacers at the bay position, both the mesomorphic and the photophysical properties can be tuned depending on the intended purpose of the liquid crystal [58].
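Quantum yields such as those compared in [58] are usually obtained by the relative (comparative) method against a reference fluorophore of known yield, Φ = Φ_ref (I/I_ref)(A_ref/A)(n/n_ref)². The sketch below implements this standard formula with placeholder numbers only.

```python
# Minimal sketch: relative fluorescence quantum yield (comparative method),
# phi = phi_ref * (I / I_ref) * (A_ref / A) * (n / n_ref)**2.
# All inputs are hypothetical placeholders, not data from [58].

def quantum_yield(i_em, a_abs, n, i_ref, a_ref, n_ref, phi_ref):
    """I: integrated emission intensity; A: absorbance at the excitation
    wavelength (kept low to avoid inner-filter effects); n: refractive
    index of the solvent."""
    return phi_ref * (i_em / i_ref) * (a_ref / a_abs) * (n / n_ref) ** 2

phi = quantum_yield(i_em=4.2e6, a_abs=0.05, n=1.42,
                    i_ref=5.0e6, a_ref=0.05, n_ref=1.33, phi_ref=0.54)
print(f"relative quantum yield ~ {phi:.2f}")
```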
Aiming to explore the potentially interesting mesomorphic properties of liquid crystals, Champagne and coworkers reported the synthesis of a liquid crystal dimer (233) and two of its monomer analogues (231) based on cholesterol mesogens [59]. The synthesis relied on the CuAAC reaction of a cholesteryl azide (229) with α,ω-di-O-propargyl-TEG (232) and O-monopropargylated-TEG (230) linkers, as depicted in Scheme 57. Several experimental studies showed that both the monomers (231) and the dimer (233) formed a smectic A liquid crystalline phase with comparable layer spacing. The authors explained this feature by the formation of a bilayer structure in the case of the monomers (231) and a monolayer structure for the dimer (233). Concerning the thermal stability of the self-assembled phases, the clearing temperature increased by around 10 °C from 231 (R = Ac) to 231 (R = H). Molecular modeling studies rationalized the features of the liquid crystalline phases based on the different chemical functional groups present in each class of materials, which allow different kinds of intermolecular interactions, such as dipole-dipole interactions, hydrogen bonding, and London dispersion forces, that greatly affected the self-assembly behavior of the three cholesterol derivatives [59]. In a related study, cholesterol-based chiral compounds (234) and the corresponding polycarbonate block copolymers (236) were prepared and their mesomorphic behavior examined (Scheme 58) [60]. The block copolymers showed an enantiotropic SmA mesophase, except for mPEG43-b-P(MCC-C1)51 (236) (n = 1), and the mesophase temperature range of the copolymers (236) was greater than that of the corresponding chiral compounds (234). It was also concluded that a longer spacer tended to stabilize the mesophase more than a shorter one, giving a wide mesophase range. These new cholesterol-based polycarbonate copolymers with longer spacers exhibited mesophase states below body temperature, which makes them good candidates for drug delivery applications [60].
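Layer spacings such as the "comparable layer spacing" noted above for the monomers 231 and the dimer 233 are typically extracted from the first-order small-angle X-ray reflection via Bragg's law; a one-function sketch with placeholder numbers follows.

```python
# Minimal sketch: smectic layer spacing from a Bragg reflection,
# n * lambda = 2 * d * sin(theta). The 2-theta value is a placeholder,
# not data from [59]. Cu K-alpha wavelength: 1.5406 angstrom.
import math

def d_spacing(two_theta_deg: float, wavelength_a: float = 1.5406,
              order: int = 1) -> float:
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_a / (2.0 * math.sin(theta))

print(f"d ~ {d_spacing(2.1):.1f} angstrom")  # typical small-angle region
```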
The synthesis of the cholesterol-triazine-BODIPY trimers 239 and 240 with one or two cholesterol units involved the reaction of cyanuric chloride-substituted BODIPY derivative 238 with an esterified cholesterol derivative (237), using different reaction conditions (Scheme 59) [61].
The trimer bearing one cholesterol unit, 239, showed nematic liquid crystal behavior, while the two-cholesterol trimer 240 was a hexagonal columnar liquid crystal. The photophysical properties of both compounds were also addressed, and the authors concluded that both derivatives presented good fluorescence intensities, with higher quantum yields and larger Stokes shifts compared to their precursors. The authors claimed that this study reported the first examples of cholesterol-BODIPY liquid crystals, in which the introduction of a cholesterol unit was favorable both for liquid crystalline behavior and for improved fluorescence [61].
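Stokes shifts like those in [61] are conventionally reported in wavenumbers from the absorption and emission maxima; the conversion is shown below with placeholder wavelengths.

```python
# Minimal sketch: Stokes shift (cm^-1) from absorption/emission maxima
# given in nm. Wavelengths are placeholders, not data from [61].

def stokes_shift_cm1(lambda_abs_nm: float, lambda_em_nm: float) -> float:
    return 1.0e7 * (1.0 / lambda_abs_nm - 1.0 / lambda_em_nm)

print(f"{stokes_shift_cm1(503.0, 542.0):.0f} cm^-1")
```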
The synthesis of two series of λ-shaped dicholesteryl-based conjugates, 242 and 245, containing a Schiff-base core linking two cholesteryl ester units, was reported. The first series of compounds was prepared by a Williamson etherification between the Schiff base (241) and cholesteryl bromoalkanoates (200) to afford the XSB-n-Chol (n = 4-10) derivatives (242) (Scheme 60) [62]. The synthesis of SB-10-Chol (244) was slightly different and involved the alkylation of 2,4-dihydroxybenzaldehyde (243) by cholesteryl bromodecanoate (200), followed by condensation with 4-aminophenol to afford OHSB-10-Chol (245) (Scheme 60) [62]. The study of the liquid crystal properties of the conjugates 242 and 245 indicated that the compounds show enantiotropic chiral nematic behavior, with the exception of the short conjugates, which formed an additional SmA phase along with a narrow intermediary TGB phase. All compounds showed mesogenic properties, as they could form oily streaks, fan-shaped filaments, and Grandjean textures in the liquid crystalline state. The authors also found that the long-spacer compounds vitrified to form stable cholesteric glassy states instead of crystallizing. Furthermore, the mesomorphic temperature range increased with the length of the spacer (from n = 4 to n = 10), showing an odd-even alternation in the clearing and transition temperatures [62]. A series of cholesterol-based selenides and diselenides (249, 251, and 253) was also prepared (Scheme 61) [63]. All synthesized compounds presented good thermal stability. Six of them showed liquid crystal properties: the selenide 251 and the alkyl diselenides 249 (n = 2) and 249 (n = 3) exhibited an SmC* mesophase, whereas the aryl diselenide 253, with higher structural rigidity, showed a chiral enantiotropic smectic A (SmA*) mesophase. Furthermore, all these new selenide-cholesterol compounds showed higher glutathione peroxidase-like activity than the standard ebselen, with 249 (n = 2) being the most active [63].
A series of glycosteroids (256), consisting of cholesterol and distinct glycosidic moieties, was synthesized by coupling propargyl or 1-S-propargyl D-glucose, D-galactose, or L-rhamnose (255) to the cholesterol scaffold 254 through a CuAAC reaction (Scheme 62) [64]. This study aimed to analyze whether the sugar structure, as well as the heteroatom linked to the anomeric position, had an impact on the liquid-crystalline properties of the glycosteroids (256). The mesomorphic temperature range found for the glycosteroids (256) was higher than that generally reported in the literature, but similar to that reported for other glycosteroids. All the studied glycosteroids (256) showed great phase stability compared to those already studied, and interestingly, the glycosteroid 256 (sugar = D-glucose; X = S) showed no decomposition even at 200 °C. These results offer new possibilities for the development of new high-temperature sensors or detectors [64].
Cholesterol-Based Gelators
Low molecular weight organic gelators (LMOGs) are small organic molecules that self-assemble in water or organic solvents, forming a 3D network that entraps the liquid phase and results in gel formation. In recent years, these compounds have attracted much attention because of their range of applications, for example as alternative biomaterials for drug delivery or tissue engineering [65,66]. New generations of steroidal low molecular mass gelators (LMGs) are usually designed through the assembly of various building units, such as a steroid derivative (S), a linker unit (L), and often an aromatic platform (A) around which the steroid units can be positioned through the linkers. The good gelation ability of steroidal LMGs has led to the development of a series of steroid-based gelators commonly classified as ALS, arranged in A(LS)2, A(LS)3, LS, or LS2 molecular types [65].
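The ALS nomenclature above is purely combinatorial: an optional aromatic platform A carrying one or more linker-steroid (LS) arms. The sketch below merely spells that composition out; it encodes no chemistry beyond the classification itself.

```python
# Minimal sketch: spelling out the ALS gelator nomenclature used above.
# A = aromatic platform, L = linker, S = steroid (e.g., cholesteryl) unit.

def als_label(has_aromatic_core: bool, n_arms: int) -> str:
    arm = "(LS)" if has_aromatic_core else "LS"
    core = "A" if has_aromatic_core else ""
    count = str(n_arms) if n_arms > 1 else ""
    return core + arm + count

for spec in [(True, 2), (True, 3), (False, 1), (False, 2)]:
    print(als_label(*spec))  # -> A(LS)2, A(LS)3, LS, LS2
```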
In 2014, an interesting study was reported involving the design of an uncommon class of cholesteryl-based triangular A(LS)3-type low molecular mass gelators and the exploration of their gelation and anion-sensing applications. The design strategy was based on placing three cholesteryl derivatives, attached via linker units, around melamine or benzene-1,3,5-tricarbonyl chloride as aromatic platform precursors. The synthesis of compounds 257 and 259 involved the reaction of cholesteryl chloroformate 7 with different amines in one- or two-step procedures (Scheme 63) [67].
This study also evaluated the gelation and self-assembly properties of this new class of compounds, comparing them to existing cholesteryl-based LMGs. The results indicated that the gelation and self-assembly properties of compounds 257 and 259 could be controlled by modifying the structural features of the A(LS)3-type molecule; on increasing the length of the linker units, the fibrous xerogel networks assembled into more porous fiber networks. Moreover, the authors found that compounds 257 and 259 could be used as selective sensors for F−, and that their selectivity could be enhanced by increasing the chain length of the linker units [67].
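Anion-sensing behavior of the kind reported for 257 and 259 is commonly quantified by fitting a spectroscopic titration to a 1:1 host-guest binding isotherm. The sketch below fits synthetic data (not measurements from [67]) and assumes the guest is in large excess, so its free and total concentrations coincide.

```python
# Minimal sketch: association constant K from a 1:1 binding isotherm,
# F([G]) = F0 + (Flim - F0) * K*[G] / (1 + K*[G]).
# Synthetic data; not measurements from [67].
import numpy as np
from scipy.optimize import curve_fit

def isotherm(g, f0, flim, k):
    return f0 + (flim - f0) * k * g / (1.0 + k * g)

g = np.array([0.0, 1e-4, 2e-4, 5e-4, 1e-3, 2e-3, 5e-3])  # [F-] / M
rng = np.random.default_rng(0)
f = isotherm(g, 1.0, 0.2, 2.0e3) + rng.normal(0.0, 0.01, g.size)

(f0, flim, k), _ = curve_fit(isotherm, g, f, p0=(1.0, 0.2, 1.0e3))
print(f"K ~ {k:.3g} M^-1")
```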
Two new cholesterol-based compounds (261) were also reported as fluoride-responsive organogels. Their design was based on the coupling of compounds 260, bearing azo units as the chromophore and a pyrazole group as the anion acceptor, with cholesteryl chloroformate 7 (Scheme 64) [68]. The authors observed that structural modifications on the benzyl core of compound 261 (R = H or NO2), hydrogen bonding, hydrophobic interactions, and π-π stacking interactions had considerable influence on the gel-sol transition properties. Moreover, they found that the gel was selectively fluoride-responsive among the tested anions, expressing a gel-sol transition and red-purple color changes easily detected by the naked eye [68].
Following the purposes of the selective detection of F −, a new coumarin-based supramolecular gelator (267) was designed [69]. The reported compound 267 follows a simple architecture that bears a coumarin-appended 1,2,3-triazole coupled with cholesterol, synthesized in a six-step route as depicted in Scheme 65. The coumarin moiety acts as a fluorescence signaling unit, the 1,2,3-triazole as a linker and as an anion binding site, and cholesterol as a hydrophobic surface. The authors concluded that cooperative hydrogen bonding between the phenolic OH and the 1,2,3-triazole ring, as well as hydrophobic-hydrophobic interactions of the cholesteryl groups in compound 267, played a crucial role in the formation of an organogel. Furthermore, it was demonstrated that the compound 267 organogel was sensitive to F − and HP2O7 3− detection by means of gel phase transformation as well as fluorimetrically, showing considerable changes in emission properties [69].
A novel cholesterol-based organogelator containing D-A (donor-acceptor) pairs (salicylaldehyde and naphthalimide units) (272) was synthesized [70]. The synthetic strategy relied on the introduction of the electron-rich salicylaldehyde group into a naphthalimide-based organogelator through a Schiff-base reaction (Scheme 66). This cholesterol-based organogelator (272) was found to form stable and chiral gels with different optical properties and morphologies in several organic solvents. An interesting feature of compound 272 was the change in the color and emission color of the organogel in benzene, which varied from yellow-green to red during the thermoreversible sol-gel transformation, demonstrating for the first time solvent-controlled multiple color emission achieved in a monocomponent gel system. This feature makes the organogel of 272 quite suitable for applications in optical switches, sensors, and smart materials [70].
The gelation properties of both bisamide 273 and bisamides with a cholesteryl unit attached (274) were evaluated. In aqueous DMSO, compound 274 (X = O) exhibited nongelation properties, while compound 274 (X = NH) produced a light yellow colored gel. This suggests that the heteroatom of the aromatic linker played a crucial role in gelation. The organogel formed by compound 274 (X = NH) revealed itself to be a good anion sensor, since the gel state was selectively ruptured into solution in the presence of F − and AcO − anions. Interestingly, the gel rupture induced by F − was recovered upon the addition of Fe 3+. This feature is very useful in the visual distinction of F − from AcO − anions [71].
A different kind of fluorescent organogelator based on cholesterol containing benzothiadiazole fluorophores 276 and 278 was designed and synthesized by Sun and coworkers (Scheme 68). The authors aimed to understand the role of hydrogen bonding and π-π interactions and to study the changes of fluorescent properties in the process of gelation of cholesterol-based π-conjugated organogels [72].
The authors studied three methods of gel preparation (heating-cooling process, ultrasonic treatment, and mixed solvents, at room temperature) and found that π-π and H-bonding interactions should be the key contributors in forming gels of 276, while in gel formations of 278, only π-π interactions seemed to matter. The obtained results suggest that these two multiple-stimuli responsive luminescent gels, 276 and 278, can be used as smart soft materials sensitive to temperature, solvent, ultrasound, and Hg 2+ [72].
Recently, Panja and Ghosh reported three related works involving cholesterol conjugates bearing three different moieties (dithioacetal 280, diaminomalononitrile 281, and diazine 282 functional groups) for sensing a series of ions such as Hg 2+, Cu 2+, Ag +, and Fe 3+ [73][74][75]. The three cholesterol conjugates were synthesized using the same three-step methodology, except for the final step, which involved the reaction of the intermediate benzaldehyde 279 with 1-dodecanethiol, diaminomalononitrile, and hydrazine to afford cholesterol conjugates 280, 281, and 282, respectively (Scheme 69). The cholesterol-dithioacetal conjugate 280 was used for the detection of Hg 2+ and incorporated two distinct components: i) a cholesterol motif to assist the self-assembly of the molecules through hydrophobic interaction; and ii) a thioacetal part that was used as the reaction-based recognition unit of the molecule [73].
The authors studied the sensing mechanism for Hg 2+ of the cholesterol-dithioacetal conjugate, realizing that the specific Hg 2+ -induced deprotection of the thioacetal functionality of 280 resulted in a sol-to-gel transition in DMF/H2O (1:1, v/v) through the formation of the precursor aldehyde 279. The authors also claimed that this was the first chemodosimeter that functions as a selective "naked-eye" Hg 2+ detector by showing in situ sol-to-gel conversion [73].
The cholesterol-diaminomalononitrile conjugate 281 was found to form supramolecular gels in dimethylformamide (DMF)/H2O and 1,2-dichlorobenzene, as confirmed by rheological studies. In addition, the authors verified that the gel formed in DMF/H2O was more stable and robust than the one obtained from 1,2-dichlorobenzene, due to strong intermolecular forces among the gelators in DMF/H2O. Furthermore, it was also established that cholesterol-diaminomalononitrile 281 gel was selective for visual recognition of Hg 2+ and Cu 2+ ions, and for sensing hydrazine based on the dosimetric interaction of the malononitrile motif with hydrazine [74].
Concerning the cholesterol-diazine conjugate 282, the authors demonstrated that it could form gels with Ag + and Fe 3+ ions in a CHCl3/CH3OH solvent mixture, using the diazine moiety as a metal ion binding site. The gelator 282 was able to distinguish Ag + from Fe 3+ with the aid of tetrabutylammonium chloride, tetrabutylammonium bromide or fluoride, and ammonium thiocyanate. Furthermore, the authors proved that there was no interference of Fe 2+ ions in the detection of Fe 3+ ions, in contrast to most chemosensors and gelators [75].
The effect of different spacer lengths, containing two, three, five, six, ten, or twelve carbon atoms, on the cholesterol-based azobenzene organogels 285 and 286 was investigated [76]. For this purpose, a series of seven azobenzene-cholesterol compounds was synthesized through esterification reactions of the cholesterol derivatives 283 (bearing different spacer lengths) with 4′-carboxy-4-methoxyazobenzene 284, carried out in the presence of N,N′-dicyclohexylcarbodiimide (DCC) and 4-dimethylaminopyridine (DMAP) in dichloromethane, as depicted in Scheme 70. Typical reversible trans-cis and cis-trans isomerization of the azobenzene units was observed upon UV-Vis irradiation, giving compounds 285 and 286 recoverable photoresponsive properties. Differential scanning calorimetry studies revealed that the spacer length plays a crucial role in the gelation phenomenon. Interestingly, among the tested compounds, only 285 (n = 6) could form a gel, and only in specific solvents such as ethanol, isopropanol, and butan-1-ol. Furthermore, the authors concluded that the solvents, intermolecular H-bonding, and van der Waals interactions affected the aggregation mode and morphology of the gels [76].
In 2016, a study was reported on the liquid crystal (LC) and gelation-based self-assembly, as well as the photoresponsive behavior, of a new unsymmetrical azobenzene-cholesterol based dimesogen, 288 [77]. This molecule assembles a CN group at one end and a cholesterol carbonate, fixed through an oxyethylene spacer, at the opposite end of the azobenzene unit (Scheme 71). Compound 288 showed the capacity to act as a chiral mesogenic dye dopant, inducing a high helical-twisting chiral phase in the common nematic phase of 5CB. In addition, the gels of 288 formed in organic solvents exhibited multiple stimuli-responsive behaviors upon exposure to environmental stimuli such as temperature, light, and shear forces. The photoresponsive character was also proven in the solution, LC, and gel states. These properties give compound 288 potential applications in displays, as a chiral mesogenic dye dopant, in photochemical molecular switches, and as a new versatile LMG [77].
A new series of liquid crystal gelators (290) with photoresponsive and aggregation-induced emission (AIE) properties was synthesized by connecting cholesterol derivatives 200 and tetraphenylethylene (an important AIEgen) to a central azobenzene moiety through esterification reactions (Scheme 72) [78]. The authors included variations in the alkyl chain spacer (n = 0, 1, 3, 5) to adjust the distance between cholesterol and azobenzene, while a fixed alkyl chain was placed between azobenzene and tetraphenylethylene (Scheme 72). The liquid crystal properties of compounds in 290 were assessed, and the results showed that all compounds exhibited, in the pure state, smectic A LC phases, enantiotropic for 290 (n = 0) and (n = 3), but monotropic for 290 (n = 1) and (n = 5). The gelation studies demonstrated that 290 (n = 3) and (n = 5) form stable gels in appropriate solvents or solvent mixtures, while 290 (n = 0) and (n = 1) cannot form gels in a range of solvents. An interesting feature of both 290 (n = 3) and (n = 5) LMOGs is that they have significantly enhanced emissions induced by molecular self-assembly into fibril or ribbon-like nanostructures [78].

Three cholesteryl-based gelators (292, 294, and 296) were synthesized by the reaction of the corresponding acid chlorides 291, 293, and 295 with cholesterol 28 in the presence of DMAP (Scheme 73) [79]. The study of their gelation properties in various organic solvents indicated that the number and position of the substituents in the cholesteryl moieties attached to a benzene ring had a great influence on the gelation as well as on the aggregation behaviors of the A(LS)2- and A(LS)3-type LMOGs. Among these three gelators, 294 and 296 showed efficient gelation abilities even without hydrogen bond linkers, in contrast with the meta-substituted 292, which did not gelate in any tested solvent [79].
Recently, the synthesis of a new pillar[6]arene-functionalized cholesterol derivative (298), acting as an LMG, was reported in the literature [80]. In this new compound, the host-guest pillar[6]arene was linked to a cholesterol unit by a long alkyl chain, as well as amide groups (Scheme 74). This new pillar[6]arene-cholesterol 298 was found to form an organogel in cyclohexane/hexan-1-ol (10:1, v/v), which was reversibly responsive to temperature, shear stress, and, partially, to the host-guest interaction introduced by the ferrocenyl iminium derivative 299. Upon addition of the ferrocenyl iminium derivative 299, the organogel could be tuned into a solution and tuned back into the organogel upon addition of the per-butylated pillar[6]arene 300. This interesting feature could be explained on the basis of the host-guest interactions of individual 300 with the cationic guest 299, which bound with the pillar[6]arene-cholesterol gelator 298 [80].
The authors observed that the hydrogel formation was based on the host-guest linkage between β-cyclodextrin (β-CD) and cholesterol, and that the viscoelastic behavior depended on the polymer concentration as well as the β-CD/Chol molar ratio. These hydrogels showed very interesting self-healing capabilities, good cytocompatibility, excellent flexibility, and quick colorant diffusion. With all these features, it is anticipated that these self-healable hydrogels may have important applications in tissue engineering [81].
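Since the viscoelastic behavior of these hydrogels reportedly tracked the β-CD/Chol molar ratio, the relevant bookkeeping step is converting weighed masses of the two components into a molar ratio. A minimal sketch follows; the molecular weights are standard values for β-cyclodextrin and cholesterol, while the example masses are hypothetical.

```python
# Beta-cyclodextrin : cholesterol molar ratio from weighed masses.
# Example masses are hypothetical; molecular weights are standard values.

MW_BETA_CD = 1134.98     # g/mol, beta-cyclodextrin (C42H70O35)
MW_CHOLESTEROL = 386.65  # g/mol, cholesterol (C27H46O)

def bcd_chol_molar_ratio(mass_bcd_mg: float, mass_chol_mg: float) -> float:
    """Return the beta-CD : cholesterol molar ratio from masses in mg."""
    return (mass_bcd_mg / MW_BETA_CD) / (mass_chol_mg / MW_CHOLESTEROL)

# e.g., 100 mg of beta-CD combined with 34 mg of cholesterol:
print(f"beta-CD/Chol = {bcd_chol_molar_ratio(100.0, 34.0):.2f} : 1")  # ~1.00 : 1
```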
Bioimaging Applications
Imaging techniques, particularly fluorescence imaging techniques, have become powerful tools for noninvasive visualization of biological processes in real time with high spatial resolution. Methods to "see into the body" or "see into cells" are essential for the diagnosis and treatment of a disease, as well as for research into the basic processes of life. Therefore, bioimaging techniques to visualize physiological or pathophysiological changes in the body and cells have become increasingly important in biomedical sciences [82].
The synthesis of a series of BODIPY-based fluorogenic dyes was reported, involving the CuAAC reaction of a nonfluorescent BODIPY-azide, 304, with a series of nonfluorescent alkyne molecules, including O-propargylated cholesterol 20 (Scheme 75) [83]. The most interesting molecule was the cholesterol-linked dye 305, which presented red-shifted absorption and emission wavelengths and displayed its preferential accumulation at the intracellular membranes over the plasma membrane of HeLa cells. This result offers potential applications of the cholesterol-BODIPY conjugate 305 in the bioimaging of cholesterol trafficking in living cells and organisms [83].
Byrd and coworkers reported the synthesis of a crosslinker containing two independent cholesterol units, with or without a photoaffinity label, guided by computational methods based on a model for the transfer of a cholesterol molecule between two proteins, NPC1 and NPC2 [84]. The synthesis of crosslinker 314 (without a photoaffinity label) involved several steps, especially because of the demanding six-step synthetic route of one of the portions that constitute the crosslinker 314 (Scheme 76) [84].

Another cholesterol-based crosslinker (322), with a photoaffinity label, was also synthesized (Scheme 78) [84]. The synthesis of such a compound involved two stages: i) the preparation of an appropriate carboxylic acid cholesterol moiety (318) (Scheme 77) [84]; and ii) the linkage between compounds 318 and 312 (previously synthesized) (Scheme 76) [84]. The authors claimed that with the appropriate connection of the two cholesterol molecules 314 and 322, both proteins (NPC1 and NPC2) are simultaneously occupied in a manner that stabilizes the protein-protein interaction, allowing detailed structural analysis of the resulting complex. Furthermore, the introduction of a photoaffinity label in one of the cholesterol moieties, 322, should allow the covalent attachment of one of the units into its respective protein-binding pocket. The compounds synthesized in this work may be interesting tools for studying the transfer of cholesterol between cholesterol-binding proteins [84].
Two cholesterol-based fluorescent lipids, 326 and 329, were synthesized using nitrobenzoxadiazole (NBD) or rhodamine B, respectively, linked by an ether alkyl chain (Scheme 79). Compounds 326 and 329 were incorporated into liposome formulations, aiming to create and validate their use as fluorescent probes for lipoplex tracking, without interfering with green fluorescent protein (GFP) [85]. The authors concluded that both compounds 326 and 329 did not interfere with the expression of the GFP plasmid, obtaining live cell images without any interference. Furthermore, microscopic observations clearly showed that these fluorescent lipids had minimal self-quenching and photobleaching effects. The results indicated that the synthesized compounds 326 and 329 may be considered for the development of fluorescent probes to trace the intracellular trafficking of cholesterol-derived cationic liposomes [85].

Reibel et al. prepared radiolabeled 18F polymer compounds based on linear PEG 332 and a novel linear-hyperbranched amphiphilic polyglycerol (hbPG) 334, using cholesterol 28 as a lipid anchor, via CuAAC chemistry of the propargylated compounds 330 and 333 with radiolabeled 18F species.

The authors carefully studied the absorption and emission properties of both cholesterol conjugates 338 and 340 and their parent chromophores 337 and 339. An ICT (intramolecular charge transfer) behavior was observed for the diene compounds 339 and 340, whereas for the stilbene compounds 337 and 338 a remarkable AIEE (aggregation-induced emission enhancement) behavior was detected. The lack of AIEE characteristics in the dienes may be explained by competing nonradiative losses due to double bond flexibility. Nevertheless, the most interesting conclusion of the optical properties study was that the random aggregates formed by stilbene 337 in aqueous media became highly ordered upon cholesterol conjugation (338). Furthermore, the interaction with sodium cholate stimulated the formation of self-assembled structures of nanoscale dimensions, making these conjugates the starting point for the development of several bioimaging probes [87].
In 2016, Wercholuk and coworkers synthesized a fluorescent-labeled cholesterol molecule (342) by treating cholesteryl chloroformate 7 with 4-amino-1,8-naphthalimides (341) (Scheme 82) [88]. The authors expected that such conjugates might serve one of two roles, depending on whether the toxicity of the fluorophore was retained in the conjugates: as reporters for following in vivo uptake or catabolism of cholesterol, or as "Trojan horse" antibiotics. The results pointed out that the new compounds (342) emitted blue light in nonpolar solvents, and their lipid portion incorporated into liposomal membrane bilayers quickly, leaving the fluorophore exposed to the external aqueous environment. Compounds in 342 were incubated with Mycobacterium smegmatis mc2 155, which displayed stable integration of the fluorescent-labeled cholesterols into bacterial membranes in vivo. Although fluorophores are toxic to prokaryotic cells, the new cholesterol conjugates (342) are not, and therefore they could be considered for the evaluation of cholesterol uptake in prokaryotic organisms [88].

In the same year, Bernhard et al. reported an interesting paper in which they studied two strategies for the bioconjugation of bombesin (BBN), a well-known peptide, the receptor of which is overexpressed at the surface of tumor cells and which has been conjugated in several probes [89]. They used subphthalocyanines (SubPcs), which are interesting probes for optical imaging. One of these strategies involved entrapping a SubPc into a liposome and subsequently grafting BBN to the SubPc-containing liposome to afford a biovectorized liposome. The synthesis of the cholesterol derivatives 346 and 347 used in their work was achieved by the reaction of dimethylaminopropyne 344 or 3-azidodimethylpropylamine 345 with cholesterol bromo ester 343 to afford the cholesteryl-ammonium species 346 (alkynyl) and 347 (azide), respectively (Scheme 83) [89].

Once the cholesteryl-ammonium species 346 and 347 were prepared, the pre-bioconjugation strategy started from grafting the biomolecule to one of the liposome's components (i.e., the cholesterol additive) prior to the preparation of the liposome, to afford the BBN-cholesterol conjugates 348 and 349. The conjugation of BBN-azide with cholesteryl-alkyne 346 (i.e., pre-functionalization by copper-catalyzed click chemistry) was carried out in the presence of copper sulfate with sodium ascorbate as the reducing agent (Scheme 84) [89]. Alternatively, BBN-bicyclononyne and cholesteryl-azide 347 were reacted without the Cu catalyst to afford conjugate 349 (Scheme 84) [89]. The strategy employing liposomes containing graftable cholesterol derivatives revealed itself as the more suitable approach for addressing the stability of SubPcs, and was achieved by copper-free click chemistry on the outer face of the liposome. This study demonstrated that both azido- and alkynyl-liposomes are good entry points for a bioconjugation or biovectorization approach (on the outer face of the liposome), which offers a second chance for fluorophores with no reactive functional group available on their backbone, a way of imitating bioconjugation with a biomolecule (i.e., an indirect approach offered to achieve future site-specific targeting of tumors) [89].
Among the tested compounds, the cholesten DAINs 355 and 356 increased their fluorescence intensity in viscous solvents such as triglycerides. In addition, compound 355 showed good cholesterol-responsive emission, which increased linearly with the amount of cholesterol in the lipid bilayer. The responsiveness displayed by cholesten DAIN 355 to cholesterol was improved relative to the known viscosity probes 9-(2,2-dicyanovinyl)julolidine (DCVJ) and Laurdan [90].
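Because the emission of 355 reportedly increased linearly with the cholesterol content of the lipid bilayer, a least-squares calibration line is the natural way to turn measured intensities into cholesterol estimates. The sketch below fits and inverts such a line; the calibration points (mol% cholesterol versus arbitrary intensity units) are made up for illustration and do not come from the cited study.

```python
import numpy as np

# Hypothetical calibration data: emission intensity (a.u.) vs cholesterol (mol%)
chol_mol_percent = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
intensity_au = np.array([102.0, 151.0, 198.0, 252.0, 301.0])

# Fit intensity = slope * mol% + intercept by least squares
slope, intercept = np.polyfit(chol_mol_percent, intensity_au, deg=1)

def estimate_cholesterol(measured_intensity: float) -> float:
    """Invert the calibration line to estimate mol% cholesterol."""
    return (measured_intensity - intercept) / slope

print(f"slope = {slope:.2f} a.u. per mol%")
print(f"I = 225 a.u. -> {estimate_cholesterol(225.0):.1f} mol% cholesterol")
```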
Synthetic Applications
The regio- and stereoselective formation of O-glycosidic bonds between carbohydrates and steroids is still a demanding process, despite the considerable progress in carbohydrate chemistry in recent years. The direct electrochemical glycosylation of steroids is an alternative; however, it has several drawbacks. In attempting to solve the problem, Tomkiel et al. screened several derivatives of cholesterol as sterol donors in electrochemical reactions with sugar alcohols [91]. Following this work, the same authors reported in 2015 the use of 3α,5α-cyclocholestan-6β-yl alkyl and aryl ethers (364) as cholesteryl donors in the electrochemical synthesis of glycoconjugates (363) (Scheme 87) [92]. The reaction worked well for all the tested compounds, but the best yields were achieved for the ethyl, benzyl, phenyl, and tert-butyldimethylsilyl (TBDMS) ethers (51%, 50%, 58%, and 52%, respectively). Unfortunately, an isomerization side reaction was observed for the less reactive cholesteryl esters, affording the compounds in 365 (Scheme 87) [92].

To develop step-economy syntheses of cholesteryl glycosides, Davis and coworkers reported a methodology for the synthesis of α-D-cholesteryl glycosides 369 and 372, using a one-pot per-O-trimethylsilyl glycosyl iodide glycosylation (Scheme 88) [93]. The methodology relied first on the generation of a glucosyl or galactosyl iodide through the reaction of per-O-TMS glucoside 366 or 370 with iodotrimethylsilane (TMSI), which was directly cannulated into a solution of cholesterol, tetrabutylammonium iodide (TBAI), and N,N-diisopropylethylamine (DIPEA), and the mixture was stirred for 2 days at room temperature. After that, the product was treated with methanol and Dowex-50WX8-200 acidic resin to remove the silyl protecting groups, affording compounds 367 and 371 (Scheme 88) [93]. These glycosides were subsequently esterified using regioselective enzymatic acylation of the 6-hydroxy group with tetradecanoyl vinyl ester 368 (Scheme 88) [93]. This methodology, involving the glycosylation of cholesterol followed by enzymatic regioselective acylation, allowed expansion of the acylated α-cholesteryl glycoside inventory to include galactose analogues. The glycosylation of per-O-silylated glucose provided better α-selectivity (39:1) than past syntheses (8:1 α-selectivity) and higher glycosylation yields due to the armed nature of per-O-silyl donors [93].
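To put the quoted anomeric selectivities on a common footing, an α:β ratio converts directly into the percentage of α anomer in the product mixture; the 39:1 and 8:1 figures below are the ones reported above.

```python
# Convert an alpha:beta anomeric ratio into percent alpha anomer.
def percent_alpha(alpha: float, beta: float) -> float:
    return 100.0 * alpha / (alpha + beta)

print(f"39:1 -> {percent_alpha(39, 1):.1f}% alpha")  # 97.5% alpha
print(f" 8:1 -> {percent_alpha(8, 1):.1f}% alpha")   # 88.9% alpha
```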
Mao and coworkers developed a novel glycosyl coupling reaction, involving a photoinduced direct activation mechanism of thioglycosides (373) and subsequent O-glycosylation in the absence of a photosensitizer [94]. In their studies, the authors used several sugars, amino acids, and cholesterol 28 (75%) as substrates (Scheme 89). The authors showed that the activation of thioglycosides upon UV irradiation, followed by oxidation promoted by Cu(OTf)2, led to the in situ formation of species that could undergo glycosylation to afford glycosides without the need for a photosensitizer. The proposed mechanism involved i) homolytic cleavage of a C-S bond to generate a glycosyl radical and ii) oxidation to an oxacarbenium ion promoted by Cu(OTf)2, followed by sequential O-glycosylation [94].

In 2015, Davis and coworkers reported the synthesis of cholesteryl-α-D-lactoside 378 via the generation and trapping of the stable β-lactosyl iodide 376. The iodide derivative 376 was prepared quantitatively, under non-in situ anomerization and metal-free conditions, by reacting commercially available β-per-O-acetylated lactose 375 with trimethylsilyl iodide [95]. The introduction of cholesterol occurred under microwave conditions to afford the corresponding glycoconjugate 377 in 59% yield (Scheme 90). Cholesterol glycoconjugate 377 was further deacetylated using sodium methoxide to afford cholesteryl α-D-lactoside 378 in 88% yield (Scheme 90). This glycosylation method can be employed with sterically demanding nucleophiles such as cholesterol and has potential applications in accessing structurally diverse cholesteryl glycoside analogs [95].
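Over a linear synthetic sequence, step yields multiply, so the two-step route to 378 (59% glycosylation followed by 88% deacetylation) is summarized by a single overall figure. A minimal sketch:

```python
from math import prod

def overall_yield(step_yields_percent: list) -> float:
    """Overall yield (%) of a linear sequence: product of fractional step yields."""
    return 100.0 * prod(y / 100.0 for y in step_yields_percent)

# Glycosylation of cholesterol (59%) followed by deacetylation (88%):
print(f"overall: {overall_yield([59.0, 88.0]):.1f}%")  # ~51.9%
```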
In 2015, Davis and coworkers reported the synthesis of cholesteryl-α-D-lactoside 378 via generation and trapping of the stable β-lactosyl iodide 376. The iodide derivative 376 was prepared quantitatively under non-in situ anomerization and metal-free conditions by reacting commercially available β-per-O-acetylated lactose 375 with trimethylsilyl iodide [95]. The introduction of cholesterol occurred under microwave conditions to afford the corresponding glycoconjugate 377 in 59% yield (Scheme 90). Cholesterol glycoconjugate 377 was further deacetylated using sodium methoxide to afford cholesteryl α-D-lactoside 378 in 88% yield (Scheme 90). This glycosylation method can be employed on sterically demanding nucleophiles such as cholesterol and has potential applications in accessing structurally diverse cholesteryl glycoside analogs [95].

A new efficient method for the synthesis of cholesteryl glucosides starting from sucrose 379 was recently developed [96]. This method lays down a five-step synthetic route that involves the initial protection of the hydroxy groups of disaccharide 379; upon acidic hydrolysis at its anomeric center, the pyranosyl moiety 381 is converted into the trichloroacetimidate derivative 383 (Scheme 91).

Scheme 91. Synthesis of cholesteryl glucoside starting from sucrose. Reagents and conditions: a) BnBr, NaH, DMF, rt, 4.5 h; b) conc. HCl, acetone, reflux, 1.5 h; c) trichloroacetonitrile, NaH, CH2Cl2, rt, 4 h; d) TMSOTf, 4 Å molecular sieves, CH2Cl2, rt, 1.5 h; e) Pd(OH)2, EtOH/cyclohexene (2:1), reflux.
The final two steps rely on the formation of the glycosidic bond to cholesterol 28 followed by the removal of the protecting groups, affording the desired cholesteryl glucoside 384 (Scheme 91). The authors claimed that the major advantage of this strategy was the use of the readily available and cheap sucrose 379 as starting material. In addition, the methodology proved to be fast, cost-saving, and high-yielding, representing a competitive preparation method for these natural compounds [96].
In 2014, Algay and coworkers extensively explored the versatility of nitrile oxide alkyne cycloaddition (NOAC) chemistry for the formation of cholesterol conjugates anchored by way of a polar, aromatic, metabolically stable isoxazole nucleus [97]. The first series of compounds produced in this paper involved i) the microwave-assisted formation of propargyl ethers (386) in 62%-70% yield (Scheme 92a), and ii) the reaction of cholesterol propargyl ethers (386) with phenyl nitrile oxide (generated in situ from benzaldehyde oxime upon exposure to an ethanolic solution of chloramine-T) (Scheme 92b). This last reaction was carried out at room temperature or under microwave heating, depending on the length of the spacer between the bulky lipid and the reacting alkyne, affording isoxazoles (387) in fair to excellent yields (35%-91%) [97]. The authors extended this reaction further to prepare biologically relevant cholesterol fluorescent probes, such as steroid-coumarin (391) (75%) and steroid-azobenzene conjugates (389) (56%) (Scheme 92). Long-chain hydrophilic linkers are known to be very attractive for bioconjugation; therefore, the authors also synthesized three new ether-linked isoxazole-cholesterol conjugates (396) in 29%-58% yield (Scheme 92) [97].
Another series of isoxazole-cholesterol conjugates (401) was also prepared, starting from cholesterol chloroformate 7 and bearing an amidocarbamate linker, following the four-step synthetic route depicted in Scheme 93 [97]. The nontrivial synthesis of aryl ethers of natural alcohols drove the authors to test NOAC chemistry in the assembly of aryl ether cholesterol conjugates [97]. Therefore, isoxazole-linked aryl cholesterol ether 404 was prepared from the aldehyde-functionalized aryl ether 402 through subsequent oximation and cycloaddition reactions, as depicted in Scheme 94.
Finally, the authors used the potential of NOAC chemistry to prepare a steroidal glycoconjugate, 407, and the selective tethering of one or two cholesterol units, 409 and 410, respectively, to a thymidine skeleton was demonstrated by trapping of the same dipole by 5′-protected mono- or bis-propargylated thymidines (Scheme 95) [97].
In 2016, Alarcón-Manjarrez and coworkers reported the synthesis of two dimeric steroidal terephthalates, 415 and 416, from epimeric 4,5-seco-cholest-3-yn-5-ols 413 and 414, using a five-step synthetic route with cholesterol 28 as starting material [98]. The synthetic route first involved the Oppenauer oxidation of cholesterol 28, followed by epoxidation, to afford a mixture of epoxides (411) (α:β = 1:4) (Scheme 96). Then, an Eschenmoser-Tanabe fragmentation followed by carbonyl group reduction provided the epimeric alkynols 413 and 414 in a 1:2 ratio (Scheme 96). Finally, the treatment of each of the epimeric alkynols 413 and 414 with terephthaloyl chloride led to the symmetrical axial and equatorial dimers 415 and 416, respectively (Scheme 96) [98]. The authors carried out crystallographic analysis of the compounds and concluded that the facial hydrophobicity of the steroidal skeletons had a crucial influence on the crystal packing, in which the dimeric molecules were forced to accommodate these fragments with only a few hydrogen-bonding interactions. This feature originated a cisoid conformation for 415 and a linear conformation for 416 [98].

Shibuya et al. reported in 2016 the synthesis of (24S)-hydroxycholesterol (24S-OHChol) esters, which are involved in neuronal cell death, through catalysis with acyl-CoA:cholesterol acyltransferase-1 (ACAT-1) [99]. The authors studied the esterification of (24S)-OHChol 417 with cis-oleoyl chloride under basic conditions and obtained mono-oleates 418 and 419 and bis-oleate 420 in 39%, 9%, and 20% yields, respectively (Scheme 97). The protection of the (24S)-OH with a trifluoroacetyl group was also attempted, affording mono-trifluoroacetates 421 and 422 in 33% and 14% yields, respectively, and bis-trifluoroacetate 423 in 21% yield (Scheme 97) [99]. The authors took advantage of mono-trifluoroacetate 421 to prepare the stearoyl and palmitoyl esters 427 and 428 in 68% and 75% yields, respectively, as depicted in Scheme 98 [99]. Finally, the authors also reported the use of esters of unsaturated long-chain fatty acids, such as linoleic (LA), arachidonic (AA), and docosahexaenoic (DHA) acids, to react with cholesterol derivative 422 in order to prepare the linoleate 430, arachidonoate 431, and docosahexaenoate 432 esters, in 52%, 74%, and 66% yields, respectively, in a two-step synthetic route, as depicted in Scheme 99 [99].
Cholesterol derivatives can also be used as starting materials for the synthesis of fused nitrogen heterocycles. This was the case for 4-cholesten-3-one 350, which was involved in the preparation of A-ring dehydropiperazine 443 (90% yield) through a microwave-assisted annulation reaction with ethylenediamine 442 in the presence of basic alumina (Scheme 101) [101].
The proposed mechanism should encompass the initial oxidation of the allylic protons of the conjugated ketone via an enolate intermediate to afford a diketo intermediate. Then, condensation with ethylenediamine followed by Michael addition and autoxidation reactions afforded the dehydropiperazine derivatives [101].
Recently, Ansari and coworkers reported an efficient and green synthetic method for the preparation of steroidal pyridines [102]. The methodology relied on the use of MgO NPs as a heterogeneous, mild, and reusable catalyst in a multicomponent one-pot protocol, taking advantage of microwave irradiation as an alternative heating source. The series of substituted fused pyridines (444) was obtained in 80%-89% yield from the reaction of steroidal ketones (164) with malononitrile/methyl cyanoacetate, benzaldehyde, and ammonium acetate in ethanol, using MgO NPs as catalyst (Scheme 102) [102]. One of the key mechanistic steps in this kind of multicomponent reaction is the standard Knoevenagel condensation of benzaldehyde with malononitrile/methyl cyanoacetate. The effect of the MgO NPs can be rationalized on this basis, since they are known as a highly effective heterogeneous base catalyst for Michael addition and Knoevenagel condensation reactions, with Mg2+ (Lewis acid) and O2− (Lewis base) sites along with various cationic and anionic vacancies in the lattice [102].
A related microwave-assisted protocol afforded cholesterol-fused pyrimidines from the reaction of 2-hydroxymethylene-3-ketosteroids with benzaldehydes and ammonium acetate on silica gel (Scheme 103) [103].

Scheme 103. Synthesis of cholesterol-fused pyrimidines. Reagents and conditions: a) NH4OAc, silica gel (60-120 mesh), MW, 120 °C, 6 min.

The authors' mechanism was based on: i) the microwave-assisted reaction of ammonia (released from the decomposition of ammonium acetate) with the 2-hydroxymethylene-3-ketosteroid to afford a β-aminoketoimine intermediate; ii) its condensation with benzaldehydes to afford a diamine intermediate; and iii) cyclization and subsequent auto-oxidation to give the cholesterol-fused pyrimidines [103].
In 2015, Schulze and coworkers developed a new method for the synthesis of model asphaltene compounds. The reported methodology was based on a multicomponent cyclocondensation reaction of 2-aminoanthracene 452 with aromatic aldehydes and 5-α-cholestan-3-one 448 (Scheme 105) [105]. The authors found that the actual catalyst for this reaction was in fact hydriodic acid, which is formed in situ from the reaction of iodine with water; by carrying out the reaction under anhydrous conditions, it was proven that iodine itself did not promote the reaction, as generally assumed. Using this methodology, the authors prepared a library of optically active steroidal naphthoquinolines (453) in acceptable yields (40%-53%) [105].
Miscellaneous
The design of (supra)molecular switches and machines has a key feature that relates to the control of mechanical motions at the molecular level. In this field, rotaxanes have attracted much attention because they offer the possibility of restricting the freedom of motion to some well-defined pathways, such as the translational motion of a rotaxane's ring along its axle in a shuttling manner. The synthesis of a novel nonsymmetrical bistable pH-sensitive rotaxane with a cholesterol stopper at one end and a tetraphenylmethane group at the other end (457) has been reported [106]. The synthesis of both terminal ends was challenging, and therefore we only describe here the final step, which consisted of joining the two axle parts of the nonsymmetrical rotaxane, the alkyne 454 and the azide 455, through CuAAC chemistry, affording compound 456 (Scheme 106). The formation of the pH-sensitive bistable rotaxane 457 was achieved by methylation of the triazole ring using methyl iodide (Scheme 106). The authors verified that the crown ether part changed its preferred position on the axle depending on the protonation state of a secondary amine. More specifically, in the protonated form, the crown ether was located around the secondary ammonium ion, the best binding site. On the other hand, NMR analysis showed that upon deprotonation of the ammonium ion, the triazolium ion became the better binding site, which caused the ring to shuttle along the axle toward this position (Scheme 106) [106].
Venkataraman and coworkers reported the two-step synthesis of a cholesterol-functionalized aliphatic N-substituted 8-membered cyclic carbonate monomer 459 (Scheme 107) [107]. Cholesterol-based monomer 459 was employed in organocatalytic ring-opening polymerization to produce PEGylated amphiphilic diblock copolymers, using a commercially available macroinitiator, polyethylene glycol monomethyl ether (mPEG-OH) 460 (Scheme 107). The authors evaluated the behavior of these copolymers in aqueous media, concluding that they self-assembled to form unique nanostructures, including disk-like micelles. The experimental results also suggested that the prepared copolymers can be used as inexpensive steric stabilizers for liposomes, making them suitable for several biomedical applications [107].
Recently, a cholesterol-modified poly(L-cysteine) copolymer, 466, that can undergo an unusual micelle-to-vesicle transformation of polypeptides triggered by oxidation, was synthesized following a three-step protocol starting from cholesteryl 3-bromopropylcarbamate 462 (Scheme 108) [108]. The thioether groups in the side chains of 466 were further oxidized to the corresponding sulfone derivative 467 (Scheme 108). The authors demonstrated that oxidation of the thioether groups in the side chains could change the packing characteristics of the cholesterol groups and the peptide backbone, resulting in the transformation of a β-sheet to an α-helix conformation, combined with an interesting morphological transition from micelle-like structures to vesicles. Moreover, changing the secondary structure as well as the morphology endowed the polymer assemblies with excellent specificity for controlled payload release and improved cell interaction in response to ROS. These interesting formulations had excellent anticancer properties both in vitro and in vivo [108].
Gramine [N-(1H-indol-3-ylmethyl)-N,N-dimethylamine] is a well-known indole derivative and is often used as a synthon for the preparation of a large variety of substituted indoles with important biological activities. In this context, Kozanecka and coworkers reported the use of gramine (470) to synthesize cholesterol (471) and cholestanol (472) dimers consisting of two sterol molecules connected by an N(CH3)2 group (Scheme 109) [109]. These new steroid dimers (471 and 472) were shown to interact in vitro with the human erythrocyte membrane, changing the discoid erythrocyte shape and inducing stomatocytosis or echinocytosis. The authors also demonstrated that these new dimers were capable of interfering with membrane phospholipid asymmetry and loosening the molecular packing of phospholipids in the bilayer at sublytic concentrations. Moreover, dimers 471 and 472 possessed a higher capacity for changing the erythrocyte membrane structure and its permeability than the steroids alone did [109].
A new multifunctional pyridine derivative was synthesized and studied as an efficient initiator for the polymerization of diethylvinylphosphonate (DEVP). The authors used a new pyridine compound (473) in the thiol-ene click reaction (a well-established coupling method) to link together poly-DEVP and thiocholesterol 95 (Scheme 110) [110]. Compound 474 exhibited good thermal response and low cytotoxicity against human embryonic renal cell lines (HEK-293) and immortalized human microvascular endothelial cells (HMEC-1). It was concluded that the introduction of the thiocholesterol anchor unit was advantageous regarding toxicity when compared to polymers without functionalization. The thiocholesterol conjugate 474 is interesting for many applications, since it is water-soluble, thermo-responsive, and biocompatible [110].
To take advantage of the important biological properties of cholesterol and glutathione for cells, a cholesterol-glutathione (Chol-GSH) bioconjugate (478) was designed and used as a model amphiphilic biomolecule to make a co-assembly with lysozyme using a dialysis-assisted approach [111]. The synthetic route toward the Chol-GSH bioconjugate 478 involved a five-step reaction sequence, including esterification, 1,3-dipolar cycloaddition, and thiol-disulfide exchange reactions (Scheme 111). The authors applied the dialysis-assisted method with Chol-GSH and lysozyme to prepare bioactive self-assembled structures, in which the hydrophobic cholesterol was located in the walls and the hydrophilic GSH and lysozyme on the inner and outer surfaces. This result was explained based on the electrostatic interaction between GSH and lysozyme, which provided a driving force for the self-assembly, maintaining the bioactivity of lysozyme in the self-assembly process [111].
Conclusions
In this review, the role of cholesterol-based compounds in different research areas such as drug delivery, biological activities, liquid crystals, gelators, bioimaging, and purely synthetic applications was highlighted. In the drug delivery field, several examples of cholesterol derivatives were highlighted due to their applications in preclinical and clinical liposomal drug formulations, where they decrease membrane fluidity and provide favorable drug retention properties. Furthermore, in the last few years, several series of new cholesterol derivatives have also been developed for pharmacological applications as anticancer, antimicrobial, or antioxidant agents. In the bioimaging field, cholesterol has been used as a lipid anchor attached to fluorophores to study cellular membrane trafficking, imaging of cholesterol density, and liposome tracing, among many other bioimaging applications. This review also demonstrated that cholesterol conjugates are of great scientific interest in materials science due to their liquid crystal phase behavior, as well as their ability to promote self-organization and hydrophobic interactions in aqueous media (gelation properties). Overall, a general perspective was given of the main applications of cholesterol derivatives in several research fields, together with a concise overview of the advances in their synthetic chemistry. We therefore described the synthetic pathways to different cholesterol derivatives alongside the corresponding applications of the new compounds, to furnish a general view of the synthetic and biological aspects of the most recently reported cholesterol-based compounds.
Product authenticity and product attachment in tourism shopping context: Exploring the antecedents of intention to choose silver craft products
Shopping is one of the important activities in the tourism industry. The purpose of this research is to analyze the importance of product authenticity and product attachment for a non-branded local iconic product. The influence of product authenticity and product attachment on the intention to choose silver craft products was further examined. A quantitative approach and a survey method were chosen to achieve the research objective. The proposed conceptual framework was tested on 225 adult respondents who had purchased silver crafts from Kotagede, Yogyakarta. Empirical data were tested using the PLS-SEM technique. By focusing on consumers' intention to choose an iconic local product (silver crafts), this study demonstrates that product authenticity and product attachment have positive impacts on intention to choose. Product attachment also mediates the relationship between product authenticity and intention to choose. The contribution of this study is that product authenticity and attachment are tested on non-branded products, particularly the iconic product of a region, whereas they are commonly measured for branded products. Since an iconic local product is commonly associated with territory image, alternative strategies for local products could be considered by combining product authenticity and product attachment.
Introduction
Tourism is now big business. People travel from place to place for work as well as for leisure. Different places offer different attractions and experiences. For a country to be successful in its tourism industry, it must understand and be specific about what to sell and whom to target. Trends in the tourism market change from time to time, and the market demands uniqueness. If leisure and culture previously dominated the tourism industry, now MICE (meetings, incentives, conferencing and exhibitions) and slow tourism (tourism where tourists stay longer to engage more with local people and local life) are in favor (Dickinson & Lumsdon, 2010; Meng & Choi, 2016). Tourists with different backgrounds have different needs and wants, and they want to experience new things in new places. They can be segmented according to their activities/reasons to travel (enjoying natural beauty, doing adventure, joining events/sports, engaging in culture/history, or shopping). Tourists can also be segmented according to places, seasons, customers' characteristics, and the benefits they seek (Morgan, Pritchard, & Pride, 2007). Among the many reasons why people travel, shopping is always one of the important activities in tourism (Mocanu, 2014; Roostika, Wahyuningsih, & Haryono, 2015).
Shopping can be one of the main reasons to travel, even though it is commonly not the main purpose of travel. Many countries or destinations offer their iconic products. These iconic products are often difficult to find globally or, if available, are very expensive to buy in tourists' home countries. The varieties of local products can be found more easily in the places where the products are made. The concept of country-of-origin (COO) has been well documented in the marketing literature, where the place of origin may affect consumers' choice of products (Basfirinci, 2013; Roth & Diamantopoulos, 2009). For example, people go to South Korea to buy local-brand cosmetics, to Paris for fashion brands, and to Milan for leather shoes and bags. In a more micro context, the term territory-of-origin (TOO) is also of increasing interest, where a territory may gain reputation because of an iconic product, or vice versa. These products can be agricultural products, culinary products, or local crafts (Zhang & Merunka, 2015). Compared to countries, territories seem to represent the local community and local non-branded products with unique features (Iversen & Hem, 2008). TOO can create differentiation for local markets. So far, only a few studies have observed local products that employ their territory image to justify the quality and authenticity of the product (Zhang & Merunka, 2015).
Tourism is regarded as a tertiary need, where the authenticity of the offerings is important. The tourism industry provides a lot of emotional experiences, and tourists would love to pay a premium to get authentic products. Authenticity is often associated with meanings generated by the place of origin; these associations cover history, tradition and culture, originality, sincerity, honesty, and uniqueness (Grayson & Martinec, 2004; Beverland, 2006; Napoli, Dickinson, Beverland, & Farrelly, 2014; Iversen & Hem, 2008). Creating emotional bonds between tourists and destinations is important in order to maintain long-term relationships. Maintaining long-term relationships is a marketing challenge for tourism agencies; however, if well managed, a strong bond will lead to positive customer behaviors (Park, MacInnis, Priester, Eisingerich, & Iacobucci, 2010; Thomson, MacInnis, & Park, 2008). Marketers have acknowledged that the ability to build emotional bonds between a person and a product can be a key challenge for success in long-term relationships with customers (Han & Sung, 2008). Attachment is a construct in marketing that was originally developed to understand the emotional bond between one person and another, or between a person and a product (Ainsworth, 1973). A strong product-customer bond (product attachment) is argued to increase customers' willingness to repurchase (Matzler, Pichler, Fuller, & Mooradian, 2011).
In creating and maintaining a strong emotional attachment between tourists and local products, differentiation through authenticity could be an alternative. This study investigates the perceptions of product authenticity and their influence on product attachment and intention to choose. More specifically, the non-branded silver craft industry of Kotagede, Yogyakarta is chosen as the study object, since authenticity in silver products is of utmost importance in creating differentiation. Other important aspects are the reputation of Kotagede as a center of silver craftsmen and of Yogyakarta as a culture city, which may strengthen the attachment between visitors/buyers and Kotagede silver crafts.
Prior research is limited in analyzing product authenticity and product attachment where the product is a local iconic product. This study contributes to understanding how authenticity and attachment may lead to the intention to choose a local product. Authenticity and attachment are commonly discussed in terms of brands, not products, and usually for well-known branded products. By focusing on a local, non-branded iconic product instead of a branded product, this research analyzes the impact of product authenticity and attachment on intention to choose. This study will be useful for developing alternative strategies for marketing local products.
Kotagede
Kotagede is a regency located 5 km southeast of Yogyakarta city. Kotagede was the center of the Islamic Mataram Kingdom. Among many silver centers (Table 1), Kotagede is one of Indonesia's most reputable silver craft centers, and it has become one of the important tourist destinations when travelling to Yogyakarta. The making of silver jewelry in Kotagede remains traditional, with typical Mataram Kingdom designs. In the past, silver craft was developed to fulfill the King's needs for jewelry and other accessories. In the 16th century, with the increase in demand for silver crafts, the Dutch Government built a special institution to maintain the quality of silver craft. Silversmiths can be found everywhere in Kotagede. Most of Kotagede's silver craft ornaments are influenced by batik cloth motifs. Silver craft prices vary depending not only on size and weight, but also on the artistic carving and the complexity and difficulty of the work. The industry has its ups and downs, since demand is changeable; currently, it is not as profitable as when the silver craft market was at its peak. The silver industry in Kotagede, Yogyakarta is still surviving, even though the number of craftsmen is decreasing. Alternative strategies should be taken to increase the market attractiveness of silver crafts from Kotagede.
Intentions to choose
According to intention-behavior theories, individuals' behaviors are determined by their intentions. The theory of reasoned action (TRA) and the theory of planned behavior (TPB) are among the most famous intention-behavior theories. These theories argue that intention reflects an individual's readiness to purchase a certain product; it can be a situation where people perform certain behaviors. According to Fishbein and Ajzen (1975), intention is said to be the most effective predictor of actual behavior: the stronger the intention, the more likely the actual behavior will be performed. Purchase intention is interpreted as a consumer's willingness to purchase a certain product or service (Shao, Baker, & Wagner, 2004). Past studies have identified that purchase intentions determine actual purchase behavior (Van der Heijden, Verhagen, & Creemers, 2003). In the mobile phone industry, research by Madan and Yadav (2018) found that positive behavioral intentions lead to actual product purchase. In the tourism and hospitality context, intention-behavior theories have been validated across different empirical studies, such as intentions to visit world cultural heritage sites (Shen, Schüttemeyer, & Braun, 2009), intentions to visit museums (Yamada & Fu, 2012), behavioral intentions in medical tourism (Lee, Han, & Lockyer, 2012), and intentions to choose a destination (Lam & Hsu, 2006). Behavioral intentions in tourism studies include the intention to recommend, the intention to visit/revisit, the intention to support, and many others (Aziz, Husin, Hussin, & Afaq, 2019).
Authenticity and intentions to choose
The word authenticity came from the Greek word authentikos (Assiouras, Liapati, Kouletsis, & Koniordos, 2015). In Latin, authenticus is described as trustworthy (Cappannelli & Cappannelli, 2004). Authenticity is also translated as honesty and simplicity (Boyle, 2003), and some authors also refer to genuineness, tradition, originality, and culture (Ballantyne, Warren, & Nobbs, 2006). Different authors have conceptualized authenticity differently (Lu, Gursoy, & Lu, 2015). Authenticity is considered as both an antecedent and a consequence in tourism studies. It is an antecedent since it is able to motivate, create interest, and drive tourist activities (Grayson & Martinec, 2004; Kolar & Zabkar, 2010). Consumers tend to develop their own interpretations of product or service authenticity even when they are not familiar with the product or service. For example, customers with different backgrounds may develop different authenticity perceptions of a particular traditional food: if a customer knows the food well, he or she might develop higher criteria for authenticity. Perceptions of food authenticity are also closely related to cultural background. Authenticity is a critical marketing tool for achieving competitive advantage by designing effective promotion strategies (Lu et al., 2015).
The relationship between the concept of authenticity and behavioral intentions in the tourism sector was initially proposed by MacCannell (1973) in explaining tourists' motivation. He argues that tourists are searching for authentic experiences that they cannot find in modern life or in their daily lives. The marketing literature links the value of authenticity with the strength of the brand: the richer the product authenticity perceived by the customer, the stronger the emotions built by the individual toward the product (Assiouras et al., 2015). In branding studies, customers search for the authenticity of a brand to determine its relevance and value (Beverland, 2006). Gilmore and Pine (2007) acknowledged that authenticity has currently overtaken quality as a purchasing criterion. The importance of authenticity in consumer behavior studies has been acknowledged by both academics and practitioners (Gilmore & Pine, 2007; Newman & Dhar, 2014). Authenticity plays an important role in a wide range of consumption activities, such as the luxury wine industry (Beverland, 2006) and the tourist attraction industry (Grayson & Martinec, 2004). The antecedents and consequences of authenticity can be assigned from TPB intention-behavior theories. Customers respond positively to a brand when they perceive the brand as having authentic content (Rose & Wood, 2005). When consumers are committed to a particular authentic brand, they will voluntarily convey positive word-of-mouth to others. By using brand authenticity as a proxy, we hypothesize that: H1: Product authenticity has a positive impact on intention to choose.
Product attachments and intention to choose
The construct of attachment was originally developed to understand the deep emotional bonds that occur between one person and another, or between a person and an object, across time (Bowlby, 1980; Roostika, Thamrin, Retnaningdiah, & Pratomo, 2018). Theoretically, attachment is defined as "a psychological connectedness between human beings" (Bowlby, 1979: 194) and is studied to explain its effect on individuals' behaviors. In psychology, the degree of emotional attachment can predict whether one's emotional bond to an object can further explain future interaction with that object (Bowlby, 1979). A highly attached person would be willing to stay with, and sacrifice more for, their chosen person (Bowlby, 1980). Similarly, in the marketing context, attachment explains the logic that when consumers are strongly attached to a product/figure/brand, they are willing to make an investment by building higher commitment and a stronger relationship with that product/figure/brand (Thomson et al., 2008). This psychological condition can explain how attached customers become committed and loyal to certain products or services. Attachment to a product or a brand often occurs simultaneously, and attachment to a brand is often emotional or affective (Thomson et al., 2008). Emotional attachments may predict an individual's disposition to interact with a certain product or service (Assiouras et al., 2015). A customer who experiences a strong emotional bond with a product/brand will have a higher tendency to tolerate the brand (Pimentel & Reynolds, 2004). When feeling attached, a customer builds a stronger willingness to promote the product/brand, such as by contributing positive word-of-mouth (Pimentel & Reynolds, 2004). Therefore, we hypothesize: H2: Product attachment has a positive impact on intention to choose.
Product authenticity and product attachment
Product attachment can be predicted by several antecedents (Assiouras et al., 2015). A product or a company might relate to one's past experiences (such as childhood): for example, the place where someone was born or comes from (one's city, state, country, or culture), or products used since childhood or used by family, may cause attachment (Oswald, 1999). Product authenticity provides the necessary connection between someone and their tradition and the history of their country and place, which can potentially lead to a bond/attachment between the consumer and the product. Another factor that may cause strong emotional product attachment is the trust that a consumer develops from experiencing a product. Trust in a marketing context can be defined as the fulfillment of the expectations the customer perceives from the firm/producer. Expectations are believed to offer a sense of confidence, where the consumer believes that the product will consistently act in the consumer's best interests (Rempel et al., 2001). Trust is a variable that evolves over time, and product authenticity may increase consumers' trust as well as strengthen emotional attachment. Therefore, the following hypothesis is proposed: H3: Product authenticity has a positive impact on product attachment. Based on the theories and hypotheses proposed, the research model is as shown in Figure 1.
Research instrument
To achieve the objective of the study, a quantitative research method was adopted. A survey method was chosen, through the distribution of questionnaires. The questionnaires were developed from previous studies, mainly from Assiouras et al. (2015), Napoli et al. (2014), and Park et al. (2010). A Likert scale was adopted, ranging from 1 (strongly disagree) to 5 (strongly agree). A purposive sampling method was applied, and the survey was distributed using a mix of paper-based and Google Form questionnaires. Respondents had to be those who understand the silver crafts of Kotagede, Yogyakarta and have purchased at least one silver product (jewelry or non-jewelry).
Respondents' profile
The data were successfully collected from 225 respondents, with the profile listed in Table 2. A total of 122 male and 103 female respondents submitted the questionnaires. The analysis showed that the majority of silver owners are aged between 26 and 35 years old. Silver jewelry is not a favorite choice for the young generation; rather, silver is more acknowledged and sought after by older generations, such as those aged over 35 years old. Silver products, whether jewelry or non-jewelry, are commonly non-manufactured (handmade) products, so they are relatively expensive. From the demographic data, it can be said that silver is sought by middle-aged persons and those who have earned their own income.
Structural equation modelling
This research applies partial least squares structural equation modelling (PLS-SEM). PLS is by far considered one of the most prominent SEM techniques (Ali, Rasoolimanesh, Sarstedt, Ringle, & Ryu, 2018) and is already used in a variety of fields, including tourism, marketing, strategic management, and consumer behavior research (Hair, Sarstedt, Ringle, & Mena, 2012). PLS-SEM is widely used because it is considered effective in explaining complicated relationships (Ali et al., 2018), enables research with non-normal data distributions, and can analyze complex models with many formative or reflective measures (Hair et al., 2012). Due to the mediation model and the non-normal data distribution, PLS-SEM was run using the SmartPLS software. A two-step approach was taken: first the measurement model, and second the structural model.
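To make the tested model concrete for the reader, the sketch below declares the reflective measurement model and the structural (inner) model as simple mappings, in the form most PLS-SEM tools expect. This is a hypothetical illustration, not the authors' code: the item codes (PA1..., AT1..., IC1...) are placeholders, since the full questionnaire items are not reproduced here.

```python
# Hypothetical PLS-SEM model declaration for this study.
# Item codes are illustrative placeholders, not the study's actual items.

measurement_model = {  # reflective construct -> indicator items
    "product_authenticity": ["PA1", "PA2", "PA3", "PA4", "PA5"],
    "product_attachment": ["AT1", "AT2", "AT3", "AT4"],
    "intention_to_choose": ["IC1", "IC2", "IC3"],
}

structural_model = {  # endogenous construct -> its predictors
    "product_attachment": ["product_authenticity"],                       # H3
    "intention_to_choose": ["product_authenticity", "product_attachment"],  # H1, H2
}
```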
Measurement model -
The assessment of the measurement model is conducted through reliability and validity analyses. To measure construct reliability, composite reliability values were examined; the results should be higher than .7 (Chin, 2010). Item reliability is shown by the outer loadings, where the threshold should be higher than .50 (Hair et al., 2012). Convergent validity is shown by the AVE values, where the minimum value is .5. As seen in Table 3, all the item loadings were above .5 except OP9; item OP9 was therefore deleted and not used for further analysis. The AVE values were all above .5, as seen in Table 3 (product authenticity was lowest with a value of .509; intention to choose was .819 and product attachment was .743). Composite reliability (CR) is achieved when each construct satisfies the minimum threshold of .7; Table 3 reports the composite reliability of all constructs, and the results demonstrate convergent validity for all constructs in the study. Discriminant validity was assessed using the square root of the AVE and the cross loadings, following the criterion of Fornell and Larcker (1981) as guidance for the threshold values. Discriminant validity is shown when the square root of the AVE of each construct is higher than its correlations with the other constructs; Table 5 shows that this holds for all constructs. Similarly, discriminant validity can be tested from the cross loadings, as shown in Table 4: discriminant validity is achieved when each item loads higher on its assigned construct than on the other constructs. Since the reliability and validity results are satisfactory in the measurement model, all remaining items are used in the structural model to test the proposed hypotheses.
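As a rough illustration of the reliability and validity computations reported above, the following sketch computes composite reliability, CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)), and AVE = Σλ² / n from standardized outer loadings, and applies the Fornell-Larcker criterion. This is an assumed, self-contained implementation of the standard formulas, not the authors' code, and the loading values shown are placeholders rather than the study's Table 3.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances)."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

def fornell_larcker_holds(ave_values, construct_corr):
    """sqrt(AVE) of each construct must exceed its correlations with the others."""
    root_ave = np.sqrt(np.asarray(ave_values, dtype=float))
    corr = np.asarray(construct_corr, dtype=float)
    n = len(root_ave)
    return all(root_ave[i] > abs(corr[i, j])
               for i in range(n) for j in range(n) if i != j)

# Placeholder loadings (illustrative only, not the study's reported values):
pa_loadings = [0.72, 0.68, 0.75, 0.70, 0.71]
print(composite_reliability(pa_loadings) > 0.7)        # CR threshold used above
print(average_variance_extracted(pa_loadings) > 0.5)   # AVE threshold used above
```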
Structural model - After the measurement model was assessed, the structural model was evaluated. The structural model in this research explains the relationships among the reflective constructs. As shown in Figure 2, R² for intention to choose was 36.3 per cent and for product attachment was 17.3 per cent, which suggests that the explanatory power of the model is sufficient. The results of the PLS-SEM analysis of the hypothesized relationships among the constructs are summarized in Figure 2 and Table 6. Following the PLS-SEM algorithm, the significance of the relationships among the constructs was assessed by bootstrapping with 2,000 samples. H1 concerns the relationship between product authenticity and intention to choose, and is supported (β = .228, p < .01, t = 2.519). H2 proposes a relationship between product attachment and intention to choose; it is also supported, with a moderate β = .471, p < .01 and t = 4.245. H3 proposes a relationship between product authenticity and product attachment; it was supported with β = .416, p < .01 and t = 5.581. Hence, all the hypotheses of this research are supported. The results have important implications for silver producers, government and tourism managers, which are presented in the following contribution section.
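The bootstrapped t-statistics reported above follow the standard resampling logic: re-estimate each path on resampled data and divide the original estimate by the standard deviation of the bootstrap estimates. A minimal sketch with 2,000 resamples, using a standardized simple-regression slope as a stand-in for the full PLS path estimate and synthetic data in place of the survey responses (the sample size here is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for construct scores (not the survey data).
n = 218
authenticity = rng.normal(size=n)
intention = 0.3 * authenticity + rng.normal(size=n)

def path_coefficient(x, y):
    """Standardized simple-regression slope (stand-in for a PLS path)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return (x * y).mean()

beta_hat = path_coefficient(authenticity, intention)

# Bootstrap with 2,000 resamples, as in the study.
boot = np.empty(2000)
for b in range(2000):
    idx = rng.integers(0, n, size=n)
    boot[b] = path_coefficient(authenticity[idx], intention[idx])

t_stat = beta_hat / boot.std(ddof=1)
print(f"beta = {beta_hat:.3f}, bootstrap t = {t_stat:.2f}")
```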
Discussions
This study investigated consumer behaviour in the silver craft industry in Kotagede, Yogyakarta. Shopping is an important part of the tourism industry. With easier access to destinations worldwide, the tourism industry is developing fast, and the growth of the tourism sector drives growth in many other sectors, such as local crafts, culinary businesses, travel and hotels. As previously explained, country reputation may affect a country's products in the global market; by the same logic, territory reputation may affect local products. Every region must understand its uniqueness as a basis for a differentiation strategy. Yogyakarta, one of the major destinations in Indonesia, is distinctive in its crafts, culture, food, way of life and geographical conditions. This study focuses on the silver industry as one of the iconic products of Yogyakarta. Silver from Kotagede has experienced ups and downs: between 1970 and 2000 the sector enjoyed a market boom that brought significant wealth to silver craftsmen and sellers, and Kotagede silver became famous across the country. Kotagede silver has a unique design and is still mostly made by hand; making silver products is difficult and requires skill and perseverance, and the designs differ from silver products made in other regions. Given the dynamic market changes in the tourism industry, it is important to identify the factors that would allow the silver market to respond positively.
This research analyzes product authenticity and attachment in the silver industry. As a handmade craft product, silver is appreciated by consumers for its authenticity. Product authenticity in this study covers perceptions such as the origin, the handmade character, the craftsmen's skills, the materials, the design and the moral value of silver products from Kotagede, Yogyakarta. This research found that product authenticity has a significant positive effect on intention to choose: the more authentically silver products from Kotagede are perceived, the higher their potential to be chosen. Previous studies have identified relationships between product authenticity and behavioural intentions, e.g. MacCannell (1973) in the tourism sector; Newman and Dhar (2014) also acknowledged the importance of product authenticity for explaining consumer behaviour. This study identified that silver is commonly bought by those who earn their own income. Since the majority of Kotagede silver designs are very classic, old in style and rather expensive, few younger-generation consumers choose silver crafts. Emphasizing the value of authenticity makes the older-generation market respond to and appreciate silver products better. For the younger market, silver producers and craftsmen should introduce new and modern designs that suit younger generations' fashion trends and preferences.
The relationship between attachment and intention to choose in the silver industry is supported; that is, H2 in this research is supported. Attachment in this study covers items such as providing self-identity and helping to communicate, among others. Attachment theorists are principally concerned with psychological connectedness between human beings and, by extension, with products and services. In psychology, it is understood that emotional attachment to an object can predict the nature of the interaction with that object (Bowlby, 1979). Highly attached persons are willing to make sacrifices for their chosen person or product. In the marketing context, consumers who are strongly attached to a product are willing to invest in building a relationship with it, including a willingness to pay a premium (Thomson et al., 2008). This psychological condition is similar to how attached customers can be committed and loyal to certain products or services; when feeling attached, customers are happy to promote the product or brand with positive word-of-mouth (Pimentel & Reynolds, 2004). In the silver industry, attachment is positioned as a mediating variable. People are believed to need experience with products or services before giving their attachment to them: by owning or wearing silver crafts, people may start to value the products' authenticity and, if it suits them, become willing to stay attached to the products. Hypothesis two is thus supported: once someone is attached, they can be expected to consider choosing silver products, and the higher the attachment, the higher the likelihood of choosing silver crafts. This finding is supported by previous research such as Assiouras et al. (2015) and Pimentel and Reynolds (2004).
The SEM-PLS analysis shows that product authenticity positively influences product attachment, supporting H3. Assiouras et al. (2015) argue that several antecedent variables may predict emotional attachment; product attachment can be driven by a sense of product originality, place originality and family influence. This finding is in line with previous research (Morhart, Malar, Guevremont, Girardin, & Grohmann, 2015; Assiouras et al., 2015). As previously explained, product authenticity may create an individual's connection with tradition, history, place, talent, uniqueness, materials and honesty. The Kotagede silver centre has long been reputable for a unique design that can be found nowhere else, and this unique design may explain the perceived emotional attachment to silver crafts from Kotagede. Similarly, the reputation of the territory (Kotagede) as a place of skilful silver craftsmen can also be a reason for attachment to local silver craft products. Not all silver craft centres in Indonesia are as famous as Kotagede; where the place or territory does not have a strong reputation, product attachment is not always easy to build.
Currently, silver craft sales in Kotagede are not as high as in the 1970-2000 era. Given this situation, stakeholders in the silver industry, particularly producers and governments, should be aware of changes in fashion trends and market preferences. The younger generation is less interested in wearing and purchasing silver products; the demographic findings show that the majority of respondents who purchase silver are around 35 years old or older. While maintaining the classical designs that still account for the majority of market share, producers need to innovate in product design, product use, product benefits and promotion approaches, particularly for the young market. In general, the results of the study suggest that both product authenticity and product attachment positively affect intention to choose. This finding should be taken into consideration when designing product, promotion, price and distribution strategies (offline and online).
Conclusion and contribution
With respect to consumers' intention to choose silver crafts in Kotagede, Yogyakarta, this study concludes that product authenticity and product attachment are antecedents of intention to choose: both variables positively influence intention to choose silver crafts. The theoretical contribution of this study is to provide evidence of the importance of product authenticity and product attachment for the decision to choose a local iconic, non-branded product (Kotagede silver). Research on product authenticity and product attachment, as opposed to brand authenticity and brand attachment, is still limited, and local craft products are commonly unbranded or carry only local brands; research on product authenticity and attachment is therefore expected to extend intention-behaviour theories to products with different characteristics. Similarly, use of the TOO approach is still limited, and it can serve as an alternative marketing strategy for local products. For practitioners in local industry, even without owning a strong brand, managers, producers and government should consider what makes silver crafts be perceived as authentic: building the elements that increase the perception of authenticity would increase the probability of being chosen. Product authenticity also influences customer attachment; where finding new customers is expensive, retaining current customers by building emotional attachment is important. The reputation of the territory or place (TOO) can be used to build attachment in a local iconic craft industry such as Kotagede and its silver crafts.
General relativistic study of astrophysical jets with internal shocks
We explore the possibility of formation of steady internal shocks in jets around black holes. We consider a fluid described by a relativistic equation of state, flowing about the axis of symmetry ($\theta=0$) in a Schwarzschild metric. We use two models for the jet geometry: (i) a conical geometry and (ii) a geometry with non-conical cross-section. A jet with conical geometry is a smooth flow, while a jet with non-conical cross-section can undergo multiple sonic points and even a standing shock. The jet shock becomes stronger as the shock location is situated further from the central black hole. Jets with very high energy and very low energy do not harbour shocks, but jets with intermediate energies do. One advantage of these shocks, as opposed to shocks mediated by an external medium, is that they have no effect on the jet terminal speed, but they may act as possible sites for particle acceleration. Typically, a jet with energy $1.8~c^2$ will achieve a terminal speed of $v_\infty=0.813c$ for any jet geometry; but for a jet of non-conical cross-section in which the length scale of the inner torus of the accretion disc is $40\,r_{\rm g}$, a steady shock will in addition form at $r_{\rm sh} \sim 7.5\,r_{\rm g}$, with a compression ratio of $R\sim 2.7$. Moreover, the electron-proton jet seems to harbour the strongest shock. We discuss possible consequences of such a scenario.
INTRODUCTION
The denomination relativistic jets was first used in the extra-galactic context by Baade & Minkowski (1954), while observing knots in the optical waveband in the galaxy M87. However, it is after the advent of radio astronomy that relativistic jets have been recognized as a very common phenomenon in astrophysics, associated with various kinds of astrophysical objects such as active galactic nuclei (AGN, e.g., M87), gamma ray bursts (GRB), young stellar objects (YSO, e.g., HH 30, HH 34), X-ray binaries (e.g., SS433, Cyg X-3, GRS 1915+105, GRO 1655-40), etc. In this paper, we discuss jets associated with X-ray binaries like GRS1915+105 (Mirabel & Rodriguez 1994) and AGNs like 3C273, 3C345 (Zensus et al. 1995) and M87 (Biretta 1993). Since jets are unlikely to be ejected from the surface of compact objects, they have to originate from accreting matter.
High luminosity and broad band spectra from AGNs favour the model of accretion of matter and energy on to a super massive ($10^{6-9} M_\odot$) black hole as the prime mover. The same line of reasoning also led to the conclusion that X-ray binaries harbour neutron stars or stellar mass ($\lesssim 10 M_\odot$) black holes at the centre, while the secondary star feeds the compact object. Many of the black hole (BH) X-ray binaries are observed to go through a stage in which relativistic twin jets are ejected and resemble scaled-down versions of AGN or quasar jets; as a result, these sources have been coined micro-quasars (Mirabel et al. 1992).
Accretion of matter on to a compact object explains the luminosities and spectra of AGNs and micro-quasars, but can accretion be linked with the formation of jets? Interestingly, simultaneous radio and X-ray observations of micro-quasars show a very strong correlation between the spectral states of the accretion disc and the associated jet states (Gallo et. al. 2003; Fender et al. 2010; Rushton et al. 2010), which reaffirms that jets do originate from the accretion disc. Although this type of spectral state change and the related change in jet states have not been observed for AGNs, the very fact that the timescales of AGNs and micro-quasars can be scaled by the central mass (McHardy et. al. 2006) leads us to expect a similar correlation of spectral and jet states in AGNs too. Working with the central idea that jets are launched from accretion discs, a natural question arises: which part of the disc is responsible for jet generation? Recent observations have shown that jets originate from a region less than 100 Schwarzschild radii (r_s) around the unresolved central object (Junor et. al. 1999; Doeleman et. al. 2012), which implies that the entire disc may not participate in the formation of jets; only the central region of the disc is responsible.
Again, by invoking the similarity of AGNs and micro-quasars (McHardy et. al. 2006), one may conclude that the jet originates from a region close to the central compact object for micro-quasars too. This also finds indirect support from observations of various micro-quasars. Observations show that jet activity starts when the object is in the low-hard state (LHS, i.e., when the disc emission maximizes in hard, or high-energy, X-rays but the overall luminosity is low). The jet strength increases as the accretion disc becomes luminous in the intermediate hard states (IHS), and eventually, after a surge in disc luminosity and relativistic ejections, the accretion disc goes into the high soft state (HSS, i.e., disc emission maximizes in soft X-rays and is luminous). No jet activity has been observed in the HSS. This cycle is repeated as a kind of hysteresis and is known as the hardness-intensity diagram or HID (Fender et al. 2004). This indirectly suggests that the component of the disc which emits hard X-rays, or a compact corona, may also be responsible for jet activity. Since hard X-rays are emitted by hot electron clouds closer to the compact object, this indirectly points out that the jet originates from a region close to the BH even for micro-quasars.
The earliest model of the accretion disc was the Keplerian disc or KD (a disc with Keplerian angular momentum distribution, optically thick in the radial direction; see Shakura & Sunyaev 1973; Novikov & Thorne 1973). The KD explained the thermal part of BH accretion disc spectra, but could not explain the non-thermal part. This deficiency of the KD prompted the emergence of many accretion disc models, such as the thick disc (Paczyński & Wiita 1980), advection dominated accretion flow (Narayan et al. 1997) and advective discs (Liang & Thompson 1980; Fukue 1987a; Chakrabarti 1989; Chattopadhyay & Chakrabarti 2011). Whatever the accretion disc model may be, the observed strong correlation between the X-ray data (arising from the convergent, and therefore accreting, flow) and the radio data (originating from the outflowing jets) in micro-quasars, together with the associated timing properties, imposes constraints on the disc-jet system. In micro-quasars it was observed that the hard photons oscillate in a quasi-periodic manner, evolving from low values in the LHS to high values in the intermediate states and disappearing in the HSS. This favours a compact corona over an extended one, since it is easier to make a compact corona oscillate. It has also been suggested that an extended corona cannot explain the spectra of X-ray binaries in the LHS (Dove et. al. 1997; Gierlinski et. al. 1997). So from observations one can summarize three aspects of jets and accretion discs: (i) direct observation of the inner region of the M87 jet shows that the jet base is close to the central object; (ii) the corona emits non-thermal emission, and the HSS shows very weak or no signature of a corona, as well as no jets; and (iii) the corona is compact in size. So it is quite possible that the corona is the base of the jet, or at least a significant part of it.
Unfortunately, the accretion disc around a BH has not been resolved and only the jet has actually been observed; therefore, studying the visible part of the jet is also an integral part of reconstructing the entire picture, and hence jets, especially AGN jets, are intensely investigated. Generally, it is assumed that jets from AGNs are ultra-relativistic, with terminal Lorentz factors γ_∞ ≳ 10. However, with the discovery of more AGNs and increasingly precise observations, these assumptions are now being challenged. On one hand, for the BL Lac PKS 2155-304 the estimated terminal Lorentz factor is truly relativistic, i.e., easily γ_∞ ≳ 10 (Aharonian et al. 2007); on the other hand, spectral fitting of NGC 4051 requires a more moderate range of Lorentz factors (Maitra et. al. 2011). In fact, for the FRI-type jet 3C31, which has well resolved jet and counter-jet observations, the jet terminal speed is v_∞ ∼ 0.8c-0.85c (c being the speed of light), slowing down to ∼ 0.2c at a few kpc due to entrainment with the ambient medium (Laing & Bridle 2002). This implies that AGN jets come in a variety of strengths, lengths and terminal speeds (v_∞), somewhat similar to those around micro-quasars. For example, among micro-quasars, the SS433 jet is a quasi-steady jet with v_∞ ∼ 0.26c (Margon 1984), while the GRS1915+105 jet is truly relativistic (Mirabel & Rodriguez 1994). Moreover, the terminal speeds estimated for a single micro-quasar may vary between outbursts (Miller et. al. 2012). In other words, astrophysical jets around compact objects are relativistic, but the terminal speed may vary from mildly relativistic to ultra-relativistic. Therefore, the acceleration mechanism must be multi-staged and may result from many accelerating processes, such as magnetic fields and radiation driving (Sikora & Wilson 1981; Ferrari et al. 1985; Fukue 1996, 2000; Fukue et al. 2001; Vlahakis & Tsinganos 1999; Chattopadhyay & Chakrabarti 2002a,b; Chattopadhyay et al. 2004; Chattopadhyay 2005; Vyas et al. 2015).
Estimates of bulk jet speeds in AGN jets are mostly inferred from complicated observational data. Often, the presence of a bright jet compared to a dim counter-jet is used to constrain the bulk speed of the jet (Wardle & Aaron 1997). As noted above, the bulk speed required to fit the observed data also gives an estimate of the jet speed (Laing & Bridle 2002). Many of these jets have knots and hot spots, which are regions of enhanced brightness. Some of these knots exhibit superluminal speeds, which give an estimate of the bulk speed of the knots and the underlying jet. These knots and hot spots are thought to arise from shocks in the jet beam due to its interaction with the ambient medium. These shocks then create high energy electrons by shock acceleration and produce non-thermal, high energy photons.
By analyzing multi-wavelength data of the M87 jet, Perlman & Wilson (2005) concluded that external shocks fit this general idea quite well. From the accumulated knowledge of hydrodynamic simulations, we know that these shocks form as a result of interaction with the ambient medium (Marti et. al. 1997), and that they therefore form at large distances from the central object. However, faster jet blobs following slower ones may catch up and produce internal shocks close (∼ 100 r_s) to the central BH (Kataoka et. al. 2001), as seen in some simulations of accretion-ejection systems (Lee et. al. 2016). Even so, it is not plausible to form shocks closer to the jet base by this process, because the leading blob and the one following both possess different but relativistic speeds, so a significant distance has to be traversed before they collide. Is it at all possible to form shocks in the jet much closer to the central object?
As jets are supposed to originate close to the central object and from the accreting matter, the jet base should be subsonic. Since jets are observed far from the central object, travelling at very high speed, one can definitely say that these jets are transonic (they transit from subsonic to supersonic). We know that conical flows are smooth, monotonic functions of distance, or in other words cross the sonic point only once (Michel 1972; Blumenthal & Mathews 1976; Chattopadhyay & Ryu 2009).
Since the base of the jet is very hot, it would be expanding very fast, and there would be very little resistance to the outflowing jet to force it to deviate from its conical trajectory.
However, if the jet is flowing through an intense radiation field, then the jet would see a fraction of the radiation field approaching it and might get slowed down, forming multiple sonic points (Vyas et al. 2015) and even shocks (Ferrari et al. 1985). It is also to be noted that most theoretical investigations of jets were conducted in the special relativistic regime, including Ferrari et al. (1985) and Vyas et al. (2015). The reason is that, at the distances from the central BH at which jets are observed, the effect of gravity is negligible but the bulk speed is relativistic. In order to limit the forward expansion of the jet, a Newtonian form of the gravitational potential was added ad hoc to the special relativistic equations of motion (Ferrari et al. 1985; Fukue 1996; Chattopadhyay 2005; Vyas et al. 2015). A Newtonian (or pseudo-Newtonian) gravitational potential is incompatible with special relativity, as can be argued from the equivalence principle itself. Even setting that aside, gluing special relativity to gravitational potentials destroys the constancy of the Bernoulli parameter in the absence of dissipation; in other words, we compromise one of the constants of motion. Moreover, Ferrari et al. (1985) obtained spiral-type sonic points, and we know that spiral-type sonic points arise in the presence of dissipation. Is this a direct fallout of combining a Newtonian potential with special relativity? We therefore chose the Schwarzschild metric, in order to treat gravity properly. In keeping with most investigations of jets (Fukue 1987b; Memola et. al. 2002; Falcke 1996), no particular accretion disc model is assumed. What would be the possible effect of the composition of the jet? In this paper, we address these issues in detail.
In section 2 we present the simplifying assumptions and governing equations. In section 3, we outline the process of generating solutions, along with a detailed discussion of the nature of sonic points and the shock conditions. We present results in section 4 and conclude in section 5.
ASSUMPTIONS, GOVERNING EQUATIONS AND JET GEOMETRY
Since the present study is aimed at studying jets starting very close to the central object, general relativity is invoked. We choose the simplest metric, i.e., the Schwarzschild metric, which describes the curved space-time around a non-rotating BH and is given by
$$ds^2 = -\left(1 - \frac{2GM_B}{c^2 r}\right)c^2 dt^2 + \left(1 - \frac{2GM_B}{c^2 r}\right)^{-1} dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\, d\phi^2, \quad (1)$$
where r, θ and φ are the usual spherical coordinates, t is time and M_B is the mass of the central black hole. In the rest of the paper, we use geometric units where G = M_B = c = 1, so that the units of length and time are r_g = GM_B/c² and t_g = GM_B/c³, respectively. In this system of units, the Schwarzschild radius, or the radius of the event horizon, is r_s = 2.
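As a quick numerical aid for translating results such as r_sh ∼ 7.5 r_g into physical scales, the sketch below evaluates these units for two illustrative black-hole masses (the masses are examples, not values used in this paper):

```python
# Geometric units: r_g = GM/c^2 (length), t_g = GM/c^3 (time).
G = 6.674e-8        # cm^3 g^-1 s^-2 (CGS)
c = 2.998e10        # cm s^-1
M_sun = 1.989e33    # g

def r_g(mass_in_solar):
    """Unit of length, GM/c^2, in cm."""
    return G * mass_in_solar * M_sun / c**2

def t_g(mass_in_solar):
    """Unit of time, GM/c^3, in s."""
    return G * mass_in_solar * M_sun / c**3

for M in (10.0, 1e8):            # a stellar-mass BH and an AGN-scale BH
    print(f"M = {M:g} M_sun: r_g = {r_g(M):.3e} cm, "
          f"t_g = {t_g(M):.3e} s, r_s = {2*r_g(M):.3e} cm")
```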
Although in the rest of the paper we express the equations in geometric units (unless specified otherwise), we retain the same representation for the coordinates as in equation (1). The fluid jet is considered to be in steady state (i.e., ∂/∂t = 0). Further, as jets are collimated, we consider an on-axis (i.e., u^θ = u^φ = ∂/∂θ = 0) and axisymmetric (∂/∂φ = 0) jet. The effects of radiation and magnetic fields as dynamical components are ignored for simplicity. If the jet is very hot at the base, then radiation driving near the base will be ineffective (Vyas et al. 2015). In powerful jets, magnetic fields are likely to be aligned with the local velocity vector, so the magnetic force term does not arise (similar to coronal holes; see Kopp & Holzer 1976). Therefore, up to a certain level of accuracy, and in order to simplify our treatment, we ignore radiation driving and magnetic fields in the present paper; their effects will be dealt with elsewhere. In the advective disc model, the inner funnel or post-shock disc (PSD) acts as the base of the jet (Chattopadhyay & Das 2007; Das et. al. 2014; Lee et. al. 2016) and also as the Comptonizing corona (Chakrabarti & Titarchuk 1995). The shape of the PSD is a torus (see the simulations of Das et. al. 2014; Lee et. al. 2016) and its dimension is about ≳ few × 10 r_g. Therefore, launching the jet inside the torus-shaped funnel simultaneously satisfies the observational requirement that the corona and the base of the jet be compact. Having said so, we must point out that we do not actually obtain the jet input parameters (E and Ṁ) from advective accretion disc solutions; the input parameters are supplied. This implies that any accretion disc model with a compact, torus-like hot corona will satisfy the underlying disc model.
However, the advective disc model with a PSD deserves special mention because the possibility of a hot electron distribution close to the central object is built into the model. The only role of the disc considered here is to confine the jet flow boundary at the base, for one of the jet models (M2) considered in this paper. Since the exact way in which the jet originates from the disc is not considered in this paper, the jet input parameters are free parameters independent of the disc solutions. This paper is an exploratory study of the role of the jet flow geometry close to the base on jet solutions, and we therefore present all possible jet solutions.
Observations show that the core temperatures of powerful AGN jets are quite high (Moellenbrock et al. 1996), so the jets in this paper are hot to start with as well. The advective disc model, like most disc models, comes with a variety of inner disc temperatures. Simulations of advective discs for high viscosity parameters produced T ≳ 10¹² K in the PSD (Lee et. al. 2016). Moreover, in the presence of viscous dissipation in curved space-time, the Bernoulli parameter (−hu_t) may increase by more than 20% of its value at large distance and produce very high temperatures in the PSD. For rapidly rotating BHs too, the temperature of the inner disc easily approaches 10¹² K. It must also be remembered that the inner regions of the accretion disc, or the base of the jet, can be heated by Ohmic dissipation, reconnection, turbulence heating or MHD wave dissipation (Beskin 2003). High temperatures in the accretion disc can also induce exothermic nucleosynthesis (Chakrabarti et. al. 1987; Hu & Peng 2008). All these processes taken together in an advective disc will produce a very hot jet base. We do not specify the exact processes that produce the very hot jet base, but we emphasize that it is quite possible to achieve one. One may also wonder, if jets are indeed launched from the disc, how justified it is to consider non-rotating jets. Phenomenologically speaking, if a jet had a lot of rotation it would not flow around the axis of symmetry; therefore, either it has to be launched with little angular momentum, or it has to lose most of the angular momentum with which it is launched. It has been shown that viscous transport removes significant angular momentum from the collimated outflow close to the axis (Lee et. al. 2016).
Since the jet is launched with low angular momentum, which is further removed by viscosity or by the presence of magnetic fields, the assumption of a non-rotating hot jet is quite feasible. Incidentally, many theoretical studies of jets have been undertaken under similar assumptions of non-rotating, hot jets at the base (Fukue 1987b; Memola et. al. 2002; Falcke 1996).
Geometry of the jet
Observations of extra-galactic jets show a high degree of collimation, so it is quite common in jet models to consider conical jets with small opening angles. We consider two models of jets, the first being a jet with conical cross-section, which we call model M1.
However, the jet at its base is very hot and subsonic, and since the pressure gradient is isotropic, the jet will expand in all directions. The walls of the inner funnel of the PSD provide natural collimation of the jet flow near its base. If the base of the jet is very energetic, then it is quite likely to become transonic within the funnel of the PSD (see Fig. 1). In magnetohydrodynamic simulations of jets, Kudoh et al. (2002) showed that the jet indeed flows through the open field line region, but that at a certain distance above the torus-shaped disc the jet surface is pushed towards the axis, beyond which the jet expands again, resulting in a converging-diverging cross-section. The cross-section adopted for model M2 in this paper is inspired by these kinds of jet simulations. We ensured that the variation of the geometry of M2 is smooth and slowly varying, such that the solutions do not depend crucially on the particular shape of the flow geometry.
The general form of the jet cross-section A(r) is given by equation (2). For the first model M1, the jet boundary is the straight line of equation (3), where C and m (= tan θ_0) are the constant intercept and slope of the jet boundary with respect to the equatorial plane, θ_0 being the constant opening angle with the axis of the jet. The value of the constant intercept may be C = 0 or C ≠ 0; either way, the solutions are qualitatively the same, and we take C = 0.
The second model M2 mimics a geometry whose outer boundary at the jet base is the funnel-shaped surface of the accretion disc, i.e., dA/dr > 0 (see Fig. B1a). As the jet leaves the funnel-shaped region of the PSD, the rate of increase of the cross-section is reduced, dA/dr ∼ 0. It then expands again and finally becomes conical at large distances, where dA/dr ∝ r. The functional form of r and θ of the jet geometry for model M2 is given by equation (4), where k_1 = 5x_sh/π, k_2 = x_sh/2, d_1 = 0.05, d_2 = 0.0104, m_∞ = 0.2 and n = 5. A schematic diagram of the geometry of the M2 jet model and the disc is shown in Fig. 1, where x_sh is the shock location in the accretion flow, or in other words, the length scale of the inner torus-like region. In Eq. (4), k_1 and k_2 are parameters which influence the shape of the jet geometry at the base, while d_1, d_2, m_∞ and n are constants which, together with k_1 and k_2, shape the jet geometry. We assume that at large distances the jet is conical, and m_∞ (= tan θ_∞) is the gradient corresponding to a terminal opening angle of 11°. The size of the PSD, or x_sh, influences the jet geometry, and the jet geometry at the base is shaped by the shape of the PSD, as shown in appendix B. A typical jet geometry for a given accretion solution is plotted in Fig. B1.
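Since equation (4) itself is not reproduced above, the sketch below constructs a hypothetical smooth jet radius x_b(r) with the same qualitative behaviour, i.e., faster-than-conical expansion near the base, arrested growth around the torus scale and a conical asymptote, and evaluates the fractional area gradient A⁻¹dA/dr that drives the solutions discussed later; the functional form and constants are illustrative, not the paper's.

```python
import numpy as np

def jet_radius(r, X=30.0, k=40.0, m_inf=0.2):
    """Hypothetical jet half-width x_b(r), NOT equation (4) of the paper.

    The tanh term expands faster than a cone near the base, saturates
    (arrested expansion) around r ~ k, and the m_inf*r term gives a
    conical asymptote at large r.
    """
    return m_inf * r + X * np.tanh((r / k)**2)

def frac_area_gradient(r, dr=1e-4):
    """A^{-1} dA/dr with A = pi * x_b^2, i.e. 2 (dx_b/dr) / x_b."""
    slope = (jet_radius(r + dr) - jet_radius(r - dr)) / (2.0 * dr)
    return 2.0 * slope / jet_radius(r)

for r in (3.0, 10.0, 80.0, 3000.0):
    print(f"r = {r:7.1f} r_g : A'/A = {frac_area_gradient(r):.5f} "
          f"(conical value 2/r = {2.0/r:.5f})")
```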
Equations
The fluid is described by the relativistic equation of state of Chattopadhyay & Ryu (2009),
$$e = n_{e^-} m_e c^2 f, \quad (5)$$
where n_{e^-} is the electron number density and f is given by
$$f = (2-\xi)\left[1 + \frac{\Theta(9\Theta+3)}{3\Theta+2}\right] + \xi\left[\frac{1}{\eta} + \frac{\Theta(9\Theta+3/\eta)}{3\Theta+2/\eta}\right]. \quad (6)$$
Here, the non-dimensional temperature is defined as Θ = kT/(m_e c²), k is the Boltzmann constant and ξ = n_{p^+}/n_{e^-} is the relative proportion of protons with respect to the number density of electrons. The mass ratio of electron to proton is η = m_e/m_{p^+}. It is easy to see that putting ξ = 0 generates the EoS for a relativistic e⁻ − e⁺ plasma (Ryu et al. 2006).
The expressions for the polytropic index N, the adiabatic index Γ and the adiabatic sound speed a are given by
$$N = \frac{1}{2}\frac{df}{d\Theta}, \qquad \Gamma = 1 + \frac{1}{N}, \qquad a^2 = \frac{\Gamma p}{e+p} = \frac{2\Gamma\Theta}{f+2\Theta}. \quad (7)$$
This EoS is an approximate one, but comparison with the exact EoS shows that it is very accurate (Appendix C of Vyas et al. 2015).
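A minimal numerical sketch of these thermodynamic relations, assuming the reconstructed forms above and evaluating df/dΘ by finite differences:

```python
import numpy as np

ETA = 1.0 / 1836.15            # electron-to-proton mass ratio, m_e/m_p

def f_eos(theta, xi):
    """Approximate EoS function f(Theta, xi) for an e-p-e+ mixture."""
    lepton = (2.0 - xi) * (1.0 + theta * (9.0*theta + 3.0) / (3.0*theta + 2.0))
    proton = xi * (1.0/ETA + theta * (9.0*theta + 3.0/ETA) / (3.0*theta + 2.0/ETA))
    return lepton + proton

def indices(theta, xi):
    """Polytropic index N, adiabatic index Gamma, sound speed a (in c)."""
    d = 1e-6 * (1.0 + theta)                       # scaled step for df/dTheta
    N = 0.5 * (f_eos(theta + d, xi) - f_eos(theta - d, xi)) / (2.0 * d)
    gamma = 1.0 + 1.0 / N
    a = np.sqrt(2.0 * gamma * theta / (f_eos(theta, xi) + 2.0 * theta))
    return N, gamma, a

for theta in (1e-3, 1.0, 1e3):                     # cold to ultra-relativistic
    N, gamma, a = indices(theta, xi=1.0)           # electron-proton jet
    print(f"Theta = {theta:8.3f}: N = {N:5.3f}, Gamma = {gamma:5.3f}, a = {a:5.3f} c")
```

The limits behave as expected: Γ → 5/3 for a cold flow and Γ → 4/3 with a → c/√3 in the ultra-relativistic limit.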
Equations of motion
The energy-momentum tensor of the jet matter is given by
$$T^{\alpha\beta} = (e+p)u^{\alpha}u^{\beta} + p g^{\alpha\beta}, \quad (8)$$
where the metric tensor components are given by g^{αβ} and u^α represents the four-velocity.
The equations of motion are given by
$$T^{\alpha\beta}_{\ \ ;\beta} = 0, \qquad (\rho u^{\beta})_{;\beta} = 0, \quad (9)$$
the first of which is the energy-momentum conservation equation and the second the continuity equation. From the above, the i-th component of the momentum conservation equation is obtained by operating the projection tensor on the first of equations (9), i.e.,
$$(g^{i}_{\ \alpha} + u^{i}u_{\alpha})\,T^{\alpha\beta}_{\ \ ;\beta} = 0. \quad (10)$$
Similarly, the energy conservation equation is obtained by taking
$$u_{\alpha}T^{\alpha\beta}_{\ \ ;\beta} = 0. \quad (11)$$
For an on-axis jet, equations (10, 11) reduce to the radial momentum equation (12) and the entropy equation (13). The second of equations (9), when integrated, becomes the mass outflow rate equation,
$$\dot{M} = \rho u^{r} A, \quad (14)$$
where A is the cross-sectional area of the jet. The differential form of the outflow rate equation is
$$\frac{1}{\rho}\frac{d\rho}{dr} + \frac{1}{u^{r}}\frac{du^{r}}{dr} + \frac{1}{A}\frac{dA}{dr} = 0. \quad (15)$$
Using equation (14), the pressure p can be written as
$$p = \frac{2\rho\Theta}{\tau} = \frac{2\Theta\dot{M}}{\tau u^{r} A}, \quad (16)$$
where A is the cross-section of the jet (section 2.1) and τ = (2 − ξ + ξ/η). Equations (12-13), with the help of equation (15), are simplified to the gradients of the flow velocity and the temperature, equations (17) and (18). Here the three-velocity v is defined through u^r = √(g^{rr}) γv, and γ² = −u^t u_t is the Lorentz factor. All the information, such as jet speed, temperature, sound speed, adiabatic index and polytropic index as functions of spatial distance, can be obtained by integrating equations (17-18).
A comparison with the de Laval nozzle helps us understand the nature of the equations and the expected nature of the solutions. The non-relativistic version of the de Laval nozzle (DLN) is obtained by considering v ≪ 1 and r → large in equation (17), which means that the first and third terms on the r.h.s drop off and γ → 1, while for the special relativistic (SR) version there is no constraint on γ. In the subsonic regime (v < a), the jet will accelerate if dA/dr < 0 (converging cross-section), both in the non-relativistic and in the SR version of the DLN problem; in the supersonic regime (v > a) the jet accelerates if dA/dr > 0 (expanding cross-section), with the sonic point forming where dA/dr = 0. Therefore, a pinch in the flow geometry (i.e., a convergent-divergent cross-section) makes a subsonic flow transonic. In the presence of gravity, however, the first and third terms cannot be ignored. The third term is the purely gravitational term, while the first term is the coupling between gravity and the thermal term. Therefore, gravity and the flow's thermal energy now compete with the cross-section term to influence the jet solution. The third term is negative, the first term is positive definite, and the middle term may be either positive or negative.
However, near the horizon the third term dominates the other two, and even for a conical flow (dA/dr > 0) the r.h.s of equation (17) is negative. Hence a subsonic jet accelerates up to the sonic point r_c, where the r.h.s becomes zero. For r > r_c the flow is supersonic and the jet still accelerates, because at those distances the r.h.s of equation (17) is positive. Therefore, unlike the typical DLN problem, in the presence of gravity it is not mandatory that dA/dr = 0 for an r_c to form; i.e., gravity ensures the formation of the sonic point. But if the magnitude of dA/dr changes drastically, even without changing sign, the interplay with the gravity term may lead to the formation of multiple sonic points.
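To make the first point concrete, the toy model below, which is not equation (17) itself, keeps only a conical cross-section term with a constant sound speed and a Schwarzschild gravity term; even though dA/dr > 0 everywhere, the right-hand side changes sign, so a sonic point must exist:

```python
import numpy as np
from scipy.optimize import brentq

def rhs(r, a=0.3):
    """Toy right-hand side: conical term a^2 (2/r) minus a gravity term.

    An illustrative stand-in for the r.h.s of equation (17), with a
    constant sound speed a (in units of c) and r in units of r_g.
    """
    return a**2 * (2.0 / r) - 1.0 / (r**2 * (1.0 - 2.0 / r))

# Negative near the horizon, positive far away -> a zero crossing (r_c).
r_c = brentq(rhs, 2.2, 1.0e4)
print(f"toy sonic point at r_c = {r_c:.2f} r_g")   # ~7.6 r_g for a = 0.3
```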
A physical system becomes tractable when solutions are described in terms of their constants of motion. Using the time-like Killing vector ζ^ν = (1, 0, 0, 0), the equation of motion (equation 9) gives a conserved flux, and integrating it we obtain the negative of the energy flux as a constant of motion,
$$\dot{E} = -(e+p)u_{t}u^{r}A. \quad (19)$$
The relativistic Bernoulli equation, or the specific energy of the jet, is obtained by dividing equation (19) by equation (14):
$$E = -hu_{t} = h\gamma\sqrt{1 - \frac{2}{r}}. \quad (20)$$
Here, h = (e + p)/ρ = (f + 2Θ)/τ is the specific enthalpy of the fluid and u_t = −γ√(1 − 2/r).
The kinetic power of the jet is the energy flux through the cross-section,
$$L_{j} = \dot{M}E. \quad (21)$$
If only the entropy equation (13) is integrated, then we obtain the adiabatic relation
$$\rho = \mathcal{C}\exp(k_{3})\,\Theta^{3/2}\,(3\Theta + 2)^{k_{1}}(3\Theta + 2/\eta)^{k_{2}}, \quad (22)$$
which is equivalent to p ∝ ρ^Γ for a constant-Γ flow, where k_1 = 3(2 − ξ)/4, k_2 = 3ξ/4, k_3 = (f − τ)/(2Θ) and $\mathcal{C}$ is the constant of entropy.
If we substitute ρ from the above equation into equation (14), we get the expression for the entropy-outflow rate,
$$\dot{\mathcal{M}} = \frac{\dot{M}}{\mathcal{C}} = \exp(k_{3})\,\Theta^{3/2}\,(3\Theta + 2)^{k_{1}}(3\Theta + 2/\eta)^{k_{2}}\,u^{r}A. \quad (23)$$
Equations (23) and (20) are measures of the entropy and energy of the flow that remain constant along a streamline. At the shock, however, there is a discontinuous jump in $\dot{\mathcal{M}}$.
Sonic point conditions
Jets originate from a region of the accretion disc close to the central object, where the jet is subsonic and very hot. The strong thermal gradient works against gravity and powers the jet to higher velocities; in the process, v crosses the local sound speed a at the sonic point r_c, which makes the jet supersonic. The sonic point is also a critical point because at r_c both the numerator and the denominator of equation (17) vanish, so that dv/dr → 0/0. The value of dv/dr|_c is calculated by employing L'Hospital's rule at r_c and solving the resulting quadratic equation for dv/dr|_c. The quadratic equation can admit two complex roots, leading to either O-type ('centre'-type) or spiral-type sonic points; two real roots with opposite signs (X-type or 'saddle'-type sonic points); or two real roots with the same sign (nodal-type sonic points).
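The classification step can be automated once the coefficients of the quadratic are known; the sketch below takes generic coefficients (the actual expressions for this model are not reproduced here) and reads off the sonic-point type from the roots:

```python
import numpy as np

def classify_sonic_point(a, b, c):
    """Classify a critical point from a*(dv/dr)^2 + b*(dv/dr) + c = 0."""
    roots = np.roots([a, b, c])
    if np.iscomplexobj(roots) and np.any(np.abs(roots.imag) > 1e-12):
        return "centre (O-type) or spiral"
    r1, r2 = roots.real
    if r1 * r2 < 0:
        return "saddle (X-type)"
    return "nodal"

# Hypothetical coefficient sets, for illustration only.
print(classify_sonic_point(1.0, 0.5, -2.0))  # opposite-sign roots -> X-type
print(classify_sonic_point(1.0, 0.2, 2.0))   # complex roots -> O/spiral
print(classify_sonic_point(1.0, -3.0, 2.0))  # same-sign roots -> nodal
```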
So, for a given set of flow variables at the jet base, a unique solution passes through the sonic point(s) determined by the entropy $\dot{\mathcal{M}}$ and the energy E of the flow. Model M1 is independent of the shock location x_sh in the accretion disc, but model M2 depends on x_sh.
In Figs. (2a, b) we plot the sonic point properties of the jet for model M1, and in Figs. (2c, d) we plot the sonic point properties of M2. Each curve is plotted for x_sh = 12 (long-short dash, magenta), 10 (dash, red), 7 (dash-dot, blue) and 3 (solid, black). The Bernoulli parameter E of the jet is plotted as a function of r_c in Figs. (2a, c). In Figs. (2b, d), $\dot{\mathcal{M}}$ is plotted as a function of E at the sonic points. The jet model M1 is a conical flow and therefore independent of the accretion disc geometry, so curves corresponding to the various values of x_sh coincide with each other in Figs. 2a and b. Moreover, both E and $\dot{\mathcal{M}}$ are monotonic functions of r_c; in other words, a flow with a given E will have one sonic point, and the transonic solution corresponds to one value of the entropy, or $\dot{\mathcal{M}}$. The situation is different for model M2. As x_sh is increased through 3, 7, 10 and 12, the E versus r_c plot deviates increasingly from monotonicity and produces multiple sonic points over a larger range of E (Fig. 2c). For small values of x_sh the jet cross-section is very close to the conical geometry and therefore multiple sonic points are obtained only in a limited range of E. It must be noted that wherever the r.h.s of equation (17) is zero, a sonic point is formed. Since A⁻¹dA/dr = 2/r is always positive for M1, the r.h.s becomes zero only because of gravity, and therefore there is only one r_c for M1.
For M2, the cross-section near the base expands faster than a conical cross-section; therefore, the first two terms on the r.h.s of equation (17) compete with gravity. As a result, the jet rapidly accelerates and crosses the sonic point within the funnel-like region of the PSD.
But as the jet crosses the height of the PSD, the expansion is arrested and at some height A⁻¹dA/dr ∼ 0. If this happens closer to the jet base, then at those distances gravity again becomes dominant over the other two terms, which reduces the r.h.s of equation (17) and makes it zero, producing multiple sonic points. In Fig. (2c), the region between the maxima and minima of E is the range which admits multiple sonic points. We plot the locus of the maxima and minima with a dotted line and divide the region into 'p', 'q' and 'r'. Region 'p' harbours the inner X-type sonic point, region 'q' harbours the O-type sonic point and region 'r' harbours the outer X-type sonic points. Figure (2d) is the knot diagram (similar to the 'kite-tail' diagram for accretion flows).
Shock conditions
One of the major outcomes of the existence of multiple sonic points in the jet is the possibility of shock formation in the flow. At the shock, the flow makes a discontinuous jump in density, pressure and velocity. The relativistic Rankine-Hugoniot conditions relate the flow quantities across the shock jump and are (Taub 1948; Chattopadhyay & Chakrabarti 2011)
$$[\rho u^{r}] = 0, \quad (26)$$
$$[(e+p)u_{t}u^{r}] = 0, \quad (27)$$
and
$$[(e+p)u^{r}u^{r} + p\,g^{rr}] = 0. \quad (28)$$
The square brackets denote the difference of quantities across the shock, i.e. [Q] = Q_2 − Q_1, with Q_2 and Q_1 being the quantities after and before the shock, respectively.
Dividing equation (27) by equation (26) and then simplifying, we obtain
$$[E] = [-hu_{t}] = 0, \quad (29)$$
which merely states that the energy remains conserved across the shock. Further, dividing (28) by (26) gives
$$\left[hu^{r} + \frac{p\,g^{rr}}{\rho u^{r}}\right] = 0. \quad (30)$$
We check for the shock conditions (equations 29, 30) as we solve the equations of motion of the jet. The strength of the shock is measured by two parameters, the compression ratio (R) and the shock strength (S), which are the ratios of the densities and of the Mach numbers (M) across the shock (at r = r_sh). In the relativistic case, following equation (26), R is obtained as
$$R = \frac{\rho_{+}}{\rho_{-}} = \frac{u^{r}_{-}}{u^{r}_{+}}, \quad (31)$$
where + and − stand for quantities in the post-shock and pre-shock flows, respectively. Similarly, S is defined as
$$S = \frac{M_{-}}{M_{+}}. \quad (32)$$
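Once the pre-shock (−) and post-shock (+) states are known from the flow integration, the two diagnostics are one-liners; the states below are made up for illustration:

```python
# Shock diagnostics from pre-shock (-) and post-shock (+) states.
def compression_ratio(ur_pre, ur_post):
    """R = rho_+/rho_- = u^r_- / u^r_+ (mass flux conservation, eq. 26)."""
    return ur_pre / ur_post

def shock_strength(v_pre, a_pre, v_post, a_post):
    """S = M_-/M_+ with Mach number M = v/a."""
    return (v_pre / a_pre) / (v_post / a_post)

# Hypothetical states on either side of a jet shock (not paper values).
ur_pre, ur_post = 0.92, 0.34
v_pre, a_pre = 0.80, 0.40
v_post, a_post = 0.35, 0.52

print(f"R = {compression_ratio(ur_pre, ur_post):.2f}")
print(f"S = {shock_strength(v_pre, a_pre, v_post, a_post):.2f}")
```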
RESULTS
In this paper, we study relativistic jets flowing through two types of geometries: (i) model M1: conical jets and (ii) model M2: jets through a variable and non-conical cross-section, as described in section 2.1.
Model M1 : Conical jets
Substituting equation (3) into (2), we get a spherically outflowing jet which, for C = 0, has constant θ (= θ_0 = 11°). The jet geometry is such that A ∝ r², and such jets possess only single sonic points. For higher values of E, the jet terminal speed is also higher (Fig. 3a). Higher E also produces a hotter flow, so at any given r, Γ is smaller for higher E (Fig. 3b). Since the jet is smooth and adiabatic, E (Fig. 3c) and $\dot{\mathcal{M}}$ (Fig. 3d) remain constant. The variable nature of Γ is clearly shown in Fig. (3b): it starts from a value slightly above 4/3 (hot base) and approaches 5/3 at large distance, as the jet becomes colder. As discussed in sections 2.2.2 and 3.1, for all possible parameters this geometry gives smooth solutions with only a single sonic point, unless the M1 jet interacts with the ambient medium.
Model M2
In this section, we discuss all possible solutions associated with jet model M2. The sonic point analysis of model M2 showed that for larger values of x_sh, multiple sonic points may form in jets over a larger range of E (Fig. 2c). In Fig. (4a), we plot the multiple sonic point region (PQRSTP), bounded by the dotted line; the dotted lines are the same as those in Fig. (2c). An α-type solution is one which makes a closed loop at r < r_c and has one subsonic and one supersonic branch starting from r_c outwards and extending up to infinity (Fig. 4d), and it has the higher entropy. The jet matter starting from the base can only flow out through the inner sonic point (solid), but cannot jump onto the higher-entropy solution because the shock conditions are not satisfied (Fig. 4e). However, for E = 1.28, the entropy difference between the inner and outer sonic points is exactly such that matter flowing through the inner sonic point jumps to the solution through the outer sonic point at the jet shock, r_sh (Fig. 4f). The solution for E = 1.265 lies on PR and produces inner and outer sonic points with the same entropy (Fig. 4g). Figures (4c-g) correspond to parameters lying within TPRST. For flows with even lower energy, E = 1.256, the entropy condition of the two physical sonic points is reversed: the entropy of the inner sonic point is higher than that of the outer one, so although multiple sonic points exist, no shock in the jet is possible (Fig. 4h) and the jet flows out through the outer sonic point, where the r.h.s of equation (17) equals zero. It is to be remembered that jets with higher values of E have hotter bases, ensuring greater thermal driving, which makes the jet supersonic within a few r_g of the base. However, after the jet becomes supersonic (v > a), the jet accelerates but, within a short distance beyond the sonic point, decelerates (thin, solid line). This reduction in jet speed occurs because of the geometry of the flow. In Fig. (6g), we plot the corresponding cross-section of the jet. The jet expands rapidly in the subsonic regime, but the expansion is then arrested and the growth of the jet cross-section becomes very small, A⁻¹dA/dr ∼ 0. The positive contribution to the r.h.s of equation (17) therefore drops significantly, which makes dv/dr < 0. Thus the flow decelerates, resulting in higher pressure downstream (thin solid curve of Fig. 6d). This resistance causes the jet to undergo a shock transition at r_sh = 8.21. The shock condition is also satisfied at r_sh = 17.4, but this outer shock can be shown to be unstable (see Appendix A and also Nakayama 1996; Yang & Kafatos 1995; Yuan et. al. 1996). We now compare the shocked M2 jet of Figs. (6a, d, g) with two other jet flows: (i) a jet of model M2 but with low energy, E = 1.02 (Figs. 6b, e, h); and (ii) a jet of model M1 with the same energy, E = 1.28 (Figs. 6c, f, i). In the middle panels, E = 1.02 and the jet is therefore much colder.
Reduced thermal driving causes the sonic point to form at a large distance (open circle in Fig. 6b). The large variations in the fractional gradient of A occur well within r_c. At r > r_c, A⁻¹dA/dr → 2/r, which is similar to a conical flow; therefore, the r.h.s of equation (17) does not become negative at r > r_c. In other words, the flow remains monotonic. The pressure is also a monotonic function (Fig. 6e), and therefore no shock transition occurs. To complete the comparison, in the panels on the right (Figs. 6c, f, i) we plot a jet of model M1 with the same energy as the shocked one (E = 1.28). Since the fractional variation of the cross-section is monotonic, i.e., at r > r_c, A⁻¹dA/dr = 2/r (Fig. 6i), all the jet variables, such as v (Fig. 6c) and pressure (Fig. 6f), remain monotonic. No internal shock develops.
Therefore, to form such internal shocks in jets, the jet base has to be hot in order to become supersonic very close to the base, and the fractional gradient of the jet cross-section then needs to change rapidly, in order to alter the balance with gravity, so that the jet beam starts resisting the matter following it and forms a shock. Figures (6a-i) showed that departure of the jet cross-section from conical geometry is not by itself enough to drive a shock in the jet: it is necessary that the jet becomes transonic a short distance from the base and that a significant fractional change in the jet cross-section occurs in the supersonic regime. Since the departure of the jet cross-section from the conical one depends on the shape of the inner disc, or in other words on the location of the shock in the accretion flow, we study the effect of the accretion disc shock on the jet solution. We compare jet solutions (i.e., v versus r) for various accretion shock locations, e.g., x_sh = 90 (Fig. 7a), x_sh = 40 (Fig. 7b), x_sh = 15 (Fig. 7c) and x_sh = 9 (Fig. 7d). In Figs. (7a-d) we show how the jet solution changes for different values of x_sh while keeping the jet's E = 1.8 fixed. For a large value of x_sh = 90 (Fig. 7a), the jet cross-section near the base diverges so much that the jet loses its forward thrust in the subsonic regime and the sonic point forms at a large distance. The geometry does decelerate the flow, but in the subsonic regime such deceleration does not accumulate enough pressure to break the kinetic energy, and therefore no shock forms. As the expansion of the cross-section is arrested, the jet starts to accelerate and eventually becomes transonic at a large distance from the BH.
At a relatively smaller value of x_sh (= 40), the thermal term remains strong enough to negate gravity and form the sonic point within a few r_g. For such values of x_sh, the fractional expansion of the jet cross-section drops drastically, A⁻¹dA/dr ∼ 0, while the jet is supersonic.
Therefore, in this case the jet suffers a shock (Fig. 7b). In fact, for E = 1.8 the jet undergoes a shock transition if the accretion disc shock location lies in the range x_sh = 30-60. For an even smaller accretion shock location, x_sh = 15, the opening angle of the jet is smaller, so the thermal driving is comparatively stronger than in the previous case and the jet becomes supersonic at an even shorter distance. The outer sonic point is available, but because the shock condition is not satisfied, no shock forms in the jet (Fig. 7c). As the shock in accretion is decreased to x_sh = 9, the thermal driving is so strong that only one sonic point forms, overcoming the influence of the geometry (Fig. 7d). Although the fractional change in jet geometry changes the nature of the jet solutions, jets launched with the same Bernoulli parameter achieve the same terminal speed, independent of the jet geometry. In Figs. (7e, f), the jet shock strength (equation 32) and the jet shock location r_sh are plotted as functions of the accretion shock location x_sh.
In Figs. (8a, b), the shock compression ratio R (equation 31) of the jet and the jet shock location r_sh are plotted as functions of E. Each curve represents an accretion shock location of x_sh = 40 (solid) or x_sh = 60 (dashed). For a given E, the jet shock location r_sh and the strength S decrease with increasing x_sh. From Fig. (4a) it is also clear that for larger values of x_sh, a jet shock may form in a larger region of the parameter space. The compression ratio of the jet is above 3 in a large part of the parameter space, so shock acceleration would be efficient at these shocks. It is interesting to note the contrast in the behaviour of the jet shock r_sh with respect to the accretion disc shock: in accretion discs, the shock strength and the compression ratio increase with decreasing shock location x_sh, but for the shock in the jet, the dependence of R and S on r_sh is just the opposite, i.e., R and S decrease with decreasing r_sh, and depend on the mass of the compact object as well. In Fig. (10b), v_∞ is plotted as a function of E.
From equation (33) it is clear that v_∞ is a function of E only, except when other accelerating mechanisms act on the jet; so the v_∞ in this figure holds for both M1 and M2 jets. However, if we assume x_sh = 40 for the M2 jet and ξ = 1.0, then the shaded region LMNOL corresponds to jets which undergo steady shock transitions.
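Although equation (33) is not reproduced above, E = −hu_t is conserved and the specific enthalpy h → 1 for a cold jet at infinity, so E reduces to the terminal Lorentz factor; the sketch below uses this assumption, v_∞ = c√(1 − 1/E²), which reproduces the v_∞ = 0.745c-0.916c quoted for E = 1.5-2.5 c² in the conclusions:

```python
import numpy as np

def v_terminal(E):
    """Terminal speed in units of c, assuming gamma_inf = E (h -> 1)."""
    return np.sqrt(1.0 - 1.0 / E**2)

for E in (1.5, 1.8, 2.5):
    print(f"E = {E} c^2  ->  v_inf ~ {v_terminal(E):.3f} c")
```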
It might be fruitful to explore whether these internal shocks satisfy some observational features. One may recall that charged particles oscillate back and forth across a shock of horizontal width L, and in each cycle the particle energy increases. After successive oscillations, the particle escapes the shock region with an enhanced energy, known as the e-folding energy, given by equation (34) (Blandford & Eichler 1987), where B_sh is the magnetic field at the shock and L = 2GM_B r_sh tan θ/c². Typical magnetic field estimates near the horizon vary from 10 mG (Laurent et al. 2011) to 10⁴ G (Kogan & Lovelace 1997). For M_B ∼ 14 M_⊙ (Cygnus X-1), the e-folding energy for the shock obtained in Fig. 7b comes out to be 16 MeV-1.6 TeV (per electron charge in statC), depending on which of the magnetic field estimates above is used. The energy associated with the high energy tail of 400 keV-2 MeV in Cygnus X-1 (Laurent et al. 2011) can easily be explained by the internal jet shocks discussed in this paper. The spectral index q of the shock-accelerated particles is a function of the compression ratio R (equation 35), and for the same set of jet parameters we obtained R = 1.87, or q = 2.15; therefore, even the estimated spectral index of 2.2 ± 0.4 from such observational estimates (Laurent et al. 2011) can be given a theoretical basis.
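Equation (34) is likewise not reproduced above; the sketch below uses the order-of-magnitude form ε ∼ eB_sh L in Gaussian units, dropping prefactors of order unity involving the shock speed, which recovers the quoted MeV-TeV range to within roughly an order of magnitude. The shock radius and opening-angle slope are taken from the Fig. 7b regime described in the text.

```python
# Order-of-magnitude e-folding energy, eps ~ e * B_sh * L (Gaussian units),
# an illustrative stand-in for equation (34); order-unity prefactors dropped.
e_charge = 4.803e-10        # statC
G = 6.674e-8                # CGS
c = 2.998e10                # cm/s
M_sun = 1.989e33            # g
erg_to_eV = 1.0 / 1.602e-12

M_B = 14.0 * M_sun          # Cygnus X-1-like mass
r_sh = 7.6                  # shock location in units of r_g (Fig. 7b regime)
tan_theta = 0.2             # terminal opening-angle slope m_inf

L = 2.0 * (G * M_B / c**2) * r_sh * tan_theta   # shock width in cm
for B_sh in (1e-2, 1e4):    # 10 mG and 10^4 G field estimates
    eps_eV = e_charge * B_sh * L * erg_to_eV
    print(f"B = {B_sh:g} G  ->  e-folding energy ~ {eps_eV:.2e} eV")
```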
DISCUSSION AND CONCLUDING REMARKS
In this paper, we investigated the possibility of finding steady shocks in jets very close to the compact object. Since jets exhibit mildly relativistic to ultra-relativistic terminal speeds, one has to describe the flow in the relativistic regime. The jets traverse massive length scales, originating very close to the central BH and extending to distances of more than a hundred thousand Schwarzschild radii, so gravitational effects need to be considered. And since relativity and any form of Newtonian gravity are incompatible, the jet has to be described as a fluid flow in the general relativistic limit. In the present paper, we investigated jets in the Schwarzschild metric and used a relativistic equation of state (as opposed to the Newtonian polytropic one) to describe the thermodynamics of the jet. Since jets are collimated and flow about the axis of symmetry, and, as pointed out in section 2, the jet is generated with low angular momentum which is likely to be further reduced by other physical processes like viscosity, we consider non-rotating jets as an approximation.
In this paper we have studied two jet models. The first model, M1, was the conical jet or radial outflow, whose flow geometry is that of a cone. The second model, M2, was assumed from physical arguments and from evidence found in previous jet simulations. Since the jet base is supposed to be very hot, the jet is expected to expand in all directions, only to be mechanically held by the funnel-shaped inner surface of the torus-like inner disc. The expansion of the jet cross-section is arrested above the inner torus-like corona and then expands again at larger distances from the BH. We assumed a jet flow geometry which follows this pattern. It should also be remembered that in this paper the accretion disc plays an auxiliary role: we did not compute any jet parameters from the accretion disc. We only used an approximate accretion disc solution to define the flow geometry at the base of the jet, and then used the outer boundary of the inner disc, x_sh, as a parameter which defines the departure of the jet geometry from the conical cross-section.
To do that, we fitted the temperature distribution of the PSD with an approximate function, which determines the inner surface of the torus part of the inner disc or PSD. That inner surface was taken as the outer boundary of the jet geometry close to the BH. The accretion disc solution used is exactly the same as in our previous paper, Vyas et al. (2015), although, unlike in that paper, no other input from the accretion disc is used in the jet solution. One must remember that accretion solutions change for different values of the disc parameters, although in this paper the disc solutions do not influence the jet solutions.
Since we have ignored other external accelerating mechanisms and any dissipation, E is a constant of motion. An adiabatic jet is a fair assumption unless the jet interacts with the ambient medium. Given these assumptions, the terminal speed (v_∞) of the jet is solely governed by E and does not depend on either the jet geometry or the composition of the flow. However, the jet geometry influences the solutions at finite r from the BH. The M1 model shows monotonic, smooth solutions whose terminal speed increases with increasing E. For the M2 model, depending on the jet energy E and the accretion disc parameter x_sh, we obtained very fast, smooth jets flowing out through one inner X-type sonic point; for other combinations of E and x_sh, we obtained jets with multiple sonic points and even shocks. For very low E, of course, weak jets flow out through the outer X-type sonic point.
In connection with Fig. 6 in section 4.2, it was explained that, in order to obtain steady internal shocks, the jet material in the supersonic regime has to oppose the flow following it. That can happen if the expansion of the flow geometry decreases drastically while the jet is in the supersonic regime, making gravity relatively stronger. The effect would not be efficient at distances where gravity itself is weak; therefore, the necessary condition is that the expansion of the jet geometry decreases drastically within a few to a few × 10 r_g. Along with this, E has to be high enough that the enhanced thermal driving makes the flow supersonic before the region where A⁻¹dA/dr ∼ 0. One advantage of a shock driven by the jet geometry is that it has no impact on either the kinetic power of the jet or its terminal speed. So if E is high enough to produce a very strong jet with relativistic terminal speed, one may in addition have a shock jump in the jet without compromising v_∞.
It should be understood that, in the presence of other accelerating mechanisms, E will not remain constant but will increase outwards. Therefore, the terminal speeds obtained here may be considered the minimum achievable for the given input parameters. The jet kinetic power (equation 21) also depends on Ṁ and E, so the output depends on the central mass and the mass supply. In our estimation of L_j we assumed a high mass outflow rate at the base, but general relativistic estimates of such mass loss are around a few per cent of the accretion rate, which can easily explain the estimates for the Cygnus X-1 jet (Russel et. al. 2007). Interestingly, for low values of x_sh, the jet geometry of M2 differs only slightly from the conical one; therefore, the jet shock is obtained in a very small range of E. For higher x_sh, the range of E which can harbour jet shocks increases. However, at the same E, the jet shock r_sh forms at a larger distance from the BH if x_sh forms closer to the BH. This is very interesting, because in the accretion disc, as the shock moves closer to the BH, the shock becomes stronger. In addition, a smaller value of x_sh implies a higher value of r_sh, and a higher r_sh means a stronger jet shock. This means that very hot gas may surround the BH, not only in the equatorial plane but also around the axis. A detailed spectral analysis of such a scenario is beyond the scope of the present paper; we would like to carry it out in the future.
The assumptions made in this paper were intended to simplify the jet problem and to allow a study of all possible relativistic jet solutions in the presence of strong gravity. However, a few things would definitely influence the conclusions of this paper. Consideration of radiation driving by an intense radiation field would surely affect the jet solution. It would accelerate the jets (see Vyas et al. 2015), but in addition, radiation drag in the presence of an intense and isotropic radiation field may drive shocks, which means even an M1 jet may harbour shocks. Moreover, the jet geometry need not be conical at large distances, and the jet may be pinched off by other mechanisms, which might create more shocks.
Furthermore, entrainment of the jet at large distances might qualitatively modify the conclusions of this paper. Even so, the results of this paper might be important in many other ways. From Figs. (6, 7, 8) it is clear that the farther out the shock forms, the stronger it is. Since the base of the jet is much hotter than even the post-shock region, any surge in the mass outflow rate, or any energy/momentum enhancement at the base, will try to drive the jet away from its stable position. If this driving is too strong, it will render the shock unstable and the shock will travel out along the jet beam. This should have two effects: the unshocked jet material will be shock heated, and moreover, since jet shocks by their nature become stronger at larger r sh, an outward-travelling shock would shock-accelerate jet particles quite significantly, and this would happen comparatively close to the jet base. However, if the driving is not too strong, this might lead to quasi-periodic oscillations. Such a shock-oscillation model has not been properly probed even in the micro-quasar scenario. This might even be interesting for QPOs in blazars. It must be remembered that, previously, we did not obtain standing internal shocks in jets around non-rotating BHs for the geometries assumed. The main aim of this paper is to show that, if jets with non-spherical geometry are considered, then internal shocks may form close to the jet base and such solutions may address a few observational features as well. The assumptions made in this paper were aimed at reducing the frills which might obfuscate the real driver of such shocks in jets.
Drawing concrete conclusions, one may say that M2 jets with energies E = 1.5–2.5 c 2 may harbour shocks in the range r sh = 5.5–10 r g from the central BH. The shocks span a range of compression ratio R = 1.6–4.0. The terminal speeds of these jets are between v ∞ = 0.745–0.916c.
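As a hedged sanity check on these numbers: if E (in units of c 2) is interpreted as the asymptotic Lorentz factor of the fully accelerated flow (an assumption on our part, though it reproduces the quoted range), the terminal speed follows as v ∞ = sqrt(1 − 1/E²):

```python
import math

def terminal_speed(E):
    """Terminal speed (in units of c) for specific energy E (in units of c^2),
    assuming E plays the role of the asymptotic Lorentz factor once all
    thermal energy has been converted into bulk kinetic energy."""
    return math.sqrt(1.0 - 1.0 / E**2)

for E in (1.5, 2.0, 2.5):
    print(f"E = {E} c^2  ->  v_inf = {terminal_speed(E):.3f} c")
# gives 0.745c for E = 1.5 c^2 and 0.917c for E = 2.5 c^2,
# matching the quoted range v_inf = 0.745-0.916c up to rounding
```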
ACKNOWLEDGMENT
The authors acknowledge the anonymous referee for helpful suggestions to improve the quality of this paper.

APPENDIX A: SHOCK STABILITY ANALYSIS

Now, using equation (14) and differentiating equation (A2), followed by some algebra, one obtains equation (A3), where

A_s = \sqrt{g_{rr}}\,\gamma^{3}\left[2\Theta g_{rr}u^{2} + (f+2\Theta)\right] + \frac{2\Theta\gamma^{2}Nuv}{u^{2}(N+1)+g_{rr}}    (A4)

and

B_s = \frac{\gamma v}{r^{2}\sqrt{g_{rr}}}\left[(f+2\Theta) - 2\Theta g_{rr}u^{2} - (f+2\Theta)u^{2} + 2\Theta g_{rr}\right]\frac{1}{uA}\frac{dA}{dr} + \frac{4\Theta u}{r^{2}} - \frac{2\Theta Nu}{u^{2}(N+1)+g_{rr}}    (A5)

Using equation (A3) in (A1), we obtain equation (A6). Now, the stability of the shock depends on the sign of ∆. If ∆ < 0 for a finite and small δr, more momentum flux flows out of the shock than flows in, so the shock keeps shifting towards ever-increasing r and is unstable. On the other hand, if ∆ > 0, the change due to δr is counteracted, and the shock is stable.
One finds from equations (A4) and (A6) that A s has a positive value. The stability of the shock can then be analyzed under two broad conditions.
• Condition 1. The shock is significantly away from the middle sonic point, i.e., the absolute magnitude of v ′ is significantly greater than 0. We find that |B s | ≪ |A s |, and hence the stability of the shock depends upon the sign of v ′. Equation (A1) shows that the shock is stable (∆ > 0) if v ′ 1 > 0 and consequently v ′ 2 < 0. Hence the inner shock is stable and the outer shock is unstable.
• Condition 2. If the shock is close to the middle sonic point, v ′ 1 ≈ v ′ 2 ≈ 0. Only the second term, containing B s, then contributes to the stability analysis, and one obtains ∆ < 0 for both inner and outer shocks; the shock is always unstable.
Finally, the general rule for the stability of the shock is:
• If the post-shock flow is accelerated, the shock is unstable; if the post-shock flow is decelerated, the shock is stable, unless the shock is very close to the middle sonic point, where it is unstable.

[Figure B1. (a) Accretion disc height H a plotted against r a (solid line, black online); the shock location x sh = r a = 40 is shown by the long-dashed line; the jet width r sin θ is overplotted (red dotted line); the black hole resides at r a = 0, shown as a black sphere with r s = 2. (b) Fitted Θ a of the PSD.]
In the paper, all stable shocks are shown by thick solid vertical lines and unstable shocks by thin dotted vertical lines.
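The rule above lends itself to a compact classifier. A minimal sketch with hypothetical helper names, encoding exactly the two conditions stated in this appendix:

```python
def shock_stability(v1_prime, v2_prime, near_middle_sonic, tol=1e-6):
    """Classify shock stability following the rules stated above.

    v1_prime, v2_prime : velocity gradients of the pre- and post-shock flow
    near_middle_sonic  : True when the shock sits close to the middle sonic
                         point (|v'| ~ 0), where the B_s term dominates
    """
    if near_middle_sonic:
        return "unstable"        # Condition 2: Delta < 0 for both shocks
    # Condition 1: |B_s| << |A_s|, so the sign of v' decides
    if v1_prime > tol and v2_prime < -tol:
        return "stable"          # decelerated post-shock flow (inner shock)
    return "unstable"            # accelerated post-shock flow (outer shock)

print(shock_stability(0.5, -0.3, near_middle_sonic=False))  # -> stable
print(shock_stability(0.0, 0.0, near_middle_sonic=True))    # -> unstable
```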
APPENDIX B: APPROXIMATED ACCRETION DISC QUANTITIES
The jet geometry of the M2 model was taken according to equation (4). The inner part of the accretion disc, or PSD, shapes the jet geometry near the base. If the local density, four-velocity, pressure and dimensionless temperature in the accretion disc are ρ a, u r a, p a and Θ a, respectively, and the angular momentum of the disc is λ, then the local height H a of the post-shock region of an advective disc is given by an expression from Chattopadhyay & Chakrabarti (2011), in which r a is the equatorial distance from the black hole. We obtain Θ a in an approximate way following Vyas et al. (2015). The u r a is obtained by solving the geodesic equation. Since u r a is known at every r a, and the accretion rate is a constant, ρ a is known. We also know that Θ a and ρ a are related by the adiabatic relation. So, supplying Θ a, ρ a and u r a at x sh, we know Θ a for all values of r a. We plot Θ a in Fig. (B1b) for x sh = 40 as filled dots. We obtain an analytic function for the variation of Θ a with r a; the best fit has parameters a t = 0.391623, b t = 2.30554, c t = 2.22486 and d t = 0.0225265. This fit is shown in Fig. (B1b).
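For readers who want to reproduce such a fit, a schematic sketch with scipy follows. Since the analytic form of the fit did not survive extraction, `theta_fit` below is a placeholder four-parameter form (an assumption, not the expression used in the paper), and the data grid is illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_fit(r_a, a_t, b_t, c_t, d_t):
    """Placeholder four-parameter fitting form for Theta_a(r_a)."""
    return a_t * np.exp(-r_a / b_t) + c_t / r_a + d_t

# Illustrative sampled data; in practice Theta_a comes from the PSD solution
r_vals = np.linspace(3.0, 40.0, 50)                   # r in units of r_g
theta_vals = theta_fit(r_vals, 0.39, 2.3, 2.2, 0.02)  # stand-in data
popt, _ = curve_fit(theta_fit, r_vals, theta_vals, p0=[0.4, 2.0, 2.0, 0.02])
print("fitted (a_t, b_t, c_t, d_t):", popt)
```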
|
2017-04-25T09:04:49.000Z
|
2017-04-20T00:00:00.000
|
{
"year": 2017,
"sha1": "e6ecf26786158ccb566a8b0b683cf2c50acc4a6d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1704.06177",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8717e1e49565a79ca8c7d4e2338610e12f8640dc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
232383422
|
pes2o/s2orc
|
v3-fos-license
|
Modelling Stretch Blow Moulding of Poly (l-lactic acid) for the Manufacture of Bioresorbable Vascular Scaffold
Stretch blow moulding (SBM) has been employed to manufacture bioresorbable vascular scaffolds (BVS) from poly (l-lactic acid) (PLLA), whilst an experience-based trial-and-error method is used to develop suitable processing conditions. FEA modelling can be used to predict the forming process based on a scientific understanding of the mechanical behaviour of PLLA materials above the glass transition temperature (Tg). The applicability of a constitutive model, the 'glass-rubber' (GR) model with material parameters from biaxial stretch, was examined on PLLA sheets replicating the biaxial strain history of PLLA tubes during stretch blow moulding. The different stress–strain relationships of tubes and sheets under equivalent deformation suggested the need to re-calibrate the GR model for tubes. An FEA model was developed for PLLA tubes under different operating conditions, incorporating a virtual cap and rod to capture the suppression of axial stretch. The reliability of the FEA modelling of tube blowing was validated by comparing the shape evolution, strain history and stress–strain relationship from modelling to the results from the free stretch blow test.
Introduction
Bioresorbable vascular scaffolds (BVSs) made from poly (l-lactic acid) (PLLA) are considered a new-generation cardiovascular medical device for their ability to decompose into lactic acid and be absorbed inside the body after the remodelling of an artery [1][2][3]. The bioresorbable behaviour offers a big advantage over permanent metal scaffolds by preserving the option of interventional treatment on the occasions of further formation of plaque [4]. A concern about using PLLA BVSs was raised for their thick struts (of 150 µm) compared to metal scaffolds (of 80 µm), a consequence of the weaker mechanical performance [5]. This disadvantage results in a big profile of the scaffolds, leading to difficult deployment and a high risk of plaque formation by disturbing the blood flow [6], which significantly restricts the clinical applications [7,8]. In order to enhance the mechanical strength and ductility of BVSs, the morphology of PLLA, i.e., orientation and crystallisation, can be re-organised in a controlled way. PLLA tubes prepared by extrusion or solution casting usually have a non-organised material morphology [9,10]. In stretch blow moulding (SBM), PLLA tubes are heated above the glass transition temperature (T g) and then biaxially deformed inside a mould [11][12][13]. PLLA material experiencing SBM gains orientation and crystallisation of the morphology, and the stiffness, strength and ductility are significantly improved; the formed tubes can then be machined by a femtosecond laser [11,13,14]. In contrast to the broad knowledge of SBM in the packaging industry, e.g., for plastic bottles, the fabrication of PLLA BVSs by SBM is at an early stage and poorly understood. Trial-and-error tests were used to acquire the optimal processing condition, resulting in a big cost of time and expense in the development of a new product.
The molecular weight (M w) of the raw PLLA pellets was measured by Gel Permeation Chromatography (GPC). The pellets were dried at 80 °C for 12 h to remove moisture before processing. PLLA pellets were extruded into sheets (thickness: 1 mm) by a single-screw extruder (Collin E25M, Dr. Collin GmbH, Maitenbeth, Germany) and quenched on a CR 136/350 chill stack. PLLA tubes (outer diameter: 4 mm; wall thickness: 1 mm) were manufactured by a different single-screw extruder (Killion KN150, Davis-Standard, CT, USA) and quenched inside a water bath. The barrels of the extruders had different temperature settings for the processing of sheets and tubes (Table 1). The quenching process following extrusion was used to acquire an amorphous state of material, confirmed by differential scanning calorimetry (DSC). Detailed information on the manufacturing process of PLLA sheets and tubes can be found in the previous studies [26,27]. The M w of the manufactured products was measured by GPC to be 13.38 × 10 4 g·mol −1 (sheet) and 13.91 × 10 4 g·mol −1 (tube); the similar M w implied only minor degradation from extrusion. The average hoop strain (ε h) and axial strain (ε a) history on the middle layer of the wall thickness of PLLA tubes was extracted from the previous study [26] and replicated by a biaxial stretch of PLLA sheets (Figure 1). Square sheet samples of size 76 × 76 mm were prepared and installed on a biaxial stretch testing machine by four groups of grips [27]. The sheet samples were heated to a processing temperature of 72 and 77 °C, respectively, by two air heaters above and below the sheet, where the temperature was controlled via measurements with two thermocouples near the surfaces of the sheet. At each temperature, a time-resolved equivalent hoop strain and axial strain from the tube blowing process was applied along the two in-plane directions (X and Y) of the sheet samples (Figure 1), controlled and provided by two servomotors. Due to the monotonic displacement control of the testing machine [38], any negative target strain was replaced by a zero strain value without stretch, similar to the application of constant-width (CW) stretch [39] (see the sketch below). The hoop and axial stress were calculated from the forces measured along the two directions with two load cells, based on the incompressibility of the material [40]. A tube parison with an effective length of 20 mm was prepared for the free stretch blow test (Figure 2a). The two ends of the tube parison were pre-stretched uniaxially (at 100 °C) to introduce pre-orientation. During the forming process, the pre-orientation prevented the two ends from inflating, whilst the forming occurred along the effective length (of 20 mm). The experimental setup for free stretch blow of PLLA tubes in the previous study is briefly illustrated (Figure 2b) [26]. A fixture with a bore of 6 mm was used to occupy the bottom cone region, allowing the tube parison to pass through whilst restricting local inflation inside the bore. A hollow rubber cylinder (HRC) was used to cover the top cone region to restrict its inflation. These measures helped produce a homogeneous geometry by restricting the deformation at the locally inhomogeneous regions. With no mould present, the surface strain of the PLLA tubes was measured by digital image correlation (DIC).
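A one-line sketch of that clamping step (a minimal illustration, not the rig's actual control code): negative target strains from the tube history are replaced by a zero-strain hold, mimicking constant-width stretch.

```python
def machine_strain(target_history):
    """Map the tube strain history onto the monotonic, displacement-controlled
    biaxial rig: negative strains become a zero-strain (constant-width) hold."""
    return [max(eps, 0.0) for eps in target_history]

# e.g. the hoop strain of the SEQ case dips below zero before inflation:
print(machine_strain([0.0, -0.1, -0.3, 0.2, 1.5]))  # -> [0.0, 0.0, 0.0, 0.2, 1.5]
```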
The average strain on the middle layer of the wall thickness was calculated, and the corresponding time-resolved hoop and axial stress at the middle length were computed by pressure vessel theory, neglecting dynamic effects [26]. Four blowing cases, denoted T72SIMP6, T72SEQP6, T77SIMP6 and T77SEQP6, were used in the free stretch blow tests; the labels indicate the processing temperature (72 °C, 77 °C), operating sequence (SIM, SEQ) and pressure (6 bar). The processing temperature was provided by performing the test in a temperature-controlled water bath. A linear axial stretch (of 60 mm) was applied by a stepper motor at a nominal speed of 25 mm·s −1, and a constant pressure (of 6 bar) was supplied. An initial axial stretch (of 6 mm) within 0.3 s was provided to overcome the sagging of the tubes. The operating sequence was defined by the time delay between the onset of axial stretch and the start of the pressure supply, with a delay of 0.3 s for SIM and a delay of 1.3 s for SEQ [26].
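The stresses referred to above can be pictured with thin-walled pressure-vessel relations. A minimal sketch, assuming a thin wall and an optional axial force term from the stretching motor; the exact expressions used in [26] may differ:

```python
import math

def tube_stresses(P, r_mid, t, F_axial=0.0):
    """Hoop and axial stress at the tube mid-length from thin-walled
    pressure-vessel theory (dynamic effects neglected, as above).

    P       : internal gauge pressure [Pa]
    r_mid   : current mid-wall radius [m]
    t       : current wall thickness [m]
    F_axial : external axial force from the stretching motor [N] (assumed term)
    """
    sigma_hoop = P * r_mid / t
    sigma_axial = P * r_mid / (2.0 * t) + F_axial / (2.0 * math.pi * r_mid * t)
    return sigma_hoop, sigma_axial

# Example: 6 bar pressure on a partially inflated tube
print(tube_stresses(P=6e5, r_mid=4e-3, t=0.5e-3))
```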
Constitutive Model and Finite Element Analysis
A constitutive model known as the 'glass-rubber' (GR) model was used [35][36][37], where the total stress tensor (σ) is composed of a deviatoric bond-stretching stress tensor (S b) with glassy response and a deviatoric conformational stress tensor (S c) with rubbery response, plus a hydrostatic stress (σ m), i.e., σ = S b + S c + σ m I (Equation (1)). For the bond-stretching stress, a Maxwell network was used to divide the deviatoric strain rate (D) into an elastic part (D e) and a viscous part (D v) (Equation (2)). The deviatoric elastic strain rate (D e) was expressed through the generalised Hookean law with a shear strain rate and shear modulus (G b) (Equation (3)). The deviatoric viscous strain rate (D v) was introduced through a non-Newtonian law with a viscosity (µ) (Equation (4)). The nonlinearity of the viscosity was built by multiplying the reference value (µ * 0) with factors from temperature (a T), effective stress (a σ) and structural evolution (a s) (Equation (5)).
The equivalent total strain rate (D) along the other, conformational Maxwell network was expressed as a hyper-elastic strain rate (D n) plus a viscous slippage strain rate (D s) (Equation (6)). Hyper-elasticity based on the Edwards–Vilgis entropy (A c) was used to calculate the deviatoric conformational stress (S c k) from the network stretch (λ k), volume change (J) and hydrostatic stress (p) along the three principal directions (Equation (7)). The viscous strain rate was based on a non-linear Newtonian viscous flow with a slippage viscosity (γ) (Equation (8)). The nonlinearity was built by introducing factors from temperature (β T) and slippage stretch (β λ) to the reference value (γ * 0) (Equation (9)). Due to the lack of an analytical solution for the conformational stress and the consequent strain rate, a Newton–Raphson process was employed to solve Equations (6)–(9), with the Jacobian matrices obtained by a numerical perturbation method [27].

A three-dimensional (3D) FEA model using four-node shell elements ('S4R') with a preliminary mesh size of 0.8 mm along the main body (1/5 of the diameter) was built in Abaqus, with smaller elements at local regions (Figure 3a). In this study, the global deformation behaviour of the FEA model showed no sensitivity to the mesh size. The FEA model comprised half of the tube parison and a fully constrained rigid bottom fixture. Symmetric boundary conditions were applied on the in-plane edges of the parts. The top cone region was defined as rigid to represent the constraint from the HRC. Frictionless contact was defined between the surface of the tube and the bore of the bottom fixture. The stress–strain relationship of a point at the middle length of the tube, outside of the fixture, was investigated and compared with the experimental results [26]. One phenomenon observed in the blowing test was the suppression of axial stretch, caused by an axial deformation rate higher than that of the motor (of 1.3 s −1) [26]. To capture this effect, a virtual cap and rod were introduced in the model: the axial stretch is applied to the rod and transferred to the cap, similar to the operation in stretch blow moulding of PET bottles [30,31,41]. This arrangement allows separation between the cap and rod, simulating the suppression of axial stretch from the motor. The virtual cap was defined as a rigid zone that transfers axial motion only. The rod was defined as a rigid part with an upward motion, in general contact with the cap. The GR model was implemented in a user subroutine ('VUMAT') of Abaqus to perform an explicit analysis. The temperature was defined in the model, and a displacement (D) and pressure load (P) were applied according to the recorded results from the free stretch blow test (Figure 3b). In contrast to the total stretch (of 60 mm) provided by the stepper motor [26], a smaller effective linear displacement (of 33 mm) was applied to the virtual rod, based on the observed axial movement of the HRC, implying a constant speed of 13 mm·s −1 within a processing time of 2.5 s. A pressure (of 6 bar) was supplied from 0.3 and 1.3 s as a linear ramp, matching the measurements for the SIM and SEQ processes respectively; it was applied on the internal surface of the tube parison, excluding the surface of the virtual cap and the cone region.
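For orientation, the two-network structure of Equations (1)–(9) can be pictured with a schematic 1D time stepper: a bond-stretching Maxwell arm relaxing with viscosity mu, and a conformational arm relaxing by slippage with viscosity gamma. The neo-Hookean stand-in for the Edwards–Vilgis stress and the constant material functions are simplifying assumptions; the full GR model uses the nonlinear factors of Equations (5) and (9) and a Newton–Raphson solve.

```python
def gr_step_1d(S_b, lam, D, dt, G_b=100.0, mu=50.0, G_c=1.0, gamma=200.0):
    """One explicit time step of a schematic 1D two-network model.

    S_b : bond-stretching (glassy) stress, updated as a Maxwell element
    lam : network stretch of the conformational (rubbery) arm
    D   : applied strain rate; dt : time step
    """
    # Bond-stretching arm: dS_b/dt = 2 G_b (D - S_b / (2 mu))
    S_b += 2.0 * G_b * (D - S_b / (2.0 * mu)) * dt
    # Conformational arm: neo-Hookean stand-in for the Edwards-Vilgis stress,
    # with the network stretch relaxing by viscous slippage
    S_c = G_c * (lam**2 - 1.0 / lam)
    lam += lam * (D - S_c / (2.0 * gamma)) * dt
    return S_b, lam, S_b + S_c          # total (deviatoric) stress

# Example: constant strain rate of 1 s^-1 applied for 1 s
S_b, lam = 0.0, 1.0
for _ in range(1000):
    S_b, lam, sigma = gr_step_1d(S_b, lam, D=1.0, dt=1e-3)
print(f"stress after stretch: {sigma:.2f} (arbitrary units)")
```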
Strain History of the Replicative Biaxial Test
The hoop and axial strain histories on the middle layer of the PLLA tubes for the four blowing cases (T72SIMP6, T72SEQP6, T77SIMP6 and T77SEQP6) were replicated on the sheet samples by the biaxial stretch test (Figure 4). At 72 °C, the nonlinearity of the strain history was found to be similar between the blowing and biaxial tests within a duration of 2.5 s (Figure 4a). For the SIM process (T72SIMP6), the initial hoop and axial strain rate before inflation (at 1.1 s) was 0.5 s −1 in the blowing test, in contrast to 0.4 s −1 in the biaxial test. The maximum hoop strain rate during deformation was observed to be 2.8 s −1 (blowing) and 2.3 s −1 (biaxial) for the tubes and sheets respectively, with a low strain offset (of 0.2) of the sheets in the biaxial test. This offset was weakened between the blowing and biaxial tests for the SEQ process (T72SEQP6), implying a well-agreeing strain history. For the SIM process at 77 °C (T77SIMP6), an early onset of the inflation process was observed, with the forming completed within 1.3 s (Figure 4b). This instant strain change was replicated in the biaxial stretch test by a maximum strain rate of 10 s −1, where a more evident nonlinear increase of axial strain was observed for both the tube blowing and biaxial tests. In the SEQ process (T77SEQP6), the inflation process was delayed, giving a total duration of 2.5 s, and a significant negative hoop strain (of 0.3) was introduced due to the decrease of diameter by the prolonged uniaxial stretch. This stage was simulated by zero hoop strain in the biaxial testing. The maximum strain rate of biaxial stretch at this condition reached 17 s −1, within the capability of the biaxial testing machine (of 32 s −1).
Stress-Strain Relationship of Tubes and Sheets
The stress responses from the blowing and biaxial tests at 72 °C were compared by plotting against the nominal strain (Figure 5). For T72SIMP6 (Figure 5a), an initially coincident hoop and axial stress–strain relationship was found for both tubes (blowing) and sheets (biaxial), corresponding to the coincident hoop and axial strain histories. At the inflation stage, the dominating axial stress of the tubes (blowing) was replicated by the biaxial test, which was attributed to a secondary axial stretch with a smaller axial strain rate (of 0.4 s −1) than hoop strain rate (of 2.5 s −1). Despite the small strain of the PLLA sheets, offset by 0.2, their stress–strain relationship showed a steeper tendency than that of the PLLA tubes, implying a softer material response of the tubes produced from the same material. By changing the strain history from SIM to SEQ (Figure 5b), the stress–strain relationship changed to a divergent hoop and axial path. Compared to the SIM process, an initially smoother axial but steeper hoop stress–strain relationship was observed for both tubes and sheets, attributed to the prolonged uniaxial stretch and the subsequently enhanced secondary hoop stretch caused by the deliberately delayed supply of pressure. Despite the similar tendencies of the stress–strain relationships, the PLLA tubes showed an evidently softer material response than the sheets along both the axial and hoop directions. The stress–strain relationships of the PLLA sheets and tubes were further compared as the temperature increased to 77 °C (Figure 6). For T77SIMP6 (Figure 6a), the influence of processing temperature in the blowing test of PLLA tubes was replicated in the biaxial test of PLLA sheets, with a narrower range of initial coincidence between the axial and hoop stress–strain relationships, within a strain of 0.2, compared to that of T72SIMP6 (of 0.8). The subsequently enlarged gap between hoop and axial stress indicated a higher axial stress in the later stage, attributed to an enhanced secondary axial stretch. Compared to the tube blowing with its continuous increase of hoop stress, a slight decrease of hoop stress was observed between strains of 2.2 and 2.5 in the biaxial test. The reason behind this was the inhomogeneity of the strain rate, which decreases after rapid inflation; this was replicated by decreasing the speed of the motor, thus introducing a dynamic effect in the load cell. This effect was more evident in the SEQ process (T77SEQP6), with a marked decrease of axial and hoop stress after rapid inflation when the strain reached 1.4 (axial) and 2.3 (hoop) (Figure 6b). For both the SIM and SEQ processes, the softer behaviour of the PLLA tubes (blowing) was more evident along the hoop direction, which was less influenced by the secondary stretch, than for the sheets (biaxial).
Modelling Replicative Biaxial Stretch
The material parameters of the GR model calibrated in the previous study were based on the biaxial testing data of sheets (the 'sheet model') [27]. The reference temperature of the conformational viscosity (T s *) was defined as 75 °C, so no conformational slippage occurred at 72 °C. The sheet model was used to model the response of PLLA sheets under the nonlinear strain history of the replicative biaxial stretch at 72 °C (Figure 7). Only minor deviations were observed between the modelling and the biaxial test for the two conditions (T72SIMP6, T72SEQP6), demonstrating the capability of the material model to capture the behaviour of materials experiencing nonlinear strain history and inhomogeneous strain rate. In the SIM process (Figure 7a), the modelling captured the behaviour of the material by an initial coincidence of the hoop and axial stress–strain relationship within a strain limit (of 0.6), followed by a steeper axial response. A dramatic strain-hardening behaviour was observed beyond the hoop strain of 1.4 and axial strain of 0.8 in the modelling, indicating the approach to the maximum extensibility of the material. By changing the operational sequence to SEQ (Figure 7b), the influence of the strain history of the sheet material was very well captured in the modelling, predicting the divergent hoop and axial stress–strain relationship of the biaxial test. As the processing temperature increased to 77 °C, conformational slippage occurred, where a slippage viscosity and critical slippage stretch (of 1.12) were employed in the GR model [27]. For T77SIMP6 (Figure 8a), the influence of processing temperature on the behaviour of the PLLA sheet was captured by the modelling, reproducing the higher axial stress due to the enhanced secondary axial stretch. A monotonic steep increase of stress indicated strain hardening in the modelling, whilst there was a decrease of stress in the experimental test due to the dynamic effect of the speed decrease of the motor. A similar effect was found in the SEQ process (Figure 8b), where the material response was captured before and during the rapid inflation by the modelling. The modelling captured the crossing point between the hoop and axial stress–strain relationships, attributed to the transition from secondary hoop deformation to secondary axial deformation, implying the applicability of the GR model to complex deformation with a preliminary lower hoop strain rate (before inflation) and a subsequently higher strain rate (during inflation). The sheet model has shown its appropriateness in modelling PLLA sheets, whilst the different behaviour of PLLA tubes, with a softer material response, revealed its inappropriateness in modelling tubes. There was therefore a need to re-calibrate the material parameters of the GR model, assuming that the softness was mainly contributed by the conformational network, to define a new model for tubes (the 'tube model'). At the low temperature of 72 °C without conformational slippage, the stress–strain data of the PLLA tubes was used to fit 3 material parameters (N s, α, η) in the Edwards–Vilgis (EV) hyper-elastic model, which were shown to be different from those in the sheet model (Table 2). The performance of the tube model was examined by comparison to the experimental results of PLLA tubes at 72 °C for the two operational sequences (Figure 9). It showed that a minor modification of three parameters significantly weakened the strain-hardening behaviour compared to the sheet model.
It captured the tendency of the material response under SIM and SEQ mentioned before and showed a consistent stress–strain response with only a small deviation from the experimental results.
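The re-calibration step can be pictured as a least-squares fit of the three conformational parameters against the tube stress–strain data. In the sketch below `ev_stress` is only a placeholder hyper-elastic function (the full Edwards–Vilgis expression is given in [27]), and the data are stand-ins:

```python
import numpy as np
from scipy.optimize import least_squares

def ev_stress(stretch, N_s, alpha, eta):
    # Placeholder hyper-elastic stress, NOT the exact Edwards-Vilgis form
    return N_s * (stretch**2 - 1.0 / stretch) * (1.0 + eta) / (1.0 - (alpha * stretch)**2)

def residuals(params, stretch, sigma_meas):
    return ev_stress(stretch, *params) - sigma_meas

stretch = np.linspace(1.0, 3.0, 30)                 # stand-in data grid
sigma_meas = ev_stress(stretch, 0.5, 0.2, 0.05)     # stand-in "measurements"
fit = least_squares(residuals, x0=[1.0, 0.1, 0.1],
                    bounds=([0.0, 0.0, 0.0], [10.0, 0.3, 1.0]),
                    args=(stretch, sigma_meas))
print("fitted (N_s, alpha, eta):", fit.x)
```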
Modelling the Influence of Temperature
Using the tube model, the process simulation of the free stretch blow test was conducted by FEA for the condition T72SIMP6 (Figure 10). The shape evolution of the tube parison from modelling and experiment was compared at six different time points (at 0.2, 0.6, 1.0, 1.4, 1.8 and 2.2 s) (Figure 10a). The forming process was very well captured by the FEA simulation with identical evolution behaviour: there was a slow resting stage without evident change of diameter within 1.0 s and a dramatic increase of diameter from 1.4 s. The simulation displayed a stable diameter of the PLLA tube beyond 1.8 s, and the further axial stretch between 1.8 and 2.2 s contributed only an increase of the axial length.
The hoop and axial strain history from the FEA simulation was compared to that from the blowing test for T72SIMP6 (Figure 10b). After the supply of pressure (at 0.3 s), a slow linear increase of hoop strain occurred at a strain rate of 0.7 s −1 (FEA) and 0.5 s −1 (test). At 1.1 s, a rapid inflation set in, with a maximum hoop strain rate of 2.8 s −1 (FEA) and 3.2 s −1 (test). The rapid inflation ceased at 1.6 s with a hoop strain of 1.8 in both FEA and test. After that, the strain rate decreased towards 0, resulting in a final hoop strain of 2.1 (FEA) and 2.0 (test), respectively.
By plotting the stress response against strain, the stress–strain relationship of the point at the middle length in FEA showed good consistency with the testing results (Figure 10c). Corresponding to the strain history, the characteristics of the stress response were reproduced by FEA: an initial coincidence between hoop and axial stress and a subsequently higher axial stress due to the secondary axial stretch. A slightly lower final stress was exhibited in FEA, with a stress state of 18 (hoop) and 12 MPa (axial) in contrast to the 20 (hoop) and 16 MPa (axial) of the blowing test.
The influence of processing temperature was modelled by increasing the temperature from 72 to 77 °C (Figure 11). In the shape evolution (T77SIMP6) (Figure 11a), the FEA simulation showed an early inflation (at 0.6 s). A 'banana'-shaped tube was observed during inflation (at 1.0 s) in the blowing test, whilst it was not explicitly shown in the FEA simulation, which kept a straight configuration. Instead, a separation between the rod and cap was indicated in FEA (at 1.0 s). This implicit behaviour represents the suppression of axial stretch from the motor due to the higher axial deformation from inflation. It further suggests that the curved shape of the tube was caused by the existence of the top constraint, which cannot be simulated by the FEA model with axisymmetric boundary conditions. The FEA modelling indicated a decrease of the separation distance (at 1.4 s), corresponding to the recovery of the straightness of the tube in the blowing test. Full contact between the rod and cap occurred later (after 1.8 s) in FEA than in the blowing test (at 1.4 s). In the strain history (T77SIMP6) from FEA (Figure 11b), the influence of increased temperature was indicated by an earlier inflation (at 0.6 s) than at 72 °C (at 1.0 s). The maximum hoop and axial strain rates during inflation were 19 s −1 (hoop) and 14 s −1 (axial) in the FEA simulation, in contrast to 13.6 s −1 (hoop) and 3.5 s −1 (axial) in the blow test, implying a bigger deviation of the axial strain rate. After the rapid inflation, an instant cessation of strain growth was shown in FEA, revealing the attainment of the critical slippage stretch. In contrast, a slow but continuous increase of strain was discovered in the blowing test, attributed to a creeping effect of the tubes under the pressure load, which is not incorporated in the constitutive model [27].
In spite of the non-identical strain history, the stress–strain relationship in FEA showed a similar tendency to the result of the blowing test (Figure 11c). There was good agreement between FEA and the blowing test along the hoop direction within a strain regime of 2.7. The stress response beyond, i.e., the creeping process from 2.7 to 3.3, cannot be provided by FEA due to the arrest of slippage, which gives infinite material stiffness [27]. The secondary axial stretch was predicted by FEA, which built a steeper stress–strain relationship along the axial direction. There was a slightly lower axial stress in the FEA simulation, caused by the weakened secondary effect during inflation with a lower rate difference (of 5 s −1) than in the blowing test (of 10 s −1).
Modelling the Influence of Sequence
The influence of the operating sequence was induced by delaying the onset of pressure supply from 0.3 (SIM) to 1.3 s (SEQ), which was examined by the FEA simulation at 72 °C (Figure 12). For T72SEQP6 (Figure 12a), the shape evolution in FEA showed a continuous decrease of the tube diameter, reflecting the effect of the persistent uniaxial stretch before 1.4 s. There was no evident increase of diameter until 1.8 s in either FEA or the blowing test. A slightly less effective axial stretch was indicated in FEA from 1.4 s than in the blowing test. The forming process in both FEA and the blowing test finished at 2.2 s, with a final diameter similar to the result of the SIM process (T72SIMP6).
The strain history at T72SEQP6 from FEA was consistent with the blowing test (Figure 12b). The onset of the increase of diameter was found at 1.3 s in FEA, in accordance with the pressure supply in the blowing test. During the inflating process, the maximum hoop strain rate was observed to be 4.5 s −1 (FEA) and 4.2 s −1 (test) respectively, higher than in the SIM process (of 3.0 s −1). In FEA, the increase of the hoop strain rate was attributed to a more elevated effective hoop stress (of 8 MPa) than in the SIM process (of 5 MPa), caused by the initially higher uniaxial stretch before inflation, which implies the role of axial stretch in activating the blowing process. The stress–strain relationship in the SEQ process (T72SEQP6) was captured by the FEA simulation, reproducing the crossing point of the hoop and axial stress–strain responses that existed in the blowing test (Figure 12c). The FEA simulation indicated an initially higher hoop stress than axial stress within a strain level of 1.0, attributed to the secondary hoop stretch caused by the delayed pressure supply in the SEQ process. The secondary effect lasted until the hoop and axial strain reached an equivalent level of 1.0, when the hoop inflation started to dominate the deformation. In FEA, the crossing point marked the transition to the secondary axial stretch, with a steeper axial stress–strain relationship beyond the strain of 1.0.
At the high temperature level of 77 °C (T77SEQP6), the effect of the operating sequence was investigated by FEA simulation (Figure 13). The simulation result showed an extended initial axial stretch with a continuous decrease of diameter before 1.0 s (Figure 13a). This tendency remained in FEA until after 1.4 s, whilst an early partial inflation was observed in the blowing test. In FEA, a weak separation between the virtual rod and cap was found at 1.8 s, with a small distance compared to the SIM process (T77SIMP6). The corresponding testing result showed a straight blown tube (at 1.8 s), where the 'banana'-shaped tube forming of the SIM process was avoided. The comparison indicated the capability of FEA to predict the tendency, though not in a precise way. The strain history (T77SEQP6) showed good agreement between FEA and the blowing test (Figure 13b). A better prediction of the overall strain evolution was displayed by FEA than in the SIM process (T77SIMP6). A linear increase of axial strain at a rate of 0.4 s −1 was displayed by FEA before 1.5 s. The maximum hoop strain rate during inflation was found to be 26.2 s −1 (FEA) and 16.8 s −1 (test), respectively. Lower axial and hoop strains beyond the rapid inflation were indicated in FEA, whilst the deviation from the blowing test was reduced due to the weakened creeping effect caused by the delayed pressure supply.
Similar to the SIM process (T77SIMP6), the FEA simulation of the SEQ process (T77SEQP6) showed a stress–strain relationship consistent with the result from the blowing test (Figure 13c). Compared to the SEQ process at 72 °C, a similar crossing point between hoop and axial stress was displayed at a strain of 0.8 in both the FEA and blowing test, indicating a transition of the secondary effect from the initial secondary hoop stretch (before inflation) to the secondary axial stretch (beyond inflation). This behaviour was very weak in the SIM process, confined within a strain of 0.2 (T77SIMP6), which implies that manipulating the operating sequence by delaying the pressure supply can help prevent curve-shaped products, highlighting the need to predict the deformation behaviour by FEA.
Discussion
With the understanding of the mechanical behaviour of PLLA materials above T g [27], FEA modelling of the stretch blow moulding of PLLA tubes was developed for the manufacture of BVSs. The different mechanical behaviour of PLLA tubes and sheets highlighted the effect of the processing history (extrusion) of the raw materials on the mechanical performance of products for subsequent manufacture, and the need for experimental characterisation of the behaviour of tubes [26]. The applicability of the GR model was demonstrated by the successful modelling of the sheet products under a more complex nonlinear strain history than previously studied [42], and it showed good adaptability for tube products after a minor modification of the material parameters. The validity of the process simulation by FEA was shown by the successful prediction of the shape evolution, strain history and stress–strain relationship, implying a big potential of FEA modelling to replace the trial-and-error method for acquiring optimal processing conditions, which will accelerate the development of the new-generation BVSs.
A softer material response of PLLA tubes than sheets was observed in the replicative biaxial test, underlining the need for direct investigation of the deformation behaviour of tubes in the forming process [26]. This finding differed from the previous application of replicative biaxial stretch to PET materials, where the mechanical responses of PET preform and sheet agreed at slow strain rates [39]. It can be explained by the raw PLLA materials having experienced different processing histories (extrusion) with different temperatures, equipment, and product shapes. It is known that PLLA is very sensitive to thermal history and hydrolysis during processing [43]; the resulting degradation would be displayed by a decayed molecular weight, of which there was no evidence in the current study given the similar M w of the manufactured products. The environmental factor of the water bath in the forming process should not be implicated either, since its duration is shorter than the time scale of hydrolysis [44][45][46]. This was evaluated by applying a uniaxial stretch to tubes after heating in dry and wet environments, respectively, for a similar time scale (of 8 min), revealing no evident influence of the water bath [47]. Another possible cause of the different mechanical performance was the pre-orientation of the material during the extrusion process by stretching the tube along the axis (machine direction) [26], thus weakening the performance in the hoop direction (transverse direction) [48,49]. To prove this assumption, the morphology of the tube and sheet products after extrusion needs to be investigated by more advanced characterisation methods, e.g., Fourier-transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), etc. [50,51].
Due to the different mechanical response, it was inappropriate to model PLLA tubes with the GR model using the material parameters calibrated from the biaxial testing of sheets [27]. A modification weakening the conformational stress of the GR model could well predict the lessened hardening behaviour, consistent with the influence of material parameters in modelling PET materials [34]. This application assumed that the difference in material behaviour was attributable to the morphological arrangement, e.g., orientation [23,25]. The GR model performed successfully by capturing the nonlinear stress–strain behaviour of the tubes and its dependence on temperature and operating sequence in SBM. One disadvantage was its incapability of modelling the creeping process beyond the rapid inflation in the free stretch blow test. To incorporate this effect, more factors related to the strain rate and the mode of deformation need to be used to define an evolving critical conformational stretch, rather than a single dependence on temperature [52]. Another possible approach is using more parallel Maxwell networks to build a wide relaxation spectrum [53,54].
Similar to the stretch blow moulding of PET bottles [30,55], suppression of axial stretch occurred in the forming process due to the rapid axial inflation activated by pressure. The similar forming characteristics motivated an indirect modelling approach, applying a virtual stretch rod and cap to capture this behaviour; these are real objects in the SBM of PET bottles [30]. One simplification in the FEA model was the exclusion of the pre-stretched tube end, assuming a direct transfer of the linear stretch from the motor. The FEA modelling helped gain insight into the forming stability in an implicit way, i.e., the separation of the virtual rod and cap representing the suppression of axial stretch. The simplifications of the FEA model limited its ability to describe the exact occurrence and recovery of the forming instability, i.e., the 'banana'-shaped tube. Despite the calibration of the model within a strain-rate limit (of less than 16 s −1) [27], the GR model provided a reasonable extrapolation in the process simulation by FEA, reflecting the physically based formulation of its mathematical expression [35][36][37]. The lack of modelling of the creeping process made the FEA modelling unable to capture the slow continuous increase of strain after rapid inflation, which can, however, be prevented in SBM by the presence of a mould.
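To make the implicit rod and cap mechanism concrete, a minimal sketch with illustrative names: the coupling is unilateral, so the rod can push the cap but cannot restrain it, and the contact opens when inflation outruns the motor.

```python
def cap_height(z_rod, z_tube_end):
    """Axial position of the virtual cap (illustrative names): the cap follows
    whichever is higher, the rod pushing from below or the inflating tube end."""
    return max(z_rod, z_tube_end)

def separated(z_rod, z_tube_end):
    """True when pressure-driven inflation outruns the motor and the contact
    opens, i.e. the suppression of axial stretch observed in the blowing test."""
    return z_tube_end > z_rod

# Example: the motor has moved the rod 20 mm, but inflation carried the tube end 26 mm
print(cap_height(20.0, 26.0), separated(20.0, 26.0))   # -> 26.0 True
```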
The calibration of the GR model was based on a broad processing temperature range (between 70 and 100 °C) [27], whilst the FEA modelling was validated in a low-temperature region (between 70 and 80 °C), a narrow window with a 5 °C difference. The processing temperature was selected to lie within the biggest transition of viscosity of the two Maxwell networks in the GR model [27], where the material shows a very low viscosity beyond 80 °C. The process simulation has its practicality, as stretch blow moulding has been operated within this temperature window [13,16,56]. Since processing conditions covering higher temperatures have been suggested [12,16,28], the applicability of the FEA modelling needs to be further addressed by experiments at elevated forming temperatures. As the forming process is a load-controlled deformation, the magnitude of the pressure influences the deformation behaviour significantly, which is not covered in the current study. FEA modelling together with experimental investigation of the behaviour of PLLA tubes at wider processing conditions needs to be studied in future work.
|
2021-03-29T05:24:01.724Z
|
2021-03-01T00:00:00.000
|
{
"year": 2021,
"sha1": "3bd70fb3b9121aa6e162685908c85455dd8a7b15",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/13/6/967/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3bd70fb3b9121aa6e162685908c85455dd8a7b15",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
221091912
|
pes2o/s2orc
|
v3-fos-license
|
Collaboration Around Rare Bone Diseases Leads to the Unique Organizational Incentive of the Amsterdam Bone Center
In the field of rare bone diseases in particular, a broad care team of specialists embedded in multidisciplinary clinical and research environment is essential to generate new therapeutic solutions and approaches to care. Collaboration among clinical and research departments within a University Medical Center is often difficult to establish, and may be hindered by competition and non-equivalent cooperation inherent in a hierarchical structure. Here we describe the “collaborative organizational model” of the Amsterdam Bone Center (ABC), which emerged from and benefited the rare bone disease team. This team is often confronted with pathologically complex and under-investigated diseases. We describe the benefits of this model that still guarantees the autonomy of each team member, but combines and focuses our collective expertise on a clear shared goal, enabling us to capture synergistic and innovative opportunities for the patient, while avoiding self-interest and possible harmful competition.
INTRODUCTION
Rare bone diseases (RBD) have, until recently, been a largely neglected area of healthcare. Their rarity and heterogeneity have unfortunately hindered their exploration at both the clinical and scientific level, even though more than 500 of the ∼7,000 rare diseases are bone disorders (1,2). The estimated incidence of RBD varies, from around 15.7/100,000 births for skeletal dysplasias (3), which are the most common, to ultra-rare disorders of which only a few patients exist in the world, such as spondylo-ocular syndrome (4). However, in the last decade, the urgency to study and treat RBD has been boosted by a greater appreciation of the socioeconomic consequences associated with their chronic nature and severity, and by the wider availability of genetic diagnostics, patient advocacy, and the development of new pharmaceutical treatment options.
The focus of the Amsterdam UMC initially included the rare bone diseases (RBD) fibrodysplasia ossificans progressiva, osteogenesis imperfecta, fibrous dysplasia and hereditary osteoporosis, but several obstacles were encountered. RBD are often extremely challenging to treat; clinical decisions are hindered by their complexity and by the lack of knowledge about their underlying pathology. Because standard treatment protocols do not exist for RBD, and off-label medications are typically required, a broad team of medical specialists is needed to design the right treatment approach for individual patients. Ideally, because these diseases are so rare, such a team would be embedded in a multidisciplinary academic setting to facilitate urgently needed clinical and preclinical research. This provides access to research-oriented colleagues who have knowledge of and affinity with relevant RBD and increases the likelihood of new insights and scientific breakthroughs that ultimately benefit the patients. Critical to maximizing progress is full collaboration between many different disciplines in a structure where not only clinicians but also clinical and basic researchers can efficiently interact across specialities and facilities. Such team structures and broad collaborative networks can be challenging to set up in academic centers due to competing interests, competition, or non-equivalent cooperation (5).
Collaborative Organizational Model
Different opinions exist about the ideal organizational structure to facilitate successful cooperation of professionals from a wide variety of disciplines. Nonetheless, in most medical and research organizations, the traditional hierarchical pyramid still dominates. Such rigidly structured organizations that are managed "top-down" often fail to provide an optimum environment for self-motivation, creativity, engagement, and empathy, all important requirements for effective collaboration amongst colleagues (6)(7)(8)(9)(10).
An alternative approach supports a less rigid hierarchy and promotes the organic development of collaboration between colleagues in a culture of equality (3)(4)(5)(6)(7). Fundamental to this is the recognition of the specific and complementary skills of each individual team member. There is increasing support for the idea that teams containing like-minded people with mutual and aligned interests can provide the basis for transparent, fair, and fruitful collaboration. Organizational models like this can achieve shared goals by stimulating an engaged, unforced and valued workforce mentality, in which individuality and the freedom to show initiative are safeguarded (6)(7)(8)(9)(10). In such a model the aim is not the integration of all departments but an efficient collaboration between relevant partners, driven by the balanced skills required to solve specific clinical or research questions. The overall goal is to improve patient care and to stimulate innovative research. The process is further enhanced by the critical input of patients in care and research. This kind of model is referred to in the literature as a "collaborative organization," and is considered an effective means of advancing both efficiency and innovation (6-10).
AMSTERDAM BONE CENTER
The Amsterdam Bone Center (ABC) was formed in late 2016 as a successful example of such a "collaborative organizational model." The ABC was an initiative of various clinical disciplines and researchers who wanted to pool their specialized skills, knowledge and experience across boundaries and beyond their day-to-day scope, with the common goal of achieving new approaches to the diagnosis, care, and effective treatment of patients with RBD. Most RBD treatment is still based on generic medical protocols which provide symptomatic relief, but effective future therapies that result in the recovery of the affected tissue will need a detailed understanding of the underlying disease pathology, which is a challenging task. As a consequence, the ABC initially focussed on RBD. Although the number of patients affected by some diseases was very limited, the level of required adapted complex care was very high. This resulted in extensive networking with many clinical departments such as plastic surgery, maxillofacial surgery, orthopedic surgery, thoracic surgery, traumatology, anaesthesiology, rehabilitation, urology, ear nose and throat surgery, audiology, ophthalmology, clinical genetics, psychiatry, physiotherapy, social work, dietetics, gypsum master, cardiology, lung disease, nuclear medicine, radiology, neurology, neurosurgery, dermatology, radiotherapy, gastroenterology, endocrinology, pediatrics, rheumatology, and dentistry. In addition, the patient organizations have been actively involved. The multidisciplinary collaboration has been based on equality.
The ABC subsequently developed as a flat organization, where mutual interest, exchange of knowledge, and innovation have led to a vivid open collaboration between clinicians and researchers.
The ABC provides a bridge between clinicians and research laboratories whose partners are embedded in the Amsterdam Movement Sciences research institute, the latter of which embraces the targeted laboratories specializing in multifaceted aspects of research on bone tissue, dentition and the surrounding tissues. In this way, it connects expert groups focussed on osteocytes (11), osteoblasts (12), osteoclasts (13), bone matrix formation (14), and angiogenesis (15), facilitating the study of bone differentiation and regeneration. With the aid of appropriate cell collection from RBD and control tissues, complex processes can be studied and interpreted in the physiological and pathological context. "Meet the expert" RBD sessions and annual RBD meetings help to keep the patients informed about the current research and progress. ABC education activities also extend to academic training at the bachelor, master and doctorate level by which enthusiasm for rare bone diseases is promoted in talented young professionals.
MANAGEMENT OF THE ABC
In place of the more typical hierarchical model in which all control is centralized to a Director, the ABC operates with a facilitating steering team, with one member in rotation functioning as the ABC chairman. The chairman conveys the consensus goals, ambitions, and decisions of the team. The different cultures and perspectives of the various collaborating departments are reflected in a steering team of four coordinators from the task force group, consisting of 2 preclinical theme leaders (from the Laboratory for Bone Metabolism of the Department of Clinical Chemistry and Cell Technology Laboratory of the Department of Oral and Maxillofacial Surgery) and 2 clinical theme leaders (from the Department of Internal Medicine section Endocrinology and the Department of Plastic Surgery). Steering team members are elected to their role for 2 years, based on their proven commitment to the ABC and their activities in promoting its interests.
In addition to the leading steering team, there is a task force which includes representatives from clinical and pre-clinical groups. These representatives are responsible for promoting their key themes [e.g., key themes are presently RBD, inflammatory bone diseases and bone oncology, complex fractures, and complex surrounding tissue injuries (Figure 1)]. The task force comes together in brainstorm sessions to translate critical clinical questions into structured preclinical research lines, and move preclinical findings into the clinical environment. In this organizational model, the coordinating task force is not focused on safeguarding its own structure, but on leveraging its diverse expertise to drive adoption of new ideas across the ABC members, identify scientific gaps, support the finding of solutions, enhance ABC connectivity and crosstalk between themes where possible, give direction to future common goals, support optimal clinical care for patients, and provide high quality education and research.
An annual symposium will ensure that all groups working in bone research and clinics in the ABC can benefit and easily collaborate in an ideal setup for research and care. Yearly goals are suggested and proposed to the ABC community in these symposia, and subsequently set and evaluated by the task force based on extensive feedback. The ultimate goal is to become further embedded in a larger international network of centers for bone research in general, and RBD in particular, in order to meaningfully help patients and offer innovative diagnostics, to develop treatment options and recovery solutions for RBD and related bone diseases. The task force also monitors whether the activities of the leading steering team align with the goals of the wider ABC community. The obvious advantage of this lateral ("flat") organization is that groups retain the freedom to pursue their own research choices, but they are encouraged to reach the best joint benefit.
FOCUS OF RBD WITHIN THE ABC
The focus of the RBD theme of the ABC was initially placed on four RBD: fibrodysplasia ossificans progressiva (FOP) (12,13,(16)(17)(18)(19), osteogenesis imperfecta (OI) (20,21), fibrous dysplasia (FD) with an emphasis on the skull (22), and hereditary osteoporosis (her. OP) (23). This repertoire was strategically composed based on the clinical and research expertise available and on the possibility of matching underlying etiology with clinical questions. A schematic overview of the differences and common ground of these RBD is given in Figure 2. Based on this, it is clear that these diseases can serve as a paradigm for other RBD sharing a similar pathology, and also provide insight into general bone pathology.
ACHIEVEMENTS ON RBD WITHIN THE ABC
A standardized approach to patient care for the four RBD was developed with the relevant clinical disciplines and patient organizations to create a patient-centered design. This has led to the implementation of a standardized route for care; its integration across numerous specialities is designed to thoroughly address all pathological aspects of each RBD (Figure 3). As a spin-off of the ABC structure, the RBD team has become an international referral center for FOP, and it coordinates international studies on FOP, OI, hereditary OP, and FD.
Several preclinical research models have been developed to study the various RBD, including the culture of subdermal (12) and periodontal ligament fibroblasts (13), which can be converted to cartilage- and bone-forming cells, or can be drivers of osteoclast formation (13,24); this provides unique insight into rare bone diseases that primarily feature additional bone formation rather than affected bone degradation. Many signaling pathways for cartilage and osteogenic differentiation are reflected in these models, which facilitates their study in easily obtainable patient tissue. This collaboration has yielded the discovery of new genes for these RBD (20,23); investigation of their mechanisms can help shed light on possible therapeutic implications. The collaborative efforts have also led to innovative diagnostics, one example of which is a new modality for imaging active heterotopic bone lesions in FOP patients with 18 F PET/CT (16)(17)(18)(19). Other advances include the development of new clinical trials with existing and new medications, translational projects on pharmacological therapy for RBD, and the development of new technology to quantify osteoclast activity ex vivo. The development of the RBD theme within the ABC structure has led to an increasing number of pre-clinical and clinical scholarships, awarded by Amsterdam UMC AMS as well as other international universities, patient associations, and national and European funding organizations, in collaboration with pharmaceutical/industrial companies. This supports a rapidly developing academic trajectory resulting in many Ph.D. projects and dissertations.
FUTURE PLANS OF THE ABC
Regarding the future treatment of RBD, Regenerative Medicine (RM) is one of the main research priorities of the ABC. This specific focus within the ABC aims at meaningful repair/regeneration by exploiting the plasticity of the body's own cells. This requires extensive knowledge of the pathological mechanism of the disease, extending from molecular interactions at the cellular level to the influence of, and inter-relationship with, the surrounding tissues and other systemic factors. The aforementioned preclinical RBD models and findings can potentially be integrated with RM strategies in order to achieve synergy in disease control and tissue regeneration. This specific expertise within the ABC extends to more prevalent bone disorders which are genetically less well-defined but which may nonetheless also benefit from therapeutic developments on RBD; these may include multifactorial osteoporosis and immune-related bone diseases. The ultimate goal is to establish a regeneration center based on the development of new pathophysiological models for the realization of individualized treatment and prevention. In addition, ongoing future plans include the development of orthoplastic centers and the expansion of our network to more national, European, and international collaborators outside the Amsterdam UMC.
In conclusion, in this article we have outlined the establishment and development of the Amsterdam Bone Center, where "collaborative organization" encourages the cooperation of all relevant clinical and research teams. Specifically, we have successfully established a patient-centered, multidisciplinary focus on RBD, including the development of targeted innovative diagnostics, clinical and research protocols, and studies. Recognition of the different cultures and perspectives of the departments represented in the ABC, shared collaborative leadership, and a diverse and well-functioning task force are critical to maintaining a balanced and successful collaboration that advances science and innovation, and improves patient care.
Knowledge of this model may be useful to other organizations aiming to establish or enhance the growth of clinical-academic collaboration.
DATA AVAILABILITY STATEMENT
All datasets presented in this study are included in the article/supplementary material.
AUTHOR CONTRIBUTIONS
This article on a unique collaborative, interdisciplinary concept in medical research arising from the rare bone diseases pillar of the Amsterdam Bone Centre was initiated by EE, with contributions to all draft versions from DM, TF, TV, JCN, JK-N
Object detection of aerial image using mask-region convolutional neural network (mask R-CNN)
The most fundamental task in remote sensing data processing and analysis is object detection. It plays an important role in classification and is very useful for various applications such as forestry, urban planning, agriculture, land use and land cover mapping, etc. However, finding an appropriate method is challenging due to the many variations in the appearance of objects in images: an object may be affected by occlusion, illumination, viewpoint variation, shadow, etc. Many object detection methods have been researched and developed, and the development of various machine learning-based methods for object detection has been increasing recently, among them methods based on artificial neural networks, deep learning, and its derivatives. In this research, an object detection method for aerial images using the mask-region convolutional neural network (Mask R-CNN) is developed. The results show that this method achieves significant accuracy, which improves as the number of training images and epochs increases.
Introduction
The use of remote sensing images has been increasing in recent years due to the advanced development of remote sensing technology, which offers better quality and higher resolution images. The amount of available image data is growing as well. Remote sensing images can serve many applications, such as forestry, urban planning, agriculture, land use and land cover mapping, etc. Therefore, the demand for remote sensing image data has also been growing lately. In order to obtain good results and benefit from remote sensing image data, the data should be processed and analyzed carefully.
The most fundamental task in remote sensing data processing and analysis is object detection. It plays an important role in classification. The result and accuracy of object detection can influence the accuracy of image classification and have a significant effect on image analysis. However, object detection is challenging because objects may have many variations in appearance in an image, caused by scale variation, occlusion, illumination conditions, viewpoint variation, deformation, shadow, background clutter, etc. [1].
In the last decades, many studies have been conducted to find and develop appropriate methods to obtain higher accuracy and better results in object detection. Generally, object detection methods fall into four main categories: template matching-based methods, knowledge-based methods, object-based image analysis (OBIA)-based methods, and machine learning-based methods [2]. Nowadays, the development of various machine learning-based methods for object detection is growing. Techniques using machine learning are more advanced and powerful; therefore, they have become popular and widely used in many applications, including some that involve remote sensing images and their analysis. Among them are methods based on artificial neural networks, deep learning, and its derivatives.
In the remote sensing community, artificial neural networks (neural networks), which form the basis of deep learning (DL) algorithms, were popular in the 1980s. Later, machine learning methods such as the support vector machine (SVM) and random forest (RF) were proposed for classification and change detection tasks. SVM can handle high-dimensionality data and performs well with limited training samples [3], while RF is easy to use and can obtain high accuracy [4]. However, since 2014, DL, as a derivative of neural networks, has renewed the remote sensing community's interest in neural networks. In image analysis tasks including object detection, land use and land cover (LULC) classification, etc., DL algorithms have achieved significant results [5]-[10].
The advanced development of graphical processing units (GPUs) has increased computer performance in parallel processing, which is beneficial for DL, especially the convolutional neural network (CNN), which processes images in parallel in object recognition and object detection tasks. In fact, deep learning is robust, suited to complex problems, and able to handle more computationally demanding tasks than other machine learning-based methods [11]. The CNN, one of the fundamental DL network architectures, uses convolution to learn higher-level features of an object from compositions of low-level features in image data, imitating how the human brain works. CNNs have succeeded in object recognition, detection, and classification tasks. Since 2012, DL methods have won object detection and image classification competitions every year. Based on these achievements, the DL framework can be applied in many application fields, particularly in object detection and classification tasks for 2D and even 3D images [1], [12].
The mask-region convolutional neural network (Mask R-CNN) is a CNN-based object detection method that adds instance segmentation through an extra mask head. Mask R-CNN is an extension of its predecessor, Faster R-CNN, obtained by adding a branch for predicting segmentation masks on each Region of Interest (RoI) [13]. In this research, we implement a framework for object (roof) detection and extraction which applies a Mask R-CNN model trained to detect and report instances of roof segments within aerial images.
Neural Networks
Neural networks are one type of machine learning model. Fundamentally, machine learning uses algorithms to extract information from raw data and represent it in some type of model [14] that can be used to infer things about other data. Neural networks, or artificial neural networks (ANNs), are a computational model used in machine learning for both classification and regression, inspired by the function of the human brain. ANNs are composed of several layers of nodes that are connected by links, each with a weight attached. In the fully connected version of ANNs, every node is connected to all nodes in the layers behind and in front of it and to no node in its own layer, meaning nodes in one layer are completely independent of each other. It is a supervised learning algorithm in which the data is first forward-propagated through the network, the error is calculated, and the error is then back-propagated through the network while the weights are adjusted [11].
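To make the forward-propagation/back-propagation cycle concrete, the following is a minimal sketch of a fully connected network with one hidden layer, written in NumPy. All sizes, data values, and the learning rate are illustrative assumptions and are not taken from any of the cited works.

```python
# Minimal sketch of a fully connected ANN with one hidden layer,
# showing the forward pass, error computation, and back-propagation
# with weight adjustment. Sizes and values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples, 3 features, binary target.
X = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Weights linking input -> hidden and hidden -> output layers.
W1 = rng.normal(scale=0.5, size=(3, 5))
W2 = rng.normal(scale=0.5, size=(5, 1))
lr = 0.5  # learning rate

for epoch in range(1000):
    # Forward propagation through the network.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Error at the output, then gradients back-propagated layer by layer.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    # Adjust the weights (gradient descent step).
    W2 -= lr * h.T @ grad_out
    W1 -= lr * X.T @ grad_h
```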
Deep Learning (DL)
One definition says that deep learning is a neural network with more than two layers. The problematic aspect of this definition is that it implies deep learning has existed since the 1980s. In fact, DL came several years after neural networks and introduced many architectural changes in network styles. DL shows spectacular results that transcend the earlier generation of neural networks. The evolution of neural networks in DL includes facets such as having more neurons than previous networks [14]. One of the great advantages of DL compared to other traditional machine learning algorithms is automatic feature extraction: through feature extraction, the network decides which characteristics of a dataset can be used as indicators to label that data reliably [14]. DL has reduced the human effort in feature extraction, which serves as the input for classification. Therefore, the accuracy attained by DL is higher than that of conventional machine learning algorithms for almost every data type, with minimal tuning and human involvement.
Convolutional Neural Networks (CNN)
Convolutional neural networks (CNNs) are one of the fundamental network architectures of deep learning. CNNs are a specialized kind of neural network for processing data that has a known, grid-like topology [15]. The network uses a mathematical operation, convolution, from which the name CNN comes. Therefore, CNNs can be defined as neural networks that employ convolution in place of general matrix multiplication in at least one of their layers. A convolution is a powerful concept for helping to build a more robust feature space based on a signal. So, by using convolution, CNNs achieve the goal of learning higher-order features in the data.
Since CNNs perform well at learning features from data, they are suitable for object recognition, object detection, and classification. In fact, the success of CNNs in image recognition is what established the power of deep learning. CNNs can identify faces, individuals, street signs, platypuses, and many other aspects of visual data, and they are good at building position- and rotation-invariant features from raw image data [14]. CNNs are very advantageous for input that has structure, repeating patterns, and spatially distributed values, such as images and audio data. Besides that, CNNs have also been used in natural language translation and sentiment analysis.
CNNs transform the input data from the input layer through all connected layers into a set of class scores given by the output layer [14]. There are many variations of the CNN architecture, but they are based on the pattern of layers demonstrated in the following figure, which depicts the high-level general CNN architecture consisting of three major groups of layers: the input layer, the feature-extraction (learning) layers, and the classification layers. Generally, in object detection the input is an image that has spatial information and depth representing the colour channels. The feature extraction layers contain a convolution layer, an activation function layer, and a pooling layer. In Figure 2, the activation function layer is represented by ReLU, a widely used example. These layers find a number of features in the images and progressively construct higher-order features [14]. After feature extraction, the next step is classification in the classification layers, which produce class probabilities or scores.
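As an illustration of this three-part layout, the following is a minimal sketch of a CNN in PyTorch, with feature-extraction layers (convolution, ReLU activation, pooling) followed by a classification layer. The layer sizes and the three-class output are illustrative assumptions and do not correspond to the network used in this research.

```python
# A minimal sketch of the input -> feature-extraction -> classification
# CNN layout described above. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # Feature extraction: convolution + ReLU activation + pooling.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Classification: a fully connected layer producing class scores.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A 64x64 RGB input yields 16x16 feature maps after two 2x2 poolings.
scores = TinyCNN()(torch.randn(1, 3, 64, 64))  # shape (1, 3)
```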
Convolution
A convolution is defined as a mathematical operation describing a rule for how to merge two sets of information. It is important in both physics and mathematics and defines a bridge between the space (time) domain and the frequency domain through the use of Fourier transforms. It takes an input, applies a convolution kernel, and gives us a feature map as output. The convolution operation is known as the feature detector of a CNN. The input to a convolution can be raw data or a feature map output from another convolution. It is often interpreted as a filter, in which the kernel filters input data for certain kinds of information. Figure 2 illustrates the convolution operation: the input data is convolved with the kernel filter to obtain the convolved features.
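The sliding-kernel operation described above can be sketched in a few lines of NumPy. Note that, like most deep learning libraries, this sketch computes cross-correlation (the kernel is not flipped); the kernel values are an illustrative edge detector, not taken from the paper.

```python
# A minimal sketch of 2D convolution: a kernel slides over the input
# and produces a feature map.
import numpy as np

def convolve2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Merge the two sets of information: input patch and kernel.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(6, 6)
edge_kernel = np.array([[-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0],
                        [-1.0, 0.0, 1.0]])
feature_map = convolve2d(image, edge_kernel)  # shape (4, 4)
```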
Mask Region Convolutional Neural Networks (Mask R-CNN)
The mask-region convolutional neural network (Mask R-CNN) is a conceptually simple and flexible general framework for object detection and object instance segmentation. Mask R-CNN efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance [13]. Mask R-CNN is an extension of the earlier Faster Region Convolutional Neural Network (Faster R-CNN). In Mask R-CNN, a third branch is added for predicting segmentation masks on each Region of Interest (RoI), in parallel with the two existing branches for classification and bounding box regression. The mask branch, which is the third branch, is a small fully convolutional network (FCN) applied to each RoI, predicting a segmentation mask in a pixel-to-pixel manner. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN [16]. Figure 3 shows the Mask R-CNN framework used in our implementation. Mask R-CNN adopts the two-stage method of Faster R-CNN: the first stage, called the Region Proposal Network (RPN), proposes candidate object bounding boxes, and the second stage extracts features using RoIPool from each candidate box and performs classification and bounding box regression. In other words, the first stage scans the image and generates proposals (areas likely to contain an object), and the second stage classifies the proposals and generates bounding boxes and masks.
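For readers who want to experiment, the following is a minimal inference sketch using the off-the-shelf Mask R-CNN implementation in torchvision (not the code used in this research; assumes torchvision >= 0.13). It illustrates the outputs discussed above: per-instance class labels, bounding boxes, confidence scores, and soft segmentation masks.

```python
# Minimal inference sketch with torchvision's pretrained Mask R-CNN.
import torch
import torchvision

# Load Mask R-CNN with a ResNet-50 FPN backbone and COCO weights.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)  # placeholder for a real aerial image tile
with torch.no_grad():
    prediction = model([image])[0]

boxes = prediction["boxes"]    # (N, 4) bounding boxes
labels = prediction["labels"]  # (N,) class ids
scores = prediction["scores"]  # (N,) confidence scores
masks = prediction["masks"]    # (N, 1, H, W) soft per-instance masks
```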
Backbone CNN
The backbone is a standard convolutional neural network that serves as a feature extractor. The early layers detect low-level features (edges and corners), and later layers successively detect higher-level features (objects like cars, people, sky, etc.). In the original Mask R-CNN paper, the authors use ResNet for the backbone architecture, and we also use ResNet in this research. The process starts with the RGB input image being converted into a feature map as it passes through the backbone network. This feature map is the input for the next step.
Region Proposal Network (RPN)
The RPN is a lightweight neural network that scans the image in a sliding-window fashion and finds areas that might contain objects. The regions scanned by the RPN are called anchors. Anchors are used to detect multiple objects with different scales and overlapping objects in an image. Anchors are boxes drawn from a set of predefined bounding boxes of certain heights and widths, so they can capture specific object classes with different sizes and aspect ratios. Anchor boxes are distributed over the image area and may overlap each other to cover as much of the image as possible.
The convolutional nature of the RPN handles the sliding window, so the RPN can scan all regions in parallel (on a GPU). The RPN scans over the backbone feature map, not over the image directly; therefore, the RPN can reuse the extracted features efficiently and avoid duplicate calculations. For each anchor, the RPN outputs an anchor class and a bounding box refinement. Figure 4 illustrates the anchor boxes within the image: anchors are the regions that the RPN scans over, distributed over the image area. The RPN generates proposed Regions of Interest (RoI), which are the regions that contain an object of a class to classify. As illustrated in Figure 4, a proposed RoI contains objects, in this case roofs, bounded by boxes. Each object has two types of box: the box with dotted lines is the anchor box, and the box with solid lines is its refinement.
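The following is a minimal sketch of how such anchors can be generated over a feature map, with several scales and aspect ratios per position. The stride, scales, and ratios are illustrative assumptions, not the values used by the RPN in this research.

```python
# Minimal sketch of anchor box generation over a feature map.
import numpy as np

def generate_anchors(feat_h, feat_w, stride, scales, ratios):
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride  # anchor centre
            for s in scales:
                for r in ratios:
                    # Aspect ratio r trades width for height at area s**2.
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.array(anchors)  # (feat_h * feat_w * scales * ratios, 4)

# A 32x32 feature map with stride 16 covers a 512x512 image,
# giving 32 * 32 * 3 * 3 = 9216 anchors.
anchors = generate_anchors(32, 32, 16, scales=[64, 128, 256],
                           ratios=[0.5, 1.0, 2.0])
```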
RoI Classifier and Bounding Box Regressor
The output from the RPN becomes the input for the next step, the RoI classifier and bounding box regressor. This step runs on the RoIs proposed by the RPN and generates two outputs for each RoI: the class of the object in the RoI, and a bounding box refinement to adjust the location and size of the bounding box to encapsulate the object. The classes generated in this step are more specific than those in the RPN. In the RPN, the classes are only foreground and background; in this step, the network is deeper, so it can classify a region into specific classes like person, car, etc. It can also assign a background class, which causes the RoI to be discarded. The bounding box refinement in this stage is similar to that in the previous stage.
Segmentation Masks
The mask network is the addition that the Mask R-CNN paper introduced, extending the Fast/Faster R-CNN method. The mask branch is also a convolutional network; it takes the positive regions selected by the RoI classifier and generates masks for them. The generated masks are low resolution, but they are soft masks, represented by floating-point numbers, so they hold more detail than binary masks. The small mask size helps keep the mask branch light. During training, the ground-truth masks are scaled down to compute the loss; during inference, the predicted masks are scaled up to the size of the RoI bounding box.
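The mask post-processing described above can be sketched as follows: a small soft mask is scaled up to the RoI bounding box size and thresholded into a binary mask. The mask size, box size, and threshold here are illustrative assumptions.

```python
# Minimal sketch of scaling a soft mask up to the RoI and binarising it.
import numpy as np
from PIL import Image

soft_mask = np.random.rand(28, 28).astype(np.float32)  # 28x28 soft mask
roi_w, roi_h = 120, 90  # bounding box size in the image (illustrative)

# Scale the predicted soft mask up to the RoI size, then binarise.
resized = np.array(
    Image.fromarray(soft_mask).resize((roi_w, roi_h), Image.BILINEAR)
)
binary_mask = resized >= 0.5  # boolean per-pixel instance mask
```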
3. Results and Discussion
The data used in the experiment is an aerial image of the city of Everswinkel, Germany. First, we created a dataset consisting of a training set, a validation set, and a testing set from the data. The dataset generation was conducted using the Fishnet tool in ArcGIS. In Figure 5, we can see that the data is divided into several clip images. Every object (roof) in each clip image was annotated as a digitized polygon with a roof-type class label (gable, hip, or flat). From the annotated images, a training set, validation set, and testing set were generated in the proportion 7:2:1. The next step was checking the dataset to verify that the data had been read successfully according to the masking of each roof type. After the whole dataset was checked, the next step was the training (learning) process using the training set; the Mask R-CNN model built in this research has to learn every roof-type class from the training set. After the training process, the next step was validation, to tune all the hyperparameters of the model. After all the hyperparameters were tuned, we conducted testing to see whether the model is able to detect and segment the objects (roofs) in the image.
Figure 6. The result of object (roof) detection using Mask R-CNN
Figure 6 shows the result of the experiment conducted in our research. From this result, it can be concluded that the implemented network, Mask R-CNN, is able to detect an object in the image (building rooftops) with good accuracy.
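The 7:2:1 split described above can be sketched as follows; the file names are hypothetical placeholders for the annotated clip images.

```python
# Minimal sketch of a 7:2:1 train/validation/test split.
import random

random.seed(42)
samples = [f"clip_{i:03d}" for i in range(100)]  # hypothetical clip names
random.shuffle(samples)

n = len(samples)
train = samples[: int(0.7 * n)]           # 70% training set
val = samples[int(0.7 * n): int(0.9 * n)]  # 20% validation set
test = samples[int(0.9 * n):]              # 10% testing set
```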
In order to measure the performance of classification and object detection models, evaluation metrics are needed. Evaluation metrics show how well a model is doing in terms of real-world performance; through them, the quality of a model can be revealed and compared with different models on the same tasks. Average precision (AP) is a popular metric for measuring the accuracy of object detectors like Faster R-CNN, Mask R-CNN, etc., and mean average precision (mAP) is the average of AP. In some contexts, AP is computed for each class and then averaged; in other contexts, the two terms mean the same thing. IoU (intersection over union) measures the overlap between two boundaries and is used to quantify how much the predicted boundary overlaps the ground truth (the real object boundary); an IoU threshold can be predefined for a dataset. Precision measures how accurate the predictions are (the percentage of correct predictions), and recall measures how well all the positives are found:
Precision = TP / (TP + FP), Recall = TP / (TP + FN)
where TP, FP, and FN denote true positives, false positives, and false negatives, respectively.
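The following is a minimal sketch of the IoU, precision, and recall computations defined above, for axis-aligned boxes in (x1, y1, x2, y2) format; the example boxes are illustrative.

```python
# Minimal sketch of IoU, precision, and recall for object detection.
def iou(box_a, box_b):
    # Overlap (intersection) area between the two boundaries.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)  # intersection over union

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

# A prediction counts as a true positive when IoU >= 0.5,
# the threshold used for the mAP value reported below.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 = 0.1428...
```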
From the experiment in our research, we obtained good performance from the model we built in terms of object detection accuracy. The resulting mAP is 0.9014186807449848 (mAP @ IoU = 0.50), and the precision and recall are 0.8871293140410786 and 0.530022520443246, respectively.
Figure 7. The result of object segmentation by using Mask R-CNN
Figure 7 shows the result of object (roof) segmentation using the Mask R-CNN model we built. From the result, we can see that the segmentation mask does not fit the object (roof) very well. Therefore, the model should be improved by changing some parameters and adding more architectural variations to make the segmentation better and increase the accuracy. Regarding the architecture used for the instance segmentation task, a bigger version of ResNet or another Feature Pyramid Network (FPN) approach should be tried and explored to improve the object detection and segmentation results. Different RPN architectures should also be explored further.
Adding more features, changing the training strategy, and increasing the amount of training data can also improve the results. Another issue that can affect the results is environmental conditions, meaning that we should account for environmental conditions in the image data, such as downtowns with closely standing high-rises, tree-shaded buildings, nearby waterbodies, etc. Such environmental conditions might affect the detection and segmentation steps during training and testing.
A balanced dataset should also be considered, to reduce the domination of certain classes, which sometimes leads to misclassification. Class imbalance is a common issue with real-world training data. The number of flat-roof buildings compared with other roof types should be considered, since an imbalance will cause problems when training a neural network classifier.
4. Conclusion
Mask R-CNN can be applied to object (roof) detection in aerial images with good accuracy (mAP 90.14% and precision 88.71%). RGB (the spectral bands of the image) can be used as features. The resulting segmentation mask does not fit the object (roof) very well. By using Mask R-CNN, the object is not only detected; we also learn which pixels belong to the object.
Implications of Adipose Tissue Content for Changes in Serum Levels of Exercise-Induced Adipokines: A Quasi-Experimental Study
Human adipocytes release multiple adipokines into the bloodstream during physical activity. This affects many organs and might contribute to the induction of inflammation. In this study, we aimed to assess changes in circulating adipokine levels induced by intense aerobic and anaerobic exercise in individuals with different adipose tissue content. In the quasi-experimental study, 48 male volunteers (aged 21.78 ± 1.98 years) were assigned to groups depending on their body fat content (BF): LBF, low body fat (<8% BF, n = 16); MBF, moderate body fat (8–14% BF, n = 19); and HBF, high body fat (>14% BF, n = 13). The volunteers performed maximal aerobic effort (MAE) and maximal anaerobic effort (MAnE) exercises. Blood samples were collected at five timepoints: before exercise, immediately after, 2 h, 6 h, and 24 h after each exercise. The selected cytokines were analyzed: adiponectin, follistatin-like 1, interleukin 6, leptin, oncostatin M, and resistin. While the participants’ MAnE and MAE performance were similar regardless of BF, the cytokine response of the HBF group was different from that of the others. Six hours after exercise, leptin levels in the HBF group increased by 35%. Further, immediately after MAnE, resistin levels in the HBF group also increased, by approximately 55%. The effect of different BF was not apparent for other cytokines. We conclude that the adipokine exercise response is associated with the amount of adipose tissue and is related to exercise type.
Introduction
Physical exercise is reported to avert diseases, thereby contributing to human health. It is also crucially involved in metabolism. The adipose tissue is an energy reservoir [1]. Physical activity initiates triglyceride hydrolysis, following which free fatty acids are released into circulation to fuel up the working muscle [2]. However, the adipose tissue also has other roles and is no longer solely perceived as an energy storage reserve. Recent literature has highlighted the importance of body fat, which has been recently described as a bona fide immune and endocrine organ [3,4]. That is because the adipose tissue is the source of numerous biologically active compounds and cells [4]. According to previous research, adipocytes produce and release a wide range of signal-transmitting molecules. For instance, the hormone adiponectin [5] plays anti-diabetic, anti-inflammatory, and anti-atherogenic roles [6,7]. It thus facilitates crosstalk between the adipose tissue and other metabolism-related organs [8]. Other such hormones are leptin [9,10], which controls the nutritional intake [11] and is thereby known as the satiation hormone, and resistin [12], linked to type 2 diabetes [13]. In contrast to adiponectin, which enhances muscle glucose uptake and increases fatty acid oxidation [14], resistin maintains fasting glycemia [15]. Further, it has been reported that follistatin-like 1, a well-known promoter of skeletal muscle growth [16,17] is expressed in adipose tissue [18]. This tissue also produces pro-inflammatory oncostatin M (OSM) [19], which is thought to regulate the homeostatic state of the tissue and the immune cell balance [20,21]. There is also evidence that adipocytes secrete interleukin 6 (IL-6) [22][23][24], described as an adipocytokine. IL-6 is a pro-inflammatory cytokine involved in lipid and glucose metabolism, and body weight regulation [3].
The secretory function of the adipose tissue is well described [3,7,25,26]. However, far too little attention has been paid to exercise-induced changes in the secretion activity of the adipose tissue. Although several research groups examined the effect of different types of exercise on the circulating levels of adipose tissue-derived factors, the results are inconsistent. For instance, in some studies, plasma adiponectin levels were unchanged during acute cycling in healthy individuals [27], or after acute/moderate exercise in overweight/obese individuals [28,29]. By contrast, in another study, raised plasma adiponectin levels were reported in overweight elderly men undergoing 6 months of high-intensity resistance training, while moderate-intensity training did not have any effect [30]. Data on leptin have also been inconsistent and contradictory. For example, in one study, short-term exercise (<60 min) did not acutely affect leptin levels in healthy volunteers [31]. While a decrease in plasma leptin in men after a graded treadmill exercise tolerance test was shown [31], an increase in leptin levels during 41 min of cycling at 50% of maximal oxygen consumption (VO2max) was recorded after administration of a standardized meal [32]. This was followed by a reduction in leptin levels during recovery time, and they increased to control values after 2 h [32].
Regarding resistin, a potential link between obesity and diabetes has been proposed [15]. Consequently, resistin is mostly studied in obese individuals. Its high blood levels are linked to poor exercise capacity [33]. In overweight men, high-intensity endurance exercise does not affect circulating resistin levels up to 48 h after the exercise [34]. Similarly, resistin mRNA levels in the adipose tissue are not affected in lean and overweight subjects [35]. By contrast, data for healthy individuals subjected to exercise are scarce.
The role of follistatin is relatively established. A recent study concluded that follistatin is released into the bloodstream following an acute bout of exercise [36]. In the study, involving young and healthy men, 3 h of cycling at 50% VO2max elevated follistatin blood levels but not the follistatin mRNA levels in the muscle. Similarly, resistance training is associated with an increase in circulating follistatin levels in elderly overweight women [37]. As for OSM, VO2max exercise elevates OSM serum levels in young and old men [38]. These results corroborate earlier findings in a mouse model [39].
IL-6 levels increase during exercise [40]. Nonetheless, the increase is most likely driven by the muscle [41,42] and increased IL-6 output from the adipose tissue has not been convincingly demonstrated to date. The adipose tissue does not seem to contribute to the elevated arterial IL-6 levels observed during a moderate short-duration workout [43]. However, according to some authors, almost 30% of IL-6 present in the blood is derived from the adipose tissue [7].
Collectively, the above studies outline the critical role of adipocytes and the influence of physical activity on the secretory profile of the adipose tissue. It is also apparent that discrepancies exist in the data regarding adipose tissue-derived factors. In addition, side-by-side comparisons of the secretory activity of the adipose tissue in the context of body fat percentage are scarce. Further, the impact of aerobic [44] and anaerobic exercise [45] on the secretory activity of the adipose tissue is not yet clear. Little quantitative analysis of the systemic response to physical activity is available. Finally, much uncertainty still surrounds the relationship between the type of exercise and adipose tissue secretion.
Accordingly, the present study was designed to determine the effect of intensive aerobic and anaerobic exercise on the serum levels of adipokines and selected cytokines considering the adipose tissue content of healthy physically active young adults.
Experimental Overview
Healthy and physically active male volunteers were assigned to three groups depending on body fat content determined using a bioelectrical impedance analyzer. Body fat content was categorized as low body fat (<8%), moderate body fat (8-14%), or high body fat (>14%). These reference points were set to correspond to BMI values below 18.5 kg/m² and two equal ranges within 18.5-25 kg/m² [46]. Based on the WHO criteria [47], the authors of [46] calculated that approximately 8% body fat corresponds to a BMI of 18.5 kg/m² (the threshold for underweight) and approximately 20% body fat corresponds to a BMI of 25 kg/m² (the threshold for overweight). The range of 18.5-25 kg/m², and thus in this study the 8-20% body fat range, is wide and represents typical nutritional status. Therefore, it was decided to investigate participants that were closer to the underweight values and those closer to the overweight threshold. Thus, the body fat range was arithmetically divided in half, acknowledging that both the size and the amount of adipose tissue are correlated with adipokine secretion ([48] and [49], respectively).
The study was based on a quasi-experimental, repeated-measures design and was adapted from our previous procedure [50]. The study protocol involved two maximal tests: an anaerobic test (maximal anaerobic effort, MAnE; a double Wingate anaerobic test, WAnT) and an aerobic test (maximal aerobic effort, MAE; Bruce treadmill test). Venous blood samples were taken at the following timepoints: immediately before, immediately after, 2 h, 6 h, and 24 h after each type of maximal physical exercise. A medical examination and assessments of the subjects' age, body composition, and height were performed at study enrollment. All of the volunteers were examined by a professional physician before and after every test. Performance tests commenced with MAnE, and MAE was performed 14 days later. All of the laboratory analyses were performed at the Gdansk University of Physical Education (Gdansk, Poland).
Participants
Forty-eight male volunteers (21.78 ± 1.98 years old) participated in the study. The participants were assigned to three groups based on a bioimpedance body composition analysis (InBody 720, Seoul, South Korea): a low body fat group (LBF, n = 16; 20.66 ± 1.91 years), a moderate body fat group (MBF, n = 19; 19.86 ± 0.88 years), and a high body fat group (HBF, n = 13; 20.53 ± 1.40 years). The characteristics of the groups are presented in Table 1. Recruitment to the research project was carried out based on letters of intent among the population of male students at the Gdansk University of Physical Education and Sport.
The participants were physically active healthy Gdansk university students without any structured or professional sports training. All of the participants filled in the Global Physical Activity Questionnaire (GPAQ), excluding professional athletes, extremely physically active individuals, and those who were completely inactive. During the examination, 11 people were excluded because they did not meet the study's physical activity demands. All of the participants had similar levels of physical effort exposure due to their daily schedules related to their course of study. None had a history of known diseases or reported any intake of medication due to illnesses 6 months before the study.
The participants' description is consistent with our previous specifications published by Humińska et al. [50]. Individuals who did not qualify for the present project (according to the GPAQ declaration of intensive physical training) had qualified for the previously presented research; the men were representatives of the control study described in [50]. Conversely, some of the participants involved in the current study had been rejected from our previous study [50] due to their higher body fat percentage. While more physically active participants were recruited for the earlier study, the less active ones engaged in our current study. Nonetheless, all of the participants in both studies were recruited at a similar time, underwent similar testing procedures, and were treated similarly, e.g., concerning completing questionnaires and nutrition. For the entire duration of the study, the participants were instructed to maintain their everyday diet, and were asked to refrain from vigorous exercise and avoid caffeine and alcohol consumption during the 48 h preceding the testing date. Food was not consumed during testing and water was available ad libitum.
The study protocol was accepted by the Bioethics Committee for Clinical Research of the Regional Medical Society in Gdansk (KB-27/18) and the study was conducted according to the Declaration of Helsinki. Written consent was obtained from each study participant before the study. The recruits were also informed about the possibility of withdrawing consent at any time and for any reason. Before participation, the subjects were informed about the study procedures.
The lipid panel was assessed once before exercise testing, in venous blood collected into 5 mL tubes containing lithium heparin as an anticoagulant. Plasma was obtained after centrifugation at 3500 rpm for 15 min (according to the manufacturer's protocol). Total cholesterol, triglycerides, high-density lipoprotein (HDL) cholesterol, and low-density lipoprotein (LDL) cholesterol were quantified using spectrophotometric methods. Laboratory kits (Randox Laboratory Ltd., Crumlin, UK) were used for all of the biochemical analyses and sample absorbance was read using a UV-vis spectrophotometer (DREL 3000 HACH). The results of the total cholesterol, HDL, LDL, and triglyceride analyses are outlined in Table 2.
Measurement of Anaerobic and Aerobic Fitness Level
The subjects' performances were assessed using the WAnT (for MAnE) and the Bruce treadmill test (for MAE). Table 3 summarizes the individuals' fitness levels.
Maximal Anaerobic Effort
The maximal anaerobic effort was determined using a twice-repeated WAnT on a cycle ergometer (Monark 894E, PeakBike, Sweden). The procedure was described previously and adapted from Kochanowicz et al. [51]. The saddle height was adjusted for each participant (knees remaining slightly flexed after the completion of the downward stroke, for a final knee angle of approximately 170-175°). All of the participants started with a standardized warm-up on the cycle ergometer (5 min at 60 rpm, 1 W/kg). During the test, each participant pedaled with maximum effort for 30 s against a fixed resistive load of 75 g/kg of total body mass, as recommended by Bar-Or [52]. After that, the participants had a 30 s break and the WAnT was repeated in the same manner, with maximum verbal encouragement.
Maximal Aerobic Effort
For MAE, the Bruce protocol on an electric treadmill (h/p/cosmos, Germany) was implemented as described elsewhere [50]. Briefly, after a standardized warm-up, each participant undertook running with an increasing load, including velocity and treadmill inclination. During the test, the participants wore a facemask connected to a pulmonary gas exchange analyzer (Quark CPET, Cosmed, Italy). The test ended when the subject could not continue because of fatigue or other conditions.
Blood Sample Collection and Measurements of Selected Markers
The following procedures were adapted from our previous study [53]. Blood (9 mL) was collected five times: immediately before, immediately after, 2 h, 6 h, and 24 h after every test. Venous blood samples were collected into Sarstedt S-Monovette tubes (S-Monovette® Sarstedt AG&Co, Nümbrecht, Germany) without an anticoagulant for serum separation but containing a coagulation accelerator. The serum was separated using standard laboratory procedures, aliquoted, and frozen at −80 °C until further analysis.
Statistical Analysis
The statistical procedures used were adapted from our previous work [50]. The descriptive statistics included mean ± standard deviation (SD) for all of the measured variables. One-way ANOVA was used to investigate intergroup differences in physical, lipid profile, and performance characteristics. Two-way ANOVA with repeated measures (RM: baseline and immediately after, and 2 h, 6 h, and 24 h after exercise) was used to investigate the levels of biochemical markers after MAE and MAnE depending on the participants' percent body fat (group: LBF, MBF, and HBF). To assess differences in particular subgroups, Tukey's post hoc test was used. In addition, the effect size was calculated by using eta-squared statistics (η 2 ). Values equal to or more than 0.01, 0.06, and 0.14 indicated a small, moderate, and large effect, respectively. The Shapiro-Wilk and Levene's tests were performed to check the normal distribution and homogeneity of variance, respectively. The total sample size of 48 participants was determined using the G*Power software ver. 3.1.9.4. (Franz Faul et al., Universität Kiel, Kiel, Germany) for the moderate effect size and power of 0.95. All of the analyses were performed using Statistica 12 (StatSoft Inc., Tulsa, OK, USA). The level of significance was set at p ≤ 0.05.
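As an illustration of the eta-squared statistic described above, the following minimal sketch computes η² as the ratio of the between-group sum of squares to the total sum of squares for a one-way layout; the data are invented for illustration and are not the study's measurements.

```python
# Minimal sketch of the eta-squared effect size: the proportion of
# total variance explained by the group factor. Values are illustrative.
import numpy as np

groups = [np.array([1.2, 1.5, 1.1]),   # e.g., LBF
          np.array([1.8, 2.0, 1.7]),   # e.g., MBF
          np.array([2.4, 2.6, 2.2])]   # e.g., HBF

grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = sum(((g - grand_mean) ** 2).sum() for g in groups)
eta_squared = ss_between / ss_total  # >= 0.14 indicates a large effect
```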
Maximal Anaerobic Effort
Changes in biochemical marker levels after MAnE are shown in Figure 1. The analysis of variance revealed a significant RM effect for all of the tested biochemical markers except IL-6. In turn, the effect of the group factor was apparent for adiponectin and leptin. Significant interactions of the group and RM factors were noted for leptin and resistin. A post hoc analysis revealed a significant increase in serum leptin levels 6 h after exercise, compared with the values recorded immediately after exercise, only in the HBF group (by 34.35%). In contrast, leptin levels in the LBF and MBF groups were unchanged from baseline to 6 h after exercise. Leptin levels significantly decreased in the LBF (by 31.32%) and MBF (by 30.33%) groups after 24 h. Despite the decrease noted 24 h after MAnE in each group, leptin levels in the HBF group remained significantly higher than in the LBF group.
A post hoc analysis of serum resistin revealed a significant increase in the HBF group immediately after MAnE. Resistin levels 6 h and 24 h after exercise decreased to values comparable with those at baseline (Table 4).
Maximal Aerobic Effort
Changes in the levels of biochemical markers after MAE are presented in Figure 2. Similar to MAnE, the analysis of variance revealed a significant effect of the time factor on all of the tested biochemical markers except IL-6. However, the effect of the group factor was noted for IL-6, leptin, and OSM.
The analysis of variance revealed a significant interaction of effects only for serum leptin levels ( Table 5). As in the case of MAnE, serum leptin levels were highest 6 h after exercise. However, a significant increase was noted from the baseline values (35.48%) and immediately after exercise (27.29%). In addition, leptin levels 6 h after MAE in the HBF group were significantly higher than those in the LBF and MBF groups.
Discussion
In this study, we set out to determine changes in the circulating adipokine levels in association with the amount of adipose tissue and inflammation, in response to intensive aerobic and anaerobic exercise in physically active young adults. To the best of our knowledge, this is one of the first studies that mainly concentrates on the association between aerobic and anaerobic exercise and the endocrine function of the adipose tissue. A direct comparison of the results to those of other studies is therefore limited because the discussed problem is novel.
The analysis revealed that exercise-induced changes in the serum adipokine levels are associated with the amount of adipose tissue and related to the type of physical effort. For both anaerobic and aerobic exercise, leptin levels increased substantially only in the HBF group and reached a peak 6 h after exercise. Most studies investigating the effects of short-term exercise on leptin report a reduction or no changes in leptin levels [54][55][56][57][58][59][60][61]. For example, a transient decline in leptin levels (6-14%) in individual subjects, up to 120 min post exercise, was shown in men and women 18-55 years of age and with a BMI corresponding to that of the HBF group in the current study after a treadmill test following the Bruce protocol to exhaustion [31,62]. Similarly, longer exercise, i.e., 1 h of running at 50% VO2max, caused a transient decrease (28%) of leptin levels in obese women up to 60 min after the exercise [63]. A long-lasting physical effort was associated with a decline in leptin levels [63]. This decrease might be related to the elevated production of non-esterified fatty acids during exercise, which is inversely correlated with leptin levels [64].
Considering MAnE, one study reported no immediate effect of a single WAnT on leptin levels in moderately active men with moderate body fat (BMI = 23.78 kg/m 2 ) [65]. On the other hand, Guerra et al. [66] demonstrated that leptin levels in skeletal muscle are reduced in response to a single WAnT exercise by 17% and 26%, 120 and 240 min after exercise, respectively. In another study, four repeated WAnTs decreased leptin levels by up to 20% within the first 90 min after exercise in young overweight/obese women [67]. While we did not detect any significant changes in leptin levels immediately after a double WAnT in the current study, we did observe significantly higher leptin levels 6 h after the exercise in the HBF group than those in the LBF and MBF groups. Most likely, we did not observe a significant reduction in leptin levels in the current study because the applied exercise required only half of the energy expenditure of that applied by Vardar et al. [67]. In addition, we used a different methodological approach than that in [67], comparing five timepoints instead of three. This resulted in a relatively lower sensitivity to detect small changes between analyzed parameters. In summary, the immediate and short-term effects of MAnE on serum leptin levels are probably associated with the amount of adipose tissue and the exercise volume.
The leptin data are intriguing, as the levels increased 6 h after exercise for both types of exercise only in the HBF group. Duzova et al. [68] also reported an increase in serum leptin levels following the implementation of the Bruce treadmill protocol. However, the increase occurred immediately after the exercise. This presumably could be associated with different amounts of adipose tissue (32% in ref. [68] vs. 17.4% in our study) and the different sex of the participants (females in [68] vs. males in our study). It should be noted that in the current study, we also observed the tendency to increase immediately after MAE and as soon as 2 h after MAnE. Of note, the increase reported by Duzova et al. [68] was observed only after 12 weeks of jogging-walking training. Other studies investigating changes in leptin levels after acute exercise focused on the immediate effects in untrained individuals or athletes with lean body types. Therefore, an increase in serum leptin levels might be observed only in individuals with an increased amount of adipose tissue and who are trained to withstand intense aerobic exercise. It is accepted that leptin resistance might not be a simple short-term biomarker of satiety [69] and that leptin levels are a function of body fat and food availability [70].
Besides the immediate and short-term effects of exercise, others have reported a delayed effect on leptin levels. For instance, it has been reported that the deferred (24-48 h) decline in leptin levels after exercise mainly depends on the energetic expenditure [71][72][73], so the higher the energy expenditure, the shorter the delay in leptin level decrease, even within a few hours in the case of prolonged exercise [71][72][73]. We here showed a reduction of leptin levels 24 h after only MAnE, which was more pronounced in the LBF group than in the HBF group. In the case of MAE, a tendency for leptin levels to decrease after 24 h was only apparent in the HBF group. It is possible that the rise in leptin levels described earlier compensated for the reduction observed in participants with a relatively low amount of adipose tissue.
Similar to the leptin data, the current study revealed a decrease in serum adiponectin levels only 24 h after MAnE. This was observed regardless of the amount of adipose tissue. The knowledge of the late effects of exercise on adiponectin levels is limited. Jamurtas et al. [34] reported no difference in adiponectin levels in overweight men up to 48 h after 45 min of exercise at 65% VO 2max intensity. A similar lack of change was documented 17-22 h after an ultramarathon run [74]. The difference in outcomes in [74] and in the current study could be related to the different types of effort and energy expenditure tested.
According to previous studies, strenuous exercise augments adiponectin levels or does not affect them at all [54,55,67,75]. Conversely, others reported a reduction in serum adiponectin levels after five repeated WAnTs in sedentary young adults [76]. The lack of immediate effect in the current study could be explained by the notion that raised catecholamine levels during intense exercise hamper adiponectin secretion [29]. Augmentation of adiponectin levels is likely related to the changes in body composition, instead of a specific manner in which an exercise is being performed [34], when energy expenditure is limited. Resting serum adiponectin levels are diminished in overweight/obese individuals [77]. While the amount of adipose tissue in the HBF group was higher than that in other groups, we did not observe any differences in adiponectin levels at rest or after exercise. The current study suggests that it is unlikely that changes in the adiponectin levels after exercise are related to the amount of adipose tissue in physically active non-obese young adults.
In the case of resistin, the effect of the amount of adipose tissue was only observed for MAnE, with resistin levels elevated immediately after exercise in the HBF group. Similarly, prolonged strenuous exercise, such as marathon running, leads to an increase in serum resistin levels among athletes [74,78,79]. Other studies investigating acute effects of exercise on serum resistin levels in overweight/obese participants reported no changes [34] or a transient decrease after 90 min [67] after four repeated WAnTs. These discrepancies could be explained by the differences in the amount of adipose tissue and physical activity of study participants (VO2max of 55.28 mL/kg/min and a mean relative power output of 7.87 W/kg in the current study, versus 32.8 mL/kg/min in Jamurtas et al. [34] and 3.7 W/kg in Vardar et al. [67], respectively). Hence, aerobic and anaerobic capability could play a role in post-exercise changes in resistin levels, especially since the changes in serum resistin levels (associated with a 10.8% mean power output increase) were no longer apparent after 19 days of high-intensity interval training [67]. The increase in resistin levels in the HBF group immediately after MAnE suggests dysregulation of adipose tissue activity. This secretory factor has been linked to insulin resistance and diabetes. Therefore, it is important to analyze it in the context of metabolic syndrome [15,80,81].
In the current study, FSTL-1 levels did not differ between groups. In both MAnE and MAE, we observed an increase in serum FSTL-1 levels immediately after exercise and a decrease 24 h after exercise, relative to resting values. The observed increase after exercise is in accordance with the results of Mendez-Gutierrez et al. [70], Mieszkowski et al. [82], and Kon et al. [83], who showed that FSTL-1 levels increase after an endurance exercise session consisting of a maximum-effort treadmill test, a marathon run, and four repeated WAnTs, respectively [70]. Of note, among the three cited studies, only Mieszkowski et al. [82] investigated the late (24-48 h) response post exercise and, in contrast with the findings of the current study, no difference compared to baseline was apparent.

Similarly, the OSM data indicated no effect of the adipose tissue on changes induced by either MAE or MAnE. Overall, we observed a reduction of serum OSM levels 24 h after both exercise types, regardless of the group. Not much is known about the effect of acute exercise on OSM. According to one study, a marathon run does not substantially alter OSM levels [82]. While the amount of energy expenditure in the current study drastically differs from that in [81], it was also previously shown that a 12-week training program either leads to an increase in OSM levels [84] or has no effect on them [85]. Therefore, the impact of exercise on OSM levels remains to be explored.

IL-6 is considered to be both pro- and anti-inflammatory. It has been shown that obese individuals are prone to increased circulating IL-6 levels [86]. In the current study, while no obese participants were considered, we did evaluate participants (physically active men) with a wide range of adipose tissue content. Within that range, IL-6 levels in the HBF group were not higher than those in the other groups. Of note, IL-6 levels in the HBF group tended to be lower than those in the MBF group, both at rest and after exercise.
It is well known that intense physical activity leads to an increase in circulating IL-6 levels [55], especially after prolonged strenuous exercise such as a marathon run [75]. Surprisingly, in the current study, we did not observe any changes in IL-6 levels. Similar findings, i.e., no change, were reported by Lira et al. [87] after four repeated WAnTs of either the upper or lower limbs among judo athletes. The same was observed by Williams et al. [88], who showed that 60 min of endurance effort (65% VO2max) induced significant changes in IL-6 levels, while four repeated WAnTs did not. According to Lira et al. [87], the maintenance of IL-6 levels can be related to increased serum glucose after exercise sessions [89], as trained individuals are typically characterized by elevated muscle glycogen storage [90]. Furthermore, no changes after 60 min of either moderate (50% VO2max) or intense (70% VO2max) effort were observed among overweight men [29]. On the other hand, Bilski et al. [61] reported that a single WAnT procedure increases plasma IL-6 levels. Similarly, Antosiewicz et al. [91] demonstrated that three repeated WAnTs lead to a rise in IL-6 levels in both untrained and trained participants. While the effect of different effort types is unclear, it appears that the exercise-induced increase in IL-6 levels is not associated with the amount of adipose tissue in non-obese young adults.
Of note, in the current study, the anaerobic and aerobic performance was similar in all of the groups, regardless of the amount of adipose tissue. As mentioned above, this could be a factor in the case of specific findings for participants characterized by high adipose tissue content but still capable of high performance.
In the present study, the HBF population was at an increased risk of metabolic syndrome, as indicated by the amount of adipose tissue and the lipid profile characteristics (mainly increased triglycerides, cholesterol, LDL, etc., and high body fat allocation). However, they were all young individuals in their twenties and, from this point of view, it is encouraging that the HBF group did not manifest significant alterations in most of the tested biochemical parameters. This implies that the secretory function of the adipose tissue is somehow balanced, so that even at a timepoint as distant as 24 h after both types of exercise, no striking effects were seen. Both aerobic and anaerobic exercise are distinctly correlated with improved health. Further work is required to determine the effects of aerobic and anaerobic exercise on the endocrine function of the adipose tissue and to establish the superiority of one type of exercise over another.
It is well known that intense physical exercise generates a robust inflammatory response, characterized by a great outflow of inflammatory mediators (cytokines, exerkines, interferons, growth-regulating factors, and other peptides). It should be emphasized that these mediators also act on each other, by inhibiting or stimulating other peptides, and their complex interconnection of mutual relations underpins the term "cytokine network" [92,93].
The exercise applied in the current study was relatively brief, lasting only up to several minutes (time to exhaustion) for MAE and 60 s for MAnE. Many studies employing longer training programs have reported the anticipated anti-inflammatory effects [94]. Hence, in the current study, one could speculate that the two types of effort did not affect the inflammation profile, as no such change was evident, e.g., in IL-6, oncostatin M, or follistatin levels. Nonetheless, we acknowledge that these markers are pleiotropic.
Further study could assess the short-term effects of both types of exercise on the levels of pro-inflammatory mediators, e.g., TNF-α and IL-1β, and those of anti-inflammatory mediators, e.g., TGFβ1 and IL-10. The duration of the inflammatory response varies greatly depending on the nature and duration of the stimulus. However, such data would be indicative of inflammation induced by aerobic and anaerobic exercise, if induced at all. In addition, experimental protocol enhancement, e.g., the extension of the experiment by several days in combination with dietary control and body fat measurements, could indicate whether the alterations in biochemical parameters have a disadvantageous or beneficial role in improving the anaerobic and aerobic performance of young adults with high body fat content [67].
Limitations
Considering the secretory endocrine activity of the adipose tissue, one should remember that the cellular response of adipocytes always depends on the number of adipose cells. In the current study, we focused on healthy, physically active young adults with a BMI below 30 kg/m2 and a relatively low or moderate fat percentage. Subsequent studies should also consider obese individuals with a BMI over 30 kg/m2 and over 35 kg/m2. On the other hand, it should be acknowledged that the amount of adipose tissue is not the only factor regulating its secretory activity; the location and variability of the adipose tissue may also play a role. For instance, muscle mass and the number of myocytes determine myokine activity, which affects the secretions of other cells. It is well established that adipocyte secretory activity should be assessed in relation to myocyte secretory activity as one of the main regulating factors. The age of the study population is another limitation of the current study. Our study focused on healthy young men, in whom any pathological changes related to disturbances of the endocrine activity of the adipose tissue might not be as pronounced, since compensatory mechanisms would hamper the progression of an unfavorable secretory trend. Further, the low number of participants and the heterogeneity within groups are additional limitations of the study, as they could bias the results. However, according to the sample size calculation, the number of participants involved in the current study exceeds the number necessary for the selected study design. Furthermore, it has been shown that a three-group model similar to the one used in the current study, with a total sample size of 30-45, is robust even under heterogeneity of variance up to a variance ratio of 3.0 [95]. Hence, the risk of bias in the used approach is limited.
Practical Application
The current manuscript complements earlier findings on the secretory/endocrine activity of the adipose tissue. The presented research has several practical applications. First, it points to the observation that differences in adipose tissue content influence many physiological factors. Of course, the exclusive focus on biochemical cellular responses is a limitation. However, an adiposity increase of only a few percent affects most of the physiological characteristics of training individuals and differentiates the exercise response. At the initial stage of training, obese individuals respond poorly to acute aerobic and acute anaerobic activity and tolerate low- and moderate-intensity exercise much better. These types of exercises should therefore be implemented at the initial stage, when an individual starts performing physical activity. After a short period of adaptation to a certain type of activity, i.e., to its duration and intensity, acute high-intensity exercise should be introduced. This would have a greater effect on cardiopulmonary efficiency than introducing intense training from the very beginning. Of course, this relationship concerns only the tissue-dependent response. Further recommendations should be carefully examined.
Conclusions
The current study revealed different responses of serum adipokine levels, depending on the amount of adipose tissue, to different types of exercise. The circulating leptin and resistin levels after an intense effort in physically active young adults with relatively high body fat were higher than those in physically active young adults with relatively low body fat.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical reasons.
Generation and Structuring of Multipartite Entanglement in a Josephson Parametric System
Quantum correlations are a vital resource in advanced information processing based on quantum phenomena. Remarkably, the vacuum state of a quantum field may act as a key element for the generation of multipartite quantum entanglement. In this work, the generation of a genuine tripartite entangled state and its control are achieved through the phase difference between two continuous pump tones. Control of the subspaces of the covariance matrix for a tripartite bisqueezed state is demonstrated. Furthermore, by optimizing the phase relationships in a three-tone pumping scheme, genuine quadripartite entanglement of a generalized H-graph state ($\tilde{\mathcal{H}}$-graph) is explored. This scheme provides a comprehensive control toolbox for the entanglement structure and allows us to demonstrate, for the first time to the authors' knowledge, genuine quadripartite entanglement of microwave modes. All experimental results are verified with numerical simulations of the nonlinear quantum Langevin equation. It is envisioned that quantum resources facilitated by multi-pump configurations offer enhanced prospects for quantum data processing using parametric microwave cavities.
One of the most easily accessible and, at the same time, reliable sources for entanglement generation is the vacuum state of a quantum field [16]. Squeezing, the fundamental operation for the production of continuous-variable (CV) states, allows the generation of coherence and entanglement from vacuum fluctuations [17]. While two-mode squeezing produces bipartite quantum states, multipartite states can be generated by applying similar operations [18]. Intriguingly, multipartite CV states have been shown to enable various promising phenomena such as quantum state sharing [19] and secret sharing [20], dense coding [21], error correction [22], and quantum teleportation [23]. Alongside phase sensing [24] and quantum sensor networks [25], multipartite entangled states have significant potential in multiparameter quantum metrology applications [25,26]. Furthermore, CV cluster states show potential as a universal quantum computing platform [27][28][29], which has been under active development for the past 20 years. Cluster-state calculus, foremost utilizing optical resources [28,[30][31][32][33], realizes measurement-based quantum computing [3].
While optical-mode schemes for the generation of multipartite states lack versatility and in-situ tunability and are limited to optical frequencies, the microwave platform allows for full control of operations via input rf-signals and integration with the existing silicon-based circuitry. During the past few years, significant progress in the processing of CV multipartite states at microwaves has been achieved; for instance, squeezed states produced by microwave cavities have been shown to exhibit correlations between photons in separate frequency bands [18] and strong entanglement between different modes [34].
In this work, we experimentally generate genuinely entangled tripartite and quadripartite states using a superconducting parametric cavity operated under steady-state conditions. Using the Gaussian-mode formalism [35][36][37], we characterize the generated states and verify entanglement from the covariance matrix. We develop an analytical description, which allows us to determine the entanglement structure of the generated state and establish a connection to the H(amiltonian)-graph representation [7,29,38,39]. All of the experimental results are in good agreement with the theoretical predictions, in which all circuit parameters were set in accordance with the measured characteristics of the device.
The paper is organized as follows. Section II describes the quantum dynamics of a Josephson parametric system and demonstrates the generation protocol for CV multipartite states, such as fully inseparable and genuinely entangled states. Using analytical methods, we provide the entanglement structure, which is described by graphs and their corresponding adjacency matrices. In Section III we present the experimental setup and explain our data analysis methods. In Section IV we present the experimental and theoretical results on the generation of multipartite entanglement using a Josephson parametric system in both the tripartite and quadripartite cases. In Appendices VII A-E we present details of our analysis, experimental techniques, and parameter extraction. Appendix VII F deals with the analytical solution of the equations of motion for tri-/quadripartite states using a simplified linear model of the parametric amplifier and establishes the correlations (graph connections) between spectral modes, which allows us to classify and control the entanglement structure. Furthermore, analytical forms of the relevant covariance matrices are given.
II. THEORETICAL FOUNDATIONS
A. Quantum dynamics of the device

Our work employs a parametric system comprised of a superconducting λ/4 resonator terminated in a SQUID loop. Such a setup forms the archetype of a narrow-band superconducting Josephson parametric amplifier (JPA) [40,41]. In our setting, we pump the JPA using a multitone external RF magnetic flux through the SQUID at frequencies that are approximately twice the frequency of the resonator, ω_d ∼ 2ω_r (three-wave mixing) [18,42]. In the rotating frame, the Hamiltonian of the system, as derived in Refs. 40 and 43, takes the form

H/ħ = ∆_r ã†ã + Σ_{d=1}^{p} ( α_d e^{−i∆_d t} ã†ã† + α_d* e^{i∆_d t} ãã ),  (1)

where ã (ã†) is the annihilation (creation) operator for cavity photons in the rotating frame at angular frequency ω_Σ/2, α_d is the complex amplitude of the d-th pump tone, and ∆_d = ω_d − ω_Σ is the angular frequency detuning of the corresponding tone. Possible extra phase factors in the different pump tones are included in the complex pump amplitude α_d = |α_d| e^{iϕ_d}. Here ∆_r denotes the detuning between half of the average pump angular frequency ω_Σ and the resonator angular frequency ω_r, ∆_r = ω_r − ω_Σ/2, with ω_Σ representing the average angular frequency in the multi-tone driven case, ω_Σ = (1/p) Σ_{d=1}^{p} ω_d, with d = {1, . . ., p} the pump tone index.
Strongly driven SQUIDs are notoriously nonlinear. Therefore, we also include the nonlinear Kerr term with strength K in the description of our parametric system. The Kerr constant controls the parametric behavior close to and above the critical pumping threshold α ≥ α_crit. Several effects are accounted for by the Kerr nonlinearity, such as the limited maximum gain, the compression observed at α ≲ α_crit, the broadening and shifting of the resonance curve, and the parametric oscillation above the critical point. In our experiment, we employ the pump-power-dependent gain coefficient to extract the Kerr constant (see Appendix VII D).
In order to describe the coupling of the cavity resonator to the incoming transmission line and to an intrinsic thermal bath, we include two additional terms in the full Hamiltonian,

H_tot = H + H_sig + H_loss,  (2)

where H_sig includes the coupling to the signal port transmission line with dissipation rate κ, while H_loss includes the coupling to the internal loss port with linear dissipation rate γ.
Using the quantum Langevin equation (QLE), we obtain the output modes in our parametric system. We employ the standard input/output formalism in the rotating frame, which yields

dã/dt = (i/ħ)[H_tot, ã] − ((κ + γ)/2) ã + √κ b̃_in + √γ c̃_in,  (3)

where b̃_in and c̃_in are the ladder (annihilation) operators for the signal and linear dissipation ports, respectively. The output mode b̃_out is obtained using the following relation between the incoming and outgoing modes:

b̃_out = b̃_in − √κ ã.  (4)

We are interested in the correlations embedded in the output mode given by Eq. (4) in the time domain. The correlations can be revealed in full after Fourier transformation to the frequency domain. By defining finite-band spectral modes (see Section II B) in the frequency domain and examining correlations between these spectral ranges, we can verify the presence of entanglement in the band-limited microwave signals.
B. Spectral modes definition
Parametric downconversion processes and the definition of the employed spectral modes within the fundamental cavity resonance in a multi-pump JPA are illustrated in Fig. 1. The spacing and width of the spectral modes are selected in such a manner that the modes are generated within the linewidth of the JPA resonance. Each pump that acts on the JPA triggers spontaneous parametric downconversion of a pump photon (vertical red arrows) into two photons, with their energies summing up to the energy of the pump photon (blue arrows). This process is stimulated by vacuum fluctuations, whose existence is a fundamental feature of quantum electrodynamics. One might expect the downconversion processes to be random, occurring independently for each pump in the multi-tone pumping situation. In such a case, the result would simply be a sum of the downconversion processes, but this turns out not to be the case. Instead, the photons are fundamentally correlated, even if they originate from different pump tones, because they were "born into existence" by the same quantum fluctuation. In other words, one spectral mode contains photons correlated with the quanta in the other spectral modes and, consequently, we expect multipartite correlations to appear (depicted schematically via the zigzag lines).
FIG. 1. Definition of spectral modes and their correlations in a multi-tone pump setting. Spectral modes in a multi-pump JPA, where the pump tones (red arrows) trigger the parametric downconversion process (PDC), leading to the appearance of multipartite correlations between microwaves (blue arrows) extracted from vacuum fluctuations. The numbered spectral modes depicted in green are also correlated due to the continuous pumping of the JPA, resulting in multipartite entanglement between microwaves in the spectral modes. The bandwidth ∆ of each spectral mode in the Fourier analysis is chosen to be much narrower than the cavity resonance width.
We consider the fundamental resonance of a transmission-line JPA centered at frequency ω_r with a bandwidth 2δω, within which ([ω_r − δω, ω_r + δω]) we define N spectral modes as depicted in Fig. 1. Let us define ã as a vector of spectral modes,

ã = (ã_1, ã_1†, . . ., ã_N, ã_N†)^T,  (5)

where N is the total mode number and the creation ã_i† = ã_i†(t) and annihilation ã_i = ã_i(t) operators are time-dependent. In general, the frequency difference between half of the p-th and (p + 1)-th pump frequencies defines the bandwidth of the spectral mode ã_i. We employ an equidistant pump scheme where the bandwidth of each mode ã_i is defined as ∆, such that the spectrum of the full set of modes ã_i covers the bandwidth 2δω of the cavity mode ã. In the experiment, we collect the emitted power over the whole [ω_r − δω, ω_r + δω] frequency range and separate the signals into the N modes using numerical postprocessing; a sketch of this step is given below. The same operation could be implemented by using accurate bandpass filters with bandwidth ∆.
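As a minimal illustration of this postprocessing step, the following sketch separates a complex baseband record into N spectral modes by brick-wall filtering in the FFT domain. The function and variable names are ours, not taken from the authors' analysis code, and the brick-wall filter stands in for whatever window the actual pipeline uses.

```python
import numpy as np

def split_spectral_modes(v, fs, centers, delta):
    """Separate a complex baseband record into spectral modes.

    v       : complex time trace covering the full cavity band
    fs      : sampling rate (Hz)
    centers : mode center frequencies (Hz, relative to baseband zero)
    delta   : bandwidth of each spectral mode (Hz)
    Returns one complex trace per mode, each shifted to zero frequency.
    """
    n = len(v)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    V = np.fft.fft(v)
    t = np.arange(n) / fs
    modes = []
    for fc in centers:
        mask = np.abs(freqs - fc) < delta / 2.0   # brick-wall bandpass
        vi = np.fft.ifft(V * mask)
        # shift the selected mode to zero frequency for quadrature extraction
        modes.append(vi * np.exp(-2j * np.pi * fc * t))
    return modes
```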
Within the scope of this work, we consider only tripartite (N = 3, p = 2) and quadripartite (N = 4, p = 3) quantum states. In the following, we elucidate the internal structure of the generated states through a graph representation based on the quantum Langevin equation.
C. Graphical description of quantum states
In order to construct a comprehensive graphical representation, we consider interactions between the cavity and input vacuum modes by solving the QLE given in Eq. (3) for the N cavity modes defined in Eq. (5); for details see Appendix VII F. Assuming the strong coupling regime with κ ≫ ∆, while neglecting the dissipation losses γ and the Kerr nonlinearity K, Fourier transformation yields a system of linear equations which can be cast in matrix form,

M ã(ω) = √κ b̃_in(ω).  (6)

Here the interaction matrix M contains diagonal entries provided by the cavity-related part of the Hamiltonian, whereas the parametric terms appear in the off-diagonal entries. For example, in the tripartite case with ∆_r = 0 and different phases for the pump tones, α_1 = αe^{iϕ_1} and α_2 = αe^{iϕ_2}, the diagonal entries of M are given by the coefficient c = −iω + κ/2, through which the frequency dependency enters, while the off-diagonal entries are the pump couplings ∝ αe^{iϕ_d} (Eq. (7)). To express the intracavity modes through the vacuum input, we use the inverse matrix,

ã(ω) = √κ M^{−1} b̃_in(ω).  (8)

For the tripartite case, the explicit form of M^{−1} (Eq. (9)), with ∆ϕ = ϕ_1 − ϕ_2, contains, besides the two-mode squeezing (TMS) correlations proportional to α, also beamsplitter correlations (BS) ∝ α². Note that the α²-terms are absent in the matrix M introduced in Eq. (6). In this tripartite example, the phase difference ∆ϕ contributes only to the BS connections ã_i(ω) ↔ ã_j(ω) or ã_i†(ω) ↔ ã_j†(ω). TMS connections are defined by the entries ã_i(ω) ↔ ã_j†(ω) in M^{−1}. In Appendix VII F we discuss how the phase shifts between the pumps influence the structure of the subspaces within the covariance matrix. In the quadripartite case, it turns out that those products in M^{−1} responsible for the beamsplitter interaction can be fully suppressed in certain pump tone configurations by choosing the pump phases properly.
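To make the mechanism concrete, the sketch below builds a tripartite interaction matrix of the kind described above, with diagonal entries c = −iω + κ/2 and parametric couplings on the pump-connected off-diagonals, and inverts it numerically. The mode ordering (ã_1, ã_2†, ã_3) and the exact signs of the entries are illustrative assumptions, not the authors' matrix; the point is that M^{−1} acquires entries of order α² linking modes 1 and 3 even though M itself has none.

```python
import numpy as np

def tripartite_M(omega, kappa, alpha, phi1, phi2):
    """Illustrative tripartite interaction matrix in the basis
    (a1, a2^dag, a3): pump 1 couples modes 1-2, pump 2 couples modes 2-3."""
    c = -1j * omega + kappa / 2.0
    a1 = alpha * np.exp(1j * phi1)
    a2 = alpha * np.exp(1j * phi2)
    return np.array([
        [c,                 1j * a1,          0.0],
        [1j * np.conj(a1),  np.conj(c),       1j * np.conj(a2)],
        [0.0,               1j * a2,          c],
    ])

M = tripartite_M(omega=0.0, kappa=1.0, alpha=0.1, phi1=0.0, phi2=np.pi / 3)
Minv = np.linalg.inv(M)
# The (1,3) entry of M^{-1} is nonzero and of order alpha**2: this is the
# beamsplitter (BS) correlation that is absent from M itself.
print(Minv[0, 2], Minv[2, 0])
```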
Interestingly, the matrix √κ M^{−1} can also be interpreted as an adjacency matrix, which is used in graph theory for the characterization of the connections, the graph edges. In a regular H(amiltonian)-graph [7,29,38,39], each vertex represents a vacuum mode and the adjacency matrix describes the correlations produced by two-mode squeezing (TMS) between the vacuum modes. However, our scheme produces additional correlations, the BS correlations, that can no longer be described purely by the TMS correlations and, therefore, the standard H-graph theory needs a more generalized approach.
In our approach, we introduce generalized H-graphs formed by both two-mode squeezing and beamsplitter correlations (see Appendix VII F). Examples of such graphs are presented in Figs. 2a and 2b for the tripartite and quadripartite case, respectively. Intriguingly, the famous Greenberger-Horne-Zeilinger (GHZ) states [7,38,44] have a different structure, since they are devoid of BS correlations. However, by applying additional pump tones and adjusting the phase difference between them, one can generate tripartite and quadripartite GHZ states consisting of only TMS correlations, as shown in Figs. 2c,d. In general, the considered multipumping scheme allows us to control the TMS and BS bonds, providing access to more complex structures of CV quantum states beyond GHZ-like states.
From the experimental point of view, the observer is interested in the adjacency matrix for the outgoing modes b̃_out, which are obtained from ã using the input-output relationship in Eq. (4). This equation yields

b̃_out(ω) = (I − κM^{−1}) b̃_in(ω),  (10)

on the basis of which we may define the adjacency matrix M̃ = I − κM^{−1} for input-output mode graphs. Due to the linear nature of the equation, the unit matrix does not change the intrinsic form of the interactions between the vacuum modes, and the correlation structure of the intracavity and output spectral modes is equivalent. Consequently, analyzing a graph defined by the matrix M^{−1} is sufficient to characterize the BS and TMS connections between the vacuum spectral modes.
D. Connection to Hamiltonian graph
CV cluster states with a square-lattice graph structure provide a foundation for measurement-based continuous-variable quantum computation (CVQC) [38]. Cluster states can be asymptotically reached from H-graph states in the case of infinite squeezing [39]. The H-graph structure is defined by its adjacency matrix G, whose entries G_jk specify the multimode squeezing Hamiltonian of the form

H ∝ ħα Σ_{jk} G_jk ã_j†ã_k† + h.c.  (11)

Here, the pump tone amplitudes are considered to have equal strength α. The matrix G involves TMS correlations between modes ã_i ↔ ã_j†, but, as was pointed out before, the BS correlations do not show up in the H-graph representation. The equations of motion for the operators are given by i dã_k†/dt = α Σ_j G_jk ã_j and i dã_k/dt = −α Σ_j G_jk* ã_j†. Taking the Fourier transform, the left-hand side equals ω × ã(ω), and the combination ωã_k(ω) = α Σ_j G_jk ã_j†(−ω) provides the connection to the QLE treatment in Eq. (6): ã(ω) here is the cavity signal defined by the graph connections given by G_jk. Consequently, the basic graph structures are the same, but the form of M^{−1} in the QLE analysis yields higher-order correlations which are experimentally relevant.
A standard description for graphs is based on the complex symmetric matrix Z = ie^{−G}, which is interpreted as the adjacency matrix for an undirected Gaussian graph with complex-valued edge weights [38,39]. Decomposing such a matrix up to quadratic terms, Z = iI − iG + (i/2)G² + O(G³), we obtain corrections to the adjacency matrix, which correspond to the additional correlations, the BS correlations, obtained in our QLE analysis. Indeed, BS transformations embody interactions to second order, which provides classical correlations between the corresponding nodes [45].
Let us now show the origin of the BS correlations using the multimode squeezing Hamiltonian of Eq. (11) in R = e^{−(i/ħ)Hτ}, where we have considered that the system is pumped for a finite time τ. The multimode squeezing operator R can be decomposed into a combination of TMS operators, containing B_ij = ã_i†ã_j† + ã_iã_j, and BS transformations based on T_ij = ã_iã_j† − ã_i†ã_j. By utilizing the Zassenhaus expansion (up to first order in the commutation relationship) [45], we obtain for the tripartite squeezing operator a product of TMS operators acting on the mode pairs 1-2 and 2-3 and a BS transformation acting on modes 1 and 3 (Eq. (12)). Here, θ_13 specifies the relative phase shift between the two pump tones. For detailed information on the expansion coefficients for a bisqueezed state we refer the reader to Ref. 45.
The total multimode squeezing operation can be considered as a combination of TMS operators, acting on the respective bipartitions, and BS transformations between the other modes.The beamsplitter correlations are phase dependent, and the strength of the BS contribution can be tuned down to zero in certain cases via proper choice of the phase difference between the pumps.
The general decomposition of the multimode squeezing operator can be expressed as a product of TMS and BS factors,

R = . . . e^{θ_13 T_13} e^{θ_24 T_24} . . .,  (13)

where the total numbers of BS and TMS operators in the decomposition are N_BS = Σ_n (N − 2n) and N_TMS = Σ_n (N − 2n + 1), with n = 1, 2, . . ., ⌊N/2⌋, for the configuration introduced in Fig. 1 with N − 1 pump tones and N spectral modes. Collecting all of the entries of B_ij and T_ij, we obtain a generalized adjacency matrix G̃ whose entries combine the TMS and BS contributions. Thus, the beamsplitter correlations in the adjacency matrix arise naturally from the squeezing operator formalism when the Hamiltonian is supplied with the second-order terms in the pump amplitude. The structure of the matrix G̃ defines the edge connections in the generalized H-graph.
E. Verification of the multipartite entanglement
The generalized graph analysis allows us to visualise the structure of entanglement in the quantum state generated by simultaneous multiple pump tones. However, in order to estimate the amount of quantum resources embedded in the state, we have to investigate and quantify the classical and quantum correlations and determine how they reflect the genuine multipartite entanglement of the state.
Within the framework of parametric amplifiers, all microwave fields produced by a JPA below the critical threshold are Gaussian [17,46]. Therefore, the output states of an N-mode JPA can be fully characterized by the covariance matrix of the 2N-length column vector of quadratures r = (x̃_1, p̃_1, . . ., x̃_N, p̃_N)^T, where x̃_i = (ã_i + ã_i†)/2 and p̃_i = (ã_i − ã_i†)/2i. The covariance matrix V, whose elements are given by

V_ij = (1/2)⟨∆r_i ∆r_j + ∆r_j ∆r_i⟩ − ⟨∆r_i⟩⟨∆r_j⟩,  (14)

is sufficient for the detection of entanglement, eliminating the need for analysis of the full density matrix. The last term can be ignored as we take ∆r_i = r_i − ⟨r_i⟩.
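A compact sketch of how such a covariance matrix can be estimated from measured quadrature records is given below. It assumes each spectral mode i contributes calibrated quadrature arrays x[i] and p[i]; the function and array names are illustrative.

```python
import numpy as np

def covariance_matrix(x, p):
    """Estimate the 2N x 2N covariance matrix from quadrature records.

    x, p : arrays of shape (N, samples) holding the calibrated quadratures
           of the N spectral modes.
    Uses V_ij = <dr_i dr_j + dr_j dr_i>/2 with dr_i = r_i - <r_i>.
    (For classical records the symmetrization is trivial, but it mirrors
    the operator definition in Eq. (14).)
    """
    N = x.shape[0]
    r = np.empty((2 * N, x.shape[1]))
    r[0::2] = x   # interleave x1, p1, x2, p2, ...
    r[1::2] = p
    dr = r - r.mean(axis=1, keepdims=True)
    return dr @ dr.T / dr.shape[1]
```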
Having obtained the covariance matrix, we can analyse the entanglement and examine the structure of the quantum state. In this work, we consider fully inseparable states and genuinely entangled states, for which covariance-based detection is, in general, more robust than detection via complete determination of the state [47]. While the covariance matrix is sufficient for evaluating the entanglement of Gaussian states, it is necessary to include higher-order correlations in the evaluation of non-Gaussian states [48,49].
To examine the inseparability properties of the quantum state [47,[50][51][52][53][54][55], we apply symplectic transformations to the covariance matrix and calculate its symplectic eigenvalues (the PPT criterion [56,57]). Such transformations are equivalent to a phase-space reflection of a single party in the N-partite state [50]. If all of the minimum symplectic eigenvalues {ν_i}, i = 1, . . ., N, are less than one, the partially time-reversed states are unphysical; in other words, the original state is fully inseparable. As has been pointed out in Ref. 54, if the purity of the states cannot be guaranteed in an experimental setting, verification of full inseparability in a multimode system does not imply genuine multipartite entanglement (GME).
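The PPT test just described can be sketched in a few lines: flip the momentum of one party, then read off the symplectic spectrum as the absolute eigenvalues of iΩV. The normalization (vacuum diagonal equal to one, threshold ν < 1) follows the scaled convention used later in the paper and is an assumption of this sketch.

```python
import numpy as np

def symplectic_form(N):
    w = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(N), w)

def min_symplectic_eigenvalue_ppt(V, k):
    """Minimum symplectic eigenvalue after partial transposition of mode k.

    V : 2N x 2N covariance matrix (vacuum = identity convention).
    k : index of the mode whose momentum sign is reversed.
    Full inseparability requires the minimum < 1 for every choice of k.
    """
    N = V.shape[0] // 2
    lam = np.ones(2 * N)
    lam[2 * k + 1] = -1.0            # flip p_k (phase-space reflection)
    Vk = np.diag(lam) @ V @ np.diag(lam)
    Omega = symplectic_form(N)
    # eigenvalues of i*Omega*Vk come in pairs +/- nu; take the smallest |nu|
    ev = np.linalg.eigvals(1j * Omega @ Vk)
    return np.min(np.abs(ev))
```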
The entanglement structure becomes more involved with an increasing number of parties. While the symplectic transform approach indicates that any single party is inseparable from the whole, a state that is a mixture of separable states could still show full inseparability under this PPT criterion. States that cannot be written in such a way are called genuinely entangled [17], and the verification of such states differs from full inseparability. Using generalized position and momentum observables, an entanglement criterion has been derived and applied to confirm tripartite energy-time entanglement of three spatially separated photons [58]. In particular, there is a universal GME criterion derived in Ref. [47] and further refined in Ref. [54]. This GME criterion utilizes only the variances of the quadrature operators and can be used for entanglement verification without any additional measurements. This general criterion was recently employed for the verification of genuine tripartite entanglement of microwaves in a double superconducting cavity setting [34].
The GME criterion is based on weighted variances of the quadratures: violation of the inequality in Eq. (15) is sufficient to confirm genuine tripartite entanglement (N = 3), and violation of the inequality in Eq. (16) is sufficient to confirm genuine quadripartite entanglement (N = 4), with the weights h_i, g_k lying in the range [−1, 1].
To simplify the search domain, we set h_1 = g_1 = 1 and h_i = h, g_i = g for the remaining modes. In the double and triple pumping schemes, we generate generalized tripartite and quadripartite H-graph states, which have a different structure compared with Greenberger-Horne-Zeilinger (GHZ) states [7,38,44]. The generalization deals with the addition of BS correlations to the H-graph structure. However, by applying additional pump tones and adjusting the phase difference between them, we can obtain regular GHZ-type entangled states. Thus, our scheme facilitates control of the TMS and BS correlations and, thereby, allows tuning of the structure of the entangled state.
Typically, the experimental weights of the graph edge connections are slightly non-symmetric due to imperfections in the measurement settings. This results in a difference in the optimal weight values in the GME criterion. In order to find the full violation of the criterion in our analysis, we sweep over all possible "base" modes (with weights h_i = g_i = 1) in order to detect the minimum value of S.
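The weight optimization amounts to a two-parameter minimization once the base mode is fixed. The sketch below wraps a user-supplied evaluation of S; since the exact inequality of Eqs. (15)-(16) is not reproduced here, the callable S_of_weights is assumed to compute S from the measured covariance matrix.

```python
import numpy as np

def minimize_S(S_of_weights, grid=np.linspace(-1.0, 1.0, 201)):
    """Grid search for the GME criterion value.

    S_of_weights : callable (h, g) -> S evaluated from the measured
                   covariance matrix via the criterion in Eq. (15)/(16),
                   with h_1 = g_1 = 1 and h_i = h, g_i = g held common.
    Returns the smallest S found and the optimal (h, g).
    The base mode (the one carrying weight 1) should be swept separately,
    as described in the text.
    """
    best = (np.inf, None, None)
    for h in grid:
        for g in grid:
            s = S_of_weights(h, g)
            if s < best[0]:
                best = (s, h, g)
    return best
```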
III. EXPERIMENT

A. Experimental methods
In our microwave experiments, we employ a niobium λ/4 coplanar 50 Ω transmission line terminated in a SQUID loop (QWJPA), forming a quarter-wave Josephson parametric amplifier. The SQUID's junctions are formed by 1×1 µm² Nb/Al-Al₂O₃/Nb tunnel barriers with a critical current of I_c ≈ 4 µA. The JPA, operating in the three-wave-mixing mode around the cavity frequency ω_r, is pumped by an external RF magnetic flux through the SQUID loop using a single-turn pump coil at frequency 2ω_r (marked as 2ω_LO in Fig. 3a) [40,59]. We chose this operation regime because, for four-wave mixing (typically using a current pump near ω_r), the large-amplitude pump lies within the amplification bandwidth, whereas the three-wave mixing process separates the pump tone from the amplified signals, thus simplifying the practical use of the JPA. The loaded quality factor at the operation frequency is ∼900, while the internal Q is a factor of three larger. The basic (zero-flux) resonator frequency is 6.115 GHz; it can be tuned below 5.5 GHz by imposing an external DC magnetic flux through the SQUID.

FIG. 3. Experimental scheme and device characterization. (a) Principle of the experimental setup for tripartite entanglement measurements. The device is connected to the test ports via circulators. The DC bias current and AC pumping of the flux are combined and reseparated in bias-T components. Depending on the measurement type, the input is connected either to a vector network analyzer (VNA) or a 50 Ω termination, whereas the output is directed either to a VNA, a signal analyzer, or an analog/digital converter. The frequency span of the spectral modes and their separation are given for the tripartite and quadripartite cases in frames (b) and (c), respectively.
Our measurement setup is illustrated in Fig. 3a. The experiments were conducted at 20 mK using a BlueFors LD400 dry dilution cryostat. The JPA was protected from external magnetic fields using a Cryoperm shield.
The DC flux bias and the RF pump shared a common on-chip flux line, and the signals were combined in an external bias-tee. Since our basic microwave setting is for reflection measurements, the sample is connected to the input and output ports via a circulator having a frequency band of 4-12 GHz. A vector network analyzer (VNA) was used to characterize the sample, whereas during the entanglement generation measurements, the signal port was kept terminated. By applying a multitone pump to the JPA, correlated microwaves are generated from vacuum fluctuations. In the tripartite setting illustrated in Fig. 3a, we had direct control over the relative phases of the pumps whereas, in the quadripartite case, phase rotation was possible only in the data analysis. Basic experimental data in the tripartite case, as well as the determination of the cavity parameters κ, γ, and K, are discussed in Appendix VII D.
a. Tripartite case. In the tripartite case, phase-controlled pump signals from the RF waveform generator are mixed with the frequency-doubled local oscillator (LO) frequency and filtered by a pair of home-made, tunable bandpass cavity filters. In order to avoid spurious pumping at 2ω_LO, a band-rejection filter tuned to 2ω_LO is employed. The filtering ensures passage only for the two desired pump signals at angular frequencies ω_1 = ω_r − ∆/2 and ω_2 = ω_r + ∆/2. Sufficient noise thermalization is ensured by 46 dB of attenuation, because the pump coil is only weakly coupled to the SQUID.
We apply a DC magnetic flux of Φ_DC = 0.383 Φ_0 through the SQUID loop, resulting in ω_r/2π = 6.024 GHz for the cavity frequency. In the three-mode experiment, we apply two pump tones at (2 × ω_r/2π − 2) MHz and (2 × ω_r/2π + 2) MHz, with the half-frequencies positioned as depicted in Fig. 3b; the correlated spectral modes are defined symmetrically with respect to the pump half-frequencies. Each mode has a bandwidth of 1.9 MHz and is separated from the other modes by 0.1 MHz. The phase control in the measurement is facilitated by phase-locking the microwave generators to a 10 MHz rubidium reference clock and by using a joint external trigger.
To collect data for the correlation analysis, we mix down the output signal using a synchronized LO signal and record the output quadratures using two channels of a Teledyne SP Devices ADQ14 digitizer with a sample rate of 50 MSa/s per channel, covering a bandwidth of 25 MHz. Furthermore, we employ an overall detuning of 14 MHz, i.e., a heterodyne detection scheme, in order to avoid 1/f noise from the measurement devices and the IQ mixer in the frequency conversion part of the setup. Using digital postprocessing, we can easily shift the center frequency of the heterodyned MHz signal to zero, ready for the final correlation analysis of the modes.
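The digital shift of the heterodyned signal to zero center frequency is a one-line complex demodulation; a minimal sketch, with illustrative names and the experiment's stated sample rate and 14 MHz intermediate frequency as defaults, is given below.

```python
import numpy as np

def downconvert(i_ch, q_ch, fs=50e6, f_if=14e6):
    """Digitally shift the heterodyned signal to zero center frequency.

    i_ch, q_ch : digitized quadrature channels (arrays)
    fs         : sampling rate (50 MSa/s per channel in the experiment)
    f_if       : intermediate (heterodyne) frequency, 14 MHz here
    Returns the complex baseband trace, ready for spectral-mode separation.
    """
    t = np.arange(len(i_ch)) / fs
    z = (np.asarray(i_ch) + 1j * np.asarray(q_ch)) * np.exp(-2j * np.pi * f_if * t)
    return z
```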
Our three-mode measurement scheme provides the remarkable advantage of physical control of the phase difference between the two pump tones, which is essential for the analysis of the phase dependence in the entangled states. The phase difference between the 2ω_LO signal used as the carrier of the pump tones and the LO readout signal remains fixed.
Since our fully phase-locked scheme preserves the phases of the received, demodulated output quadratures, the measured covariance matrix components can be averaged to reduce noise in the elements. Finally, the reference phase of a single mode (defining the basis for I and Q) can be adjusted in the postprocessing step in such a way that the corresponding subspace of the covariance matrix becomes a diagonal 2 × 2 matrix.
In the tripartite case, indeed, we find that the hardware-controlled relative phase rotation (in addition to the reference phase value applied to both channels of the pump generator) is equivalent to a proper phase rotation in the postprocessing step. The postprocessing will be discussed in more detail in Section IV.
b. Quadripartite case. In the four-spectral-mode case, we simplify the experimental setup by eliminating the physical phase control and replacing it with postprocessing of the received signals. This possibility of simplification highlights the scalability of our entanglement generation method. The employed digital postprocessing is equivalent to hardware-level selective separation of the spectral modes into four channels, e.g., using bandpass filters in conjunction with power splitters, together with additional tunable delay lines for each selected spectral mode frequency.
We apply a DC magnetic flux of Φ_DC = 0.417 Φ_0 through the SQUID, resulting in a cavity frequency of ω_r/2π = 5.978 GHz, which slightly differs from the tripartite case; see Figs. 3b,c. In order to generate quadripartite correlations, we apply three pump tones using an Anapico APMS 12G generator, with strong high-pass filtering (two Mini-Circuits VHF-8400+ filters) to avoid subharmonic transmission to the circuitry. In this scheme, we avoid any external mixers for the input pump microwaves. By applying three phase-locked pump tones at frequencies 2 × ω_r/2π, 2 × ω_r/2π + 1 MHz, and 2 × ω_r/2π − 1 MHz, we generate four correlated spectral modes out of the ground state of the microwave cavity. Each mode has a bandwidth of 0.4 MHz and is separated from the adjacent modes by 0.2 MHz. The output microwaves are captured, mixed down, and digitized by an Anritsu MS2830A signal analyzer with a bandwidth of 2 MHz. Again, averaging is needed to lower the noise in the covariance matrix elements, and in this scheme, digital postprocessing is necessary to unify the phase settings in the covariance matrices before summation.
The experimental detection of H-graph states and their genuine multipartite entanglement depends on the relations among the covariance matrix elements, as discussed in Section II E. The degree of violation of the GME condition S < 1 depends strongly on the ratio of the magnitudes of the diagonal covariance elements to the off-diagonal ones. Therefore, the calibration of the detected signal powers is decisive, which is discussed in Appendix VII C.
B. Scaled covariance matrix
The system gain G_Σ determined in Appendix VII C refers to the measured power per unit bandwidth. Since the measured spectral mode quadratures I_i and Q_i are determined over the band ∆f_i, the scaled quadrature x_i, equivalent to the amplitude of the quantum mechanical operator x̃, is given by the formula

x_i = I_i / √(G_i ħω_i Z_0 ∆f_i),  (17)

where G_i = G_Σ,i is the system gain for the i-th spectral mode, Z_0 = 50 Ω is the transmission line impedance, and ∆f_i is the bandwidth of the spectral mode: ∆f_i = 2 MHz or ∆f_i = 0.4 MHz for the tripartite and quadripartite case, respectively. Similar scaling is applied also to the quadrature component Q_i. Similarly to our earlier work [43], the noise added by the preamplifier is subtracted from the diagonal elements of the covariance matrix (see Eq. (14)),

V = V_meas − V_off + coth(hf_i/2k_B T_i) I,  (18)

where V_off denotes the covariance matrix measured in the absence of the pump. Due to the scaling of the covariance matrix V, this equation yields a unity diagonal matrix in the absence of pumping at T → 0. The average physical temperature in our experiments is T_i = 20 mK, resulting in coth(hf_i/2k_B T_i) = 1.000.
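The sketch below implements this scaling and baseline subtraction. The exact numerical prefactor in the quadrature scaling is an assumption of this sketch (written here as x_i = I_i / √(G_i ħω_i Z_0 ∆f_i), matching the reconstructed Eq. (17) up to a constant), as are the function names.

```python
import numpy as np

HBAR = 1.054571817e-34  # J s

def scale_quadrature(I, G_dB, f_i, df_i, Z0=50.0):
    """Scale a measured voltage quadrature to photon-number units.

    Divides the measured amplitude by the square root of the system gain
    and of the single-photon energy flux hbar*omega*Z0*df; the numerical
    prefactor is an assumption of this sketch.
    """
    G = 10.0 ** (G_dB / 10.0)
    return I / np.sqrt(G * HBAR * 2 * np.pi * f_i * Z0 * df_i)

def corrected_covariance(V_on, V_off):
    """Subtract the amplifier-noise baseline measured with the pumps off.

    At T -> 0 the result has a unity diagonal in the absence of pumping;
    coth(hf/2kT) = 1.000 at 20 mK for the frequencies used, so the thermal
    correction reduces to the identity here.
    """
    return V_on - V_off + np.eye(V_on.shape[0])
```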
IV. MULTIPARTITE ENTANGLEMENT
To characterize the structure of the entanglement in the output states, we analyze the resulting covariance matrices using the positive partial transpose (PPT) formalism and the GME criteria discussed in Section II E for the tripartite and quadripartite cases.

a. Tripartite case. Leveraging the amplitude and phase control of the pump signals, we experimentally evaluate the PPT and GME criteria values at different pump parameters. For comparison, we also conducted detailed numerical simulations based on the QLE in Eq. (26), using the experimentally determined JPA parameters. In general, we find good agreement between the simulations and the experimental data, which is reassuring concerning the validity of the results.
Fig. 4 depicts our experimental results on genuine tripartite entanglement and their comparison with simulations. Fig. 4a illustrates the results of numerical simulations of GME in terms of S defined in Eq. (15). At weak pumping, the condition for genuine entanglement, S < 1, is fulfilled almost independently of the pump phases but, with increasing A, the simulations reveal an ever smaller range of ∆ϕ yielding S < 1 (see the inset in Fig. 4a). The strongest genuine tripartite entanglement is reached at ∆ϕ ≈ −120° under a normalized pumping amplitude A ≈ 0.22, at which the simulations reach S = 0.70. At the minimum of S, the corresponding weights are h_i = {1, −0.65, −0.65} and g_i = {1, 0.65, 0.65}. It is noteworthy that the phase setting ∆ϕ = +90° yields clearly worse entanglement than ∆ϕ = −90°. This asymmetry in GME between ∆ϕ = ±90° arises from differences in the covariance matrices, which is illustrated in Fig. 5.
Our experimental data on S in Fig. 4b display similar features as Fig. 4a. The measured GME criterion S as a function of the normalized pump amplitude for three phase differences is shown in Fig. 4b. In the experimental data, nearly no GME is observed at positive phase differences, whereas ∆ϕ = −90° and ∆ϕ = −120° yield suppression down to S = 0.75 ± 0.05. The measured result at ∆ϕ = −120° follows quite well the simulated behavior as a function of the pump amplitude, and genuine entanglement is observed in the normalized amplitude range A ∈ [∼0.01, 0.4]. Overall, the pattern of S(ϕ, A) in the inset of Fig. 4b coincides with the simulated pattern in Fig. 4a. The agreement strongly supports the presence of a genuinely entangled bisqueezed state in our experiment. We emphasize that the optimum entanglement at ∆ϕ = −120°, observed both in our simulations and in the experiment, cannot be obtained from a simple analytical calculation for the lossless, strongly coupled model. The reason is the frequency-dependent phase response of the cavity due to the finite coupling and dissipation rates (see Section VII E) which, when included in the simulations, results in a very good match with the experiment.
Covariance matrices measured at the pump phase differences ∆ϕ = +90° and ∆ϕ = −90° are illustrated in Fig. 5. Technically, by rotating the phase of a pump signal, we selectively control a certain subspace of the covariance matrix, as can be seen in Fig. 5. A phase shift applied to the 1st or the 2nd pump directly rotates the subspace corresponding to the two-mode squeezing correlations, modes 1-2 or 2-3, respectively. If no phase shift is applied to the selected pump tone, the corresponding TMS subspace preserves its distribution of covariances. The subspace spanned by modes 1 and 3, corresponding to the beamsplitter type of correlations, has a structure given by the products of the involved TMS subspaces. Distinct control of the BS subspace alone (leaving the TMS subspaces fixed) using a rotation of the pump phases is not possible.
Comparison of Figs. 5a and 5b reveals how the subspaces transform when the phase difference is changed from ∆ϕ = −90° to ∆ϕ = +90°. Subspace 1-3, corresponding to the BS correlations, shows a sign inversion in its elements. Subspace 2-3 corresponds to the pump whose phase has not been changed and, thus, its elements remain unchanged. The inseparability of the covariance matrix was investigated using the PPT criteria (see Sect. II). The symplectic eigenvalues obtained for the mode partitions 1-23, 2-13, and 3-12 are depicted in Fig. 6a for the simulations, while the experimental data are displayed in Fig. 6b; here the first index specifies the mode in which the sign of the momentum has been reversed. The results are plotted as a function of the normalized pump amplitude, since the phase difference between the pump tones does not play a role. Indeed, while the genuine entanglement is sensitive to both the pump amplitude and the phase difference, the PPT criterion is phase independent: the minimum symplectic eigenvalues remain constant when the phase difference is varied at fixed pump amplitude. Therefore, by exercising phase control over each pump, we gain the ability to switch from a fully inseparable state to a genuinely entangled state without making any changes to the type of interaction between the modes. According to the experimental results in Fig. 6b, the middle frequency, acted on by both pumps, is the most inseparable part of the covariance matrix.
b. Quadripartite case. For the quadripartite case, we apply three pump drives with identical amplitudes α_1 = α_2 = α_3 = α. While our goal is to demonstrate genuine entanglement generation of cluster states (mode structure depicted in Fig. 2), we forgo direct, physical phase control and use digital postprocessing to transform the covariance matrix into the desired form, on which we then verify its entanglement properties. However, we do preserve the coherence between the pump tones by phase locking, so that the relative phases do not fluctuate over time. By applying a postprocessing phase rotation to each mode separately, we bring the covariance matrix into the symmetric form (see Sect. IV a). For the analysis of the full inseparability of the covariance matrix according to the PPT criterion, we evaluate the minimum symplectic eigenvalues min{ν_i} of the following mode permutations: 1-234, 2-134, 3-124, 4-123. The experimentally obtained symplectic eigenvalues as a function of the normalized pump amplitude A are displayed in Fig. 7, alongside the corresponding predictions given by our numerical simulations. A minimum symplectic eigenvalue of min{ν_i} = 0.79 ± 0.018 is reached, while all of the eigenvalues in the normalized pump amplitude range 0.01 ≲ A ≲ 0.15 are less than 1. Compared with the minimum symplectic eigenvalues in Fig. 6, we may conclude that the influence of the BS correlations on the min{ν_i} value is smaller in the quadripartite state than in the tripartite case.
The GME criterion for four modes as a function of the normalized pump amplitude is depicted in Fig. 7b; the symbols display the data, while the simulation result is indicated by the solid curve. As discussed in Sect. II, the optimized weights in the GME inequality, Eq. (16), are chosen in the same manner as in the tripartite case: h_1 = g_1 = 1 and h_i = h, g_i = g, i = {2, 3, 4}. The strongest genuine entanglement, S = 0.84 ± 0.02, is observed at a pump amplitude of A ≈ 0.08 using the weights h_i = {1, −0.51, −0.51, −0.51} and g_i = {1, 0.69, 0.69, 0.69}. The numerical simulation provides h and g coefficients that coincide with the experimental values to within 1% error, which strongly establishes that the states produced in the experiment coincide with the ones obtained and analysed in the numerical model.
The covariance matrices obtained in the experiment and using the numerical simulation are presented in Figs. 8a and 8b, respectively. They are determined at the strongest entanglement point, reached at A ≈ 0.08. TMS-type correlations are seen in the mode combinations 1-2, 2-3, 3-4, and 1-4. Subspaces corresponding to BS correlations are visible in the plot as product distributions in the 1-3 and 2-4 subpartitions. From the covariance matrix structures illustrated in Fig. 8, we conclude that, for the employed pump configuration, the genuine quadripartite entanglement appears in the amplitude range 0.01 ≲ A < 0.13.

FIG. 8. Covariance matrix of the genuinely entangled quadripartite H̃-graph state. a) Experimental covariance matrix, where the rotation of the TMS subspaces 1-2, 2-3, and 3-4 has been performed in such a way that the structure coincides with the matrix in Eq. (54) of Appendix VII F (each pump has phase π/2). The employed pump amplitude A ≈ 0.08 yields the smallest value of S. b) Simulated covariance matrix using equal pump phases π/2 at A ≈ 0.08. The difference from the matrix in Eq. (54) is due to the cavity response, which induces extra phase shifts.
V. DISCUSSION
The control of bisqueezed tripartite and generalized H-graph (H̃-graph) quadripartite states by the relative positioning of the pump frequencies and their phases is indicative of the strong potential of these methods for CV quantum state processing. The basic parametric microwave setting allows for an enhancement of the number of spectral modes by additional pump tones, which leads to the generation of more complex entangled H̃-graph states. An enhanced number of modes requires a larger bandwidth, which calls for broadband parametric devices such as TWPAs [60][61][62][63][64] or broadband JPAs [59,65] in order to avoid problems with spectral mode crowding.
Our approach based on the QLE brings to light additional correlations, which are captured by the definition of H̃-graph states. The correlations arise naturally from the connection between the intracavity modes and the input vacuum modes, due to which the same vacuum fluctuation may act in the downconversion of more than one quantum. In the literature on cluster and H-graph states, the adjacency matrix for H-graphs is defined via the matrix specified in the multimode squeezing Hamiltonian. The QLE analysis corresponds to the expansion of the multimode squeezing operator up to second order, which leads to the appearance of beamsplitter correlations in the adjacency matrix. In Appendix VII F (Eqs. (55)-(57)) we show how to use well-chosen relative pump phase values in the quadripartite case to prepare an entangled square-lattice state, that is, a state without BS correlations. For the case of very large squeezing, the H̃-graph state can be regarded as an approximation of a 4-node cluster state, minimizing errors in the gate operations of measurement-based CV quantum computing.
Cluster states form a promising platform for scalable quantum information processing. In one-way quantum computing [3], the entire computational resource is provided by the entanglement of the cluster state. The processing is based on quantum measurements, which facilitate the gate operations as well as the read-out of the final result. However, cluster states can be obtained from graph states only in the mathematical limit of a large squeezing parameter [27][28][29]32]. For quantum information processing steps, it is sufficient to perform sub-cluster measurements in a specified order using a suitable computational basis. In Refs. 28, 66, and 67, different computation scenarios based on the resources provided by squeezing generators and beamsplitters are described. Encoding, gate, and measurement operations have so far been considered in optical circuits for continuous-variable quantum data and can be efficiently extended to the microwave realm. In this work, we have utilized this correspondence between optics and microwaves and demonstrated H̃-graph state encoding.
In contrast to computational models for graph states [38] considered as ideal clusters, hardware based on finite squeezing with noise and decoherence requires error correction procedures [6,68,69] to provide reliable CV computation. Using the presented scheme, one can implement error correction codes based on the idea of repetitions of selective measurements and new encoding of H̃-graph states before each gate operation. In Ref. 70, a multidimensional platform for scalable quantum computing has been proposed, based on cluster states created using microring resonators; also, multiple frequency combs [67] created by optical parametric amplifiers and beamsplitters can serve as an excellent platform for quantum computation. Our work shows that the methods of generation of highly entangled CV states are not restricted to optical parametric amplifiers: the methods can be carried over into the microwave domain by employing parametric Josephson junction devices for the creation of topologically involved and structurally versatile H̃-graph states.
An implementation of a universal quantum computer based on bosonic modes with the possibility of hardware-efficient quantum error correction [71] requires efficient generation of continuous-variable quantum resources. The genuine entanglement between several bosonic modes could potentially be employed for error-correctable codeword states [72]. Besides its potential in error correction, the introduction of entanglement into quantum measurement implementations leads to a quantum advantage in the detection process when detection is performed in the presence of high levels of noise and loss [73].
Increasing the number of entangled spectral modes is essential for future technological applications of these CV quantum state generation methods. The limiting factors are the requirements of high precision for the pump frequency and its phase, the stability of the biasing flux, and the possible crowding of modes within the narrow-band JPA resonance. However, it has recently been demonstrated that entanglement can be generated in low-loss traveling-wave parametric amplifiers [62][63][64]. This opens a way to a significant increase in the number of entangled modes.
VI. CONCLUSION
In this work, we presented a practical scheme for the generation of controllable multipartite entanglement from vacuum fluctuations, based on a multitone pumping scheme of a JPA, which facilitates pivotal resources for quantum technologies at microwave frequencies. While optical schemes for multipartite entanglement generation operate on even larger clusters, they lack versatility and are limited to optical frequencies as such. On the other hand, our scheme allows for a flexible increase in the number of modes and control of the entanglement configuration among the modes by adjusting the pumping of the same device, whereas optical setups call for massive hardware reconfiguration when the entanglement structure is altered. Through phase and amplitude variation of the microwave pump tones, we reach comprehensive control over the entanglement structure within the spectral modes of a single JPA cavity mode, which we experimentally verify in detail for the tripartite case.
Using the developed scheme, we made the first successful demonstration of on-demand tunable, fully inseparable, and phase-controllable genuinely entangled tripartite and quadripartite states in a superconducting system. The presence of multipartite quantum correlations was verified using the covariance matrix formalism and genuine entanglement criteria constructed from the measured quadratures. The experimental results were accurately reproduced by calculating the symplectic eigenvalues of the partially transposed covariance matrix for full inseparability detection, as well as by computing the GME criteria over the normalized pump amplitude range 0 < A < 0.5 (0 < A < 0.25); genuine entanglement was verified in the range 0.01 ≲ A < 0.4 (0.01 ≲ A < 0.13) for the tripartite (quadripartite) state.
We provided results on the phase-dependent GME criterion for the bisqueezed state. With the optimal phase shift between the two pumping tones, ∆ϕ = −120°, a minimum criterion value of S = 0.75 ± 0.05 was obtained. This result was also faithfully reproduced by the numerical simulations.
In our analytical derivations, we demonstrated additional control possibilities over the BS correlations in the covariance matrix of the quadripartite H̃-graph state. To visualize the formed entanglement structure, we provided an extension of the known H-graph adjacency matrix: besides TMS, it includes BS correlations between the vacuum modes. The QLE approach was used to introduce such an adjacency matrix and to connect it to the general approach starting from the multimode squeezing operator and the TMS Hamiltonian for the multi-mode case with multiple pumps. As shown in Appendix VII F, the BS correlations can be fully suppressed by implementing a 180° phase shift of one pump. Such a phase combination creates a distinct square-lattice H̃-graph state which, in the limit of an infinite squeezing parameter, transforms into a square-lattice cluster state.
Additional TMS correlations can be introduced by inserting new pump tones, which can change the nature of the entangled states drastically. For example, using two additional pump tones with half-frequencies at {−∆/2, ∆/2}, we are able to connect all 4 modes with TMS correlations and thereby achieve a GHZ-like state. Furthermore, by tuning the phases of the pumps, the state can be converted into a square-lattice H̃-graph state. With the bandwidth improvements provided by state-of-the-art superconducting parametric devices, such as the broadband, low-loss travelling-wave parametric amplifier [62][63][64], we expect a substantial increase in the number of entangled modes, which facilitates the generation of highly squeezed square-lattice H̃-graph states for CV quantum computation at microwave frequencies.
VII. APPENDIX

A. Hamiltonian and quantum Langevin equation

The Hamiltonian of the JPA system is given in the laboratory frame in terms of â (â†), the annihilation (creation) operators for cavity photons, the complex amplitude α_d of pump tone d, and the strength K of the Kerr nonlinearity term. Using the average of the p pump tones ω_d, d = {1, . . ., p}, we define the detuning between the half pump frequency and the resonator frequency, ∆_r = ω_r − ω_Σ/2. For each of the p pump tones, we define the detuning from the average frequency, ∆_d = ω_d − ω_Σ, d = {1, . . ., p}. By applying the rotating wave approximation in the frame ω_Σ/2 (ã(t) = â(t)e^{iω_Σ t/2}) and keeping only the effective high-order terms, we obtain the nonlinear part of the Hamiltonian. As usual, the bosonic commutation relationship [ã, ã†] = 1 is valid for the cavity modes. The parametric resonator is coupled to a transmission line via the signal port and to the thermal bath via a linear dissipation port. The coupling Hamiltonian associated with the signal port, H_sig, involves the creation and annihilation operators b̂† and b̂, referring to the modes in the transmission line, with κ denoting the coupling rate. The Hamiltonian related to the linear dissipation port, H_loss, involves ĉ† and ĉ, which describe the creation and annihilation of thermal bath modes, with the rate γ representing the coupling of the cavity modes to the linear dissipation port. The transmission line and bath operators obey the standard bosonic commutation relations. The total Hamiltonian can be conveniently written as a sum of the separate parts given above, H_tot = H + H_sig + H_loss. For further analysis and for our simulations, we use the quantum Langevin equation (QLE) for the cavity operator ã(t),

dã/dt = −i∆_r ã − 2i Σ_{d=1}^{p} α_d e^{−i∆_d t} ã† − iKã†ãã − ((κ + γ)/2) ã + √κ b̃_in + √γ c̃_in,  (26)

where the presence of the Kerr term allows us to consider the dynamics of the parametric resonator above the critical oscillation threshold. To obtain the modes coming out from the cavity, we employ the standard input-output formalism, which yields the relationship

b̃_out = b̃_in − √κ ã.  (27)

Eqs. (26) and (27) are used in our numerical simulations with the Matlab ODE45 solver.
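As a rough Python analogue of such a simulation (the paper's own simulations use Matlab's ODE45 on the full QLE), the sketch below integrates the semiclassical, mean-field form of the equation with the vacuum input set to zero. The pump-term prefactor, the parameter values, and all names are assumptions of this sketch, not the authors' simulation code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def qle_rhs(t, y, dr, kappa, gamma, K, alphas, deltas):
    """Mean-field right-hand side of the QLE for the cavity amplitude:
    da/dt = -i*dr*a - 2i*sum_d alpha_d exp(-i*delta_d*t) conj(a)
            - i*K*|a|^2*a - (kappa+gamma)/2 * a
    (noise inputs dropped; real/imag parts stacked for the real solver).
    """
    a = y[0] + 1j * y[1]
    pump = sum(ad * np.exp(-1j * dd * t) for ad, dd in zip(alphas, deltas))
    dadt = (-1j * dr * a - 2j * pump * np.conj(a)
            - 1j * K * abs(a) ** 2 * a - 0.5 * (kappa + gamma) * a)
    return [dadt.real, dadt.imag]

# integrate from a small seed amplitude with two detuned pump tones
sol = solve_ivp(qle_rhs, (0.0, 50.0), [1e-3, 0.0], method="RK45",
                args=(0.0, 1.0, 0.5, 0.05, [0.1, 0.1], [-0.2, 0.2]),
                dense_output=True)
```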
B. Full inseparability
Assuming that the microwave fields produced by the JPA below the critical pumping threshold are Gaussian [34], the states with multiple spectral modes can be fully characterized by measuring the covariance matrix of the corresponding in-phase (I) and quadrature (Q) voltages. For the measurement of the tripartite correlations, we collect quadrature data for 0.8 seconds at every phase difference and pump amplitude value, without averaging. For the quadripartite case, we repeat the experiment 20 times at each pump power value, and every quadrature sequence has a duration of 1.3 seconds.
The quantum quadratures x̃_i = (ã_i + ã_i†)/2 and p̃_i = (ã_i − ã_i†)/2i can be combined into a 2N-long column vector operator for the N-mode state, r = (x̃_1, p̃_1, . . ., x̃_N, p̃_N)^T. The commutation relations can be written down in a skew-symmetric, block-diagonal matrix form [50], [r_i, r_j] = (i/2)Ω_ij, with Ω the direct sum of N copies of the 2 × 2 block ((0, 1), (−1, 0)). The covariance matrix V is given by the elements V_ij = (1/2)⟨∆r_i ∆r_j + ∆r_j ∆r_i⟩ − ⟨∆r_i⟩⟨∆r_j⟩, where we have defined the standard error ∆r_i = r_i − ⟨r_i⟩ and ⟨r_i⟩ = tr(r_i ρ).
The uncertainty principle requires that V ≥ (i/4)Ω applies for a physical covariance matrix. For the verification of entanglement, we may investigate the modified condition for Ṽ_k = λ_k V λ_k, where λ_k is a diagonal matrix with entries equal to one, except for the entry related to the k-th mode, which has the value −1. For example, the transformation with λ_{k≡N} = diag(1, 1, ..., 1, −1) corresponds to a partial transposition of the covariance matrix with respect to the last mode. The positive partial transpose (PPT) criterion for the multipartite case requires a violation of the physicality condition Ṽ_k ≥ (i/4)Ω when the partial transposition is applied with respect to each mode from the full set. In Ref. 47, the entangled states are classified according to the number of modes for which the condition is broken. We follow this approach to demonstrate the highest class, full inseparability, in the four-mode case.
Unitary operations that retain the Gaussian character of the states, e.g. squeezing, are of particular importance. Such operations on the Hilbert space correspond to linear transformations P in phase space that preserve the symplectic form, i.e., PΩP^T = Ω. Symplectic transformations on a 2N-dimensional phase space form the real symplectic group, denoted Sp(2N; ℝ), which is a proper subgroup of the special linear group of 2N × 2N matrices [74]. By Williamson's theorem [75], any covariance matrix can be expressed in the Williamson normal form V = P Ṽ P^T, where Ṽ is a 2N-dimensional diagonal matrix consisting of the symplectic eigenvalues ν̃_k of the covariance matrix. The symplectic eigenvalues are called the symplectic spectrum, which provides a practical means to verify physicality and various entanglement criteria. Separability is in force when the condition ν̃_k ≥ 1/4 is fulfilled for the symplectic eigenvalues of each Ṽ_k.
For convenience, we insert an additional factor of 4 into the covariance matrix and work with fluctuations with zero mean values: V*_ij = 2⟨Δr̂_i Δr̂_j + Δr̂_j Δr̂_i⟩. Consequently, as evidence of fully inseparable states, we need to find minimum symplectic eigenvalues with ν*_k < 1 for each partial transposition k.
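A minimal numpy sketch of this test, assuming the factor-4 convention V* and the quadrature ordering given above, could read as follows.

import numpy as np

def omega(n):
    # Symplectic form for the ordering (x_1, p_1, ..., x_N, p_N).
    return np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def symplectic_eigenvalues(V):
    # The symplectic spectrum equals the moduli of the eigenvalues of
    # i*Omega*V; each value appears twice, so keep every other one.
    n = V.shape[0] // 2
    nu = np.sort(np.abs(np.linalg.eigvals(1j * omega(n) @ V)))
    return nu[::2]

def min_ppt_eigenvalue(Vstar, k):
    # Partial transposition of mode k flips the sign of p_k.
    lam = np.ones(Vstar.shape[0])
    lam[2 * k + 1] = -1.0
    L = np.diag(lam)
    return symplectic_eigenvalues(L @ Vstar @ L).min()

# Full inseparability: min_ppt_eigenvalue(Vstar, k) < 1 for every mode k.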
C. System gain calibration
Our system gain calibration procedure consists of a measurement of the Johnson-Nyquist noise spectral density emitted by a 50 Ω termination at different temperatures. Assuming perfect matching of the source and load impedances, the received power per unit bandwidth can be written by applying the Friis formula: the measured noise is given by the noise temperature of the source T_s, the contribution of the cooled amplifier T_HEMT, and the noise of the room-temperature amplifiers T_RT, referred through the system gain G_Σ,i = G_HEMTi G_RTi, so that P_i/Δf_i = k_B G_Σ,i (T_s + T_HEMT + T_RT/G_HEMTi). Here i refers to the frequency of the spectral mode and Δf_i refers to the detection bandwidth of the quadratures I_i and Q_i. The total gain G_Σ,i was determined separately for the different spectral modes. Fig. 9 displays the measured noise power per unit bandwidth as a function of sample temperature T_s, averaged over frequencies covering the resonance curve. By fitting a line to the data, we obtain G_Σ,i = 94.4 ± 0.2 dB for the average total gain. The linear fit in Fig. 9 is performed at T > 0.2 K, which allows us to neglect the corrections from the coth(ℏω/2k_B T_s) factor.
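The fit itself amounts to a linear regression of the noise density against T_s; a sketch with synthetic stand-in data (the numbers here are illustrative, not the measured values) follows.

import numpy as np

k_B = 1.380649e-23  # J/K

# Synthetic stand-in for the measured noise spectral density (W/Hz),
# S = k_B * G * (T_s + T_preamp), valid at T_s > 0.2 K where the
# coth quantum correction can be neglected.
G_true, T_pre_true = 10 ** (94.4 / 10), 5.2
T_s = np.linspace(0.25, 1.0, 8)
S = k_B * G_true * (T_s + T_pre_true) * (1 + 1e-3 * np.random.randn(T_s.size))

slope, intercept = np.polyfit(T_s, S, 1)
G_sigma_dB = 10 * np.log10(slope / k_B)   # total system gain
T_preamp = intercept / slope              # equivalent amplifier noise temperature
print(G_sigma_dB, "dB,", T_preamp, "K")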
The error in the system gain calibration results in an uncertainty in the symplectic eigenvalues on the order of 2%, i.e., the eigenvalues fall in the range min{ν*_k} ± 0.018 for each partial bipartition. Random variations of the system parameters were reduced by averaging the outcome ten to twenty times.
D. System parameter fitting
In order to determine the coupling rates γ and κ introduced in Section IIA, we characterized our nonlinear resonator as a two-port device using a vector network analyzer. For the characterization, we chose the optimal DC operating point Φ_DC = 0.383 Φ_0 depicted in Fig. 3b. At this DC flux, we measured the resonance curve in the absence of the pump in order to estimate the external and internal loss rates κ and γ, respectively. By fitting the measured resonance curve to the analytical solution of the QLE (b̃_out(ω)/b̃_in(ω)), derived for the linear case without any pump drive, we obtain the coupling coefficients κ/2π = 4.44 MHz and γ/2π = 2.30 MHz. The employed analytical solution, displayed in Eq. (34), was derived from the full QLE in Eq. (26) without taking the nonlinear part −iKã†ãã into account. [Fig. 10 caption: Pumping is carried out in degenerate mode, ω_d = 2ω_r; the Fano-resonance shape seen in the experimental plot is explained by a phase shift between the cavity and input modes, and is described by a complex value of the rate κ.] For the fitting of the Kerr constant K, we employed the whole form of the QLE in the rotating wave approximation, Eq. (26). By comparing the measured and simulated gain coefficients G(ω_probe − ω_r, A) in the cavity at large pump amplitudes (Fig. 10), we obtain the estimate K = 6.5ω_r for the Kerr constant.
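As an illustration of the linear-response fit, the sketch below uses one common input-output convention for b̃_out(ω)/b̃_in(ω); the actual form of Eq. (34), including the complex κ used to capture the Fano shape, may differ in sign and normalization.

import numpy as np
from scipy.optimize import curve_fit

def reflection_mag(w, w_r, kappa, gamma):
    # |b_out/b_in| of the pump-off cavity in one common convention:
    # b_out/b_in = kappa / ((kappa + gamma)/2 - i(w - w_r)) - 1.
    return np.abs(kappa / ((kappa + gamma) / 2 - 1j * (w - w_r)) - 1.0)

# Synthetic resonance trace (angular frequencies are illustrative).
w_r0, k0, g0 = 2 * np.pi * 5e9, 2 * np.pi * 4.44e6, 2 * np.pi * 2.30e6
w = w_r0 + np.linspace(-5, 5, 401) * k0
trace = reflection_mag(w, w_r0, k0, g0) + 0.002 * np.random.randn(w.size)

popt, _ = curve_fit(reflection_mag, w, trace, p0=[w_r0, k0, g0])
print("kappa/2pi = %.2f MHz, gamma/2pi = %.2f MHz"
      % (popt[1] / (2e6 * np.pi), popt[2] / (2e6 * np.pi)))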
E. Cavity phase response
The optimal value of the GME criterion, which in the tripartite case is governed by the "symmetric" form of the covariance matrix, can be obtained with pump phases {π/2; π/2} only if the mode reshuffling suffers no additional phase rotations (see the next subsection). In experiments, however, we deal with finite values of the coupling and dissipation loss rates, and the cavity phase response becomes a crucial factor in the adjustment of the pump tone phase shifts. The cavity phase response is illustrated in Fig. 11.
F. Multifrequency correlations in terms of QLE with 3 and 4 spectral modes
As discussed in Section IIC, our measurement setting probes the waves outgoing from the parametric resonator, which brings about slight differences with respect to standard quantum optics schemes, where the entanglement analysis is based on the Hamiltonian of the system. In our case, the QLE provides a good description, and here we derive the relevant matrix equations describing the coupling of the different outgoing spectral modes under two and three pump tones (3 and 4 spectral modes, respectively).
3 mode case. Let us define ã as a vector of the spectral modes, where the creation (ã_i† = ã_i†(t)) and annihilation (ã_i = ã_i(t)) operators are time-dependent. After the Fourier transform, the modes are positioned according to the pump tone positions {−Δ, Δ} (see Fig. 1). Similarly, we define the vectors b̃_in/out for the input and output modes. The commutation relationships for the case of N modes can be conveniently expressed in matrix form; we use the common convention for [ã_i, ã_j] from Ref. 74.
The effect of the Kerr nonlinearity is significant only at large pump amplitudes; hence, we may take the QLE (26) without the nonlinear part for this treatment. In the theoretical analysis we assume that the spectral modes lie deep within the cavity mode, such that Δ ≪ κ; we also neglect the internal dissipation expressed by the loss rate γ. In this case, the phase shift between the modes, introduced by the phase response of the cavity, can be neglected. Guided by the standard Fourier transform technique for solving the linear QLE [40], we denote ã_i(ω) = ∫ ã_i(t)e^{iωt} dt and Fourier transform the QLE term by term. Owing to the detuning of the pump tones in the rotating frame, the spectral modes become coupled and mode indices are exchanged; for example, for ã† in Eq. (38), the QLE yields the system of linear equations of Eq. (39). We cast Eq. (39) into the matrix form M ã(ω) = √κ b̃_in(ω). Solving for the inverse of M and using Eq. (27), we obtain b̃_out(ω) = (I − κM⁻¹) b̃_in(ω) (42) for the outgoing radiation b̃_out(ω) in terms of the incoming waves b̃_in(ω).
Because our goal is to determine the structure of the experimental covariance matrix, it is not sufficient to consider only the cavity modes ã via the equation ã(ω) = √κ M⁻¹ b̃_in(ω), even though it has a more compact final form. However, the presence of the identity matrix I and the multiplication factor κ do not change the final structure.
Assuming that the pump amplitude α is a real number and c_1 = c_2 = c (the zero-detuning case), we obtain the matrix of Eq. (43). This allows us to draw the generalized H-graph describing the parametric interaction between the spectral modes, Fig. 2. The off-diagonal beamsplitter elements, proportional to α², are set in bold in Eq. (43). Next, we want to construct the parametric interaction matrix S⁻¹ for the quadrature vector operator r̂. Using a linear operator matrix K to implement the change of basis, we obtain S⁻¹ by a canonical transformation. Note that the overall structure of the matrix has changed because of the basis change from ladder to quadrature operators; this is seen, for example, in the distribution of the off-diagonal beamsplitter correlations (shown in bold).
Since the environment of the cavity is in the ground state, b̃_in has a Gaussian covariance matrix of the form V_in = (1/4)I. Consequently, the covariance matrix of the cavity spectral modes ã_i can be represented in the form of Eq. (46), or, equivalently, for the output modes b̃_out. Both forms, V_a and V_out, can be employed for studying the structure of the parametric interactions between the quadratures, because the input-output relationship does not change the general structure of the couplings between the quadratures (see below).
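A numpy sketch of this propagation, under the assumptions that the mode vector is ordered with ladder operators interleaved per mode and that the input is vacuum, might look as follows; the basis-change matrix K here plays the role of the operator matrix K mentioned above.

import numpy as np

def quadrature_covariance_out(M, kappa):
    # Outgoing-mode covariance from the mode-coupling matrix M, assuming
    # b_out = (I - kappa * M^-1) b_in (Eq. (42)) and a vacuum input.
    dim = M.shape[0]
    T = np.eye(dim) - kappa * np.linalg.inv(M)
    # Ladder-to-quadrature basis change per mode:
    # x = (a + a^dag)/2, p = (a - a^dag)/(2i).
    k_block = 0.5 * np.array([[1, 1], [-1j, 1j]])
    K = np.kron(np.eye(dim // 2), k_block)
    S = K @ T @ np.linalg.inv(K)       # the same linear map on quadratures
    V_in = np.eye(dim) / 4             # vacuum covariance
    return (S @ V_in @ S.conj().T).real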
As shown experimentally in Section IV, the phase shift between the pumps changes the appearance of the covariance matrix (see Fig. 5) as well as the strength of genuine multipartite entanglement. The change in the matrix M due to a phase shift is illustrated in Eq. (47), in which the phase of the first pump has been rotated by e^{iπ/2}. The elements affected by the rotation are indicated in bold in the matrix: they describe the coupling between the modes ã_1(ω) ↔ ã_2(ω), while the other off-diagonal elements indicate squeezing across ã_2(ω) ↔ ã_3(ω). Note that the phase rotation operates in opposite directions on the rows related to b̃_in(ω) and b̃_in†(ω). Inverting the rotated matrix M yields the parametric interaction matrix of Eq. (48), in which all the beamsplitter elements (in bold) have acquired a π/2 phase shift. This phase shift can be unwound by a phase shift on the second pump, which indicates a different phase dependence of the beamsplitter correlations compared with the TMS correlations. The structure of the matrix M⁻¹ in Eq. (48) shows that a phase rotation of specified pump tones does not change the form of the parametric interaction between the modes, preserving the structure of a bisqueezed tripartite state. However, as shown in the main text, the criterion describing the strength of GME (see Eq. (15)) depends on the difference of the pump phases, and strong genuine entanglement is reached only at specific phase settings.
The covariance matrix V_a obtained from the matrix M in Eq. (43) with zero pump phase shifts is given in Eq. (49). The corresponding covariance matrix for a π/2 phase rotation of the first pump is displayed in Eq. (50). The matrix V_a in Eq. (50) has one rotated subspace, corresponding to two quadrature pairs; these rotated components are indicated in bold face. Based on these analytical relationships, we conclude that control over a desired TMS subspace of the covariance matrix can be achieved by a phase rotation of the corresponding pump tone. Finally, we introduce the same phase rotation e^{iπ/2} to the second pump. This brings the covariance matrix of the spectral cavity modes to the "standard-symmetric" form displayed in Eq. (51). By comparing Eq. (49) and Eq. (51), we note that the beamsplitter elements (in bold) of the covariance matrix are unchanged (the phase difference between the pumps is the same) while the TMS elements are different.
4 mode case. The parametric interactions in the covariance matrix can be analysed in the same way as above, but now the number of phase differences influencing the beamsplitter correlations has increased. The system of linear equations can be written as for the three-mode case in Eq. (39); we skip it here and write down the interaction matrix M (Eq. (52)), where all c coefficients are equal since we have assumed Δ_r = ω_r − ω_Σ/2 = 0. The signs of the α's are governed by the choice of pump phases as {αe^{iπ/2}; αe^{iπ/2}; αe^{iπ/2}}. The correlations produced by the pump at ω_2 = 2ω_r are indicated in bold. The special role of the central pump is seen in that its correlations cover the whole ascending diagonal.
The inverse matrix M⁻¹, Eq. (53), reveals the beamsplitter correlations between ã_1(ω) ↔ ã_3(ω) and ã_2(ω) ↔ ã_4(ω); these beamsplitter correlations are indicated in bold. We see that there are two sequences of pump transformations that yield BS correlations between the modes ã_1(ω) ↔ ã_2(ω) and ã_3(ω) ↔ ã_4(ω). This agrees with the simple argument that BS correlations exist when two spectral bands are connected across squeezing action by two pumps with one joint frequency.
Higher-order correlations via three pumps also exist, but these are neglected in our analysis. Note that the number of beamsplitter correlations also coincides with the number of independent phase differences between the pumps. The connection of the cavity spectral mode correlations to H-graphs is illustrated in Fig. 2. The beamsplitter correlations are also prominent in the covariance matrix V_a (see Eq. (46)): bold font marks the beamsplitter correlations, which display a different structure in comparison to Eq. (53), owing to the basis change to quadratures ordered as (x̂_1, p̂_1, …, x̂_N, p̂_N)^T. Thus the BS correlations are between the quadratures of ã_1(ω) ↔ ã_3(ω) and ã_2(ω) ↔ ã_4(ω).
Finally, the corresponding covariance matrix V_a for the cavity spectral modes is given by Eq. (57). This structure of the covariance matrix is obtained when all the pump signals have an additional phase shift of e^{iπ/2}. Such a choice of phases results in a covariance matrix with the "symmetric" structure shown in the experimental data of Figs. 8a and 8b. By controlling the phases of the pump tones separately, we can rotate and adjust specific subspaces of the 8 × 8 covariance matrix. In particular, the influence of the beamsplitter correlations can be eliminated from V_a in the four-pump case.
Regarding the quadripartite covariance matrix structures, the relative phase shifts between the pump tones are not influenced by the cavity response in the limit of vanishing bandwidths, or under the assumption of a large coupling rate and a small internal dissipation loss rate. However, additional phase shifts will appear if these conditions are not met, which has to be taken into account in the generation of the desired entangled states.
In principle, it would be possible to evaluate the criteria for GME from the analytical expressions derived in this Appendix (see, e.g., Eqs. (51) and (57)). However, we leave the conclusions about genuine entanglement, both in the tripartite and the quadripartite case, to an analysis based on numerical simulations, where even the nonlinear terms can be taken into account. The nonlinear terms are of central importance when increasing the pump drive past the critical pumping amplitude.
FIG. 2.
FIG. 2. Graph representation of the entangled tripartite and quadripartite systems. Generalized H-graph (H̃-graph) representation of the bisqueezed tripartite CV entangled state (a) and the quadripartite CV entangled H-graph state (b) obtained in our experiments. Vacuum modes (red circles) are connected via two-mode squeezing (TMS, solid lines) and beamsplitter (BS, dashed lines) correlations. Graphs (c) and (d) represent tri- and quadripartite GHZ states, which can be obtained by introducing an additional pump tone with Δ_d = 0 in the tripartite case, and two additional tones at −Δ and Δ in the quadripartite setting. The additional pumps supply the missing TMS connections to the entangled states. For details, see Appendix VII F.
FIG. 4.
FIG. 4. Phase-dependent genuine entanglement of the tripartite bisqueezed state. (a) Simulation results for the GME criterion as a function of normalized pump amplitude for three different values of the phase difference Δϕ between the pump signals, indicated in the figure; the simulation parameters were set to match the experiment (see Appendix VII D). The inset illustrates S(A, Δϕ) up to the critical amplitude A = 0.5. The weights h_i, g_i were optimized in the calculation of S as discussed in the text. (b) Experimental values of S as a function of A at the same phase difference values Δϕ between pump signals as in frame (a). The inset illustrates the measured S(A, Δϕ) up to A = 0.5. In general, the measured S(A, Δϕ) corresponds quite well to the inset in frame (a). Due to noise, the measured GME nearly vanishes around Δϕ ≈ +90°, where even the simulated S is only slightly below 1. The best genuine multipartite entanglement is reached at Δϕ = −90°…−120°, owing to phase shifts introduced by the cavity (see Fig. 11 in Appendix VII E). The parametric drive changes the phase response of the cavity, which leads to a shift in the optimum conditions for GME as a function of A.
FIG. 6.
FIG. 6. Phase-independent full inseparability of the tripartite bisqueezed state. (a) PPT criteria in terms of the minimum symplectic eigenvalues simulated for our double-pump QWJPA using experimentally determined parameters. The eigenvalues min{ν_i} are traced over the normalized pump amplitude A; the permutations 1−23, 2−13 and 3−12 have been considered. The symplectic eigenvalues are smallest for the time-reversed second mode (ν_{2−13}), which participates in both TMS processes. (b) Experimentally determined symplectic eigenvalues for the same permutations (•, •, •); the solid lines are just to guide the eye. We find full inseparability at normalized pumping 0.05 ≲ A ≲ 0.3 in the experiment. The grey dashed line displays the full inseparability threshold. The difference in the simulated behavior of ν_{1−23} and ν_{3−12} is caused by an asymmetry due to the finite value of the resonance detuning Δ_r.
FIG. 8. Covariance matrix of the genuinely entangled quadripartite H-graph state. (a) Experimental covariance matrix, where the rotations of the TMS subspaces 1-2, 2-3, and 3-4 have been made in such a way that the structure coincides with the matrix in Eq. (54) of Appendix VII F (each pump has phase π/2). The employed pump amplitude A ≈ 0.08 yields the smallest value of S. (b) Simulated covariance matrix using equal pump phases π/2 at A ≈ 0.08. The difference from the matrix in Eq. (54) is due to the cavity response, which induces extra phase shifts.
FIG. 9.
FIG. 9. Gain calibration using the linear temperature dependence of the measured thermal noise spectral density of a 50 Ω terminator, measured as a function of the source temperature T_s. The average total gain G_Σ,i = 94.4 ± 0.2 dB over the cavity resonance is obtained from the linear fit (in red) to the data. This gain value G_Σ,i also includes frequency mixing losses/amplification in the signal analyzer circuit. The term T_preamp = T_HEMT + T_RT/G_HEMT = 5.2 ± 0.25 K characterizes the equivalent noise temperature of the amplifiers; the largest contribution originates from the cooled HEMT amplifier at 4 K. The value coth(hf_i/2k_B T_preamp) sets the background for the diagonal elements of the covariance matrix 4V_off.
FIG. 11.
FIG. 11. Cavity phase response given by the fitted experimental parameters κ/2π = 4.44 MHz and γ/2π = 2.30 MHz. Vertical dashed lines show the center frequencies of the first and last modes. The corresponding phase shifts applied to the pump tones to reach the "symmetric" covariance matrix view at the doubled frequencies are Δϕ_1 = π/2 − π/4 = π/4 and Δϕ_2 = π/2 + π/4 = 3π/4, with the corresponding phase shift π/2 between the pump tones, as given in the results (see Fig. 4). Half of the applied phase shift is set by the pump tones acting at the doubled resonance frequencies. The additional phase shift with increasing A relates to the modification of the phase response curve during pumping.
By applying an additional phase shift of e^{iπ/2} to one of the pump tones, we are able to flip the sign of one minor diagonal, indicated by bold font in Eq. (55).
MillenniumDB: An Open-Source Graph Database System
ABSTRACT In this systems paper, we present MillenniumDB: a novel graph database engine that is modular, persistent, and open source. MillenniumDB is based on a graph data model, which we call domain graphs, that provides a simple abstraction upon which a variety of popular graph models can be supported, thus providing a flexible data management engine for diverse types of knowledge graph. The engine itself is founded on a combination of tried and tested techniques from relational data management, state-of-the-art algorithms for worst-case-optimal joins, as well as graph-specific algorithms for evaluating path queries. In this paper, we present the main design principles underlying MillenniumDB, describing the abstract graph model and query semantics supported, the concrete data model and query syntax implemented, as well as the storage, indexing, query planning and query evaluation techniques used. We evaluate MillenniumDB over real-world data and queries from the Wikidata knowledge graph, where we find that it outperforms other popular persistent graph database engines (including both enterprise and open source alternatives) that support similar query features.
Introduction
Recent years have seen growing interest in graph databases [4], wherein nodes represent entities of interest, and edges represent relations between those entities. In comparison with alternative data models, graphs offer a flexible and often more intuitive representation of particular domains [3]. Graphs forgo the need to define a fixed (e.g., relational) schema for the domain upfront, and allow for modeling and querying cyclical relations between entities that are not well-supported in other data models (e.g., tree-based models, such as XML and JSON). Graphs have long been used as an intuitive way to model data in domains such as social networks, transport networks, genealogy, biological networks, etc. Graph databases further enable specific forms of querying, such as path queries that find entities related by arbitrary-length paths in the graph. Graph databases have become popular in the context of NoSQL [11], where alternatives to relational databases are sought for specialized scenarios, and of Linked Data [20].

Contributions. The contributions of this paper are as follows:
• the domain graph and property domain graph data models, which allow for succinctly representing graph data models popular in practice, including RDF graphs, RDF-star graphs, property graphs, and the Wikidata knowledge graph [49];
• a formal query language based on domain graphs that captures key features of popular query languages for graph databases, along with a concrete query syntax;
• an indexing scheme and query engine designed for domain graphs that incorporates both traditional and state-of-the-art techniques, with optimizations dedicated to the evaluation of graph patterns and path queries;
• experiments over the Wikidata knowledge graph [49], involving real-world graph data and queries, comparing algorithms internal to MillenniumDB as well as other graph database engines.
Our experimental results highlight the benefits, for example, of incorporating worst-case-optimal join algorithms when evaluating complex graph patterns (with many joins), versus a more traditional approach based on applying binary joins with a Selinger-based query engine. We further compare the performance associated with different graph search algorithms in the context of path queries. On a more practical note, we show that MillenniumDB, under optimal configurations, clearly outperforms prominent graph database systems, namely Blazegraph, Neo4j, Jena and Virtuoso, and discuss why. We further publish a first release of MillenniumDB as an open-source graph database engine [41], which we plan to extend in future in order to support more query syntax, query features, transactional updates, index structures, and more.
Paper structure. The rest of this paper is structured as follows: In Section 2 we describe existing graph data models and their limitations. In Section 3, we propose domain graphs as an abstraction of these models used in MillenniumDB. In Section 4, we describe the query language of MillenniumDB, and how it takes advantage of domain graphs. In Section 5, we explain how MillenniumDB stores data and evaluates queries. In Section 6, we provide an experimental evaluation of the proposed methods on a large body of queries over the Wikidata knowledge graph. In Section 7, we provide some concluding remarks and ideas for future research.
Data Availability & Supplementary Material Statement
The source code of MillenniumDB is provided in full at [41]. Experimental data is given at [42].
Existing graph data models and their limitations
In this section we briefly recap the popular graph data models in use today, and discuss their limitations when modeling real-world datasets.
Graph data models
RDF and RDF*. One of the simplest models used for representing knowledge graphs is based on directed labeled graphs, composed of a set of edges of the form a −b→ c. Such graphs are the basis of RDF [12], where the source node, edge label and target node are called subject, predicate and object, respectively. Given a universe Obj of objects (ids, strings, numbers, IRIs, etc.), the RDF data model is defined as follows:

Definition 1. An RDF triple is an element (s, p, o) ∈ Obj × Obj × Obj. An RDF graph is a finite set of RDF triples.
Upon analyzing this definition, we can immediately notice that the RDF data model lacks the ability to directly refer to the edge (s, p, o) itself. For instance, if we wanted to add the information about when the presidency represented by the above edge starts and when it ends, we would have to resort to some form of reification, which would introduce an artificial object representing the edge that can then be linked to the start and end date information. For example, the (reified) triples representing the duration of this presidency could be represented as shown in Figure 2. The reification is given by the use of the edges labeled as source, label and target. In order to avoid the need for reification, an extension of the RDF data model called RDF* (or RDF-star) was proposed [18]. Intuitively, in RDF* an entire triple can appear as a subject or an object in another triple. For example, in Figure 3, we model the fact that Michelle Bachelet was the president of Chile from 2014-03-11 to 2018-03-11. The node representing the edge is called a quoted triple [19]. To distinguish edges that originate in a quoted triple, in Figure 3 we denote them with a dotted line. Formally, the RDF* data model can be defined as follows:

Definition 2 ([18]). An RDF* triple is defined recursively as follows:
• An RDF triple (s, p, o) is an RDF* triple; and
• If s, o are RDF* triples or elements of Obj, and p ∈ Obj, then (s, p, o) is an RDF* triple.

Another model extending RDF is that of RDF datasets [12], which are typically used to represent and manage multiple named RDF graphs. This model can be defined in two manners. The first, most general, definition permits empty graphs.

Definition 3. An RDF dataset is defined as a pair D = (G, {(n_1, G_1), …, (n_k, G_k)}) where:
• G, G_1, …, G_k are RDF graphs; and
• n_1, …, n_k are objects such that n_i ≠ n_j for 1 ≤ i < j ≤ k.
The graph G is called the default graph, while each pair (n_i, G_i) is called a named graph, composed of the name n_i and its corresponding RDF graph G_i.
For example, letting G denote the RDF graph of Figure 1, and letting G_1 denote the RDF graph of Figure 2, we can capture both graphs separately in an RDF dataset of the form D = (G, {(n_1, G_1)}), where n_1 is a name (e.g., reified) used to reference the graph G_1. It is common to simply represent RDF datasets as a set of quads of the form (s, p, o, g) ∈ Obj × Obj × Obj × Obj [17], which indicates that the RDF triple (s, p, o) is in the RDF graph with name g. In this quad-based view, for example, D would then contain the quad (e_1, source, Michelle Bachelet, reified). A special name can be reserved in order to denote the default graph; for example (Michelle Bachelet, position held, President of Chile, default). This quad-based definition cannot directly support naming empty RDF graphs (though it could be extended to incorporate a set of names for empty graphs).
Property graphs. Finally, one of the more popular graph data models is that of property graphs [15]. Property graphs extend simple edge-labeled directed graphs with two additional features: (i) they assign explicit identifiers to nodes and edges, so that one can refer to them; and (ii) they allow for annotating both nodes and edges with a set of property-value pairs. For example, the information from Figure 3 can be equivalently represented by the property graph in Figure 4. Here the nodes have identifiers (n_1, n_2) as well as labels (human, public office). Similarly, edges have both identifiers (e_1) and labels (position held). A node can have multiple labels, while an edge always has a single label (often referred to as its type). The edge e_1 has two properties, namely start date and end date, each with an associated value. Formally, if Obj is a set of objects, L is a set of labels, P a set of properties, and V a set of values, we define the property graph data model as follows:

Definition 4. A property graph is a tuple G = (V, E, src, tgt, lab, prop), where:
• V ⊂ Obj is a finite set of node identifiers;
• E ⊂ Obj is a finite set of edge identifiers disjoint from V;
• src : E → V assigns a source node to each edge;
• tgt : E → V assigns a target node to each edge;
• lab : (V ∪ E) → 2^L is a function assigning a finite set of labels to nodes and edges, with |lab(e)| = 1 for all e ∈ E; and
• prop : (V ∪ E) × P → V is a partial function assigning a value to a certain property of a node or an edge.
Moreover, we assume that for each object o ∈ V ∪ E, there exists a finite number of properties p ∈ P such that prop(o, p) is defined.
Limitations of existing models
While all of the described data models have great expressive power, they are sometimes cumbersome to use when representing real-world datasets that contain higher-arity relations. To illustrate this, we will use the Wikidata [49,34,23] knowledge graph. Consider the two Wikidata statements shown in Figure 5. Both statements claim that Michelle Bachelet was a president of Chile, and both are associated with nested qualifiers that provide additional information: in this case a start date, an end date, who replaced her, and whom she was replaced by. There are two statements for two distinct presidencies. The ids of objects (for example, Q320 and P39) are also shown; any positional element can have an id and be viewed as a node in the knowledge graph. As aforementioned, representing statements like this in RDF graphs requires reification to decompose n-ary relations into binary relations [21]. Figure 6 shows a graph where e_1 and e_2 are nodes representing two distinct n-ary relationships (an extended version of Figure 2). For greater readability, we use human-readable nodes and labels, where in practice the node Sebastián Piñera will rather be given as the identifier Q306, and the edge type "replaces" will rather be given as "P155".
Since property graphs allow labels and property-value pairs to be associated with both nodes and edges, reification can be avoided in our example. For instance, the statements of Figure 5 can be represented as the property graph in Figure 7. Though more concise than reification, labels, properties and values are considered to be simple strings, which are disjoint from nodes; for example, Ricardo Lagos would be the value of a property and thus not a node of the graph. Furthermore, a property graph can only represent one of the statements (without reification), as we can only have one distinct node per edge; if we added the qualifiers for both statements, then we would not know which start date pairs with which end date, for example. Regarding RDF datasets, we could model both statements by creating two named graphs, each with a copy of the statement that Michelle Bachelet has been President of Chile, thereafter defining the start date, end date, replaced by and replaces annotations in another graph using the graph name, placing the latter information in the default graph (for example). This is quite a concise way to model the aforementioned Wikidata statements, wherein we effectively use graph names to assign each edge a unique id that serves as a graph node elsewhere. Indeed, the data model we propose follows a similar idea. However, RDF datasets were defined in the context of managing several (named) graphs, where using them to define edge ids gives rise to several complications; for example, SPARQL does not support evaluating path queries that span different named graphs.
Data model underlying MillenniumDB
In this section, we present the graph data model upon which MillenniumDB is based, called domain graphs, and discuss how it generalizes existing graph data models such as RDF and property graphs.We also show its utility in concisely modeling real-world knowledge graphs that contain higher-arity relations, such as Wikidata [49].
Domain Graphs
The structure of knowledge graphs is captured in MillenniumDB via domain graphs, which follow the natural idea of assigning ids to edges in order to capture higher-arity relations within graphs [21,25,26,5]. Formally, assume a universe Obj of objects (ids, strings, numbers, IRIs, etc.). We define domain graphs as follows:

Definition 5. A domain graph is a pair G = (O, γ), where O ⊆ Obj is a finite set of objects, and γ : O → O × O × O is a partial mapping.

Intuitively, O is the set of database objects and γ models edges between objects. If γ(e) = (n_1, t, n_2), this states that the edge (n_1, t, n_2) has id e, type t, and links the source node n_1 to the target node n_2. We can analogously define our model as a relation DOMAINGRAPH(source, type, target, eid), where eid (edge id) is a primary key of the relation.
The domain graph model of MillenniumDB already subsumes the RDF graph model [12]. Recall that an RDF graph is a set of triples of the form (a, b, c). To show how RDF is modeled in domain graphs, consider again the RDF triple from Figure 1, which can be captured by an edge γ(e) = (Michelle Bachelet, position held, President of Chile). The id of the edge itself is not needed in the RDF data model, but it can be used for modeling RDF-star (RDF*) graphs [18,19]. For example, to represent the RDF* graph from Figure 8, we can extend the function γ with two additional statements, γ(e_1) = (e, start date, 2014-03-11) and γ(e_2) = (e, end date, 2018-03-11). Here we use two new edges, e_1 and e_2, which have the edge e as their starting node.
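A toy sketch in Python makes the idea concrete; the dictionary below is purely illustrative (the engine itself stores internal numeric ids and B+ tree indexes instead):

# gamma as an eid-keyed relation; edge ids can reappear as source nodes.
gamma = {
    "e":  ("Michelle Bachelet", "position held", "President of Chile"),
    "e1": ("e", "start date", "2014-03-11"),   # qualifier on edge e
    "e2": ("e", "end date", "2018-03-11"),     # qualifier on edge e
}

def edges_from(obj):
    # All edges whose source is the given object (a node or an edge id).
    return {eid: t for eid, t in gamma.items() if t[0] == obj}

print(edges_from("e"))  # the two qualifier edges attached to e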
For stricter backwards compatibility with legacy property graphs (where desired), MillenniumDB implements a simple extension of the domain graph model, called property domain graphs, which allows for external annotation, i.e., adding labels and property-value pairs to nodes and edges without creating new nodes and edges. Formally, if L is a set of labels, P a set of properties, and V a set of values, we define a property domain graph as follows:

Definition 6. A property domain graph is a tuple G = (O, γ, lab, prop), where:
• lab : O → 2^L is a function assigning a finite set of labels to an object; and
• prop : O × P → V is a partial function assigning a value to a certain property of an object.

Moreover, we assume that for each object o ∈ O, there exists a finite number of properties p ∈ P such that prop(o, p) is defined.
While domain graphs (without properties) can directly capture property graphs (where, for example, the property-value pair (gender, "female") on node n_1 can be represented by an edge γ(e_3) = (n_1, gender, female), the property-value pair (order, "2") on an edge e_2 becomes γ(e_4) = (e_2, order, 2), the label on n_1 becomes γ(e_5) = (n_1, label, human), the label on e_1 becomes the type of the edge γ(e_1) = (n_1, father, n_2), etc.), this can generate "incompatibilities" between the legacy property graph and the resulting domain graph; for example, strings like "male", labels like human, etc., now become nodes in the graph, generating new paths through them that may affect query results. Property domain graphs thus offer an extra layer of flexibility, and interoperability with legacy property graphs, where needed for a given use-case. To illustrate how property domain graphs work, consider the property graph (as introduced in Definition 4) from Figure 9. To model this information via property domain graphs, we use the domain graph part to capture the graph structure of our model, while property domain graphs also permit annotating that graph structure with labels and property-value pairs. The property graph in Figure 9 can be represented by the property domain graph G = (O, γ, lab, prop), where the graph structure is captured by the mapping γ over the edges of Figure 9, and the annotations of the graph structure are as follows:

prop(n_1, gender) = "female"        prop(n_2, gender) = "male"
prop(n_1, first name) = "Michelle"  prop(n_2, first name) = "Alberto"
prop(n_1, children) = "3"           prop(n_2, last name) = "Bachelet"
prop(n_2, children) = "2"           prop(n_2, death) = "12 March 1974"
prop(e_2, order) = "2"

The relational representation of a property domain graph then adds two new relations alongside DOMAINGRAPH: LABELS(object, label) and PROPERTIES(object, property, value), where (object, property) is a primary key of the second relation, and the first relation allows multiple labels per object.
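For illustration, the three relations can be sketched as plain Python structures; the ids and values are taken from the running example, while the engine's actual storage layout differs:

DOMAINGRAPH = {                       # eid -> (source, type, target)
    "e1": ("n1", "father", "n2"),
}
LABELS = {("n1", "human"), ("n2", "human")}     # multiple labels allowed
PROPERTIES = {                        # key: (object, property)
    ("n1", "first name"): "Michelle",
    ("n2", "first name"): "Alberto",
    ("e2", "order"): "2",             # edges can be annotated too (e2 elided above)
}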
Domain graphs compared with other graph data models
Why did we choose (property) domain graphs as the model of MillenniumDB? As discussed in the previous section, the model can be used to capture both directed labeled graphs (like RDF) and property graphs. It also has a natural relational expression, which facilitates its implementation in a query engine. But it is also heavily inspired by the needs of real-world knowledge graphs like Wikidata [49,34,23]. To illustrate its versatility, consider again the Wikidata statements shown in Figure 5. Note that edges that originate in another edge are drawn with a dotted line.
As discussed in Section 2, neither RDF nor RDF* can represent these statements without resorting to reification, while property graphs cannot take nodes as values for properties. The domain graph model allows us to capture higher-arity relations more directly. In Figure 10 we present one possible representation of the statements from Figure 5. We only show edge ids where needed (all edges have ids). We do not use the "property part" of our data model for external annotation, considering that the elements of the Wikidata statements shown can form nodes in the graph itself.
Domain graphs are similar to named graphs in RDF datasets. Both domain graphs and RDF datasets can be represented as quads. However, the edge ids of domain graphs identify each quad, which, as we will discuss in Section 5, necessitates fewer index permutations. RDF datasets were proposed to represent multiple RDF graphs for publishing and querying. SPARQL thus does not support querying paths that span different named graphs; to support path queries over singleton named graphs, all edges would need to be duplicated (virtually or physically) into a single graph [21]. Named graphs could be supported in domain graphs using a reserved term graph, and edges of the form γ(e_3) = (e_1, graph, g_1), γ(e_4) = (e_2, graph, g_1); optionally, named domain graphs could be considered in the future to support multiple domain graphs. The idea of assigning ids to edges/triples for purposes similar to those described here is a natural one, and not new to this work. Hernandez et al. [21] explored using singleton named graphs in order to represent Wikidata qualifiers, placing one triple in each named graph, such that the name acts as an id for the triple. In parallel with our work, a data model analogous to domain graphs has recently been independently proposed for use in Amazon Neptune, which the authors call 1G [26]. Their proposal does not discuss a formal definition for the model, nor a query language, storage and indexing, implementation, etc., but the reasoning and justification that they put forward for the model is similar to ours. Similar models have been generalized as multilayer graphs [5], where the appearance of edge ids within the graph induces different layers of reference. Our work proposes a novel query language, storage and indexing schemes, a query planner, and ultimately a fully-fledged graph database engine, built specifically for this model. Furthermore, with property domain graphs, we support annotation external to the graph, which we believe to be a useful extension that enables better compatibility with property graphs. Table 1 compares the features of the discussed data models (all features except External annotation can be supported in all models with reserved vocabulary). Reserved terms can add indirection to modeling (e.g., reification [21]), and can clutter the data, necessitating more tuples or higher-arity tuples to store, leading to more joins and/or index permutations. The features are defined as follows, considering directed (labeled) edges:

• Edge type/label: assign a type or label to an edge.
• Node label: assign labels to nodes.
• Edge annotation: assign property-value pairs to an edge.
• Node annotation: assign property-value pairs to a node.
• External annotation: nodes/edges can be annotated without adding new nodes or edges.
• Edge as node: an edge can be referenced as a node (this allows edges to be connected to nodes of the graph).
• Edge as nodes: a single unique edge can be referenced as multiple nodes.
• Nested edge nodes: an edge involving an edge node can itself be referenced as a node, and so on, recursively.
• Graph as node: a graph can be referenced as a node.
Some unsupported features in Table 1 are more benign than others; for example, Node label requires a reserved term (e.g., rdf:type), but no extra tuples; on the other hand, Edge as node requires reification, using at least one extra tuple, and also a reserved term.
Wikidata requires the Edge as nodes feature, as per Figure 5, where values like Ricardo Lagos are themselves nodes. Only RDF datasets, domain graphs and property domain graphs can model such examples without reserved terms; however, the use of RDF datasets requires co-opting graph names, which are typically used to manage multiple graphs, to serve instead as edge ids. Comparing RDF datasets and domain graphs, the latter sacrifices the Graph as node feature without reserved vocabulary in order to reduce indexing permutations (discussed in Section 5). Property domain graphs further support external annotation, and better compatibility with legacy property graphs.
Query language
Per our goal of supporting multiple graph models, MillenniumDB aims to support a number of graph query languages. However, no existing query language takes full advantage of the property domain graph model defined in the previous section. We have thus implemented a base query language, called DGQL, which closely resembles Cypher [15], but is designed for the property domain graph model, and adds features of other query languages, such as SPARQL, that are commonly used for querying knowledge graphs [10,23]. Herein we provide a guided tour of the syntax of DGQL. A full formal specification of the language can be found in the appendix of this paper.
To introduce the features of the query language, in Figure 11 we present (a snippet of) a bibliographical knowledge graph representing data about publications, authors, institutions, etc.The knowledge graph is represented as a property domain graph, where, for authorship relations, we use properties on the edge to indicate the author order, but directly link the edges (via their ids) to the organization node with which the author was affiliated for that particular paper (something not directly possible in property graphs).We further use abstract node and edge ids (n 1 , . . ., n 15 , e 1 , . . ., e 21 ) for brevity, though these may be instantiated with application ids; for example, in Wikidata, the node n 15 denoting the U.S. might rather have the id Q30.
We will use this knowledge graph as a running example in order to illustrate the MillenniumDB query language in the context of a bibliographical use-case, where we wish to analyze citations, find possible collaborators, etc.
Domain Graph Queries
A DGQL query takes the following high-level form (the WHERE clause is optional):

MATCH <pattern> WHERE <condition> RETURN <expression>

Querying objects. The most basic query will return all the objects (or more precisely, their ids) in our property domain graph. In MillenniumDB we can achieve this via the following query:

MATCH (?x) RETURN ?x

Over the knowledge graph of Figure 11, this would return a table of all node and edge ids: n_1, …, n_15, e_1, …, e_21. Of course, one usually wants to select objects with a certain label, or a certain value in a specific property, as illustrated in the following example.

EXAMPLE 4.1. The following DGQL query returns articles published in 1967 from Figure 11:

MATCH (?x :article {year : 1967}) RETURN ?x, ?x.name

This returns the ids of nodes with the label article and the value 1967 for the property year, along with their value for the property name; over Figure 11 we return two results, one of them with the name "Pr. Lang. for Aut.". If, for example, n_3 did not have a name, we would still return n_3 as a result, leaving the corresponding value for ?x.name blank.
If we wish to specify a range, we can rather use the WHERE clause, which allows us to specify conditions on the results returned.

EXAMPLE 4.2. If we want to find articles published before 1990, we can use the following query:

MATCH (?x :article) WHERE ?x.year < 1990 RETURN ?x, ?x.name

This returns the same solutions over Figure 11 as in Example 4.1. If we were to replace "<" with "<=", we would receive a third result for n_5 and "Add. Machines".
Querying edges. In order to query over edges, we can write the following query, which returns γ, i.e., the relation DOMAINGRAPH:

MATCH (?x)-[?e :?t]->(?y) RETURN *

The RETURN * operation projects all variables specified in the MATCH pattern, while the construct (?x)-[?e :?t]->(?y) specifies that we want to connect the object in ?x with an object in ?y, via an edge with type ?t and id ?e. This is akin to a query DOMAINGRAPH(?x, ?t, ?y, ?e) over the domain graph relation. Variable or constant edge types (e.g. ?t above) are prefixed by a colon.

EXAMPLE 4.3. Over the graph of Figure 11, the aforementioned query would return one result per edge, 21 results in total. This is akin to returning the DOMAINGRAPH relation.
We can also restrict which edges are matched, as shown in the following example.
EXAMPLE 4.4. The following DGQL query returns the ids and names of articles that cite an article of the same year:

MATCH (?x)-[:cites]->(?y) WHERE ?x.year == ?y.year RETURN ?x, ?x.name

Here we choose to omit the edge id variable, as we do not need it (e.g., in the WHERE or RETURN clause). Over the knowledge graph of the running example, this returns a single result, with the name "Pr. Lang. for Aut.". In the next example, we illustrate two features together: the ability to return and specify conditions on edge properties, and the ability to query known objects. As in the previous example, the WHERE clause may use Boolean combinations. We recall that the running example uses abstract node ids for brevity. In practice, the node id n_11 could rather be an id such as Q17457, which identifies Donald Knuth on Wikidata.
Path queries.
A key feature of graph databases is their ability to explore paths of arbitrary length. DGQL supports two-way regular path queries (2RPQs), which specify regular expressions over edge types, including concatenation (/), disjunction (|), inverses (ˆ), optional (?), Kleene star (*) and Kleene plus (+). We use =[]=> (rather than -[]->) to signal a path query in DGQL.

EXAMPLE 4.6. If we wish to find all of the citations of the article named "Add. Machines", and their respective citations, and so on transitively, we can use the regular expression :cites+ in a path pattern, further returning the name and year of the articles where available. Notice that a shortest path to each node is returned, so an additional path to, e.g., n_4 using cites is not returned.
The final example for paths illustrates operators nested inside a Kleene star. A path that cycles back to Donald Knuth (n_11) is included; if we wished to filter such results, we could use WHERE to require the inequality ?y != n_11. There are also two possible shortest paths for the n_11 result (via n_4 or via n_5), where the first such path to be found is returned. If we wished to return all shortest paths, we could use the DGQL keyword ALL before the path variable. For instance, in the previous example, we can write [ALL ?p ...], which returns a second result for n_11 indicating the other shortest path.
Unlike Cypher, we can return paths matching 2RPQs, not just Kleene star. Unlike SPARQL, we can return paths, not just pairs of nodes. No manipulation of path variables, apart from outputting the result, is currently supported in MillenniumDB, but a full path algebra will be supported in future versions.

Basic graph patterns. Basic graph patterns [3] lie at the core of many graph query languages, including DGQL. Such patterns follow the same structure as the data model, but allow variables in any position. They can be seen as expressing natural (multi)joins over sets of atomic edge patterns. In DGQL, they are given in the MATCH clause. Basic graph patterns are evaluated under homomorphism-based semantics [3], which allows multiple variables in a result to map to the same element of the data. If we evaluate such a query over the running example, then, given the homomorphism-based semantics, results are returned that map both variables to the same author. If we wished to filter such results, we could stipulate the desired inequalities with WHERE ?x != ?y, which would filter the first, second, fifth and eighth result.
The previous example could equivalently be expressed as a path of the form ?x=[:author/ˆ:author]=>?y. However, with basic graph patterns, we can also capture branches and cycles, as illustrated in the following example. The DGQL query language further allows us to take full advantage of domain graphs by allowing joins between edges, types, etc., as illustrated by the following example.

EXAMPLE 4.12. The following query looks for articles with an affiliation that is current, i.e., where an author is still staff at the indicated organization. Results for n_9 and n_10 are still returned even though they are not currently staff at any organization; the corresponding variables are left blank. Nested optional patterns are also supported. However, optional patterns must form well-designed patterns [31].
Limits and ordering. Some additional operators that MillenniumDB supports are LIMIT and ORDER BY. These allow us to limit the number of output mappings, and to sort the obtained results, as illustrated by the following example. The result returned is a single row, with ?x = n_5 and ?x.name = "Add. Machines". Ordering is always applied before limiting results.
Formal definitions for DGQL
For readers interested in a formal specification, we provide the full definition of DGQL, together with the associated semantics, in the appendix to this paper. Specifically, in Appendix A we provide a grammar for DGQL queries, and in Appendix B we define an (equivalent) abstract syntax of DGQL and the formal semantics of the language.
Every query language considered in Table 2 supports the notion of a basic graph pattern (BGP), which, in its most general form, is a graph pattern structured like the data, but allowing variables to replace constants.In most cases, the result of a basic graph pattern is a relation (or table) consisting of results, and in some cases it is possible to construct/return a graph (like in G-CORE and SPARQL).
Considering that a graph pattern extracts a table from a graph (as seen in the examples of Section 4.1), relational graph patterns (RGPs) allow the use of relational operators to combine the results of one or more graph patterns into a single relation. Full support of this feature in Table 2 indicates that a language provides join, optional, union and negation of graph patterns. Partial support indicates that a language supports some of these operators, usually join and optional graph patterns, as is the case for DGQL (we plan to extend this in future to support more relational operators). Querying edges (QE) is a distinctive feature of DGQL, allowing for querying relationships involving edges. Notably, DGQL allows an id to be extracted as an edge in one part of the query, and then used as a node in another part. Other query languages provide partial support for querying edges, as they are restricted to querying the labels and properties of the edges, require reserved vocabulary (reification), or have other restrictions (e.g., using named graphs in SPARQL, over which paths cannot be resolved).
The regular path queries (RPQs) feature refers to matching paths based on (2-way) regular expressions, with concatenation, disjunction, inverse, optional and Kleene star. Partial support indicates that a language offers a restricted group of such operators, as in the case of Cypher, which supports only the Kleene star on top of a single edge type, and not over a subexpression; it thus supports an expression such as cites+, but not a more complex expression such as (author/ˆauthor)+.
We use the term navigational graph patterns (NGPs) to denote the combination of basic graph patterns and regular path queries. These queries are akin to conjunctive (2-way) regular path queries. NGPs are supported by DGQL, SPARQL and G-CORE.
Finally, a query language with full path recovery allows not only searching for paths, but also returning such paths as objects that can be manipulated (with the nodes and edges in a path). This is a particular feature of G-CORE, as it supports path construction operations, and its data model permits storing paths. In Cypher, the resulting paths can be assigned to a variable, so the elements of each path can be accessed by using ad-hoc functions, although with reduced facilities. SPARQL does not support this feature, as the output of a path expression is only the start and end nodes of each path. In GSQL and Gremlin, the result of a path query is a set of objects, so the resulting paths must be processed by using a programming language. Currently, DGQL partially supports this feature by returning a path as a string; however, MillenniumDB has been designed to support path manipulation in the future.
Table 2 focuses on core features for querying graphs [3], and thus omits features (e.g., borrowed from SQL) that are supported by some of the languages and that are potentially very useful in practice, such as aggregations, solution modifiers, federation, etc. Such features can be layered atop the features mentioned.
System architecture
In this section, we describe the internals of the MillenniumDB engine, which have been designed to efficiently support the domain graph model. The overall architecture of the system is presented in Figure 12, and will be explained in the following.
MillenniumDB is founded on tried and tested relational techniques: it stores the (property) domain graph model as several relations indexed in B+ trees, loading parts into main memory as needed using a fixed-size buffer. It also uses algorithmic techniques recently suggested in the theoretical literature for evaluating queries [48,8], techniques not typically implemented in graph database systems, for supporting the domain graph model in practice. Specifically, we combine three different techniques that are new to the architecture of graph database systems when used in conjunction. First, the data model is encoded as basic relations, indexed following different attribute orders, wherein data objects (e.g., nodes, strings) are represented by ids. Second, we translate the evaluation of any query to several joins between basic relations, which we manage using worst-case optimal join algorithms [30]: an evaluation technique recently proposed for relational database systems. Last, we combine join algorithms with the evaluation of path queries by compiling the path pattern into an automaton and running the query on the fly. These techniques, together, are at the heart of how MillenniumDB optimizes queries over the domain graph model in practice.
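To sketch the last of these techniques: a regular path query can be evaluated by a breadth-first search over the product of the graph and the expression's automaton, which also yields shortest witnessing paths. The sketch below assumes a deterministic automaton given as a transition dictionary; it is illustrative rather than MillenniumDB's actual implementation.

from collections import deque

def evaluate_rpq(edges, start, delta, accepting):
    # edges: node -> iterable of (type, target) pairs.
    # delta: (state, type) -> next state, with 0 as the initial state.
    init = (start, 0)
    parent = {init: None}
    results = {}
    queue = deque([init])
    while queue:
        node, state = queue.popleft()
        if state in accepting and node not in results:
            results[node] = (node, state)       # first visit = shortest path
        for etype, target in edges.get(node, ()):
            nxt = delta.get((state, etype))
            if nxt is not None and (target, nxt) not in parent:
                parent[(target, nxt)] = (node, state)
                queue.append((target, nxt))
    return results, parent   # paths recoverable by following parent links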
In what follows, we explain how one can store (property) domain graphs and index them. We then outline the query evaluation process and the algorithmic techniques it uses, like the worst-case optimal query plan and the evaluation of path queries. Internally, every database object is represented by a fixed-size identifier whose leading mask encodes its class; the classes are as follows:

• Nodes, which are objects in the range of γ. They are divided into two subclasses: named nodes, which are objects in the domain graph for which an explicit name is available (e.g. Q320 in Wikidata), and anonymous nodes, which are internally generated objects without an explicit name available to the user (similar to blank nodes in RDF [22]).
• Edges, which are objects in the domain and range of γ, and are always anonymous, internally generated objects.
• Values, which are data objects like strings, integers, etc. These values are classified in two subclasses: inlined values, which are values that fit into the 7 bytes of the identifier after the mask (e.g. strings of up to 7 bytes, integers, etc.), and external values, which are values longer than 7 bytes (e.g. long strings).
All records stored in MillenniumDB are composed of these identifiers.We will explain later how long strings for external values are handled.
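A sketch of such a masked fixed-width identifier scheme is shown below; the exact masks and bit layout used by MillenniumDB are internal, so the constants here are illustrative.

# 8-byte ids: the leading byte encodes the class, the remaining 7 bytes
# hold an internal counter or an inlined value (illustrative masks).
NAMED_NODE, ANON_NODE, EDGE, INLINE_STR, EXTERNAL = range(1, 6)

def make_id(mask, payload):
    assert payload < (1 << 56)          # payload must fit in 7 bytes
    return (mask << 56) | payload

def split_id(obj_id):
    return obj_id >> 56, obj_id & ((1 << 56) - 1)

def inline_string_id(s):
    # Strings of at most 7 bytes are inlined directly into the id.
    b = s.encode()
    assert len(b) <= 7
    return make_id(INLINE_STR, int.from_bytes(b.ljust(7, b"\0"), "big"))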
To store property domain graphs, MillenniumDB deploys B+ trees [32]. For this purpose, we build a B+ tree template for fixed-size records, which stores all classes of identifiers. To store a property domain graph G = (O, γ, lab, prop), we simply store and index in B+ trees the four components defining it:

• OBJECTS(id) stores the identifiers of all the objects in the database (i.e., O).
• DOMAINGRAPH(source, type, target, eid) contains all information on edges in the graph (i.e., γ), where eid is an edge identifier, and source, type, and target can be ids of any class (i.e., node, edge, or value). By default, four permutations of the attributes are indexed in order to aid query evaluation. These are: source-target-type-eid, target-type-source-eid, type-source-target-eid, and type-target-source-eid.

• LABELS(object, label) stores object labels (i.e., lab). The value of object can be any identifier, and the values of label are stored as ids. Both permutations are indexed.
• PROPERTIES(object, property, value) stores the property-value pairs associated with each object (i.e., prop). The object column can contain any id, and property and value are value ids. Aside from indexing the primary key, an additional permutation is added to search objects by property-value pairs.
All the B+ trees are created through a bulk-import phase, which loads multiple tuples of sorted data rather than inserting records one by one. In order to enable fast lookups by edge identifier, we exploit the fact that this attribute is a key for the relation: we also store a table called EDGETABLE, which contains triples of the form (source, type, target), such that the position of a triple in the table equals the identifier of the object e with γ(e) = (source, type, target). This implies that edge identifiers are assigned consecutive ids starting from zero internally by MillenniumDB (they are not specified by the user). In total, we use ten B+ trees for storing the data.
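The role of EDGETABLE can be pictured as follows; the record layout is an assumption, but the point is that a lookup by edge id is a single offset computation rather than a B+ tree search.

RECORD_SIZE = 3 * 8  # three 8-byte identifiers per edge record (assumption)

def edge_lookup(edgetable: bytes, e: int) -> tuple[int, int, int]:
    # Because edge ids are consecutive from zero, gamma(e) is stored at
    # offset e * RECORD_SIZE; no tree traversal is needed.
    off = e * RECORD_SIZE
    rec = edgetable[off:off + RECORD_SIZE]
    src, typ, tgt = (int.from_bytes(rec[i:i + 8], "big") for i in range(0, 24, 8))
    return src, typ, tgt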
To map external strings and values (longer than 7 bytes) to database object ids, we have a single binary file called OBJECTFILE, which contains all such strings concatenated together. The internal id of an external value is then equal to the position where it is written in the OBJECTFILE, thus allowing efficient lookups of a value via its id. The identifiers are generated upon loading, and an additional hash table is kept to map a string to its identifier; we use this to ensure that no value is inserted twice, and to transform explicit values given in a query to their internal ids. Only strings are currently supported, but the implementation interface allows for adding support for other value types in a relatively simple manner.
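The following sketch illustrates the OBJECTFILE idea: ids are file positions, and a hash table guarantees each string is stored only once. The length-prefixed record format is our assumption; the actual on-disk layout may differ.

import struct

class ObjectFile:
    def __init__(self):
        self.data = bytearray()
        self.ids: dict[str, int] = {}  # hash table: string -> identifier

    def intern(self, s: str) -> int:
        if s in self.ids:              # ensure no value is inserted twice
            return self.ids[s]
        pos = len(self.data)           # the id is the position in the file
        raw = s.encode("utf-8")
        self.data += struct.pack("<I", len(raw)) + raw
        self.ids[s] = pos
        return pos

    def lookup(self, obj_id: int) -> str:
        # Efficient lookup of a value via its id (a direct offset read).
        (n,) = struct.unpack_from("<I", self.data, obj_id)
        return self.data[obj_id + 4:obj_id + 4 + n].decode("utf-8")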
All of the stored relations are accessed through linear iterators which provide access to one tuple at a time. All of the data is stored on pages of fixed (but parametrized) size (currently 4kB). The data from disk is loaded into a shared main memory buffer, whose size can be specified upon initializing the MillenniumDB server. The buffer uses the standard clock page replacement policy [32]. Additionally, for improved performance, it can be specified upon initializing the server that the OBJECTFILE be loaded into main memory, in order to quickly convert internal identifiers to string and integer values that do not fit into 7 bytes.
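As a reminder of how the clock policy behaves, the following is a minimal sketch of a clock-based buffer; bookkeeping for dirty pages and pinned frames is omitted.

class ClockBuffer:
    def __init__(self, num_frames: int):
        self.frames = [None] * num_frames   # page ids currently buffered
        self.refbit = [False] * num_frames  # "second chance" reference bits
        self.hand = 0

    def access(self, page_id) -> str:
        if page_id in self.frames:          # hit: mark the page referenced
            self.refbit[self.frames.index(page_id)] = True
            return "hit"
        while self.refbit[self.hand]:       # sweep, clearing reference bits,
            self.refbit[self.hand] = False  # until an unreferenced victim
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page_id    # evict the victim, load the page
        self.refbit[self.hand] = True
        self.hand = (self.hand + 1) % len(self.frames)
        return "miss"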
Evaluating a query. In MillenniumDB, the execution pipeline follows the standard database template, where the string of the query is parsed and translated into a logical plan, which is then analyzed and converted into a physical plan, and finally evaluated, as illustrated by the Query Processor component of Figure 12.
A key part is how the patterns and filters of a DGQL query (see Section 4.1) are evaluated. Specifically, patterns and filters are grouped together into a list of relations that can be edges, labels, properties, or path queries, forming a large multi-way join query. In essence, evaluating these joins is analogous to selecting an appropriate join plan for the relations representing the different elements. This also goes hand in hand with selecting the appropriate join algorithm for each of the joins. Given that edges, labels, and properties are all indexed, this will most commonly be an index nested-loop join. Paths, on the other hand, are not directly indexed. For this reason, they are pushed to the end of the join plan and joined via nested-loop with the rest of the multi-way join.
MillenniumDB supports different mechanisms for evaluating the multi-way join formed by the pattern and filter of a DGQL query.
• A worst-case optimal query plan, as described in [24], is used whenever possible. This approach implements a modified leapfrog algorithm [48] in order to minimize the number of intermediate results that are generated.
• The classical relational optimizer, which is based on cost estimation and tries to order base relations in such a way as to minimize the number of (intermediate) results. We currently support two modes of execution here: (i) Selinger-style join plans [35], which use dynamic programming to determine the optimal order of relations; and (ii) in the presence of a large number of relations, a greedy planner [16], which simply determines the cheapest relation to use in each step (sketched below).
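A rough sketch of such a greedy planner is given below; estimated_cost stands in for the optimizer's actual cardinality estimates, and relations are assumed to expose the variables they mention.

def greedy_plan(relations, estimated_cost):
    # Start from the cheapest relation, then repeatedly append the cheapest
    # relation sharing a variable with what has been joined so far.
    remaining = list(relations)
    first = min(remaining, key=estimated_cost)
    plan, bound = [first], set(first.variables)
    remaining.remove(first)
    while remaining:
        connected = [r for r in remaining if bound & set(r.variables)]
        nxt = min(connected or remaining, key=estimated_cost)  # fall back to
        plan.append(nxt)                                       # a cross product
        remaining.remove(nxt)
        bound |= set(nxt.variables)
    return plan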
Two particular points of interest are the worst-case optimal query planner and the way that paths are evaluated. Both of these deploy state-of-the-art research ideas that are usually not implemented in practical graph database systems (though some prototypes exist [24, 29, 6]). We provide some additional details on these next.
Worst-case optimal query plan. Evaluating multiple joins in a worst-case optimal way is done using a modified leapfrog algorithm [24]. While a classical join plan performs a nested for-loop over relations, leapfrog performs a nested for-loop over variables [48]. Specifically, the algorithm first selects a variable order for the query, say (?x, ?y, ?z). It then intersects all relations where the first variable ?x appears and, for each solution for ?x returned, it intersects all relations where ?y appears (replacing ?x with its current solution), and so on to ?z, until all variables are processed and the final solutions are generated. We refer the reader to [48] and [24] for a detailed explanation. Two critical aspects for supporting this approach are indexes and variable ordering, explained next.
To support the leapfrog algorithm over traditional relational indexes such as B+ trees, we would need to index all relations in all possible orders of their attributes in order to ensure efficient intersections, which greatly increases disk storage [24]. In MillenniumDB, we include four orders for DOMAINGRAPH, and all orders for LABELS and PROPERTIES. With these orders we can cover, via a worst-case optimal query plan, the most common join types that appear in practice [10]. We use the classical relational optimizer if the plan needs an unsupported order or if one of the relations uses a path query.
The leapfrog algorithm further requires choosing a variable ordering, which is crucial for its performance. The heuristic we deploy for selecting the variable ordering mixes a greedy approach with ideas from the Graham-Yu-Özsoyoglu (GYO) reduction [51]. More precisely, we first order the variables based on the minimal cost of the relations they appear in, resolving ties by selecting the variable that appears in more distinct relations. The variables "connected" to the first one chosen are then processed in the same manner (where connected means appearing in the same relation), until the process cannot continue. Any isolated variables are then treated last.
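Putting the two ingredients together, the variable-at-a-time evaluation can be sketched as follows. This is a simplification: real leapfrog intersects sorted index iterators by seeking, whereas here plain sets stand in for that machinery, and values_for is an assumed helper returning the candidate values of a variable consistent with the current partial binding.

def leapfrog_join(relations, order, binding=None):
    # Nested for-loop over variables (not relations): bind each variable in
    # `order` to the intersection of candidates from all relations using it.
    binding = binding or {}
    if len(binding) == len(order):
        yield dict(binding)              # all variables bound: one solution
        return
    var = order[len(binding)]
    involved = [r for r in relations if var in r.variables]
    candidates = set.intersection(*(r.values_for(var, binding) for r in involved))
    for value in candidates:
        binding[var] = value
        yield from leapfrog_join(relations, order, binding)
        del binding[var]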
Evaluating path queries.
For evaluating a path query (2RPQ), the path pattern is compiled into an automaton. Then a "virtual" cross-product of this automaton and the graph is constructed on the fly and navigated via breadth-first search (BFS), as commonly suggested in the theoretical literature [28, 7, 8] (our experiments in Section 6 will also test a depth-first search (DFS) variant). Our assumption is that each path pattern will have at least one of its endpoints assigned before evaluation. This can be done either explicitly in the pattern, or via the remainder of the query. For instance, the path pattern (Q1)=[P31*]=>(?x) has the starting point of our search assigned to Q1. On the other hand, (?x)=[P31*]=>(?y :person) does not have any of the endpoints assigned; however, (?y :person) allows us to instantiate ?y with any node carrying the label :person.
Intuitively, from a starting node (tagged with the initial state of the automaton), all edges with the type specified by the outgoing transitions from this state are followed. The process is repeated until reaching an end state of the automaton, upon which a result can be returned. This allows a fully pipelined evaluation of path queries, while only requiring a bounded amount of working memory (the neighbors of the node at the top of the BFS queue). Additionally, the BFS algorithm allows us to return a single shortest path between each pair of endpoints (see Section 4 for an example). Returning a single shortest path comes almost for free, given that it can be reconstructed from the set of visited nodes kept for bookkeeping in the BFS algorithm. The algorithm can also be extended to return all shortest paths (as supported by DGQL) by keeping a list of predecessors that reach each node via a path of shortest length.
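The following sketch captures this evaluation strategy: a BFS over the virtual product of the graph and the automaton, with the visited map doubling as the bookkeeping needed to reconstruct one shortest path per result. The neighbors function and the automaton interface are assumptions standing in for the indexed relations and the compiled path pattern.

from collections import deque

def eval_path_query(start_node, automaton, neighbors):
    # States of the search are (node, automaton state) pairs.
    start = (start_node, automaton.initial)
    parent = {start: None}                 # visited set + path bookkeeping
    queue = deque([start])
    while queue:
        node, state = queue.popleft()
        if state in automaton.final:       # end state reached: emit a result
            yield node, reconstruct_path(parent, (node, state))
        for edge_type, succ in neighbors(node):
            nxt_state = automaton.delta.get((state, edge_type))
            nxt = (succ, nxt_state)
            if nxt_state is not None and nxt not in parent:
                parent[nxt] = (node, state)
                queue.append(nxt)

def reconstruct_path(parent, pair):
    # Walk parent pointers back to the start; BFS guarantees shortest length.
    path = []
    while pair is not None:
        path.append(pair[0])
        pair = parent[pair]
    return list(reversed(path))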
The implemented algorithm only requires two permutations of the DOMAINGRAPH relation: one for retrieving all of a node's successors via an edge of a specified type, and another for retrieving all such predecessors of a given node.
Benchmarking
In this section, we provide an experimental evaluation of the core graph querying features of MillenniumDB, addressing two key questions: (Q1) Which join and path algorithms provide the best performance over domain graphs? (Q2) How does MillenniumDB's performance compare with existing graph database engines?
We base our experiments on the Wikidata knowledge graph [49], which is one of the largest and most diverse real-world knowledge graphs that is publicly available, and which also provides a public log of real-world queries posted by Wikidata users that we can use for experiments [27, 10]. The experiments focus on two fundamental query features: (i) basic graph patterns (BGPs); and (ii) path queries. Regarding (Q1), we compare the performance of different join and path algorithms within MillenniumDB. Regarding (Q2), we also provide a side-by-side comparison with several popular persistent graph database engines that support BGPs and at least the Kleene star feature for paths. We publish the data, queries, scripts, and configuration files for each engine online, together with the scripts used to load the data and run the experiments [42].
Internal baselines. The base of our comparison is the MillenniumDB implementation available at [41]. For comparing the performance of different join and path algorithms in MillenniumDB (per Q1), we include internal baselines, where we test: (i) MillenniumDB LF, the default version, implementing the leapfrog triejoin algorithm; (ii) MillenniumDB GR, which implements the greedy algorithm for selecting the join order; and (iii) MillenniumDB SL, implementing the Selinger join planner. Similarly, for path queries, we test (a) MillenniumDB BFS, the default version of the engine; and (b) MillenniumDB DFS, which evaluates path queries using depth-first traversal.
Other engines. We also compare the performance of MillenniumDB with five persistent graph query engines (per Q2). First, we include three popular RDF engines: Jena TDB version 4.1.0 [40], Blazegraph (BlazeG for short) version 2.1.6 [47], and Virtuoso version 7.2.6 [13]. We further include a property graph engine: Neo4j community edition 4.3.5 [50]. Finally, we also compare with Jena Leapfrog (Jena LF, for short) - a version of Jena TDB implementing a leapfrog-style algorithm [24] - in order to compare with an external graph database using a worst-case optimal algorithm.
The machine. All experiments described were run on a single commodity server with an Intel® Xeon® Silver 4110 CPU and 128GB of DDR4/2666MHz RAM, running the Linux Debian 10 operating system with kernel version 5.10. The hard disk used to store the data was a SEAGATE ST14000NM001G with 14TB of storage.
The data. The base for our experiments is the Wikidata dataset. In particular, we used the truthy dump version 20210623-truthy-BETA [14], keeping only triples in which (i) the subject position is a Wikidata entity, and (ii) the predicate is a direct property. We call this dataset Wikidata Truthy. The size of the dataset after this process was 1,257,169,959 triples. The simplification of the dataset is done to facilitate comparison across multiple engines, specifically to keep data loading times across all engines manageable while keeping the nodes and edges necessary for testing the performance of BGPs and property paths. The size of the Wikidata Truthy dataset, when loaded into the respective systems, is summarized in Table 3. Default indices were used on Jena TDB, Blazegraph, and Virtuoso. Jena LF stores three additional permutations of the stored triples to efficiently support the leapfrog algorithm for any join query, thus using more space. Neo4j by default creates an index for edge types (as of version 4.3.5). To speed up searches for particular entities and properties, we also created an index linking a Wikidata identifier (such as, e.g., Q510) to its internal id in Neo4j. We also tried to index literal values in Neo4j, but the process failed (the literals are still stored). MillenniumDB uses extra disk space because of the additional indices needed to support worst-case optimal joins over domain graphs (similar to the case of Jena LF).
How we ran the queries. We detail the query sets used for the experiments in their respective subsections. To simulate a realistic database load, we do not split queries into cold/hot run segments. Rather, we run them in succession, one after another, after a cold start of each system (and after cleaning the OS cache). This simulates the fact that query performance can vary significantly based on the state of the system buffer, or even on the state of the hard drive or of the OS's virtual memory. For each system, queries were run in the same order. We record the execution time of each individual query, which includes iterating over all results. We set a limit of 100,000 distinct results for each query, again in order to enable comparability, as some engines showed instability when returning larger results.
Memory usage. Blazegraph, Jena, and Virtuoso were assigned 64GB of RAM, as is recommended. Neo4j was run with default settings, while MillenniumDB had access to 32GB for its main-memory buffer, and uses an additional 10GB for in-memory dictionaries. Since the systems tested are buffer-based - i.e., they reserve a fixed amount of scrap space (the buffer) in main memory for their operation and do not exceed this memory (except perhaps for a small amount used for internal operations) - and since they tend to use the buffer available, their maximum memory usage corresponds to these settings. Thus, in the rest of this section we focus on comparing runtimes.
Handling timeouts. We defined a timeout of 10 minutes per query for each system. Apart from that, we note that most systems had to be restarted upon a timeout, as they often showed instability, particularly while evaluating path queries. This was done without cleaning the OS cache, in order to preserve some of the virtual memory mapping that the OS had built up to that point. In comparison, MillenniumDB managed to return a non-trivial number of query results on each query and did not need to be restarted, thus handling timeouts gracefully.
Basic Graph Patterns
We focus first on basic graph pattern queries. To test different query execution strategies of MillenniumDB, we use two benchmarks: Real-world BGPs and Complex BGPs, which are described next.
Real-world BGPs
The Wikidata SPARQL query log contains millions of queries [27], but many are trivial to evaluate. We thus generate our benchmark from more challenging cases, i.e., a smaller log of queries that timed out on the Wikidata public endpoint [27]. From these queries we extracted their BGPs, removing duplicates (modulo isomorphism on query variables). We distinguish queries consisting of a single triple pattern (Single) from those containing more than one triple pattern (Multiple). The former set tests the triple matching capabilities of the systems, whereas the latter tests join performance. Single contains 399 queries, whereas Multiple has 436 queries.
Real-world Single. Table 4 (top) summarizes the query times on this set, whereas Figure 13 (left) shows boxplots with more detailed statistics on the distributions of runtimes. Since these queries do not require joins, we show one variant of MillenniumDB. MillenniumDB is the fastest overall (median of 0.05 s), followed by Blazegraph (median of 0.09 s). In terms of average times and higher percentiles, MillenniumDB more clearly outperforms the other engines, being able to enumerate up to 100,000 results (the limit) more quickly because it decodes internal ids faster. In MillenniumDB, values such as P12 or Q10 that fit within 7 bytes are inlined and do not need to be dictionary decoded. The remaining dictionary fits entirely in available memory (∼8 GB of RAM). In the other systems, dictionary decoding generates random accesses to disk. The four SPARQL engines tested must store ids as IRIs within the RDF model, which include relatively long prefixes. However, since RDF datasets typically have few prefixes repeated often, we could support full IRIs within MillenniumDB with minimal overhead by encoding a prefix id for the top k prefixes in ⌈log2(k)⌉ bits within the object identifier, keeping the small mapping from prefix id to string in memory. The Wikidata query service lists 32 prefixes, which would require 5 bits that would fit "for free" in the class byte (essentially considering each prefix to be a class).

Real-world Multiple. Table 4 (middle) and Figure 13 (middle) show the results for this set. Comparing different join execution strategies within MillenniumDB, we can see the superiority of the leapfrog triejoin variant (particularly on average, i.e., for more complex queries). The Selinger variant of MillenniumDB outperforms the greedy algorithm for join selection, but only marginally. Compared with existing graph engines, MillenniumDB clearly outperforms other systems on this query set. Its medians are an order of magnitude faster than those of Blazegraph, the next best contender. The difference is less sharp for averages, but MillenniumDB LF still takes 60% of the time of Virtuoso, the next best contender.

Complex BGPs. This is a benchmark used to test the performance of worst-case optimal joins [24]. Here, 17 different complex join patterns were selected, and 50 different queries were generated for each pattern, resulting in a total of 850 queries. Figure 13 (right) and Table 4 (bottom) show the resulting query times. In this case, the difference between the join algorithms of MillenniumDB is clearer. The worst-case optimal version (MillenniumDB LF) is not only considerably more stable than the other two versions, but also twice as fast in the median. We can also observe that MillenniumDB GR wins out over MillenniumDB SL on average (but not in the median case). When comparing with other engines, the next-best competitor after MillenniumDB LF is Jena LF, showing the benefits of worst-case optimal joins. Virtuoso follows not far behind, while MillenniumDB GR, Jena, Blazegraph, and Neo4j are considerably slower. Overall, MillenniumDB LF offers the best performance for every statistic shown in the plot.
Path Queries
To test the performance of path queries, we extracted 2RPQ expressions from a log of queries that timed out on the Wikidata endpoint [27]. The original log has 2110 queries. After removing queries that do not use direct properties (which are absent from the Wikidata Truthy dataset), we ended up with 1683 queries. These were run in succession, each restricted to return at most 100,000 results. In the case of SPARQL engines, we added the DISTINCT keyword to remove duplicates caused by the rewriting of fixed-length path queries into unions of BGPs that are then evaluated under bag semantics. To make the comparison fair, the DISTINCT keyword was also added in MillenniumDB queries.
Each system was started after cleaning the system cache, and with a timeout of 10 minutes. Since these are originally SPARQL queries, not all of them were supported by Neo4j, given the restricted regular-expression syntax it supports. MillenniumDB and Neo4j were the only systems able to handle timeouts without being restarted. In this comparison we do not include Jena LF, since it uses the same execution strategy as Jena for property paths. Likewise, for MillenniumDB, we introduce two internal baselines for breadth-first search (BFS) and depth-first search (DFS). The experimental results for these path queries are summarized in Table 5 and Figure 14.
In terms of our internal comparison, we can see that the DFS algorithm slightly outperforms BFS. The reason for keeping BFS as the default algorithm is twofold: (i) it significantly outperforms DFS when paths are also returned; and (ii) it supports returning all shortest paths between any pair of nodes. To illustrate point (i), we ran our experiments again, but now also returning a single path witnessing each query answer. In this case, the average for BFS is 5.9 s and the median is 0.086 s; when paths are returned, DFS takes 7.9 s on average, with a median time of 0.1 s.

Figure 14: Boxplots of query times on property paths
Compared with other engines, MillenniumDB is generally the fastest and has the most stable performance. Its average is near one second, i.e., five times faster than the next best contender (Virtuoso). Its median, below 0.1 seconds, is half that of the next one (Jena's). Even after removing the queries that timed out on the other systems, they are considerably slower than MillenniumDB. In particular, if we only consider the queries that ran successfully on Virtuoso (i.e., excluding the 59 queries that timed out or gave an error), we get an average time of 0.85 seconds and a median time of 0.086 seconds for MillenniumDB: less than half the times of Virtuoso with these queries excluded. The boxplots further show the stability of MillenniumDB: the medians of the other engines are above its third quartile. Their third quartiles are 5-10 times higher than MillenniumDB's, and higher than its topmost whisker.
To further test robustness, we also ran all of the queries without limiting the output size on MillenniumDB. In this test, the engine timed out on only 15 queries, each returning between 800 thousand and 44 million results before timing out. When running queries to completion, MillenniumDB BFS averaged 13.4 seconds per query (8 seconds excluding timeouts), with a median of 0.1 seconds (both with and without timeouts).
Wikidata Complete
To show the scalability of MillenniumDB, and to properly leverage its domain graph model, we ran experiments with a full version of Wikidata. We call this dataset Wikidata Complete, and base it on the Wikidata JSON dump version 20201102-all.json, which is preprocessed and mapped to our data model. In Wikidata Complete, we model qualifiers using the edge ids of the domain graph. We use properties to store the language value of each string in Wikidata, and also to model elements of complex data values (e.g., for coordinates we would have objects with properties latitude and longitude, and similarly for amounts, dates/times, limits, etc.). Each object representing a complex data value also has a label specifying its data type (e.g., coord for geographical coordinates). All qualifiers were loaded. The only elements excluded from the full Wikidata dump were sitelinks and references. This full version of Wikidata resulted in a knowledge graph with roughly 300 million objects participating in 4.3 billion edges. The total size on disk of this data was 827GB in MillenniumDB, i.e., more than four times larger than Wikidata Truthy. More details about this dataset can be found in the online material accompanying this paper [42].
We ran the same queries from the previous benchmarks (Single, Multiple, and Complex BGPs, as well as Paths). The number of outputs on the two versions of the data, while not the same, was within the same order of magnitude averaged over all the queries. The results are presented in Table 6. As we can observe, MillenniumDB shows no deterioration in performance when a larger database is considered for similar queries. This is mostly due to the fact that the buffer only loads the necessary pages into main memory, which requires a rather similar effort in both cases. We also note that, again, no queries resulted in a timeout over the larger dataset.
Discussion
Regarding (Q1) - i.e., which join and path algorithms provide the best performance in this setting - we can conclude, for join algorithms, that the worst-case optimal join algorithm consistently outperforms the greedy and Selinger variants, being particularly notable in the case of more complex graph patterns with many joins (wherein Jena LF - also worst-case optimal - was the next best competitor). Worst-case optimal joins use more space for indexing, but provide superior query runtimes. Regarding path algorithms, we see less difference between BFS and DFS: DFS is slightly faster for returning pairs of nodes connected by paths, while BFS is faster for returning paths.
Regarding (Q2) - i.e., how existing graph database systems compare with MillenniumDB - we found that MillenniumDB, when equipped with the best join and path algorithms, consistently outperforms the other competitors on all query sets tested.
Conclusions and looking ahead
This paper presents MillenniumDB, an open-source graph database system with persistent storage implementing the novel (property) domain graph model. Domain graphs adopt the natural idea of adding edge ids to directed labeled edges in order to concisely model higher-arity relations in graphs, as needed in Wikidata, without the need for reserved vocabulary or reification. They can naturally represent popular graph models, such as RDF and property graphs, and allow for combining the features of both models in a novel way. While the idea of using edge ids as a hook for modeling higher-arity relations in graphs is far from new (see, e.g., [21, 25, 26]), it is an idea that is garnering increased attention as a more flexible and concise alternative to reification. Our work proposes a formal data model that incorporates edge ids, a query language that can take advantage of them, and a fully-fledged graph database engine that supports them by design. We also propose to optionally allow (external) annotations on top of the graph structure, thus facilitating better compatibility with property graphs, whereby labels and property-values can be added to graph objects without adding new nodes or edges to the graph itself.
We have also proposed a new query language with a syntax inspired by Cypher, but that additionally enables users to take full advantage of the domain graph model by (optionally) referencing edge ids in their queries, and performing joins on any element of the domain graph.We further combine useful features present in both Cypher and SPARQL, in order to provide additional expressivity, such as returning the shortest path witnessing a result for a path query (as captured by a 2RPQ expression).
In the implementation of MillenniumDB, we combine tried-and-trusted techniques that have been successfully used in relational database pipelines for decades [32] (e.g., B+ trees, buffer managers, etc.) with promising state-of-the-art algorithms for computing worst-case optimal joins (leapfrog [48]) and evaluating path queries (guided by an automaton [28, 7]). Our experiments over Wikidata, considering real-world queries and data at large scale, show that this combination outperforms other persistent graph database engines that are commonly found in practice.
Limitations. Many of the current limitations of MillenniumDB relate to the fact that it is still under development. For example, at the moment, MillenniumDB only supports a bulk load of data, while support for (incremental) updates is currently under investigation and development. Currently only the core features of query languages such as Cypher and SPARQL are supported, and we are working on adding support for other features, including negation, value assignment, functions on datatypes, etc. MillenniumDB also lacks some of the advanced features supported by other graph database systems, such as geographic, temporal, and federated queries, keyword search, etc. Finally, MillenniumDB does not yet support partitioning the graph over multiple machines in order to achieve horizontal scaling. We do not see these limitations as fundamental, but rather as features that can be added to the engine over time.
Future work.
Looking to the future, we foresee extensions such as returning entire graphs, supporting more complex path constraints, returning sets of paths, and a path algebra, just to name a few. Regarding more practical features, we aim to add support for full transactions, keyword search, a graph update language, existing graph query languages, and more besides. More importantly, given that MillenniumDB is published as an open-source engine, we hope that the research community can view the MillenniumDB code base as a sandbox for incorporating their novel algorithms and ideas into a modern graph database, without the need to remake storage, indexing, access methods, or query parsers. Along these lines, we are currently working on adding an in-memory storage option to MillenniumDB using the ring [6]: a data structure based on the Burrows-Wheeler transform that supports worst-case optimal joins (over triples) in space similar to that of representing the graph itself. Initial tests show that the ring can store Wikidata Truthy in 50GB of space and improve median query times by a factor of 3, with average query times remaining similar. We are working on extending the ring to support edge ids and thus work with domain graphs. We also wish to explore the deployment of MillenniumDB for key use-cases; for example, we plan to provide and host an alternative query service for Wikidata, which may help to prioritize the addition of novel features and optimizations as needed in practice.

A DGQL Syntax

Intuitively, the MATCH clause specifies the basic or navigational graph pattern which we will look for in our graph. The WHERE clause is used to filter results based on a selection, usually by restricting the values of some of the attributes of a matched object. The RETURN clause specifies which of the matched variables will be returned. The ORDER BY clause allows us to reorder the results based on the values of some output variables, while LIMIT cuts off the evaluation after a specific number of results have been found.
We define the formal syntax of the DGQL query language in Figure 16. Examples of DGQL queries following this syntax can be found in Section 4.1.
B Formal Definition of Domain Graph Queries
Queries in MillenniumDB are based on the abstract notion of a domain graph query, which generalizes the types of graph patterns used by modern graph query languages [3]. This query abstraction provides modularity in terms of how the database is constructed, flexibility in terms of what concrete query syntax is supported, and allows for defining its semantics and studying its theoretical properties in a clean way.
This section provides the formal definition of the MillenniumDB query language. From now on, assume an infinite set Var of variables disjoint from the set of objects Obj.
B.1 Basic graph patterns
At the core of domain graph queries are basic graph patterns. A basic graph pattern is defined as a pair (V, ϕ) such that ϕ : (Obj ∪ Var) → (Obj ∪ Var) × (Obj ∪ Var) × (Obj ∪ Var) is a partial mapping with a finite domain, and V ⊆ var(ϕ), where var(ϕ) is the set of variables occurring in the domain or in the range of ϕ. Thus, ϕ can be thought of as a domain graph that allows a variable in any position, together with a set V of output variables (hence the restriction that each variable in V occurs in ϕ).
The evaluation of a basic graph pattern returns a set of solution mappings. A solution mapping (or simply mapping) is a partial function µ : Var → Obj. The domain of a mapping µ, denoted by dom(µ), is the set of variables on which µ is defined. Given v ∈ Var and o ∈ Obj, we use µ(v) = o to denote that µ maps variable v to object o. Given a set V′ of variables, the term µ|V′ is used to denote the mapping obtained by restricting µ to the variables in V′. Finally, for the sake of presentation, we assume that µ(o) = o for all o ∈ Obj.
The evaluation of a basic graph pattern B = (V, ϕ) over a domain graph G = (O, γ), denoted by ⟦B⟧_G, is defined as the set of mappings µ|V such that dom(µ) = var(ϕ) and, for every e in the domain of ϕ with ϕ(e) = (x, y, z), it holds that γ(µ(e)) = (µ(x), µ(y), µ(z)). For example, consider the basic graph pattern (V, ϕ) where V = {v2, v4, v6} and ϕ is given by the assignments shown graphically in Figure 17. In Figure 17, we provide a graphical representation of this graph pattern, and the solution mappings obtained by evaluating it over the property domain graph shown in Figure 10. The solution mappings are presented as a table with columns v2, v4, v6 (i.e., the variables in V), and each row represents an individual mapping. In our definitions, different variables may map to the same object in a single solution. Thus, our notion of evaluation follows a homomorphism-based semantics, similar to query languages such as SPARQL [3].

Supporting labels and properties. Observe that the formalization thus far only allows one to access elements of the function γ, and cannot reason about labels or properties. In essence, up to now we have only defined the semantics of queries over domain graphs. We now extend this definition to property domain graphs.
Following our approach of modelling a query similarly to a domain graph, we define a basic graph pattern with properties as a tuple (V, ϕ, Qlab, Qprop), where:

• Qlab is a partial mapping assigning labels to elements of Obj ∪ Var;

• a ∈ dom(Qlab) implies that there are x, y, z, w such that ϕ(x) = (y, z, w) and a ∈ {x, y, z, w};

• Qprop : (Obj ∪ Var) × P → V is a partial mapping;

• (a, k) ∈ dom(Qprop), for some k ∈ P, implies that there are x, y, z, w such that ϕ(x) = (y, z, w) and a ∈ {x, y, z, w}.

A basic graph pattern with properties extends basic graph patterns with a labelling function and a property-checking function. Notice that the domain constraints on Qlab and Qprop serve to make sure that these are associated with some variable or object used in the core pattern. Given a property domain graph G = (O, γ, lab, prop), the semantics of a basic graph pattern with properties BP = (V, ϕ, Qlab, Qprop), denoted ⟦BP⟧_G, extends the evaluation of (V, ϕ) by additionally requiring that solution mappings respect the labels in Qlab and the property-value pairs in Qprop. This extends the evaluation to also support labels and properties with their values, akin to making the query pattern have the same structure as the property domain graph, allowing us to query labels and properties as described in Section 4.
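To illustrate the homomorphism-based semantics over plain domain graphs (this is a naive enumeration for exposition, not how the engine actually evaluates patterns; see Section 5 for that), consider the following sketch, where the graph is represented as a dictionary from edge objects to triples.

from itertools import product

def is_var(t) -> bool:
    return isinstance(t, str) and t.startswith("?")

def evaluate_bgp(V, phi, gamma, objects):
    # A solution is any assignment of objects to variables under which every
    # entry of phi matches an edge of gamma; distinct variables may map to
    # the same object (homomorphism-based semantics).
    variables = sorted({t for e, tr in phi.items() for t in (e, *tr) if is_var(t)})
    for values in product(objects, repeat=len(variables)):
        mu = dict(zip(variables, values))
        subst = lambda t: mu[t] if is_var(t) else t
        if all(gamma.get(subst(e)) == tuple(subst(t) for t in tr)
               for e, tr in phi.items()):
            yield {v: mu[v] for v in V}   # project onto the output variables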
B.2 Navigational graph patterns
A characteristic feature of graph query languages is the ability to match paths of arbitrary length that satisfy certain criteria. We call basic graph patterns enhanced with this feature navigational graph patterns, and we define them next.
A popular way to express criteria that paths should match is through regular expressions over their labels, also known as 2-way regular path queries (2RPQs). More precisely, a 2RPQ expression r is built from the empty expression ε, a type o ∈ Obj and its inverse o⁻, concatenation r1/r2, alternation r1 + r2, and the Kleene star r* (the operators are spelled out below). The semantics of a 2RPQ expression r is defined in terms of its evaluation on a (property) domain graph G, denoted by ⟦r⟧_G, which returns the set of pairs of nodes in the graph that are connected by paths satisfying r. Other 2RPQ expressions widely used in practice can be defined by combining the previous operators; in particular, r? = ε + r and r+ = r/r*. A path pattern is a tuple (a1, r, a2) such that a1, a2 ∈ Obj ∪ Var and r is a 2RPQ expression. As for the case of basic graph patterns, given a path pattern p, we use the term var(p) to denote the set of variables occurring in p. Moreover, the evaluation of p = (a1, r, a2) over a property domain graph G, denoted by ⟦p⟧_G, is defined as:

⟦p⟧_G = {µ | dom(µ) = var(p) and (µ(a1), µ(a2)) ∈ ⟦r⟧_G}.
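For reference, the grammar and semantics can be stated as follows; this is a reconstruction from the standard 2RPQ definitions in the literature, and may differ in presentation from the original display equations.

% Grammar of 2RPQ expressions (o ranges over types in Obj):
%   r ::= \varepsilon \mid o \mid o^{-} \mid r_1/r_2 \mid r_1 + r_2 \mid r^{*}
\begin{align*}
  \llbracket \varepsilon \rrbracket_G &= \{(x,x) \mid x \text{ is a node of } G\}\\
  \llbracket o \rrbracket_G &= \{(x,y) \mid \exists e \in O:\ \gamma(e) = (x, o, y)\}\\
  \llbracket o^{-} \rrbracket_G &= \{(x,y) \mid (y,x) \in \llbracket o \rrbracket_G\}\\
  \llbracket r_1/r_2 \rrbracket_G &= \{(x,z) \mid \exists y:\ (x,y) \in \llbracket r_1 \rrbracket_G \text{ and } (y,z) \in \llbracket r_2 \rrbracket_G\}\\
  \llbracket r_1 + r_2 \rrbracket_G &= \llbracket r_1 \rrbracket_G \cup \llbracket r_2 \rrbracket_G\\
  \llbracket r^{*} \rrbracket_G &= \llbracket \varepsilon \rrbracket_G \cup \textstyle\bigcup_{n \geq 1} \llbracket r^{n} \rrbracket_G,
    \quad \text{where } r^{1} = r \text{ and } r^{n+1} = r/r^{n} \text{ for } n \geq 1
\end{align*}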
For example, the expression (Michelle Bachelet, (replaced by)+, v) is a path pattern that returns all the Presidents of Chile after Michelle Bachelet. Given a set ψ of path patterns, var(ψ) denotes the set of variables occurring in ψ, and the evaluation of ψ over a property domain graph G is defined as the join of the evaluations of its path patterns. A navigational graph pattern is a triple (V, ϕ, ψ) where (V′, ϕ) is a basic graph pattern for some V′ ⊆ V, ψ is a set of path patterns, and V ⊆ var(ϕ) ∪ var(ψ). Hence, the result of a navigational graph pattern N = (V, ϕ, ψ) is a set of mappings µ projected onto the set V of output variables, where µ satisfies the structural restrictions imposed by ϕ and the path constraints imposed by ψ. Notice that multiple 2RPQ expressions can link the same pair of nodes. This is similar to the existential semantics of path queries, as specified in the SPARQL standard [17].
Given a domain graph G = (O, γ), we define paths over the directed labeled graph that forms the range of γ; in other words, we do not allow for matching paths that emanate from an edge object (except when it appears as a node). Such a feature could be considered in the future. We may also consider additional criteria on nodes or edges in the matching paths, etc.
B.3 Relational graph patterns
As previously discussed (and seen in the example of Figure 17), graph patterns return relations (tables) as solutions. Thus we can - and many practical graph query languages do - use a relational-style algebra to transform and/or combine one or more sets of solution mappings into a final result.
Towards defining this algebra, we need the following terminology. Two mappings µ1 and µ2 are compatible, denoted by µ1 ∼ µ2, if µ1(v) = µ2(v) for all variables v which are in both dom(µ1) and dom(µ2). If µ1 ∼ µ2, then we write µ1 ∪ µ2 for the mapping obtained by extending µ1 according to µ2 on all the variables in dom(µ2) \ dom(µ1). Given two sets of mappings Ω1 and Ω2, the join (⋈), anti-join (▷), and left outer join (⟕) between Ω1 and Ω2 are defined as shown in the display below. With this terminology, a relational graph pattern is recursively defined as follows:

• if N is a navigational graph pattern, then N is also a relational graph pattern;

• if R1 and R2 are relational graph patterns, then (R1 AND R2) and (R1 OPT R2) are relational graph patterns.

The evaluation of a relational graph pattern R over a property domain graph G, denoted by ⟦R⟧_G, is recursively defined as follows:

• if R is a navigational graph pattern N, then ⟦R⟧_G = ⟦N⟧_G;

• if R = (R1 AND R2), then ⟦R⟧_G = ⟦R1⟧_G ⋈ ⟦R2⟧_G;

• if R = (R1 OPT R2), then ⟦R⟧_G = ⟦R1⟧_G ⟕ ⟦R2⟧_G.
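The three operators referenced above admit the following standard definitions (a reconstruction following the SPARQL algebra, which the elided display equations presumably matched):

% ⟕ denotes the left outer join over sets of mappings.
\begin{align*}
  \Omega_1 \bowtie \Omega_2 &= \{\mu_1 \cup \mu_2 \mid \mu_1 \in \Omega_1,\ \mu_2 \in \Omega_2,\ \mu_1 \sim \mu_2\}\\
  \Omega_1 \mathbin{\triangleright} \Omega_2 &= \{\mu_1 \in \Omega_1 \mid \text{there is no } \mu_2 \in \Omega_2 \text{ such that } \mu_1 \sim \mu_2\}\\
  \Omega_1 \mathbin{⟕} \Omega_2 &= (\Omega_1 \bowtie \Omega_2) \cup (\Omega_1 \mathbin{\triangleright} \Omega_2)
\end{align*}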
B.4 Selection conditions
In addition to matching a graph pattern against a property domain graph, we would like to filter the solutions by imposing selection conditions over the resulting objects (i.e., nodes and edges). More precisely, a selection condition is defined recursively as follows: (a) v1.k1 = v2.k2 and v1.k1 = v (comparing the properties of matched objects with each other or with a fixed value v) are selection conditions; and (b) if C1, C2 are selection conditions, then (¬C1), (C1 ∧ C2), (C1 ∨ C2) are selection conditions. Given a property domain graph G = (O, γ, lab, prop), a mapping µ, and a selection condition C, we say that µ satisfies C under G, denoted by µ |=G C, following the natural semantics of the property comparisons and of the Boolean connectives.
B.5 Solution modifiers
We consider an initial set of solution modifiers that allow for applying a final transformation on the solutions generated by a graph pattern. These include: RETURN, which defines a set of elements (variables and properties) to be returned; ORDER BY, which orders the solutions according to a sort criterion; and LIMIT, which returns the first n mappings in a sequence of solutions (with n specified in the clause). Notice here that the solution mappings are not defined by the RETURN solution modifier, but rather by the relational graph pattern and the selection conditions.
Let S be the set of strings, v ∈ Var, and k ∈ K. A return mapping is a function τ : S → Obj ∪ V. A return element is either a variable v or an expression v.k. We assume that there is a simple way to transform a return element into a string in S. Given a sequence of return mappings S and an integer n, the function limit(S, n) returns the first n elements of S when n > 0, and returns S otherwise.
An order modifier is a tuple (e, β) where e is a return element and β is either asc or desc. Given a sequence of return mappings S and an order modifier o = (e, β), we say that S satisfies o, denoted S |= o, if: (i) β is asc and S is in ascending order with respect to e; or (ii) β is desc and S is in descending order with respect to e. Moreover, given a sequence of order modifiers O = (o1, . . ., on), we say that S satisfies O, denoted S |= O, if: (i) S |= o1 when n = 1; or (ii) S |= o1 and, for every sub-sequence of return mappings S′ ⊆ S such that τi(e1) = τj(e1) (with o1 = (e1, β1)) for every pair of return mappings τi, τj ∈ S′, it holds that S′ |= (o2, . . ., on).
B.6 Graph Queries
A graph query Q is defined as a tuple (R, C, E, O, n), where R is a relational graph pattern, C is a selection condition, E is a sequence of return elements, O = (o1, . . ., om) is a sequence of order modifiers, and n is a non-negative integer. We assume that R is the only mandatory component. Given a variable v ∈ dom(R), the remaining components have the following defaults: C is v = v, E is v, O is ((v, asc)), and n = 0.
The evaluation of Q over G is defined as limit(S, n), where S = return(Ω, E), S |= O, and Ω = {µ|V | µ ∈ ⟦R⟧_G ∧ µ |=G C}. We assume that every graph query Q = (R, C, E, O, n) satisfies the following two conditions: (i) for every sub-pattern R′ = (R1 OPT R2) of R and for every variable v occurring in R, if v occurs both inside R2 and outside R′, then it also occurs in R1; and (ii) Var(C) ⊆ Var(R). We then say that Q is a well-designed graph query.
We finish this section by noting that the semantics of a declarative query expression is defined as the output of the corresponding graph query (R, C, E, O, n) on an input graph.
Figure 1: Information on presidency of Chile.

Figure 2: Reified triples representing the duration of a presidency.

Figure 3: RDF* triples with another triple as the subject. An RDF* graph is a finite set of RDF* triples.

Figure 4: Property graph representing the information about the presidency of Chile.

Figure 5: Wikidata statement group for Michelle Bachelet.

Figure 6: Directed labeled graph reifying the statements of Figure 5.

Figure 7: Property graph representing statements of Figure 5.

Figure 8: RDF* for one of the statements of Figure 5.

Figure 11: A property domain graph describing venues, papers, universities, authors, locations, and their relations.
Figure 15 shows the general structure of queries in MillenniumDB: MATCH Pattern WHERE Filters RETURN Variables. When evaluated over a property domain graph, such a query will return a multiset of mappings binding Variables to database objects (or values) that satisfy the Pattern specified in the MATCH clause and the Filters specified in the WHERE clause.
EXAMPLE 4.15. In order to find the most recent paper by Donald Knuth, we can use the following DGQL query:

MATCH (?x { name : "D.Knuth" })-[:author]->(?y)
RETURN ?x, ?x.name
ORDER BY DESC ?y.year
LIMIT 1

Table 2: Query features supported by graph query languages (BGP = basic graph patterns, RGP = relational graph patterns, QE = querying edges, RPQ = regular path queries, NGP = navigational graph patterns, FPR = full path recovery). The symbol ∼ indicates partial support of a feature; the table has one column per feature and one row per query language, including DGQL.

Figure 12: MillenniumDB Architecture.
Figure 15: General structure of queries in MillenniumDB.
Figure 17: Graphical representation of a basic graph pattern (left), and the tabular representation of the solution mappings (right) obtained by evaluating the basic graph pattern over the property domain graph shown in Figure 10.
EXAMPLE 4.7. The following query extends that of Example 4.6 by binding paths witnessing each result to a variable ?p; it returns a string representation of a shortest path for each result.

We can also combine different features to capture more complex paths.

EXAMPLE 4.8. The following query looks for publications of staff and students of U.S. institutions, further including the direct citations of these publications [3]. Suppose we wish to find, for example, instances of self-citation of staff at U.S. institutions; we could write this as follows:

Considering that the graph patterns considered previously allow for extracting tables from graphs, a way to enrich a graph query language is to support relational operators over these tables [3]; this gives rise to the notion of relational graph patterns. Thus far we have seen the ability to project results with SELECT, and to apply selections over results with WHERE. We have also spoken about how basic and navigational graph patterns can be interpreted as natural joins in the relational algebra.
Navigational graph patterns. If we further allow path queries within basic graph patterns, we arrive at navigational graph patterns [3].

EXAMPLE 4.13. DGQL also supports optional graph patterns, which behave akin to left outer joins in the relational algebra, i.e., they allow for extending solutions with data that may or may not be available; in case the data of the optional pattern are not available, the solution is still returned and the optional data are left blank.

EXAMPLE 4.14. Assume we want to find the authors who have published articles in the Journal of the ACM, their affiliation in those articles, and, if available, the organization at which they are currently staff. The following DGQL query achieves this:

MATCH (?v)-[?e :author]->(?w), (?w)-[:venue]->(?x { name = "J.ACM" }), (?e)-[:org]->(?y)
Table 3: Wikidata Truthy sizes when loaded into each engine. The base dataset consists of roughly 1.25 billion triples.
Table 4: Summary of runtimes (in seconds) for BGPs.
Table 5: Summary of runtimes (in seconds) for path queries.
Table 6: Average and median runtimes, in seconds, for MillenniumDB on the complete version of Wikidata.
Biodegradable Magnesium Alloys as Promising Materials for Medical Applications (Review)
M.V. Kiselevsky, MD, DSc, Professor, Head of the Laboratory of Cell Immunity1; N.Yu. Anisimova, DSc, Leading Researcher, Laboratory of Cell Immunity1; Researcher, Center of Composite Materials2; B.E. Polotsky, MD, DSc, Professor, Leading Researcher, Thoracic Department1; N.S. Martynenko, PhD, Researcher, Laboratory of Non-Ferrous and Light Metals3; Engineer, Laboratory of Hybrid Nanostructured Materials2; E.A. Lukyanova, PhD, Senior Researcher, Laboratory of Non-Ferrous and Light Metals Science3; Junior Researcher, Laboratory of Hybrid Nanostructured Materials2; S.M. Sitdikova, PhD, Senior Researcher, Laboratory of Cell Immunity1; S.V. Dobatkin, DSc, Head of the Laboratory of Non-Ferrous and Light Metals Science3; Professor, Department of Metallography and Physics of Strength2; Yu.Z. Estrin, Honorary Professorial Fellow4
Introduction
Non-degradable metals (titanium and its alloys, and stainless steel) are widely used for orthopedic implants. The main limitations to their application are associated with their undesirable mechanical properties, resulting in serious problems of bone remodeling [1, 2]. Since these materials do not degrade, repeat operations are required to remove the implant. The release of toxic ions owing to corrosion, and of microparticles due to material wear, can cause inflammatory osteolysis [3-6]. If metallic implants and prostheses are used for a long time, a high concentration of prosthetic metal particles is found in the synovial fluid and the tissue surrounding the implant, which is the result of the continual release of metal particles from the implants under mechanical load [7, 8]. Though non-degradable metal implants are considered to be nontoxic, some of their components can promote the development of neoplasia [9]. Cases of osteogenic sarcomas developing after implantation of metallic endoprostheses have recently been described in the literature [10]. Thus, there is a need to develop biomaterials for a new generation of implants which, while possessing acceptable strength characteristics, would be biodegradable and would not require repeat surgical interventions for extraction.
Recently, there has been growing interest in biodegradable metallic materials. Among them, magnesium and its alloys are being studied intensively and are considered promising candidates for medical applications [11-14]. Magnesium has some advantages over the materials currently used for metal structures, primarily orthopedic implants. It has attracted the attention of researchers owing to its good biocompatibility and mechanical properties that are similar to those of native bone. Characterized by biosafety and a good biocompatibility profile, magnesium is one of the most important microelements in the human body, partaking as a cofactor in more than 300 different enzymatic reactions and playing a significant role in energy metabolism. The main product of degradation of a magnesium alloy is hydrogen, which can also have a favorable effect, as it possesses anti-oxidative activity, being a selective absorber of hydroxyl radicals and peroxynitrite. Ideal implants for bone fixation must have a resorption rate slower than the rate of bone remodeling. Biodegradable magnesium alloys can make it possible to synchronize the change in their strength with the restoration of the bone tissue, whereas the mechanical properties of permanent implants made from titanium or stainless steel remain unchanged during the entire process of bone defect healing. This may cause the phenomenon of stress shielding, manifesting itself as uneven remodeling of the bone tissue: a combination of resorption areas with hypertrophy of the bone tissue. Besides, the bioresorbability of magnesium makes a repeat operation for implant removal unnecessary [15-17].
In clinical practice, local recurrences after implantation of an orthopedic prosthesis following tumor resection in patients with primary or metastatic bone damage are an important unresolved problem. Therefore, the development of implant materials with antitumor activity is also extremely important. As demonstrated by recent publications [18, 19], magnesium alloys can possess antitumor properties alongside good biocompatibility, a suitable combination of mechanical properties, and biodegradability. The antitumor activity of magnesium is associated with its ability to evolve hydrogen during biodegradation, which causes a cytopathogenic effect on tumor cells. Besides, different alloying elements have been shown to increase the cytotoxic properties of magnesium-based alloys [20, 21]. For example, for the Zn-doped Mg-Ca-Sr alloy, it has been established that Zn ions released into the cultivation medium during biocorrosion of the alloy inhibit the proliferation of tumor cells through alteration of the cell cycle and induction of apoptosis. Additionally, the ability of the tumor cells to migrate is reduced under these conditions. These data give grounds to suggest that the Mg-Ca-Sr-Zn alloy may be considered a prospective multifunctional material with antitumor activity. It can be proposed for application in orthopedic implants to compensate for bone defects after tumor resection and to prevent recurrences and metastases of malignant neoplasms. In vitro tests established the cytotoxic activity of the biocorrosion products of an extruded Mg-Nd-Y-Zr alloy against murine osteosarcoma cells: they reduced the viability of the tumor cells within 24-48 h of direct contact with the biocorrosion products on the alloy sample surfaces [22].
Problems with using magnesium alloys
At present, there are a number of unresolved questions connected with the prospects of using magnesium-based alloys. First, pure magnesium and some of its alloys undergo extremely rapid corrosion under physiological conditions, which results in early implant loosening or disintegration before the formation of new bone tissue. Rapid corrosion causes hydrogen to evolve excessively in the implantation area, negatively affecting the adjacent tissues and preventing bone regeneration [23, 24]. Solving this problem is thus vitally important for the development of magnesium-based alloys with improved corrosion resistance in the principal physiological media.
Second, magnesium and its alloys are characterized by non-uniform degradation with the formation of local defects that contribute to the reduction of mechanical strength and may lead to implant fracture before the end of the expected service life. This makes it necessary to continue using conventional hard-alloy devices with a low level of corrosion for the reconstruction of osteochondral defects, as has been done to date.
Thus, despite the great potential of magnesium and its alloys as materials for biodegradable implants, rapid and uncontrolled degradation in a physiological medium, accompanied by hydrogen release, is the main limitation to the application of these materials [39]. In some cases, these limitations can be overcome by a proper selection of the chemical composition of the alloy and its thermomechanical treatment, as has been done, for example, for a new Mg-4Li-1Ca alloy [40-42], but to date there is no general methodology for searching for magnesium alloys with a desired profile of mechanical properties, biocompatibility, and corrosion resistance. However, it is evident that the development of novel magnesium alloys, and the modification of known ones, must be directed not only at the optimal combination of strength and plasticity, but also at their programmable degradation under the conditions of the internal body media.
Modifying the biocorrosion rate of magnesium alloys
Doping and surface coatings are used to modify the corrosion rate of magnesium alloys and improve their biological properties. Calcium, manganese, zinc, and zirconium are the main candidates for doping, since they are not toxic to the human body and can slow down the biodegradation rate. Metals such as aluminium, silver, yttrium, zirconium, and neodymium have also been employed as doping elements to improve the mechanical properties and corrosion behavior of the alloys. The presence of these elements makes it possible to improve the physical and mechanical characteristics of magnesium-based alloys by refining their microstructure and precipitating intermetallic particles.
Calcium is necessary for the normal functioning of a number of important body systems and, in particular, bone tissue; therefore, it is considered a major component for introduction into magnesium-based alloys for biomedical implants. There are also data showing that calcium can display anticarcinogenic properties [17].

Manganese is added to many magnesium-based alloys to improve corrosion resistance and reduce the detrimental effect of impurities. Zirconium-containing magnesium alloys possess improved mechanical properties; besides, zirconium decreases the rate of alloy degradation. Investigations have shown that Mg-Ca, Mg-Zn, and Mg-Mn-Zn alloys have good biocompatibility in vitro and in vivo and possess increased corrosion resistance, dissolving gradually in the bone tissue [43-46]. Mg-Al, AZ91, and AZ31 alloys are of great interest due to their commercial availability and good mechanical properties. However, their use appears to pose a danger of aluminium penetration into the body, promoting the development of dementia and Alzheimer's disease [47].
In recent years, quite a number of novel magnesium alloys have been developed and tested in orthopedic and cardiovascular models [48]. A certain progress has been achieved in doping magnesium with rare-earth (RE) elements to reduce material corrosion in physiological media [49]. New variants of magnesium alloys, e.g. Mg-Nd-Zn, have emerged and are promoted as biomagnesium alloys. In this series of alloys, neodymium was chosen as the main alloying element in combination with zinc and zirconium. Neodymium is a RE element with low toxicity, and its addition can significantly reduce electrochemical implant corrosion [50,51]. It should be noted, however, that its long-term effects have not been studied sufficiently.
Investigations of RE elements in vitro showed that dysprosium (Dy) and gadolinium (Gd) possess high cytotoxic activity, which, in the authors' opinion, requires close attention to the choice of RE elements for doping magnesium alloys [52,53]. Therefore, to avoid problems related to potential toxicity in cases where high cytotoxicity is not intentional (see above), it is recommended to use doping elements that have already demonstrated good biocompatibility.
Good biocompatibility has been established for Mg-Ca-Zn (MCZ), Mg-Sr (MS), and Mg-Ca-Zn-Sr (MCZS) alloys. The introduction of these elements into the alloy composition is suggested by their biological activity: zinc can promote more rapid bone generation through the production of alkaline phosphatase and collagen, while calcium ions facilitate the proliferation and differentiation of osteoblasts in vitro. Strontium is also recognized as an osteogenic factor and can induce differentiation of mesenchymal stem cells into osteoblasts. Ideally, the inclusion of calcium, zinc, and strontium can additionally reinforce the bone-forming response to a magnesium alloy implant. Apart from improving biological properties, the alloying elements can also increase the mechanical strength of the material; for example, magnesium alloyed with strontium and zinc, or with calcium and zinc, showed better mechanical characteristics than pure magnesium. It should be borne in mind, however, that adding alloying elements to improve osteogenic properties and mechanical strength can accelerate the corrosion rate of a magnesium-based material.
One of the results of alloying magnesium is grain refinement, which can influence the corrosion rate of the alloy: a finer grain structure can decelerate corrosion by preventing its spread over the material surface [54]. At the same time, the secondary phases formed in magnesium alloys are usually electropositive relative to the magnesium matrix, thus promoting the cathodic reduction reaction. The less corrosion-resistant magnesium matrix and the more corrosion-resistant particles create multiple microgalvanic couples, enhancing microgalvanic corrosion [55]. Microgalvanic corrosion is likely to be an important factor for all alloys of interest, as it is observed in the majority of magnesium alloys [56].
Recently, it has been shown that thermomechanical treatment in the form of severe plastic deformation (SPD) efficiently refines the grain structure, down to the nanoscale [57]. In the work [58], WE43 alloy of the Mg-Y-Nd-Zr system underwent SPD by equal channel angular pressing, multi-axial deformation, and rotary swaging, with a resultant grain size below 1 μm. SPD increased the strength of the WE43 alloy by 40%. Grain refinement also had a positive influence on the alloy's biocompatibility in vitro: induced hemolysis and cytotoxicity were reduced, the proliferative capacity of cells increased, and the degradation rate slowed [58].
Surface modification of magnesium alloys by the deposition of various coatings [59] (for example, hydroxyapatite, chitosan, ceramics, and β-tricalcium phosphate) is effective in decelerating the degradation of magnesium-based biomaterials and diminishing hydrogen evolution. A cellulose acetate coating has been suggested to protect Mg-Ca-Mn-Zr alloy against corrosion [57]. This coating is stable in physiological media and facilitates the adhesion and proliferation of osteoblasts. Cellulose, a polymer of D-glucopyranose units, is the most abundant organic compound; it possesses good mechanical strength, biocompatibility, hydrophilicity, high sorption capacity, and relatively good thermal stability. Cellulose-coated implants reduce the intensity of fibrosis and facilitate bone regeneration [60].
Microstructure is known to be a key factor in the corrosion behavior of magnesium and its alloys, and it also determines the mechanical characteristics of the material. A correlation between the strength and biocorrosion characteristics of magnesium alloys caused by microstructural effects has been demonstrated in a number of works [61][62][63]. Classical methods of alloy strengthening are based on the addition of alloying elements: the strength of magnesium-based alloys increases significantly with the formation of second-phase particles, so high-strength magnesium alloys usually contain a certain number of strengthening intermetallic particles. This process can concurrently improve the strength and plasticity of the alloys and enhance their corrosion resistance as well [64].
The alloy microstructure depends on the production method and on alloying with other elements. For example, the microstructure and mechanical properties of an alloy are determined by the presence of calcium and by the method of material production. At a low calcium concentration (below 16.2%), a Mg-Ca alloy possesses a crystal structure similar to that of pure magnesium [65]. The addition of calcium increases the corrosion resistance and reduces the grain size. As the calcium content increases, the grain size diminishes further and, at the same time, more particles of the eutectic Mg2Ca phase are observed at the grain boundaries [66][67][68][69][70].
The method of alloy fabrication is also of great importance for mechanical properties and corrosion resistance. Thus, the authors of [71][72][73] developed extruded Mg-Mn-Zn-Nd alloys. The experimental results showed that all of them had good ductility and significantly higher mechanical strength than alloys fabricated by casting, and the tensile strength of the extruded alloys increased with increasing neodymium content. These alloys also exhibited good biocompatibility and much higher corrosion resistance than cast alloys.
One of the promising approaches to controlling magnesium alloy corrosion in biological media is surface treatment [74]. The implant surface area is also of great importance: it is believed that if the surface area of a magnesium implant is less than 9 cm2, the dissolved Mg2+ ions will be easily consumed by the human body. However, the rapid formation of hydrogen and hydroxide during corrosion may pose serious problems for patients.
The corrosion rate also depends on the geometry, composition, and location of the implant. The application of monocrystalline magnesium [75] and new technologies for coating surfaces with polymers may prove to be promising directions [76], providing additional means to tune degradation and the gradual replacement of the implanted device with new tissue.
Intensive research into approaches to controlling the corrosion rate of magnesium alloys, including the introduction of alloying elements, protective coatings, and mechanical treatment, has been going on for a number of years. Despite these strategies, improved control of the corrosion rate of magnesium alloys has been demonstrated only in experiments in vitro [77][78][79][80]. At the same time, animal experiments often show an insufficient reduction of the biodegradation rate of these alloys. For example, a biodegradable Mg-Ca-Zn alloy was tested in rabbits with a screw implanted into the bone for 24 weeks [81]. Histological and micro-CT analyses showed the formation of bone tissue with weak gas evolution and an absence of foreign bodies around the slowly degrading specimen. On the basis of these rather limited data, the authors suggested that if the chemical composition of a magnesium alloy is selected correctly, its microstructure may be designed so as to make the mechanical properties of the alloy similar to those of spongy bone. But even this optimistic assessment did not allow the researchers to consider the tested alloy specimens suitable for devices with a load-bearing function. Farraro et al. [82] investigated the possibility of using magnesium-based alloys for functional tissue engineering. They used AZ31 alloy for fixation of tissue autografts during reconstruction of the anterior cruciate ligament in test animals. The experimental results showed that a fixation device based on the magnesium alloy promoted restoration of ligament function, providing mechanical integrity at the early stages and minimizing atrophy of the implanted fragments. Gradual resorption of the elements of a magnesium-based fixation device can make it possible to achieve reconfiguration and reinforcement of a ligament bioimplant.
In a preclinical study, nails made from magnesium alloys with different calcium concentrations were tested after intraosseous implantation in rabbits. Three months of observation showed that the implanted nails, judging from their reduced diameters, were gradually degrading. In addition, new bone was forming around the Mg-Ca alloy, whereas there was no visible bone growth around titanium nails. This demonstrates preferential integration of Mg-Ca nails with the bone and osteogenesis in the peri-implant zone.
Thomann et al. [83] investigated the effect of doping magnesium with elements such as calcium, aluminium, and RE elements on the corrosion process. It was found that after implantation of the alloy into the tibial marrow cavity of white rabbits for a period of 12 months, the alloy provided strong integration of implant and bone, followed by gradual implant degradation of 11, 31, and 51% after 3, 6, and 12 months, respectively. Magnesium alloys containing zinc and manganese showed satisfactory mechanical properties. However, these alloys degraded relatively quickly: during 9-week implantation, bioresorption was 10-17%, and 18 weeks later it had grown to 54%. In 2001-2005, Witte et al. [84] investigated the in vivo decomposition of magnesium alloys with aluminium, zinc, and RE elements (neodymium, cerium, lanthanum, and others). The study showed alloy degradation 18 weeks after the operation, with a significant increase in bone formation in comparison with the control group (a polylactide nail). RE elements were detected in the corrosion layer of amorphous Ca3(PO4)2, but not in the surrounding bone tissue.
In recent years, various magnesium alloys developed to optimize degradation, mechanical properties, and biological response have been studied. Trincă et al. [85] proposed a magnesium-based alloy with additions of 0.4% calcium and 0.5% silicon. Histological examination showed intensive and active bone formation 2 weeks after implantation; X-ray and computed tomography detected the experimentally created defect in the tibia and revealed the main stages of bone tissue regeneration proceeding concurrently with biodegradation of the implant specimen. Wang et al. [86] implanted cylinders of Mg-Zn-Zr alloy into the tibiae of white rabbits. After 23 weeks, the implants were found to have undergone partial biodegradation, and the density of the surrounding spongy bone had increased. Micro-CT confirmed that new bone tissue formed on the surface of the remaining implant after 12 to 24 weeks, along with multiple gas-filled cavities. The gas generated during Mg-Zn-Zr alloy degradation caused cavitation of the spongy bone but did not affect osteogenesis around the magnesium alloy. Study of Mg-Sr alloy showed that, owing to intercrystalline distribution of the second phase and microgalvanic corrosion, the cast Mg-Sr alloy decomposed more rapidly than the extruded one; other authors [87] verified that this alloy facilitated bone restoration during in vivo implantation. Lambotte [88] was the first to apply a magnesium alloy in orthopedics, in 1906, when he used magnesium fixation elements for osteosynthesis. After the operation, extensive subcutaneous gas cavities formed, and on day 8 fragments of the destroyed magnesium plate were removed. The biodegradation was most likely intensified by electrochemical effects caused by the steel screws used to fix the magnesium plate. Though this first attempt was unsuccessful, Lambotte proceeded with animal experiments and found that complete magnesium resorption could occur 7-10 months after implantation. Later, in the 1930s, clinical use of pure magnesium without steel screws in children with bone fractures proved more successful [89].
Clinical investigations of magnesium bioimplants
To date, only isolated pilot clinical trials of magnesium alloys have been described. They demonstrate bone regeneration occurring concurrently with continual implant degradation and the emergence of a biomimetic matrix for calcification at the degradation front, which initiates bone formation. Bone formation on the surface of the magnesium alloy decelerates degradation of the implant, which is completely replaced by new bone after 1 year [85]. Thus, the degradability of a biodegradable magnesium alloy may contribute to new bone formation and the replacement of the decomposing fixation device with bone tissue. The clinicians who conducted this study regarded the biodegradability of magnesium-based devices as an important factor that makes it possible to avoid repeated surgical procedures and that changes the existing technology of fabricating fixation elements for bones [90].
A small-scale, short-term pilot clinical study [91] showed that a biodegradable magnesium-based screw was roentgenographically and clinically equivalent to conventional titanium screws. The authors did not observe any foreign-body reaction, osteolysis, or systemic inflammatory reaction. However, the limited observation period (6 months) means that prospective randomized studies with longer follow-up are needed to validate these findings.
Taken together, the described investigations confirm the good potential of magnesium-based alloys for biocompatible, bioactive, and biodegradable scaffolds in bone tissue engineering. Further optimization of the technology for magnesium alloy fabrication may become a promising direction for the creation of bioengineered constructs [92].
Biodegradable magnesium stents
As previously noted, magnesium-based alloys are considered prospective materials for biodegradable coronary arterial stents. Despite the wide and successful application of metal and polymer stents, problems still arise: inflammation with long-term use and the need for repeated surgical intervention. Polymer stents are unable to provide sufficient mechanical strength for the period required to restore the initial elasticity of a native blood vessel [93]. The state of the art in metal and polymer stents calls for new approaches to improve the treatment of stenotic and damaged coronary arteries. An ideal solution would be a biodegradable stent that, after fulfilling its function and providing the necessary support for restoration of the injured artery, would undergo bioresorption.
There are two main candidates for biodegradable metal stents: alloys based on iron and on magnesium. A biodegradable iron-based stent (Fe > 99.8%) was tested in rabbits. The results showed that implantation of this device into the aorta did not cause any signs of inflammatory response, neointimal proliferation, or toxicity. However, these stents did not degrade over a long observation period [94]. A more rapid degradation rate is therefore required for iron stents, demanding further work on stent composition and geometric design.
Biodegradable magnesium-based alloys have been used as an alternative to iron-based stents. The toxicity of doping elements, which can be significant in the abovementioned magnesium-alloy bone implants, seems less serious for coronary and vascular stents because of their small dimensions. However, a clinical study of magnesium stents [95] showed extremely rapid degradation (less than a month after implantation), followed by vessel restenosis and loss of mechanical properties.
A successful application of coronary stents based on a biodegradable magnesium alloy was demonstrated in another clinical study [96], and further improvements of magnesium stents were presented in works by the same group of investigators [97,98]. Long-term clinical trials of stents based on biodegradable magnesium alloys are described in [99][100][101].
Undoubtedly, in the near future the efforts of the many researchers worldwide working on bioresorbable magnesium stents will result in serious progress in this field.
Conclusion
Analysis of the literature shows that, despite the great potential of biodegradable magnesium alloys, a number of problems prevent their clinical application. First, pure magnesium and some of its alloys undergo excessively rapid corrosion under physiological conditions, which leads to early implant loosening or disintegration before bone tissue remodeling, while the rapid evolution of gaseous hydrogen may harm the adjacent tissues. Second, magnesium and its alloys are characterized by local, non-uniform degradation, resulting in reduced mechanical strength of the implant.
It follows that the development of new magnesium alloys with controllable biodegradation is of great importance for various branches of clinical medicine. In addition to orthopedics and cardiovascular surgery, where the applicability of bioresorbable magnesium alloys has been actively investigated, the use of these alloys in oncology is also believed to be promising. It is in oncology that the cytotoxicity of alloying elements, commonly regarded as a negative factor restricting their use in biomedical implants, may become an advantage, imparting both improved mechanical strength and therapeutic antitumor properties to the implant. Implants made of porous magnesium materials impregnated with antitumor preparations can elute drugs at a controllable rate during biodegradation, preventing tumor recurrence in patients with osteogenic sarcoma after resection of the malignant neoplasm.
Study funding. The work was financially supported by the Russian Science Foundation (grant 18-45-06010).
Conflicts of interest. The authors have no conflicts of interest to declare.
A severe nervous disease in fancy pigeons caused by paramyxovirus-1 in Saudi Arabia
Between February and March 1992, a severe disease outbreak occurred in pigeons at Dirab (24°25'N, 46°36'E), in the central region of Saudi Arabia. The flock totalled one thousand birds. The morbidity rate was 60%, while the case fatality rate was 40%. The clinical signs resembled the neurotropic form of Newcastle disease (ND) in chickens. The birds were listless, unable to fly, and had ruffled feathers; they showed incoordination, anorexia, and torticollis, and greenish diarrhoea was also seen. The course of the disease took five to ten days, during which the birds either died or gradually recovered with torticollis as a sequela (photo 1).
Introduction
Pigeons (Columba livia) in Saudi Arabia are mainly kept as fancy birds. For this purpose, various breeds are imported from abroad and reared under good conditions.
In spite of reports on poultry viral diseases in Saudi Arabia (1), pigeons had always been free from such infections. However, SHALABY et al., 1985 (10), reported the presence of herpes virus infection in pigeons in the Eastern Province.
Gross and histopathology
Pigeons showing symptoms were sacrificed and post-mortem examination was carried out. Brains, livers, spleens, respiratory organs, and whole blood in EDTA were collected for virus isolation assays.
Samples from the brain, lungs, liver, kidneys, intestine, spleen, and skeletal muscle were also collected in 10% formol-saline. Paraffin sections were prepared, stained with haematoxylin and eosin (HE), and examined for histopathological changes.
Virus isolation assays
Tissue samples for virological investigation were prepared as 50% (w/v) suspensions in phosphate-buffered saline (PBS), pH 7.4, and centrifuged at 750 rpm for ten minutes. The supernatant was collected, antibiotics were added, and the material was used to inoculate 9-day-old specific-pathogen-free (SPF) embryonating chicken eggs via the allantoic cavity (7). Inoculated eggs were incubated at 37 °C and candled daily. Eggs dying within the first 24 h were discarded. Subsequently, dying eggs were collected and kept for 3 h at 4 °C before the allantoic fluid was harvested.
Haemagglutination test (HA)
The haemagglutination test (HA) was performed on the original material and on the allantoic fluids from inoculated eggs using microtitre plates according to HANSON (8).
Haemagglutination inhibition test (HI)
The haemagglutination inhibition (HI) test was performed by the beta-method (8), employing two conventional Newcastle disease antisera (classical avian paramyxovirus serotype 1).
Agar gel immunodiffusion test (AGID)
Fifty percent homogenates of brain and other visceral organs collected from ailing pigeons, and allantoic fluid from passage one, were reacted against the ND antiserum in agar gel immunodiffusion (AGID) tests, employing a known ND virus as a positive control antigen (8). Brain from a healthy pigeon was used as a negative control antigen. Non-immune pigeon serum was also used as a control.
Thermostability
A thermostability test on the virus was carried out at 56 °C for various times, as described by HANSON (8).
Agglutination of mammalian RBCs
The ability of the virus to agglutinate mammalian erythrocytes was examined using sheep, goat, cattle, and horse red blood cells (RBCs), as well as pigeon RBCs, as described by HANSON (8).
Experimental infection of pigeons
Fifteen indigenous seronegative pigeons were used. Five were kept as uninoculated controls in a separate room.
Five were each inoculated intramuscularly with 10^8 50% egg infectious doses (EID50) of the virus isolated from the naturally infected pigeons. The remaining five pigeons were inoculated intravenously with a similar dose of the virus. Each group was kept separately in a cage and provided with feed and water ad lib. The birds were kept under daily observation. Blood for serum was collected every two days post-inoculation to test for seroconversion.
To detect excreted virus, cloacal swabs were taken every two days post-inoculation.
Post-mortem examination was performed on dead and sacrificed pigeons, and brains, blood, and tissue samples from visceral organs were collected for virus reisolation.
Virus isolation and identification
The embryonating eggs inoculated with the original material died within 3-5 days post-inoculation. The highest HA activity was detected in allantoic fluids from eggs inoculated with the brain material (1/1024), followed by the liver (1/512), blood (1/128), spleen (1/32), and respiratory organs (1/8). The same pattern was seen in the original material, with the brain giving the highest HA end-point titres, followed by the liver, spleen, and respiratory organs. The HA activity of the isolated virus was inhibited by the ND sera (avian paramyxovirus 1).
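Since HA end-point titres come from two-fold serial dilutions, differences between organs are easiest to read on a log2 scale. A minimal Python sketch using the titres above (the organ labels are paraphrased from the text):

```python
import math

# Reciprocal HA end-point titres for passage-one allantoic fluids (from the text).
titres = {"brain": 1024, "liver": 512, "blood": 128,
          "spleen": 32, "respiratory organs": 8}

for organ, t in titres.items():
    # In a two-fold dilution series, a titre of 1/t corresponds to log2(t) steps.
    print(f"{organ:>18}: 1/{t} = {int(math.log2(t))} two-fold dilution steps")
```

On this scale the brain material exceeds the respiratory organs by seven dilution steps, i.e. a 128-fold difference in HA activity.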
The AGID
A complete precipitation line of identity was produced between the known ND antiserum and the virus contained in the brain and liver homogenates and in the allantoic fluid of passage one. This line merged completely with the line produced between the known ND virus and the ND antiserum. No lines were seen between the brain from healthy pigeons and the positive serum, and no reaction was seen between the negative serum and the brains or livers from sick birds.
Thermostability
The HA activity of the virus was completely lost after heating for 30 min at 56 °C.
Agglutination of mammalian RBCs
The virus agglutinated pigeon and chicken RBCs best. Low HA activity was seen with sheep and goat RBCs, but no HA activity was seen with horse or bovine RBCs.
Gross and histopathological findings
The post-mortem picture showed congestion of the brain and fatty changes in the liver. The kidneys were oedematous, and splenomegaly was evident. There was massive myocardial necrosis, the lungs looked congested, and there was serous air-sacculitis. The intestinal serosa and mucosa showed some haemorrhagic spots.
Histopathological changes were seen in the brain, lungs, liver, kidneys, intestines, and skeletal muscles. Perivascular cuffs were seen in the cerebral cortex with diffuse (occasionally focal) gliosis, and neuronal degeneration was also observed. Similar degenerative changes were seen in Purkinje cells in the cerebellum, associated with vacuolations in the white matter. Mild to severe changes were seen in the liver, characterized by hepatocyte swelling, degeneration, sinusoidal dilatation, and infiltration of mononuclear cells. In the heart, the blood vessels were congested, with mild myocardial degeneration associated with slight to moderate proliferation of interstitial cells. Changes in the kidneys were mainly congestion and haemorrhages accompanied by tubular degeneration and infiltration of the interstitium with mononuclear cells.
Reproduction of the disease in experimental pigeons
Two pigeons of the I/V group started showing symptoms by day three; both died on day seven. The remaining three showed symptoms between days four and six, and all died by day eight. The five pigeons inoculated I/M showed symptoms between days three and four, and all died by day eight. The symptoms were nervous signs and loose droppings. Virus was isolated from all inoculated pigeons from day three. Low-level HI titres were detected in 30% of the inoculated pigeons.
Discussion
The clinical signs, gross post-mortem lesions, and histopathological picture of the examined pigeons, together with the virus isolation and identification and the reproduction of the disease in pigeons, were highly suggestive of avian paramyxovirus-1 infection (3, 4).
Paramyxovirus-1 infections have caused great losses in pigeons in continental Europe, Great Britain (2), and the Sudan (4), and the disease was reported in Egypt during the last few years (9).
The present PPMV-1 infection in Saudi Arabia showed a high degree of host specificity for pigeons: no other avian species were reported to be infected during the natural outbreak of the disease, as was also the case elsewhere (4, 5). Such unique host specificity for a paramyxovirus-1 is rather unusual (3). Indeed, experimental infection of chickens through the natural routes with virulent PPMV-1 failed to produce overt clinical signs (3, 6).
Conclusion
It is rather difficult to speculate on the threat of this virus to domestic poultry. However, it might become adapted to chickens through natural passage; this should be borne in mind when handling pigeon outbreaks due to this virus in Saudi Arabia. Further classification studies of the virus are underway.
The Effect of Tour Guides and Promotion Strategy on Tourist Interest in Visiting Bali City
This study aims to determine the effect of tour guides and promotion strategy on the interest of tourists visiting Bali. The method used is explanatory research, with analysis using regression, correlation, determination, and hypothesis testing. The results show that the tour guide has a significant effect on tourist interest, contributing 43.2%; hypothesis testing obtained t count > t table, or (8.452 > 1.986). The promotion strategy has a significant effect on tourist interest, contributing 43.9%; hypothesis testing obtained t count > t table, or (8.576 > 1.986). Tour guides and promotion strategy simultaneously have a significant effect on tourist interest, with the regression equation Y = 9.311 + 0.372X1 + 0.406X2. The contribution of the effect is 53.7%; the hypothesis test obtained F count > F table, or (53.950 > 2.700).
INTRODUCTION
Tourism in Indonesia is developing rapidly. Various regions compete to showcase their regional advantages, and tourist attractions generally embody values typical of an area (Houge Mackenzie et al., 2020; Huerta-Álvarez et al., 2020; Husain et al., 2018a, 2018b). There are many kinds of tourist attractions, each with its own uniqueness. According to Pan et al. (2020), a tourist attraction is something that becomes the center of tourist attention and can provide satisfaction to tourists. Given the existence of a tourism object, the main problem that arises is how to make use of tourism in a way that stimulates the development of the environment and society while preventing adverse effects.
The meaning of the word tourism has not been widely discussed by Indonesian language and tourism experts (Gao et al., 2012; Grissemann & Stokburger-Sauer, 2012; Papalapu et al., 2016; Said et al., 2017). The Indonesian word pariwisata comes from two roots: pari, meaning many, repeated, or circling, and wisata, meaning travel or journey. Pariwisata thus means travel undertaken many times or in a circuit, and it is the Indonesian equivalent of the English term tourism. According to Law Number 10 of 2009, tourism is the variety of tourist activities supported by the facilities and services provided by the community, businesses, the government, and local governments.
Tourism today is highly competitive. Each Tourist Destination Area (DTW) tries to attract more tourists than other DTWs; to win this competition, promotion alone is not enough. What matters is providing good service, that is, service that satisfies tourists visiting the DTW (Malenkina & Ivanov, 2018). Excellent service can only be provided by professional tour guides, those who are always oriented toward tourist satisfaction. To become a professional tour guide, one must not only have experience but also theoretical and technical ability in serving tourists. In addition, a tour guide must have knowledge supported by the ability and self-confidence to handle both routine and changing tasks (Houge Mackenzie et al., 2020).
In everyday life, people are familiar with the term guide: everyone who accompanies tourists, visits tourism objects, or attends shows with them is commonly thought of as a guide. According to Yoeti (2010), a tour guide is a person in charge of providing guidance and information about attractions or destinations. A tour guide must be able to give pleasure or satisfaction in everything he or she presents. Therefore, to understand the desires and tastes of tourists, a tour guide should combine knowledge, skills, and feeling in order to achieve the pleasure desired by the tourists in his or her care.
Tourism will not develop if people are reluctant to visit because they lack information about it; various forms of tourism promotion are therefore needed. Promotion is an attempt to increase the attractiveness of a tourist attraction to potential tourists: rather than handling individual tourists and their needs, tourism products are tailored to tourist demand (Gao et al., 2012; Grissemann & Stokburger-Sauer, 2012; Muhtasom et al., 2019; Pan et al., 2020; Said et al., 2017). Promotion mainly involves distributing promotional materials, such as films, slides, advertisements, brochures, booklets, leaflets, and folders, through channels such as TV, radio, magazines, cinemas, and direct mail. It targets both potential tourists, people who meet the minimum requirements for travel because they have money and are physically fit but are not yet traveling, and actual tourists, people already traveling to a certain destination, with the aim of transferring information and influencing potential tourists to visit a tourist destination.
Tourists are an inseparable part of the world of tourism, and they are very diverse, each with different desires and expectations. The Indonesian word for tourist, wisatawan, derives from wisata, which comes from Sanskrit and means journey, equivalent to the English word travel. A wisatawan is therefore equivalent to a traveler, because in Indonesian the suffix -wan is customarily used to denote a person by profession, expertise, or position (Said et al., 2017).
The city of Bali is one of the biggest cities in Indonesia.
METHOD
The population is the set of objects determined by certain criteria to be categorized as the object of study, that is, the generalization area consisting of objects or subjects that have the qualities and characteristics set by the researcher, from which conclusions are then drawn. The population in this study amounted to 96 respondents. According to Creswell & Clark (2017), "the sample is part or representative of the population under study." The sampling technique in this research is a saturated sample, in which all members of the population are sampled; thus, the sample in this study also amounted to 96 respondents. The type of research used is associative, the aim of which is to find out the relationship between variables. In analyzing the data, instrument tests, classical assumption tests, regression, the coefficient of determination, and hypothesis testing were used.
Test Instruments
Validity and reliability tests were used. The validity test determines the accuracy of the data, that is, the correspondence between what is measured and the measurement results. According to Sugiyono (2016), "valid means that there is similarity between the collected data and the real data." Ghozali (2013) states that "a questionnaire is said to be valid if the questions on the questionnaire are able to reveal something that the questionnaire is intended to measure." To test validity, the 2-tailed significance value is compared with 0.05: 1) if the 2-tailed significance value is < 0.05, the instrument is valid; 2) if the 2-tailed significance value is > 0.05, the instrument is invalid. From the test results, each statement item for all variables obtained a 2-tailed significance value of 0.000 < 0.05; thus the instrument is valid. The next test is the reliability test. The reliability analysis model used in this study is Cronbach's Alpha. According to Ghozali (2013), "reliability is a tool for testing the consistency of respondents' answers to the questions in the questionnaire; a questionnaire is said to be reliable if a person's answers to the questions are consistent or stable over time." The measurement uses Cronbach's Alpha: 1) if Cronbach's Alpha > 0.60, the instrument is declared reliable; 2) if Cronbach's Alpha < 0.60, it is declared unreliable (a computational sketch is given below). The test results are presented in Table 1. The classical assumption test determines the accuracy of the data. According to Singgih Santoso (2011), "a regression model will be used to make forecasts; a good model is a model with minimal forecast error." Therefore, before a model is used it should satisfy several assumptions, commonly called the classical assumptions. In this study, the classical assumption tests used were the Normality Test, Multicollinearity Test, Autocorrelation Test, and Heteroscedasticity Test. The results are as follows.
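Before turning to those results, here is a minimal sketch of the Cronbach's alpha computation referenced above (the item data are hypothetical; the paper's raw questionnaire responses are not reproduced):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 96 respondents answering 10 Likert items scored 1-5.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(96, 10)).astype(float)
print(cronbach_alpha(scores))  # reliable by the paper's criterion if > 0.60
```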
Normality test
The normality test examines whether the dependent and independent variables in the regression model are normally distributed. The results of the normality test using the Kolmogorov-Smirnov test (with Lilliefors significance correction) are shown in Table 2. Based on the test results, the significance value is 0.160, which is greater than α = 0.05 (0.160 > 0.05). Thus the distribution of the equation in this test is normal.
Multicollinearity Test
Multicollinearity testing ensures that the independent variables do not exhibit multicollinearity, i.e., that there is no correlation effect between the variables specified as the model in the study. The multicollinearity test is carried out by examining the Tolerance value and the Variance Inflation Factor (VIF). The test results are presented in Table 3. Based on the results, the tolerance value of each independent variable is 0.614 < 1.0 and the Variance Inflation Factor (VIF) is 1.629 < 10, so the regression model does not suffer from multicollinearity.
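A quick consistency check on the reported values (VIF is defined as the reciprocal of tolerance):

```python
tolerance = 0.614               # reported tolerance of each independent variable
print(round(1 / tolerance, 3))  # ~1.629, matching the reported VIF
```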
Autocorrelation Test
Autocorrelation testing determines whether there are correlation deviations between sample members. The test was carried out with the Durbin-Watson test (DW test). The results in Table 4 show that the Durbin-Watson value of 1.992 lies between 1.550 and 2.460, so the regression model shows no autocorrelation.
Heteroscedasticity Test
Heteroscedasticity testing examines whether the residual variance in a regression model is unequal. The test results are presented in Table 5. The results using the Glejser test obtained Sig. > 0.05; thus the regression model does not suffer from heteroscedasticity.
Descriptive Analysis
In this test, the minimum and maximum scores, mean score, and standard deviation of each variable are determined; the results are shown in Table 6. The tour guide variable obtained a minimum of 32 and a maximum of 48, with a mean score of 38.43 and a standard deviation of 3.849. The promotion strategy variable obtained a minimum of 30 and a maximum of 45, with a mean score of 38.40 and a standard deviation of 3.661. Tourist interest obtained a minimum of 32 and a maximum of 46, with a mean score of 39.19 and a standard deviation of 3.585.
Verificative analysis
This analysis aims to determine the effect of the independent variables on the dependent variable.
Multiple Linear Regression Analysis
This regression test determines how the dependent variable changes when the independent variables change. The test results are presented in Table 7. Based on the results, the regression equation Y = 9.311 + 0.372X1 + 0.406X2 is obtained. From this equation: 1) the constant of 9.311 means that if the tour guide and promotion strategy variables were absent, tourist interest would already have a value of 9.311 points.
2) The regression coefficient of the tour guide variable is 0.372; this positive value means that each one-point increase in the tour guide variable increases tourist interest by 0.372 points.
3) The regression coefficient of the promotion strategy variable is 0.406; this positive value means that each one-point increase in the promotion strategy variable increases tourist interest by 0.406 points (see the sketch below).
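A minimal Python sketch evaluating the fitted equation (the inputs are the variable means from the descriptive analysis; since an OLS fit passes through the means, the prediction should land near the mean interest score):

```python
def predict_interest(x1: float, x2: float) -> float:
    """Fitted model: tourist interest from tour guide (x1) and promotion (x2) scores."""
    return 9.311 + 0.372 * x1 + 0.406 * x2

# Mean scores from the descriptive analysis: guide 38.43, promotion 38.40.
print(round(predict_interest(38.43, 38.40), 2))  # ~39.2, near the mean interest of 39.19
```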
Correlation Coefficient Analysis
Correlation coefficient analysis determines the strength of the relationship between the independent variables and the dependent variable, both partially and simultaneously. The test results are presented in Table 8. Based on the results, a correlation value of 0.657 means that the tour guide has a strong relationship with tourist interest; likewise, a correlation value of 0.657 means that the promotion strategy has a strong relationship with tourist interest. A correlation value of 0.733 means that the tour guide and promotion strategy simultaneously have a strong relationship with tourist interest.
Analysis of the coefficient of determination
Analysis of the coefficient of determination establishes the percentage influence of the independent variables on the dependent variable, both partially and simultaneously. Based on the test results, a determination value of 0.432 means that the tour guide contributes 43.2% of the influence on tourist interest, and a determination value of 0.439 means that the promotion strategy contributes 43.9% of the influence on tourist interest.
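As a quick consistency check, the determination coefficients can be recovered by squaring the reported correlations:

```python
print(round(0.657 ** 2, 3))  # 0.432 -> the reported 43.2% for the tour guide
print(round(0.733 ** 2, 3))  # 0.537 -> the reported 53.7% for the combined model
```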
Hypothesis testing
a. Partial hypothesis test (t test)
Hypothesis testing with the t test determines which partial hypotheses are accepted. First hypothesis: there is a significant influence of tour guides on tourist interest. Based on the test results, t count > t table, or (8.452 > 1.986), so the first hypothesis, that there is a significant effect of tour guides on tourist interest, is accepted. Likewise, t count > t table, or (8.576 > 1.986), so the second hypothesis, that there is a significant influence of the promotion strategy on tourist interest, is accepted.
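The threshold t table = 1.986 used above is consistent with a two-tailed test at α = 0.05. A minimal check (the degrees of freedom are assumed to be n - k - 1 = 96 - 2 - 1 = 93, which the paper does not state explicitly):

```python
from scipy import stats

# Two-tailed critical t at alpha = 0.05 with an assumed df of 93 (n - k - 1).
print(round(stats.t.ppf(1 - 0.05 / 2, df=93), 3))  # ~1.986
```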
b. Simultaneous hypothesis test (F test)
Hypothesis testing with the F test determines whether the simultaneous hypothesis is accepted. Third hypothesis: there is a significant influence of guides and promotion strategies together on tourist interest. Based on the test results, F count > F table, or (53.950 > 2.700); thus the third hypothesis, that guides and promotion strategies together significantly influence tourist interest, is accepted.
The Influence of Tour Guides on Tourist Interest
The analysis found that the tour guide variable has a significant effect on tourist interest, with a correlation value of 0.657, meaning the two variables have a strong relationship, and with an influence contribution of 43.2%. Hypothesis testing obtained t count > t table, or (8.452 > 1.986). Thus the first hypothesis, that there is a significant effect of guides on tourist interest, is accepted.
The Effect of Promotion Strategy on Tourist Interest
The analysis found that the promotion strategy variable has a significant effect on tourist interest, with a correlation value of 0.657, meaning the two variables have a strong relationship, and with an influence contribution of 43.9%. Hypothesis testing obtained t count > t table, or (8.576 > 1.986). Thus the second hypothesis, that there is a significant effect of promotional strategies on tourist interest, is accepted.
The Influence of Tour Guides and Promotion Strategies on Tourist Interest
The analysis shows that the tour guide and promotion strategy variables significantly influence tourist interest, with the regression equation Y = 9.311 + 0.372X1 + 0.406X2. The correlation value of 0.733 means the two variables have a strong relationship, with an influence contribution of 53.7%, while the remaining 46.3% is influenced by other factors. Hypothesis testing obtained F count > F table, or (53.950 > 2.700). Thus the third hypothesis, that there is a significant effect of tour guides and promotion strategies on tourist interest, is accepted.
CONCLUSION
Tour guides have a significant effect on tourist interest, with a correlation value of 0.657 (strong) and an influence contribution of 43.2%; hypothesis testing obtained t count > t table, or (8.452 > 1.986). Thus there is a significant influence of tour guides on the interest of tourists visiting Bali. The promotion strategy has a significant effect on tourist interest, with a correlation value of 0.657 (strong) and an influence contribution of 43.9%; hypothesis testing obtained t count > t table, or (8.576 > 1.986). Thus there is a significant influence of the promotion strategy on the interest of tourists visiting Bali. Tour guides and the promotion strategy together have a significant effect on tourist interest, with a correlation value of 0.733 (strong) and an influence contribution of 53.7%, while the remaining 46.3% is influenced by other factors; hypothesis testing obtained F count > F table, or (53.950 > 2.700). Thus there is a significant simultaneous influence of the tour guide and the promotion strategy on the interest of tourists visiting Bali.
Research on older people's health information search behavior based on risk perception in social networks—A case study in China during COVID-19
Objective COVID-19 has caused great loss of human life and livelihoods. The dissemination of health information in online social networks increased during the pandemic's quarantine. Older people are the most vulnerable group in sudden public health emergencies, and they have the disadvantage of infection rates and online search for health information. This study explores the relationship between the health risk perception and health information search behavior of older people in social networks, to help them make better use of the positive role of social networks in public health emergencies. Method Based on the Risk Information Search and Processing model, and in the specific context of COVID-19, this study redefines health risk perception as a second-order construct of four first-order factors (perceived probability, perceived severity, perceived controllability, and perceived familiarity), and constructs a research model of the health risk perception and health information search behavior of older people. An online survey of people over 55 years old was conducted through convenience sampling in China from February 2020 to March 2020. Results A total of 646 older adults completed the survey. The structural equation model showed that health risk perception is a second-order factor (H1), that health risk perception has significant positive effects on health information search behavior (H2: β = 0.470, T = 11.577, P < 0.001), and that health risk perception has significant positive effects on affective response (H3: β = 0.536, T = 17.356, P < 0.001). In addition, affective response has a significant positive mediating effect on information sufficiency (H4: β = 0.435, T = 12.231, P < 0.001), and information sufficiency has a significant positive mediating effect on health information search behavior (H5: β = 0.136, T = 3.081, P = 0.002). Conclusion The study results indicate that the health risk perception of older people during the COVID-19 outbreak not only directly affected their health information search behavior, but also had an indirect impact on their health information search behavior by affecting affective response and information sufficiency.
Introduction
The worldwide spread of COVID-19 has caused immense loss of human lives and livelihoods (1). During the COVID-19 pandemic, governments in many countries such as China, Italy, and the United States tried to prevent further spread by isolating confirmed and suspected cases and restricting the movement of people (2,3). When individuals cannot obtain sufficient information from traditional approaches, they often use social networks as an alternative source of information to meet their information needs (4). During the period of forced isolation, a large amount of information related to the pandemic spread rapidly through online social networks (5).
Online social networks play an important role in disseminating health information, shaping perceptions of health risk, and providing guidance on prevention behaviors. This role was seen with the Ebola outbreak in West Africa from 2014 to 2016 and the Middle East Respiratory Syndrome (MERS) outbreak in South Korea in 2015 (6,7). The current media era has spawned more complex information dissemination routes, a larger volume of information, and more diversified information subjects and objects (8). Misinformation or disinformation (9) may harm people's health and trigger an "information epidemic" or an "infodemic." Information about the COVID-19 pandemic on social networks is mixed, and a large amount of unverified health information of various types has been continuously created and disseminated online during the pandemic, so that people often fail to identify scientifically valid health information (10).
Older people, as digital immigrants, are more vulnerable to information overload owing to the deterioration of their physiological functions, the limitations of their educational level, and their lack of familiarity with online networks (11). The World Health Organization has noted that older people remain one of the most seriously affected groups in emergency situations (12), because they are not only physically vulnerable to infection but also at a disadvantage in accessing health information. Meanwhile, the increase of health risk with age generally makes older people pay more attention to their health condition and their level of health cognition (13), and their desire for health information becomes increasingly urgent. During the early phase of the COVID-19 pandemic, it was critical for older people to obtain scientific and effective health information through social networks, to guide their lives and stabilize their mental state (14, 15).
Given this background, this study focused on 646 people over 55 years old during the COVID-19 outbreak in China (February 21, 2020 to March 15, 2020) and asked three questions: (1) What was the risk perception of COVID-19 among older people during the outbreak? (2) What was the health information search behavior of older people during the outbreak? (3) What is the relationship between health risk perception and health information search behavior for older people?
Theoretical background
Online social networks
A social network is a relationship network formed by interactions between individuals or families and their relatives, friends, colleagues, neighbors, etc. (16). With the development of information technology and the emergence of social media, the online social network has become the mapping of users' interpersonal relationships into virtual space (17). Such networks include those created with instant messaging and dating software, blogs and other online platforms, media-sharing networks, and short videos (18). In recent years, diverse social networks have provided the most extensive channels for the generation of, access to, and sharing of health information (19).
Health risk perception
The term "health risk perception, " which derives from "risk perception, " has not achieved a unified definition in academic circles (20). Health risk perception is an important concept composed of health and risk transmission (21, 22) and has been confirmed to have a high correlation with health behavior, which plays an important role in health behavior theory (23, 24). Some scholars have paid attention to the public risk perception when new infectious diseases occur, such as MERS or H1N1 (25, 26), and some scholars have developed a public risk perception scale for public health emergency events (27). Other investigators .
/fpubh. . Consequences, likelihood Influenza (34) have identified risk perceptions as being linked to information activities (28). Risk perception is a multidimensional concept (29). Research on the dimensions of health risk cognition is still not unified or systematic. Table 1 shows various dimensions of health risk cognition used by some scholars in recent years.
In summary, in order to better understand the health risk perception of older people in the early stage of the COVID-19 outbreak, this study draws on existing research to construct health risk perception variables from four dimensions: perceived probability, perceived severity, perceived controllability, and perceived familiarity.
Health information search behavior
Health information search behavior derives from information search behavior. The definition of information searching behavior that is widely recognized by scholars is the one proposed by Wilson (35): "Information searching behavior refers to the information searching activities conducted by users to meet certain target needs." More recently, the development of user-centered online social networks has not only constructed a complex virtual interpersonal network for users, meeting their interaction and entertainment needs, but has also formed a complex information base that greatly expands the health information search behaviors of users (36).
Many scholars have carried out numerous studies on the representation, content, influencing factors, search barriers, and other aspects of health information search behavior in online social networks, and have drawn many scientific and effective conclusions (37). Both Manafo and Wong and Hutto et al. found that older adults do not have enough experience to construct effective online searches, that they search for information based only on their previous experience, and that they lack the ability to evaluate the health information in social networks (38). Therefore, we focus our research on older people and try to explore the relationship between older people's health risk perception and their health information search behavior.

Risk information search and processing model

The risk information seeking and processing (RISP) model was proposed by Griffin et al. (39). The model's main variables are perceived hazard characteristics, affective response, information sufficiency, information subjective norm, perceived information-gathering capacity, and relevant channel beliefs (22).
"Perceived hazard characteristics" are individuals' assessment and prediction of risk status, including perceived probability, perceived severity, institutional trust, and personal control. "Affective response" in the RISP model refers to the uncertainty, worry, and fear generated by perceived risk characteristics. "Information sufficiency" is the central variable in the RISP model, and refers to the confidence in information an individual needs to deal with a risk event [i.e., the information sufficiency threshold (39)]. It is reflected in the individual's grasp of their own risk information in the face of risks, and represents the gap between the information the individual has available and the information necessary to effectively deal with the risk. "Information subjective norm" refers to the social pressure that an individual feels when taking a specific behavior. "Perceived information-gathering capacity" measures an individual's self-efficacy in information collection. "Relevant channel beliefs" reflect the trust level of social media (22,39).
The RISP model proposes that the affective response generated by perceived risk characteristics will affect the confidence one wants to have in one's knowledge about the risk (the information sufficiency threshold), motivating more information-seeking behavior. The specific path of the model is shown in Figure 1.
Since the RISP model was proposed, the Griffin team and other scholars have used it in a number of studies. Griffin et al. verified the impact of risk perception, worry, and subjective norms on information sufficiency in research on drinking water safety risk perception and fish product consumption (22). ter Huurne et al. conducted a comparative study of information search and processing behavior for public safety risks between American and Dutch citizens and found that the RISP model is applicable and effective across different cultural backgrounds (40). Hovick et al. validated the RISP model for cancer risk information (41). Thus, the RISP model has good adaptability and should also be suitable for the study of health risk perception, providing a theoretical basis for the situational study of health information user behavior. However, some variables in the RISP model may not have a significant impact in this study and needed adjustment, as described in the following.
Considering the context of COVID-19, the study focuses on the characteristics of sudden public health events [e.g., high levels of attention to health information (15)], the impossibility of offline investigation under strict control measures (42), and the group characteristics of older people (e.g., limited energy). Therefore, we deleted three variables: "information subjective norm," "relevant channel beliefs," and "perceived information-gathering capacity," and replaced "perceived hazard characteristics" with "health risk perception." The reasons for our changes are as follows: (1) In the RISP model, the "information subjective norm" is an important motivation to seek out and deal with non-personal risks (41). This study focuses on the risk perception of older people, which is directly related to individuals. Some studies, such as Johnson (43), did not consider the "information subjective norm" when they applied the RISP model to study health risk cognition. (2) Griffin et al. added media images into the RISP model and used "relevant channel beliefs" to reflect the trust level of social media (22). This study focuses on the relationship between risk perception and health information search behavior in older people facing COVID-19 and did not involve a differentiated study of various information channels. Therefore, "relevant channel beliefs" are not considered in this study. (3) Many studies have found that older people have an ambiguous cognition of their own information collection ability (37). Other studies have found that self-efficacy is related to perceptions of the effectiveness of medications and confidence in self-knowledge (44), which is consistent with the "perceived familiarity" and "perceived controllability" dimensions of "health risk perception." Therefore, "perceived information-gathering capacity" is subsumed under "health risk perception" and is not studied separately. We retained the variables of "affective response" and "information sufficiency," which are discussed in Section Research model and hypotheses.
Research model and hypotheses
Based on RISP theory, we propose a model of health risk perception and health information search behavior grounded in a study of older people's online information searching behavior during the COVID-19 outbreak. Figure 2 provides a research framework for how health risk perception affects health information search behavior.
Health risk perception and health information search behavior
Risk is commonly defined as a multiplicative combination of the probability of a hazardous event occurring and the severity of its consequences (27). Perceived severity of risk refers to an individual's assessment of the degree of harm associated with the risk (30). At the end of 2019, COVID-19 was a largely unknown disease that was highly contagious and had a high mortality rate. Severely ill patients were mostly older people and those with underlying diseases. Many older people began to pay attention to the risk of infection to themselves, their families, and the other people around them, as well as the serious consequences of infection. Therefore, this study holds that perceived severity and perceived probability are very important parts of older people's assessment of the health risks caused by COVID-19.
The greater an individual's ability to control the outcome of a risk, the more favorably that individual tends to perceive the uncertain result of the risk. Slovic proposed that the two dimensions of perceived probability and risk severity (proposed in the psychometric paradigm of risk cognition) are not enough to fully reveal the characteristics of health risk cognition, and that risk controllability is also an important variable (45). In their diabetes research, Walker et al. defined personal risk control as a means of behavior control taken by individuals to achieve health (32). In addition, public resources, especially medical resources provided by the government, public health funds, etc., affect the level of public health risk awareness (46). Since the outbreak of COVID-19, the Chinese government has continuously publicized a series of specific measures to prevent the spread of the virus, including wearing masks and reducing gatherings, and the media has continued to report on the progress of drug and vaccine research and development and on the current situation of the epidemic in various places. However, the home isolation measures taken by the government during the epidemic period made it difficult for older people to obtain information through offline channels and thus to assess the effectiveness of the various prevention and control measures taken by the government and other public departments. Therefore, this study proposes that perceived risk controllability is also an important part of older people's evaluation of the health risk caused by COVID-19.
Risk familiarity is an important factor in risk assessment. Many scholars have found that risk perception is closely related to people's risk experience and knowledge of risk events (33). Slovic proposed in his risk cognition model that familiarity is an important factor affecting risk cognition, and defined familiarity as people's understanding or the visibility of risk events (45). The severity of the negative consequences that an infectious disease may cause, the possibility of contracting it, and the ability to control its spread may all be relevant aspects of the public's assessment of the possible health risks of the infectious disease. COVID-19 was a sudden, new infectious disease. In the early days of the epidemic, the public had little knowledge of COVID-19. Limited by cognitive ability and the ability to search for information, older people's understanding of the disease was far below the average level. The familiarity of older people with COVID-19 directly affected their awareness of its potential health risks.
Therefore, considering the reactions of older people during the COVID-19 outbreak as the context of this study, the first hypothesis of the study is developed:
H1. Health risk perception is a second-order factor of perceived probability, perceived severity, perceived controllability, and perceived familiarity.
Risk perception has been confirmed to be associated with information search behavior (47). In the health field, Patel found that risk perception has an impact on health information search behaviors in cases of breast cancer (48). With the development of social networks, Guo et al. (49) found that risk perception significantly affects the health information behavior of social media users.
The public's perception of risk has an important impact on their information searching behavior in emergency situations (7). For example, Bish and Michie found that risk perception can promote protective motivation in public health emergency events, which increases preventive behavior during infectious disease outbreaks (50). In addition, the theory of planned behavior holds that behavioral attitudes and subjective norms affect individual behavioral intention, and that behavioral intention leads to behavior (51), which is consistent with the logic that health risk perception affects health information search behavior. For older people, degenerative changes in human tissues and organs inevitably weaken the immune system, and the phenomenon of "survival with disease" is more common. Older people with various chronic and underlying diseases are at higher risk of death during public health emergency events (52). As a result, older people are more concerned about their health and life safety, and they are more active in searching for health information.
Therefore, this study makes the following hypothesis: H2. Health risk perception is associated with the health information search behavior of older people during a public health emergency.
Health risk perception and affective response
The estimations and judgments made by individuals in the face of the same risk event are often different (22).
The RISP model divides the process of an individual facing risk information into three stages: cognition, emotion, and information search and processing (22). When individuals have differences in their perception of the possibility of risk, the severity of the consequences, the degree of trust in the risk management organization, and their own assessment of their ability to control the risk, their affective states will also be different (39). Health risk perception is the basis for the rational response of the public to public health emergency events (53). Previous studies have shown that people with different health risk perceptions differ in their negative emotions about infectious diseases (54). A positive correlation between Koreans' perception of the epidemic's severity and their negative emotions was confirmed by a study of the 2015 MERS outbreak (55).
Older people are more vulnerable to the public health crisis caused by COVID-19 because of their physical and social vulnerability (12). They tend to be less confident about their own immunity, to believe that they have a high possibility of infection and that infection would have serious consequences, to lack confidence in epidemic control, and to be unfamiliar with COVID-19. Therefore, they not only have concerns about infection damaging their health, but also have concerns and fears about maintaining their personal lives and mental health. More importantly and universally, a public health crisis also weakens their social support network (12). Under these effects, older people are more likely to experience excessive worry, anxiety, and other negative emotions.
Therefore, this study makes the following hypothesis: H3. Health risk perception is associated with the affective response of older people during a public health emergency.
Affective response and information sufficiency
Affective response, as a negative affective state, will affect the information sufficiency threshold, that is, the confidence one wants to have in one's information about the risk (22). This point has been verified by subsequent studies by numerous scholars. For example, Hovick et al. found that negative emotions such as anxiety mediate the relationship between health risk perception and information sufficiency, in their study on cancer risk information searching behavior (41). During the outbreak of an infectious disease, the public's negative emotions related to the disease, such as anxiety or fear, tend to be more intense, especially on social networks (56). Moreover, the panic caused by such public health emergencies can easily spread quickly, an effect that is more prominent in the information age of big data (55). Older people's ability to process social network information is not as good as that of other age groups (57), and a large amount of health information can cause confusion and health concerns (9).
On the other hand, the perception of aging among older people is more sensitive than that of other groups, and with the decline of physical function and the increase of health troubles, the concern about health is inevitably stronger than in other groups (58). During the COVID-19 outbreak, older people knew little about the newly emerged novel coronavirus and they were not confident in their own immunity. As a result, they had stronger negative emotions, such as anxiety and fear of contracting the disease and the serious consequences of infection. Negative emotions made older people more likely to have a lower understanding of health information and a greater demand for health information (i.e., higher thresholds of information sufficiency). In addition, home quarantine measures during the epidemic further limited the access of older people to information.
Therefore, this study makes the following hypothesis: H4. Affective response is associated with the information sufficiency of older people during a public health emergency.
Information sufficiency and health information search behavior
The theory of RISP indicates that information sufficiency mediates the relationship between affective response and information search behavior. The gap between the existing risk information and the information necessary to effectively respond to the risk gives individuals a strong need for information. It urges individuals to seek and process information more actively and systematically, and eventually affects individual information search and processing methods (21, 22). Subsequent studies by many scholars have confirmed that individuals often meet their subjective information needs through more active information search behaviors. Public health information needs tend to increase significantly during public health emergency events, as confirmed by a survey by Tausczik et al. during the H1N1 epidemic (59). Considering the health inequalities of older adults (12), these adults show more intense health information needs and a stronger desire to understand health information than other groups. During the COVID-19 outbreak, traditional health information access routes (e.g., newspapers) were interrupted due to the Chinese government's home isolation measures; people had more free time and could rely more on social networks to obtain COVID-19 information. These conditions were more likely to trigger health information search behaviors on social networks. Therefore, this study makes the following hypothesis:

H5. Information sufficiency is associated with the health information search behavior of older people during a public health emergency.
Research methods
In this study, a questionnaire survey was used to empirically test the health risk perception and health information search behavior research model. All items on the questionnaire were pre-validated in the existing literature and modified in combination with the specific context of the COVID-19 outbreak as well as the characteristics of the elderly population. All items included in this study were measured on a five-point Likert scale.
Before the formal survey, the validity and reliability of the questionnaire was measured in the early stages with a pre-survey sent on January 15, 2020: 276 questionnaires were distributed through Wenjuanxing (www.wjx.cn, a platform providing questionnaire distribution functions), 250 valid questionnaires were returned, and then the questionnaire was revised according to the results. This sample was only used for questionnaire corrections.
Construct measurement
The main variables of the model in this study are health risk perception of COVID-19 (including perceived probability, perceived severity, perceived controllability, and perceived familiarity), affective response, information sufficiency, and information searching behavior related to COVID-19. The measurement indexes of the variables are shown in Table 2.
The questionnaire is composed of four parts. The first part gathers basic information, including age, gender, educational background, occupation, self-assessment of health status, and whether there are cases of infection in the region (city), which lays the foundation for subsequent research and analysis.
The second part is the investigation of the health risk perception of older people, which is mainly based on the studies of Slovic (45) and others. The measurement items required by this study were modified according to the actual situation. Health risk perception is divided into four dimensions in this study: possibility (individuals believe that the risk event could occur to themselves), severity (individuals perceive the severity of loss after the risk event), controllability (individuals take measures to avoid or reduce the degree of loss caused by the risk event), and familiarity (individuals' knowledge of the risk event). Each question was measured using a five-point Likert scale.
To improve the quality of the questionnaire, PS3 and PC2 questions are set as reverse scoring questions.
The third part of the survey concerns affective response and information sufficiency. Referring to the research of Griffin et al. (39), Hovick et al. (41), and Chew and Eysenbach (62), anxiety, fear, and worry were selected as the main affective factors; three questions measure the degree of anxiety, fear, and future worry of older people about COVID-19. On a scale of 0-5, the higher the score, the higher the level of anxiety, fear, or future concern. Drawing on Griffin et al. (22), Yang et al. (63), and other studies, information sufficiency was measured with two questions assessing older adults' degree of understanding of, and demand for, COVID-19 information. According to the 0-5 score, the higher the score, the higher the individual's understanding of, or demand for, information on COVID-19. Question IS2 is set as a reverse scoring question.
The fourth part surveys the health information search behavior of older people. This part refers to the literature related to older people's health information search behavior, the Baidu index, the Weibo hot words list, the "COVID-19 Public Cognition and Information Communication Research Report" of the State Information Center and the Institute of Network Communication of Nanjing University, and the "COVID-19 Search Big Data Report" of Baidu. Health information search behavior was measured by the following four questions: "I often use social networks to seek health information," "During the outbreak, I was able to search on social networks for all the forms of information (text, video, pictures, etc.) I wanted about COVID-19," "During the outbreak, I could search in social networks for all kinds of information I wanted about COVID-19," and "During the outbreak, information related to COVID-19 on social networks could meet my needs."
Participants
The World Health Organization defines people aged 60-74 years as young elderly, people over 75 years as middle-aged elderly, and people over 90 years as long-lived elderly (64), but some studies have set people over 50 years of age as old people; for example, from a psychological development viewpoint, psychologist Anders Ericsson believes that after the age of 50, people enter psychological "old age" (65). We also take into account that China's legal retirement age for enterprise employees is 60 years for men, 50 years for women, and 55 years for female civil servants (66). Therefore, we focused on adults aged 55 years and older. In addition, during the epidemic period, China's epidemic control policies, such as "home isolation," "one household with one person going out for shopping every 2 days," and "concentrated isolation" for key groups (such as those returning home from high-risk areas such as Hubei Province and foreign countries), made it inconvenient to conduct offline investigations. Therefore, we used instant messaging platforms such as WeChat and QQ to conduct an online survey. In addition, older people with experience using smartphones and without literacy disorders were selected in this study to ensure that participants could fill out the questionnaire independently or with assistance.
Data collection and procedure
In this study, the questionnaire was powered by www.wjx.cn and forwarded through WeChat and QQ. In addition, considering the physiological conditions, education, and smartphone proficiency of older people under isolation during the epidemic, the surveyor assisted participants in filling in the questionnaire, either through online assistance via WeChat or through the children of the respondents, working offline (i.e., the surveyor first informed the children of the respondents of the matters needing attention in filling out the questionnaire).
The formal survey, which started on 21 February 2020 and ended on 15 March 2020, collected 685 online questionnaires. In the process of data collection, 39 surveys were eliminated because some participant responses were incomplete or untruthful, so 646 complete surveys were obtained, an effective response rate of 94.31%. The participants were over 55 years old, as shown in Table 3. In addition, all participants in this study had experience searching for health information on social networks.
Measurement model
First, we measured how well the model fits. The results are shown in Table 4. As shown in Table 4, the SRMR is <0.08, and the values of d_ULS and d_G are both less than the corresponding value of the 95% bootstrap quantile (HI95). Therefore, the confirmatory composite analysis results show that the model has a good fit (67). The maximum variance inflation factor (VIF) was 5.440, much lower than the prescriptive threshold of 10.0 (67). This result shows that there is no multicollinearity problem in this model. We used Harman's one-factor test to test for possible common method bias (67). The test results show that the largest factor accounts for 50.451% of the variance, which is acceptable according to the recommendations of Fuller et al. (68). Therefore, the common method bias problem in this study is very small.
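As an illustration of these two diagnostics, the sketch below computes an approximate Harman's one-factor statistic (here via an unrotated principal component, a common stand-in for single-factor extraction) and the maximum VIF across indicators. It assumes the item responses sit in a pandas DataFrame named `items`; the name and data are illustrative, not the study's actual dataset.

```python
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

def harman_one_factor_pct(items: pd.DataFrame) -> float:
    """Percent of total variance captured by a single unrotated component."""
    z = (items - items.mean()) / items.std(ddof=1)  # standardize the items
    pca = PCA(n_components=1).fit(z)
    return 100.0 * float(pca.explained_variance_ratio_[0])

def max_vif(items: pd.DataFrame) -> float:
    """Largest variance inflation factor among the indicators."""
    X = items.assign(_const=1.0)  # intercept column required by statsmodels' VIF
    idx = [X.columns.get_loc(c) for c in items.columns]
    return max(variance_inflation_factor(X.values, i) for i in idx)
```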
The sample data was analyzed by descriptive statistics, and the results are shown in Table 5. According to the analysis results, the average values of all variables in the model range from 3.506 to 3.987, and the standard deviation ranges from 1.089 to 1.330. In addition, the Cronbach's α of each latent variable is higher than the threshold value of 0.7, which indicates that the data reliability of each latent variable is high.
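Cronbach's α can be reproduced directly from the raw item scores. A minimal sketch follows; the `scores` matrix is hypothetical, with one row per respondent and one column per item of a given latent variable.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of sum)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```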
The validity of the questionnaire can be divided into convergent validity and discriminant validity (69). Convergent validity refers to the similarity of measurement results when different measurement methods are used to measure the same characteristics (70). Factor loadings, composite reliability (CR), and average variance extracted (AVE) are three effective indexes for testing convergent validity. A factor loading above the recommended value of 0.6 indicates that an observed variable is highly correlated with its structural variable (71); the data show that the factor loading of each observed variable exceeds 0.6. The CR describes the degree to which the observed variables determine the latent structure, and a value >0.7 can be considered to indicate good internal consistency of the variable (71). As shown in Table 5, all CR values are >0.7. The AVE reflects the total variance captured by the latent structure, and a value >0.5 indicates that the observed variables in this model explain each measurement dimension well. The AVE values in this study are >0.5 (72).
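Given standardized factor loadings, CR and AVE follow from simple closed-form expressions; a sketch is below. The loadings in the example are illustrative, not the study's actual estimates.

```python
import numpy as np

def composite_reliability(loadings) -> float:
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    with error variance 1 - lambda^2 for standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    errors = 1.0 - lam**2
    return lam.sum()**2 / (lam.sum()**2 + errors.sum())

def average_variance_extracted(loadings) -> float:
    """AVE = mean of the squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam**2).mean())

# Illustrative loadings for a four-item construct:
lam = [0.78, 0.82, 0.74, 0.69]
print(composite_reliability(lam))        # ~0.84, above the 0.7 threshold
print(average_variance_extracted(lam))   # ~0.58, above the 0.5 threshold
```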
Discriminant validity is the extent to which a construct is truly distinct from other constructs by empirical standards. There are three main ways to evaluate discriminant validity: cross-loadings, the Fornell-Larcker criterion, and the heterotrait-monotrait (HTMT) ratio, following A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM) (2nd edition) (73). Under the Fornell-Larcker criterion, discriminant validity is established when the square root of each construct's AVE is greater than its correlations with the other constructs (74). In this study, the square root of the AVE of each construct is greater than its correlation coefficients with the other constructs; Table 6 shows the details. In summary, the questionnaire has good reliability and validity.
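The Fornell-Larcker check can be expressed compactly: replace the diagonal of the construct correlation matrix with √AVE and verify that each diagonal entry dominates its row and column. A sketch, assuming construct scores in a DataFrame and AVE values in a dict (both hypothetical):

```python
import numpy as np
import pandas as pd

def fornell_larcker(construct_scores: pd.DataFrame, ave: dict) -> pd.DataFrame:
    """Correlation matrix with sqrt(AVE) on the diagonal. Discriminant validity
    holds if every diagonal entry exceeds the off-diagonals in its row/column."""
    corr = construct_scores.corr()
    for c in corr.columns:
        corr.loc[c, c] = np.sqrt(ave[c])
    return corr
```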
Structural model and discussion
Total effect analysis

We calculated the model path coefficients using SmartPLS 3.3.2. The results are shown in Figure 3 and Table 7, and show that H2, H3, H4, and H5 are supported. The predictive ability of the model is evaluated by the internal model explanatory utility R². The greater the value of R², the stronger the explanatory ability of the measured variables for the latent variable. In this study, the model explains 32.1% of the variance in health information search behavior, 29.0% in affective response, and 19.0% in information sufficiency.
In addition, we used blindfolding to test the predictive relevance of the model. Q² is used to analyze the predictive validity of structural models (75): when the Q² value is greater than zero, the model has predictive ability, and the greater the Q² value, the stronger the predictive relevance of the model (74). In this study, all of the Q² values (AR = 0.104, IS = 0.141, and HISB = 0.280) are greater than zero. In summary, the model has good predictive relevance.
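The blindfolding procedure behind Q² omits every d-th data point, re-estimates the model, and predicts the omitted values. The sketch below is a simplified, regression-based version of this idea (SmartPLS omits points in the indicator matrix rather than whole cases, so the numbers would differ); d = 7 is a common default omission distance, and all variable names are illustrative.

```python
import numpy as np

def blindfold_q2(y, X, d=7):
    """Simplified blindfolding: omit every d-th case, predict it from an OLS fit
    to the remaining cases, and compute Q2 = 1 - SSE / SSO."""
    y, X = np.asarray(y, float), np.asarray(X, float)
    n = len(y)
    sse = sso = 0.0
    for g in range(d):
        test = np.arange(g, n, d)                         # omitted cases
        train = np.setdiff1d(np.arange(n), test)
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        pred = np.column_stack([np.ones(len(test)), X[test]]) @ beta
        sse += ((y[test] - pred) ** 2).sum()              # prediction error
        sso += ((y[test] - y[train].mean()) ** 2).sum()   # naive-mean error
    return 1.0 - sse / sso
```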
Mediating effect analysis
We used the bootstrapping function of SmartPLS 3.3.2 to verify the mediating effect of affective response between health risk perception and information sufficiency, and of information sufficiency between affective response and health information search behavior (76). The number of bootstrap samples was set at 5,000. The results of the mediating effect analysis showed that the total effects, direct effects, and indirect effects are all significant (P < 0.001). The specific results are in Table 8.
According to a standard test of the mediating effect proposed by Hair et al. (77), the ratio of indirect effects to total effects (VAF) can be used to measure the strength of the mediating effect. It is generally considered that a VAF >80% indicates complete mediation, 20-80% partial mediation, and <20% no mediation. The results show that the mediating effect of affective response between health risk perception and information sufficiency accounts for 33.86%, and the mediating effect of information sufficiency between affective response and health information search behavior accounts for 70.47%, both >20%. Therefore, affective response partially mediates the relationship between health risk perception and information sufficiency, and information sufficiency partially mediates the relationship between affective response and health information search behavior.
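The VAF calculation can be illustrated for a simple one-mediator chain. The sketch below bootstraps the indirect effect a·b and the total effect with 5,000 resamples, mirroring the setting used here; ordinary least squares stands in for the PLS path estimates, so this is an approximation, and the variable names are hypothetical.

```python
import numpy as np

def _slopes(y, X):
    """OLS slopes of y on the columns of X (intercept fitted, then dropped)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

def mediation_vaf(x, m, y, n_boot=5000, seed=0):
    """Bootstrap a simple x -> m -> y mediation: indirect effect a*b, total
    effect, and VAF = indirect / total (Hair et al.'s 20%/80% rule)."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    n = len(x)
    indirect = np.empty(n_boot)
    total = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                                   # resample
        a = _slopes(m[idx], x[idx])[0]                                # x -> m
        b = _slopes(y[idx], np.column_stack([m[idx], x[idx]]))[0]     # m -> y | x
        indirect[i] = a * b
        total[i] = _slopes(y[idx], x[idx])[0]
    vaf = indirect.mean() / total.mean()
    ci = np.percentile(indirect, [2.5, 97.5])   # significance of indirect effect
    return vaf, ci
```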
Second-order analysis
In this study, health risk perception is a second-order factor divided into perceived probability, perceived severity, perceived controllability, and perceived familiarity. For hierarchical latent variable models, two approaches are commonly adopted to estimate the parameters with PLS-SEM: the repeated indicator approach and the two-stage approach (78). In this study, the two-stage approach was used because the first-order model had four formative constructs. We estimated a repeated indicator model in the first stage and used the first-order construct scores in a separate second stage (79); a minimal sketch of this idea is given below. The results of the second-order analysis are illustrated in Table 9. HRP is a second-order variable composed of four first-order latent structures: PC, PF, PP, and PS, supporting H1.
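A minimal sketch of the two-stage idea: first-order construct scores are computed (here as simple item means, a stand-in for the latent-variable scores SmartPLS produces from the repeated indicator model), then used as the indicators of the second-order construct. The item names and groupings are illustrative, not the study's actual questionnaire layout.

```python
import pandas as pd

def first_order_scores(items: pd.DataFrame) -> pd.DataFrame:
    """Stage 1: score the four first-order constructs of health risk perception
    (item means stand in for PLS latent-variable scores)."""
    groups = {
        "PP": ["PP1", "PP2", "PP3"],  # perceived probability (hypothetical items)
        "PS": ["PS1", "PS2", "PS3"],  # perceived severity (PS3 reverse-coded upstream)
        "PC": ["PC1", "PC2"],         # perceived controllability (PC2 reverse-coded)
        "PF": ["PF1", "PF2"],         # perceived familiarity
    }
    return pd.DataFrame({k: items[v].mean(axis=1) for k, v in groups.items()})

# Stage 2: the four score columns returned above serve as the indicators of the
# second-order construct HRP in a new structural model estimation.
```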
Key findings
Overall, the data analysis results show that all the hypotheses in this study are valid. The results provide the following key findings: First, older people's health risk perception for COVID-19 is a second-order construct composed of four first-order (i.e., lower-order) constructs: perceived probability, perceived severity, perceived controllability, and perceived familiarity. These four first-order constructs can well represent the impact of health risk perception on health information search behavior and affective response.
Second, during the early phase of the COVID-19 pandemic, the health risk perception of older people not only directly affected their health information search behavior, but also influenced it indirectly by affecting their affective response and information sufficiency. The higher an individual's health risk perception level, the stronger the affective response, the lower the individual's grasp of their own health information, and the more health information needed (the higher the sufficiency threshold); and thus, the more active the health information search behavior. Third, affective response plays an intermediary role between health risk perception and information sufficiency, and information sufficiency plays an intermediary role between affective response and health information search behavior.
Specifically, we found that health risk perception had significant positive effects on health information search behavior. The high infectivity and mortality rates of COVID-19, as well as the rapid spread of epidemic information through social networks, greatly affected the risk perception of COVID-19 among older people. Older people who perceive themselves to be at higher risk of infection and severe consequences, coupled with their lack of confidence in epidemiological control and limited knowledge of COVID-19 information, are more motivated to seek health information.
We also found that affective response and information sufficiency have a mediating effect. Older people have declining physiological functions, low immunity, and many underlying diseases, which makes them more likely to be infected with infectious diseases and makes the consequences of infection more serious. During the COVID-19 outbreak, the novel coronavirus pneumonia was a new infectious disease with high infection and mortality rates, and older people had insufficient awareness of it. The increase in reported cases of infection and the spread of unproven health information placed older adults at an information disadvantage, making it more difficult for them to recognize health risks. Older people often show a more sensitive affective response to public health emergencies. They are more likely to experience negative emotions such as anxiety, fear, and future worry, which are manifested by a lower understanding of health information, a stronger demand for health information, and a higher information sufficiency threshold. Therefore, when the sufficiency threshold is higher than the health information the older person feels he or she currently has, more active health information search behaviors will be motivated. In addition, due to the influence of the Chinese government's home isolation measures, traditional access to health information (such as newspapers) was interrupted, but older people had more free time and could rely more on social networks to obtain information on the novel coronavirus, a state that was more likely to trigger their social network health information search behavior.
Implications
COVID-19 is a new, intense, highly contagious, and high-mortality infectious disease (80). In this study, we established a research framework for the correlation between health risk perception and health information search behavior in the context of COVID-19, which aims to provide a reference for future research on the correlation between public health risk perception and health information search behavior in the case of major infectious diseases. There are two main contributions: theoretical and practical.
The theoretical contributions of this study have three aspects. First, considering the characteristics of the early outbreak period of COVID-19 and the characteristics of older people's perception of public health emergencies, we set the health risk perception variable as a second-order construct including four first-order constructs: perceived risk probability, perceived risk severity, perceived risk controllability, and perceived risk familiarity, and we formed measurement items with reasonable reliability and validity. This measurement method of health risk perception improves on the measurement methods of health risk in previous studies on chronic diseases, bad living habits, conventional infectious diseases, etc. (31-33), and provides an effective reference for the development of public risk perception measurement tools in public health emergencies.
Second, we introduced health risk perception into the study of health information search behavior, replaced the perceived hazard characteristics in the RISP model with health risk perception, and focused on the internal correlation between older people's individual health risk perception and health information search behavior during the early phase of the COVID-19 pandemic. Our results show that although older people are a vulnerable group in the information age (81), they engage in active online health information search behaviors in the face of health risks, and their health risk perception not only directly affects their health information search behavior, but also affects it indirectly through affective response and information sufficiency. This meaningful discovery not only verifies the adaptability of the RISP model, but also adapts the model to the specific context of the global pandemic. More importantly, it provides a reference research framework for interactive research on older people's social network health risk perception and health information search behavior in specific epidemic situations at the three levels of cognition, emotion, and behavior, and expands the depth of RISP model research.
Finally, we found the mediating role of affective response and information sufficiency. During the COVID-19 outbreak, older people who are vulnerable to health risks were more likely to experience anxiety, fear, and worry; to have a lower understanding of health information; and to have a greater demand for health information (higher thresholds of information sufficiency). In addition, home quarantine measures during the epidemic further limited the access of older people to information. When the information adequacy threshold is higher than the health information currently possessed by older people, they will be more motivated to search for health information. This mediation path fully reflects the uniqueness of older people's health information search behavior. In addition to the theoretical significance, this study also has practical significance, as detailed below.
First of all, our research helps older people take the initiative in public health emergencies. Older people can obtain scientific health information on social networks to protect their own health and respond to an outbreak. Older people should be encouraged to improve their ability to search online and screen health information.
Second, health information providers can provide personalized and accurate health information services for older people based on health risk perception. For example, for users with low risk awareness, providers can push information on health risk hazards, epidemic conditions, and other related information to raise awareness; for users with high risk awareness, providers can provide relevant health information or facts about effective treatment to relieve tension. In addition, information sufficiency has a significant effect on health information search behavior, which suggests that information service providers should focus on improving information sufficiency, attach importance to the health information needs of older people, and recognize the great potential of this user group. For example, governments and service providers should build a health information network platform "suitable for older people." The platform should be designed to reduce the complexity of searching, with features such as enlarged fonts, automatic voice broadcasting, short video explanations, and voice commands for entering searches.
In addition, it is very important for public health departments to control the quality of the mass of information provided by social media. During public health emergencies, massive amounts of information are spread on social media (9), but the quality of the information is worrisome (82). In particular, older people whose health information recognition and processing ability is weak need more information assistance from the government. For example, authorities could detect false epidemic information and flag it as problematic, conduct quality certifications of health websites, improve the reporting channels of health information on the Internet, and disclose accurate epidemic information while exposing false health information. In addition, the government could encourage authoritative health websites and popular science platforms to create separate sections or versions for older people.
Limitations and further studies
In this study, which was affected by the early phase of the COVID-19 pandemic, an online network questionnaire was used instead of a traditional experimental study of search behavior, and the search behavior described using a self-rating scale was not as accurate, objective, and sufficient as traditional experimental data would have been. In addition, the Chinese government implemented a strict home quarantine policy from January to April 2020 (42), and an offline survey was not possible in this quarantine period. Therefore, non-random sampling was used in this study, and the survey subjects were limited to older people who used smartphones and had no reading or writing impairments. The study excluded older adults who were less educated, did not use smartphones, or were in poor health. Moreover, the effective sample size was only 646 cases; such a small sample size limits the generalizability of the results. Therefore, future research will expand the scope of the questionnaire and improve the representativeness of the sample.
There may be questionnaire comprehension bias in the older group due to their decline in physiological function, weakened comprehension, and education level limitations. In future research, experimental studies will be considered to determine the characteristics of the health information search behavior of older people. Further research can also explore whether the health risk cognition of older people changes before and after the search, and the factors influencing that change.
Furthermore, the study did not take into account the influence of different social media platforms (including instant messaging software, dating software, short video platforms, and so on) on older people's information searching behavior, which can be further analyzed and discussed in subsequent studies. In addition, from January to March 2020, the cumulative confirmed cases and cumulative deaths of COVID-19 varied greatly in different regions of China. Hubei Province had the highest numbers of confirmed cases and cumulative deaths, followed by Zhejiang, Guangdong, Henan, and Hunan (83). However, this study did not fully consider the influence of regional infection rates on the information search behavior of older people. This is also a point that further research should focus on.
In addition, this study lacked a comparative study and analysis between older people and young people and did not consider the effects of aging characteristics on health risk perception and health information search behavior. In future research, we can conduct a differential study on the impact of health risk perception on online health information search behavior among different age groups, and add research variables of aging characteristics into the study, such as vision, hearing, thinking ability, and chronic diseases.
Finally, our study focused on the association between health risk perception and health information search behavior in older adults during the COVID-19 outbreak. Although COVID-19 is a global public health emergency, its specificity limits the generalizability of our results. Due to the diversity of public health emergencies, the impact of a pandemic may be different from the impact of other emergencies, and we cannot compare COVID-19 with another pandemic here. In future studies, we can look for more data to study the association between health risk perception and health information search behavior of older people in a variety of public health emergencies. This will make a substantial contribution to the health of all people.
Conclusion
Considering the vulnerability of older people to public health emergencies, especially in the context of COVID-19, we proposed a research framework for health risk perception and health information search behavior based on the RISP model. Using a questionnaire survey and model verification, we found that the health risk perception of older people during the COVID-19 outbreak not only directly affected their health information search behavior, but also had an indirect impact on it by affecting affective response and information sufficiency: the stronger the affective response, the lower the grasp of health information and the more health information needed (the higher the sufficiency threshold), and thus the more active the health information search behavior. This study also redefined the variable of health risk perception and provided a tool for measuring health risk perception during the epidemic. It provides valuable advice to older people, government public health departments, and health information service providers: advice that can improve the active role of online social networks during sudden public health events.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Research Ethics Committee of Wenzhou Medical University. The patients/participants provided their written informed consent to participate in this study.
Author contributions
CZ, WL, and YM contributed to conception and design of the study. CZ and WL organized the database and wrote the first draft of the manuscript. WL performed the statistical analysis. YM wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
Funding
This work was supported in part by Wenzhou Key Research Base of Social Sciences (18jd07) and the National Natural Science Foundation of China (71771075).
Molecules in η Carinae
We report the detection toward η Carinae of six new molecules, CO, CN, HCO+, HCN, HNC, and N2H+, and of two of their less abundant isotopic counterparts, 13CO and H13CN. The line profiles are moderately broad (about 100 km/s), indicating that the emission originates in the dense, possibly clumpy, central arcsecond of the Homunculus Nebula. Contrary to previous claims, CO and HCO+ do not appear to be under-abundant in η Carinae. On the other hand, molecules containing nitrogen or the 13C isotope of carbon are overabundant by about one order of magnitude. This demonstrates that, together with the dust responsible for the dimming of η Carinae following the Great Eruption, the molecules detected here must have formed in situ out of CNO-processed stellar material.
Introduction
η Carinae is well known to have experienced a major outburst in the 1840s, during which it became the second brightest star in the entire sky (e.g., Humphreys & Davidson 1999). Known as the Great Eruption, this outburst was associated with an episode of extreme mass loss (about 10 M⊙ of material was expelled in about 20 years) that resulted in the creation of the bipolar-shaped Homunculus Nebula, whose current size is about 16″ × 10″, or 0.18 × 0.11 pc assuming a distance of 2.3 kpc (Walborn 1995; Allen & Hillier 1993). Over the following decades, the visual brightness of η Carinae faded by many magnitudes, but early infrared observations by Neugebauer & Westphal (1968) revealed that the bolometric luminosity in the second half of the 20th century remained comparable to that during the Great Eruption. The dimming at optical wavelengths resulted from obscuration by dust particles, presumably formed in situ out of the ejected material.
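The quoted physical size follows directly from the angular size and the adopted distance; a quick check in Python:

```python
import numpy as np

# Physical size from angular size and distance: s [pc] = d [pc] * theta [rad].
d_pc = 2300.0                        # adopted distance of 2.3 kpc, in pc
arcsec = np.pi / (180.0 * 3600.0)    # radians per arcsecond
for theta in (16.0, 10.0):
    print(f'{theta}" -> {d_pc * theta * arcsec:.2f} pc')
# 16" -> 0.18 pc and 10" -> 0.11 pc, matching the quoted 0.18 x 0.11 pc
```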
A scant handful of molecular species have been detected toward η Carinae. Molecular hydrogen, traced by its 2.12 µm line, appears to be distributed over the outer surface of the Homunculus Nebula, and is strongest toward the polar caps, where the intercepted column is largest (Smith 2002, 2006). Two other simple diatomic molecules (CH and OH) were identified in Hubble Space Telescope STIS spectra through their UV absorption lines (Verner et al. 2005). Both also originate in the thin outer layer of the Homunculus. Finally, radio emission from ammonia (NH3) was detected by Smith et al. (2006) using the Australia Telescope Compact Array (ATCA). The ammonia emission is confined to a region roughly 1 arcsec across, and shares the kinematics of the H2 2.12 µm line in the same region.
Interestingly, carbon monoxide (CO) has never been detected toward η Carinae in spite of sensitive searches at millimeter, infrared, and UV wavelengths (Cox & Bronfman 1995; Smith 2002; Verner et al. 2005). As discussed by Smith et al. (2006), this lack of CO detection could reflect the C/O depletion and N enrichment of the material ejected during the Great Eruption. Indeed, the ionized gas surrounding the Homunculus nebula is known to be composed of such nitrogen-rich CNO-processed material (Davidson et al. 1982; Davidson et al. 1986; Dufour et al. 1997; Hillier et al. 2001). In addition, the abundance of the nitrogen-bearing ammonia molecule in the Homunculus itself is estimated to be about 2 × 10⁻⁷, roughly one order of magnitude higher than in cold interstellar clouds. It should be emphasized, however, that the existing, unsuccessful searches for CO in η Carinae are insufficient to place meaningful limits on the [CO]/[NH3] abundance ratio in the Homunculus. Thus, the low abundance of carbon monoxide in η Carinae is still not firmly established.
The formation and survival of molecules in the harsh environment of η Carinae, within 1″ (0.01 pc) of a 100 M⊙ star, remain poorly understood. Other classes of massive stars (such as red supergiants and Wolf-Rayet stars) are known to be surrounded by significant quantities of molecular gas, although at greater distances, which might indicate that it represents swept-up ambient interstellar material (Pulliam et al. 2011; Cappa et al. 2001; Rizzo et al. 2001, 2003). It is unclear whether or not there is a relation between the mechanisms at work in these objects and those occurring in the Homunculus. To tackle these issues, it is important to characterize the molecular content of the Homunculus, and to determine the physical properties and spatial distribution of the molecular gas. In this Letter, we present new sub-millimeter spectroscopic observations of η Carinae designed to search for several new molecular species, including carbon monoxide.
Observations
The observations were performed in 2011 October 12-17 and December 15-20 with the Atacama Pathfinder EXperiment telescope (APEX), located at an altitude of 5100 m on Llano Chajnantor, Chile. The molecular transitions targeted are listed in Table 1. Two different receivers were used: a modified version of the First Light Apex Sub-millimeter Heterodyne receiver (FLASH; Heyminck et al. 2006) for transitions in the 345 and 460 GHz bands, and the Carbon Heterodyne Array of the MPIfR (CHAMP+; Güsten et al. 2008) for transitions in the 690 GHz band. While FLASH is a single-beam receiver, CHAMP+ provides spectra simultaneously at 7 positions. Those positions correspond to the central (directly on-source) pixel and to 6 lateral points distributed in a hexagonal pattern around the central pixel and separated from it by about 19″ (Güsten et al. 2008). The Fast Fourier Transform spectrometer backends provided 32,768 frequency channels, each 76.308 kHz wide, during the FLASH observations, and 8,192 channels, each 183.1 kHz wide, during the CHAMP+ observations. This yields velocity resolutions of 0.07-0.08 km s⁻¹ at all frequencies, but the spectra were Hanning-smoothed to 4-5 km s⁻¹ during post-processing to improve their signal-to-noise ratio.
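The quoted velocity resolutions follow from the channel widths via Δv = c Δν/ν; a quick check for the two band/backend combinations named above:

```python
c_kms = 299792.458  # speed of light, km/s

# Velocity resolution of one spectrometer channel: dv = c * dnu / nu.
for dnu_khz, nu_ghz in [(76.308, 345.0), (183.1, 690.0)]:
    dv = c_kms * (dnu_khz * 1e3) / (nu_ghz * 1e9)
    print(f"{dnu_khz} kHz at {nu_ghz} GHz -> {dv:.3f} km/s")
# ~0.066 and ~0.080 km/s, consistent with the quoted 0.07-0.08 km/s
```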
The observations were obtained in ON-OFF position switching mode, with the OFF position at αJ2000.0 = 10h48m28.0s, δJ2000.0 = −59°25′45.0″. This position is known to be devoid of molecular emission. Calibration and pointing scans were interspersed with the science spectra throughout the observations. The weather conditions and overall system performance were excellent, and the resulting spectra of very good quality. As a consequence, very few scans had to be discarded, and only low-order polynomial baselines had to be removed. In some cases, oscillatory patterns were present in the spectra, and were removed in the Fourier domain. The intensity scale was converted from T*_A to T_mb using the efficiencies listed in Table 1. We estimate the final flux calibration to be accurate to 15%.
Results and Analysis
[Notes to Table 1: (a) θ_mb is the beam size at each frequency. (b) η_mb is the main beam efficiency, used to convert the measured antenna temperatures to main beam temperatures. (c) The values reported in this column were obtained by integrating over the entire velocity range where emission is detected: W = ∫ T_mb dv. (d) CN(3-2) was not specifically targeted, but happened to be detected near the edge of one of the observed bands. As a consequence, only about half of the hyperfine components were included in the band (see Figure 1), and this observation will have to be repeated in the future.]

All the targeted lines were detected toward the source (Figure 1), but no emission was seen in the lateral pixels of the CHAMP+ observations. This demonstrates that the molecular emission is confined to the Homunculus itself. In addition, the line profiles are quite broad (up to about 300 km s⁻¹ full width at zero point, with "cores" of roughly 100 km s⁻¹ full width at half maximum), similar to the NH3 profiles reported by Smith et al. (2006). In comparison, the Homunculus has expansion velocities of roughly 600 km s⁻¹, while the Weigelt knots near the star have outward velocities less than 50 km s⁻¹ (e.g., Hofmann & Weigelt 1988; Weigelt et al. 1995; Davidson et al. 1995). The similarity between the NH3 spectra of Smith et al. (2006) and those reported here likely indicates that the emission originates in the central few arcseconds of the Homunculus. The emission is centered at V_lsr ~ +20 km s⁻¹, a value somewhat more positive than the systemic velocity of η Carinae (−20 km s⁻¹; Davidson et al. 1997; Smith 2004). This is, again, similar to the situation with ammonia. The observed profiles are clearly not Gaussian. Instead, they exhibit significant sub-structure suggesting that the emission might come from clumpy material. Indeed, all of our spectra are consistent with four velocity components at v_lsr = −76.2, −8.9, +30.5, and +63.5 km s⁻¹. Particularly noteworthy is the narrow component at v_lsr = −76.2 km s⁻¹ seen in most of our spectra, and most likely associated with the strong H2 1-0 S(1) emission detected by Smith (2004; see also Smith et al. 2006) at the same radial velocity.

The combination of observed CO and 13CO spectra can be used to constrain the physical conditions of the emitting material. First, we note that the relative peak intensities of the CO 3→2, 4→3, and 6→5 lines (0.15, 0.25, and 0.5 K; see the red marks on Figure 1) are almost exactly in the inverse proportion of the corresponding beam areas (1 : 1.7 : 3.8). This shows that the CO lines are optically thick, and come from a region smaller than all the beams (even that at 690 GHz). For such optically thick lines, there is a degeneracy between temperature and filling factor. This degeneracy can be removed using the 13CO line intensities, and we find that all the CO and 13CO lines can be reproduced for an excitation temperature of order 70 K and a source size of order 1″. A higher excitation temperature (of, say, 200 K) could reproduce the CO lines provided the source size were 0.5″, but would predict a 13CO(6-5)/13CO(3-2) line ratio of about 8.5, inconsistent with the observed value of 5. Given the similarities between all the spectra observed here (Figure 1), it is reasonable to assume that all the molecular emission comes from the same material, so we conclude that all the lines reported here originate in a source about 1″ in size where the gas is at a temperature of order 70 K.
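As a quick numerical check of the optical-depth argument above: for a diffraction-limited telescope the beam solid angle scales as ν⁻², so peak T_mb values scaling as the inverse beam area are exactly what a compact, optically thick source diluted in each beam would produce. The CO rest frequencies below are standard values, not taken from the text.

```python
import numpy as np

nu = np.array([345.796, 461.041, 691.473])   # CO 3-2, 4-3, 6-5 rest frequencies, GHz
t_peak = np.array([0.15, 0.25, 0.50])        # quoted peak T_mb values, K

beam_area = (nu / nu[0]) ** -2               # relative beam solid angles (~ 1/nu^2)
print(1 / beam_area)        # [1.0, 1.78, 4.0], close to the quoted 1 : 1.7 : 3.8
print(t_peak / t_peak[0])   # [1.0, 1.67, 3.33]

# T_peak * beam_area is roughly constant, as expected for an unresolved,
# optically thick source smaller than even the 690 GHz beam.
print(t_peak * beam_area)   # [0.150, 0.141, 0.125] K
```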
To estimate the molecular column densities, we used the myXCLASS program (see Comito et al. 2005 and references therein), and modeled the emission as a superposition of the four distinct velocity components identified earlier. We used line widths for each component consistent with the observed widths of the optically thin lines (∆v = 13.2, 35.8, 21.6, and 36.4 km s⁻¹, respectively, for the four velocity components), and found reasonable fits to all the lines for excitation temperatures of 40, 50, 40, and 90 K, respectively, for the four components. Our approach to column density determinations entails a number of approximations. First, the calculations are made under the assumption of local thermodynamic equilibrium (LTE). To check that this did not strongly affect our results, we used the publicly available non-LTE radiative transfer code RADEX (van der Tak et al. 2007) to verify that the excitation conditions were consistent with LTE. Secondly, our calculations do not consider the opacity due to possible spatial overlap between different velocity components.
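For reference, the standard optically thin LTE relation underlying this kind of column-density estimate can be written compactly. The sketch below is a schematic single-line version of what a program like myXCLASS evaluates (the real modeling also treats opacity and multiple velocity components); all numbers in the example are invented, representative values, not our fitted results.

```python
import math

# Physical constants (SI)
k_B = 1.380649e-23    # J/K
h   = 6.62607015e-34  # J s
c   = 2.99792458e8    # m/s

def column_density_lte(freq_ghz, a_ul, g_u, e_u_k, w_k_kms, t_ex, q_tex):
    """Total column density (cm^-2) from one optically thin line in LTE.

    freq_ghz : line rest frequency [GHz]
    a_ul     : Einstein A coefficient [s^-1]
    g_u      : upper-state degeneracy
    e_u_k    : upper-state energy E_u/k [K]
    w_k_kms  : integrated intensity W = int T_mb dv [K km/s]
    t_ex     : excitation temperature [K]
    q_tex    : partition function evaluated at t_ex
    """
    nu = freq_ghz * 1e9
    w = w_k_kms * 1e3                                          # K m/s
    n_u = 8 * math.pi * k_B * nu**2 * w / (h * c**3 * a_ul)    # m^-2, upper level
    n_tot = n_u * (q_tex / g_u) * math.exp(e_u_k / t_ex)       # LTE correction
    return n_tot * 1e-4                                        # -> cm^-2

# Example with invented but HCN(4-3)-like numbers:
print(f"N ~ {column_density_lte(354.5, 2.0e-3, 9, 43.0, 20.0, 50.0, 24.0):.2e} cm^-2")
```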
To decide to what extent this problem might affect our conclusions, high angular resolution observations will be necessary. The column densities resulting from our analysis are given for each species in Table 2, and the corresponding model spectra are shown in Figure 1. We note that the CO column density itself is somewhat uncertain due to its substantial opacity.
The abundance of each species was calculated relative to CO and to molecular hydrogen, assuming a column density N(H 2 ) = 3 × 10 22 cm −2 as estimated to be appropriate for this part of the Homunculus by Smith et al. (2006). We note, however, that a somewhat larger column density of H 2 might also be plausible. Based on sub-millimeter and far-infrared observations, Gomez et al. (2010) recently determined the mass of dust surrounding η Carinae to be about 0.4 M ⊙ . For a standard gas-to-dust ratio of 100, this would yield an average H 2 column density of about 2 × 10 23 cm −2 . If the dust distribution were clumpy, however, the column density appropriate for the individual clumps might be several times larger, and the abundances quoted in Table 2 could be proportionately lower.
Discussion
Within its uncertainty, the abundance of CO derived here for η Carinae is similar to its canonical interstellar value of 10 −4 . It is also similar to the typical CO abundances found in O-rich circumstellar envelopes (Ziurys et al. 2009). Thus, CO does not appear to be under-abundant in the Homunculus Nebula, contrary to previous claims based on unsuccessful CO searches. The abundance of HCO + is similar to its value in dense massive cores (∼ 2 ×10 −8 ; Vasyunina et al. 2011) and in the dense envelopes surrounding low-mass stars (∼ 1.2 × 10 −8 ; Hogerheijde et al. 1997). It is also in the mid-range of observed abundances in evolved stars with oxygen-rich circumstellar envelopes (0.5-13 × 10 −8 ; Pulliam et al. 2011), but significantly larger than the abundance in carbon-rich stars, such as IRC+10216 (where it is 4× 10 −9 ; Pulliam et al. 2011). We conclude that both CO and HCO + have roughly standard abundances in the Homunculus.
The nitrogen-bearing molecules, on the other hand, are found to be highly over-abundant in η Carinae. While the average abundance of HCN and HNC in low- and high-mass dense cores is 2−7 × 10⁻⁹ (Vasyunina et al. 2011), their abundances in η Carinae are 0.7−2 × 10⁻⁷ (Table 2). Similarly, the abundance of N₂H⁺ is about two orders of magnitude higher in the Homunculus than in dense cores (where it is, on average, 2 × 10⁻⁹; Vasyunina et al. 2011). The situation is much the same for ammonia, showing that nitrogen-bearing molecules are consistently one order of magnitude more abundant in the Homunculus than in the dense interstellar medium. Although chemical effects might affect the abundance of specific individual molecules, this combination of results suggests that the abundance of nitrogen itself must be enhanced by one order of magnitude in the Homunculus.

Table 2: Estimated column densities and abundances

  Species   N (cm⁻²)       N/N(H₂) (a)    N/N(CO)
  CO        6.5 × 10¹⁸     2.2 × 10⁻⁴     1
  ¹³CO      1.4 × 10¹⁸     4.7 × 10⁻⁵     2.2 × 10⁻¹
  CN        9.0 × 10¹⁵     3.0 × 10⁻⁷     1.4 × 10⁻³
  HCO⁺      1.7 × 10¹⁵     5.7 × 10⁻⁸     2.6 × 10⁻⁴
  HCN       5.5 × 10¹⁵     1.8 × 10⁻⁷     8.5 × 10⁻⁴
  H¹³CN     3.1 × 10¹⁵     1.0 × 10⁻⁷     4.8 × 10⁻⁴
  HNC       2.1 × 10¹⁵     7.0 × 10⁻⁸     3.2 × 10⁻⁴
  N₂H⁺      6.1 × 10¹⁵     2.0 × 10⁻⁷     9.4 × 10⁻⁴

  (a) Abundance of each species relative to H₂, assuming N(H₂) = 3 × 10²² cm⁻².

The comparison between the abundances of N-bearing species in η Carinae and those in O-rich circumstellar envelopes is somewhat confusing. While HCN is about one order of magnitude less abundant in η Carinae than in O-rich envelopes, the abundance of HNC is similar in both kinds of objects. As a consequence, the [HCN]/[HNC] ratio in η Carinae is of the order of a few, similar to its value, of order unity, in quiescent interstellar cores (Padovani et al. 2011), but very different from its value (a few hundred) in oxygen-rich envelopes (Ziurys et al. 2009). CN, on the other hand, is about one order of magnitude more abundant in η Carinae than in O-rich envelopes (Ziurys et al. 2009).
An important conclusion of our observations concerns the relative abundance of the isotopic forms of carbon. The [HCN]/[H¹³CN] ratio is estimated to be about 2, while the [CO]/[¹³CO] ratio is of order 5. This is much smaller than the interstellar ¹²C/¹³C isotopic ratio at the galactocentric radius of η Carinae (∼70; Milam et al. 2005). On the other hand, such a low value of the ¹²C/¹³C ratio is an expected result of the CNO cycle. In particular, for the CNO cycle at a temperature of 10⁸ K, the expected equilibrium value of the ¹²C/¹³C ratio is of order 4 (Rose 1998, Chap. 6), in very good agreement with the isotopic ratio measured here. The high abundance of nitrogen in the ionized gas surrounding the Homunculus and in the Homunculus itself (as documented above) is also an expected consequence of the CNO process. Thus, the molecular observations presented here strongly support the idea that the material expelled during the Great Eruption is CNO-processed stellar matter.
We mentioned in the Introduction that dust grains must have formed out of the material ejected by η Carinae during the Great Eruption. The present results demonstrate that large quantities of molecular material have also formed out of this ejecta. It will be interesting to analyze the chemistry that led to the formation of these molecules from the theoretical standpoint, because the elemental composition of the gas (particularly the N and ¹³C enrichment) and the physical conditions (especially the strong UV field) are very different from those in the interstellar gas. Additionally, the chemistry at play occurred in just a few decades. From the observational point of view, it will be important to further characterize the molecular content of η Carinae. Searching for additional nitrogen-bearing molecules such as HC₃N would be particularly interesting. To further characterize the isotopic composition of the gas, it would also be important to search for molecules containing the ¹⁵N, ¹⁷O, and ¹⁸O isotopes, because CNO nucleosynthesis models make specific predictions for the relative abundances of these elements. Finally, it would be very interesting to characterize the spatial distribution of the molecular material in the Homunculus. Our observations suggest a source size of order 1″, but it is clear from the composite nature of the line profiles that observations at sub-arcsecond resolution would enable a detailed study of the spatial distribution of the molecular material and of its kinematics. ALMA will, of course, be the instrument of choice for such observations.
Conclusions and perspectives
In this Letter, we have reported the detection of six new molecules, including carbon monoxide, and of two of their less abundant isotopic forms toward η Carinae. This triples the number of molecules known in this object. While the abundances of CO and HCO⁺ are found to be standard, molecules containing nitrogen or the ¹³C isotopic form of carbon are over-abundant by about one order of magnitude. This indicates that the material expelled by η Carinae during the Great Eruption is CNO-processed stellar matter.
Additional single-dish and interferometric observations will be very important to further characterize the chemical composition of the gas in the Homunculus, and to establish its spatial distribution. Observations of additional nitrogen-bearing molecules and of species containing specific isotopes of carbon, oxygen, and nitrogen will be particularly interesting. Herschel spectroscopic observations, currently being collected, will also provide very interesting, complementary information.
SWATCH: Common software for controlling and monitoring the upgraded CMS Level-1 trigger
The Large Hadron Collider at CERN restarted in 2015 with a 13 TeV centre-of-mass energy. In addition, the instantaneous luminosity is expected to increase significantly in the coming years. In order to maintain the same efficiencies for searches and precision measurements as those achieved in the previous run, the CMS experiment upgraded the Level-1 trigger system. The new system consists of on the order of 100 electronics boards connected by approximately 3000 optical links, which must be controlled and monitored coherently through software, with high operational efficiency. These proceedings present the design of the control software for the upgraded Level-1 Trigger, and the experience from using this software to commission and operate the upgraded system.
Introduction
After a two-year shutdown, the Large Hadron Collider (LHC) at CERN, restarted collisions in 2015, with a significantly higher proton-proton centre-of-mass energy, reaching 13 TeV. The Level-1 trigger system of the Compact Muon Solenoid (CMS) experiment selects 100 kHz of the most interesting collision events from the 40 MHz rate delivered by the LHC. This is achieved in a time window of 3.2 μs using coarse data from the calorimeter and muon detectors, while the full resolution data is held in pipeline memories in the front-end electronics.
The Level-1 trigger of the CMS experiment underwent a major upgrade during 2015 and early 2016 [1] in order to cope with the increasingly demanding beam conditions delivered by the LHC. The VME-based system used in Run 1 and at the beginning of Run 2 of the LHC has been replaced by custom-designed processors based on the uTCA specification. The new design offers increased flexibility due to the use of high-bandwidth optical links between processor cards, modern FPGAs and larger memories for the trigger logic. In addition, the diversity of the hardware has been greatly reduced to a small number of general-purpose boards. This upgrade plan involved changing about 90% of the previous Level-1 trigger hardware. As a natural consequence, a significant fraction of the firmware, low-level drivers, control software and the databases that are used to configure, control and monitor the system had to be adapted to the new system. The increased homogeneity of the hardware offered the possibility for a similar consolidation of the online software, identifying and taking advantage of common components wherever possible. In these proceedings, we present the design of the software that has been developed and used to control and monitor the upgraded Level-1 trigger system and the experience obtained from using this software to commission and operate the upgraded system.
The upgraded CMS Level-1 trigger
The CMS Level-1 trigger is composed of 9 subsystems (CPPF to be installed in 2017 during the Extended Technical Stop), connected as shown in Figure 1. Each subsystem comprises one or more processor boards, housed in uTCA crates capable of hosting up to 12 Advanced Mezzanine Card (AMC) modules. A common module, the AMC13 [2], provides the clock, data acquisition services and a feedback mechanism for the Trigger Throttling System, which monitors the status of the data buffers to avoid them becoming full. The data processing logic within each of the AMC cards follows a common model, shown in Figure 2. This is implemented on modern Xilinx Virtex-7 FPGAs and data is transported via high-speed serial optical links. The logic of each board follows a common pattern, with each board consisting of:
• an algorithm block that performs reconstruction and processes the data,
• a Trigger Timing and Control (TTC) block that receives the clock and fast (fixed-latency) control commands,
• zero or more ports, and
• a readout block that sends the data from the input/output buffers to the Data Acquisition system of CMS.
The recorded data is then used to validate the correct functionality of the firmware.
Three varieties of boards have been designed based on this common processor model, each optimized for a different task:
• the Calorimeter Trigger Processor cards have dedicated connections to the uTCA backplane for data sharing within the same crate,
• the Master Processor cards are optimized for data sharing via a large number of optical inputs and outputs, and
• the Muon Track Finder boards are capable of providing the large memory resources necessary for the CMS Muon Trigger.
The SWATCH Control Software
The similarities in the upgraded Level-1 trigger hardware lead naturally to a generic software design that can fully exploit them. The SWATCH (SoftWare for Automating the conTrol of Common Hardware) framework provides a set of interfaces for controlling and monitoring the hardware of the trigger system while remaining independent of the driver software, thus reducing code and effort duplication for the subsystems. The architecture of each subsystem (processors, ports and interconnections) is stored in subsystem-agnostic data structures with each component represented by an abstract interface class. Subsystem-specific functionality is implemented in classes inheriting from the generic interface classes. The objects that represent a subsystem are built using the factory pattern, with the subsystem-specific implementations of the generic system, process and DAQ-TTC manager classes registered in subsystem-specific "plugin" libraries.
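As an illustration of this factory-based design, the following Python sketch mimics the registration of subsystem-specific processor classes in "plugin" modules behind a generic interface. All names here are invented; the sketch only conveys the registration/factory pattern described above, not SWATCH's actual API.

```python
# Minimal sketch of a SWATCH-style factory/plugin pattern (names invented).

class Processor:
    """Generic interface that subsystem code must implement."""
    def configure(self, params): raise NotImplementedError
    def read_status(self): raise NotImplementedError

_registry = {}

def register_processor(creator_id):
    """Decorator used by subsystem 'plugin' modules to register classes."""
    def wrap(cls):
        _registry[creator_id] = cls
        return cls
    return wrap

def create_processor(creator_id, *args, **kwargs):
    """Factory: build a processor from subsystem-agnostic configuration data."""
    return _registry[creator_id](*args, **kwargs)

# A subsystem plugin would then contain something like:
@register_processor("calo::MainProcessor")
class CaloMainProcessor(Processor):
    def configure(self, params):
        print("writing", params, "to calo hardware")
    def read_status(self):
        return {"links": "OK"}

proc = create_processor("calo::MainProcessor")
proc.configure({"threshold": 3})
```

The framework code only ever sees the generic Processor interface, so new subsystems can be added by shipping a plugin library without touching the common code — which is the point of the design described above.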
Configuration
In order to take full advantage of the common architecture across subsystems as identified in the processor model, the SWATCH framework provides a generic interface for controlling each electronics board based on three concepts:
• Commands: stateless actions and the basic building block of hardware control. They are represented by an abstract base class, customized by subtype polymorphism for each subsystem.
• Command sequences: a stateless action, composed of a series of commands that is executed in sequence.
• Finite state machines (FSMs): the operational states of a subsystem and the transitions between them, each transition executing commands or command sequences.
The parameter values for these actions are retrieved through a backend-agnostic "gatekeeper" interface, which can read them either from XML files or from a database. This flexibility proved invaluable during the commissioning of the Level-1 trigger hardware, as it enabled quick testing using file-based configurations either at institutes outside of CERN or at the CMS Control Room. The backend-agnostic gatekeeper interface ensured a smooth transition when the control software was switched to reading the parameter values from the database when regular data taking commenced. A schematic overview of the SWATCH hardware configuration interfaces can be seen in Figure 3.
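A toy version of the command/command-sequence abstraction might look as follows; again, the names are invented and this is only a sketch of the idea, not the actual interfaces.

```python
# Sketch of stateless commands and command sequences (hypothetical names).

class Command:
    """Base class for a stateless hardware action."""
    def __init__(self, name):
        self.name = name
    def code(self, params):
        """Subsystem-specific body; returns a result string."""
        raise NotImplementedError
    def execute(self, params):
        print(f"running {self.name} ...")
        return self.code(params)

class Reset(Command):
    def code(self, params):
        return f"reset with clock source {params['clk_src']}"

class AlignLinks(Command):
    def code(self, params):
        return f"aligned {params['n_links']} optical links"

class CommandSequence:
    """A series of commands executed in order, itself stateless."""
    def __init__(self, *commands):
        self.commands = commands
    def execute(self, params):
        return [cmd.execute(params) for cmd in self.commands]

setup = CommandSequence(Reset("reset"), AlignLinks("align"))
print(setup.execute({"clk_src": "TTC", "n_links": 72}))
```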
Monitoring
In order to efficiently monitor the status of the hardware, SWATCH offers two mechanisms inheriting from a generic monitoring interface (Figure 4):
• Metrics represent individual items of monitored data, read directly from the hardware. Each metric can have associated error/warning conditions, which determine its state.
• Monitorable objects can contain metrics and/or other monitorable objects as child nodes.
The overall state of a monitorable object is determined by the cumulative status of all its child metrics and monitorable objects. In SWATCH, each of the common firmware components within processors and DAQ-TTC managers is represented as a monitorable object.
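One natural reading of this cumulative-status rule — a monitorable object's state is the worst state among its children — is easy to express in code. The following sketch again uses invented names and is not the framework's real API.

```python
# Sketch of metrics with warn/error conditions and hierarchical status roll-up.

GOOD, WARNING, ERROR = 0, 1, 2   # ordered by increasing severity

class Metric:
    def __init__(self, name, value, warn=None, error=None):
        self.name = name
        self.value = value
        self.warn = warn        # predicate: value -> bool
        self.error = error
    def status(self):
        if self.error and self.error(self.value):
            return ERROR
        if self.warn and self.warn(self.value):
            return WARNING
        return GOOD

class MonitorableObject:
    def __init__(self, name, children):
        self.name = name
        self.children = children   # metrics and/or other monitorable objects
    def status(self):
        # Overall state = worst state among all child nodes.
        return max(child.status() for child in self.children)

port = MonitorableObject("rx_port_03", [
    Metric("crc_errors", 0, error=lambda v: v > 0),
    Metric("link_temperature", 48.0, warn=lambda v: v > 45.0,
           error=lambda v: v > 60.0),
])
print(port.status())   # -> 1 (WARNING, from the temperature metric)
```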
The history of all monitored information is stored in an Oracle database, with common visualization tools under development. For the future, using a NoSQL database and Elastic Search is under consideration in order to take advantage of the greater flexibility offered in both data storage and retrieval.
Database
All SWATCH configuration data are stored using an Oracle relational database. A single, common database schema exists for all the trigger subsystems. This eases operations as the development and maintenance of the database structure is controlled by a small group of experts and in addition all information and data formats are standardized across trigger subsystems.
The values for the configuration parameters are split into four distinct XML-based modules, each containing a set of criteria belonging to a specific group:
• Hardware contains the full system architecture description (boards, links, etc.)
Distributed control
Besides hardware control and monitoring, the Level-1 trigger software has to provide the means to integrate the system into the CMS run control hierarchy so that it can be operated by the shift crew in the CMS control room. The Trigger Supervisor [3] is a C++ framework which provides the required interfaces to create online applications that can be controlled via web graphical user interfaces and can communicate over the network by exchanging SOAP messages. Global running is coordinated by the Central Cell, a Trigger Supervisor application orchestrating the configuration of all Level-1 trigger components, enforcing the rules for their appropriate starting order and the relative configuration timing alignment between subsystems. All these components use SWATCH for hardware control and the Trigger Supervisor to create the user interface and to handle network communications. The Central Cell is controlled by the Level-1 Trigger Function Manager, a Java application responsible for relaying messages from the central run control. All user interfaces are rendered using HTML5 technologies and Polymer [4].
Baseline SWATCH system
To further facilitate the development of subsystem code, a basic SWATCH skeleton code was developed and made available, through which monitoring and control features are exposed. Thanks to the common processor and system architecture, a set of subsystem-agnostic common panels to control and monitor the hardware was developed (Figure 6). These allow subsystem experts and the shift crew to:
• control the hardware by executing commands, command sequences or FSM transitions, and get immediate feedback on the results of these actions,
• view the monitoring status of a system and its individual processors, ports, etc.,
• check monitored metric values and their warn/error conditions, and
• plot metric values in real time for easy inspection.
The added flexibility from the subsystem-agnostic panels significantly reduced the personpower needed to develop the online software during the commissioning period.
Commissioning and operations
Thanks to the SWATCH architecture, a high operational efficiency was achieved in a short period of time, with only one month between the first integration of the online software in the central run control and the first commissioning with beam. This was largely due to its hardware-agnostic design, which takes advantage of the commonalities between the electronics boards and maps them onto the control software. The low-level hardware drivers were SWATCH-independent and the bridge was realized via "plugin" libraries. Moreover, due to the existence of common hardware between trigger subsystems, only one SWATCH "plugin" had to be developed per board type. Extensive hardware-independent testing allowed many issues to be diagnosed and patched before software deployment. The framework was continuously tested by a comprehensive nightly build test suite, and a parallel installation that mirrored the production system using "dummy" hardware was set up to test the overall configuration.
Conclusions
The SWATCH software framework provides a generic way to control and monitor the upgraded CMS Level-1 trigger hardware. It facilitated the successful replacement of a long-running and stable trigger system, despite the short transition time. Its subsystem-agnostic approach led to significant advantages over the previous control and monitoring software, by reducing code duplication and allowing the developers to focus on subsystem-specific issues. The existence of a uniform graphical user interface across the different subsystems simplified the training of the operations personnel at the CMS control room and of experts alike. A common database schema to store the hardware description and configuration parameters has also been adopted. The introduction of the SWATCH framework has allowed the CMS Level-1 trigger to achieve high operational efficiency in a short amount of time, with only minimal downtime attributed to trigger issues during the 2016 data-taking period.
Schur's theory for partial projective representations
This article focuses on those aspects about partial actions of groups which are related to Schur's theory on projective representations. It provides an exhaustive description of the partial Schur multiplier, and this result is achieved by introducing the concept of a second partial cohomology group relative to an ideal, together with an appropriate analogue of a central extension. In addition, the new framework is proved to be consistent with the earlier notion of cohomology over partial modules.
Introduction
It is becoming relatively common to encounter the word partial as a prefix for some classical terminology in algebra. This phenomenon is due to the increasing awareness of the algebraic relevance of partial symmetries, which explicitly or implicitly appear in various local-global aspects of well-known mathematical topics. It was indeed the theory of C*-algebras which motivated the introduction of the notions of partial actions and partial representations of groups [17]. In this regard, the most prominent application of partial actions is their involvement in the construction of a crossed product, which encloses relevant classes of algebras [7]. Partial representations are intimately related to partial actions in various ways; in particular, they play a fundamental role in providing partial crossed product structures on concrete algebras. This can be seen applied to a series of important algebras, among the most recent examples being C*-algebras related to dynamical systems of type (m, n) [2], the Carlsen-Matsumoto algebras of subshifts [8] and algebras related to separated graphs [1]. In the latter paper, a partial-global passage was remarkably applied to a problem on paradoxical decompositions.
The general concept of a twisted partial group action, used to define more general crossed products, involves a kind of 2-cocycle (see [9,16]), raising the desire to fit it into some cohomology theory. In order to obtain some testing material for the development of such a theory, partial projective group representations were introduced and studied in [13,14,15,28]. The notion of a partial multiplier naturally appears in analogy with the classical Schur theory of projective representations. Since the algebraic structures involved in partial representations are not only groups but also inverse semigroups, the new theory diverges from the original one, and the multiplier results in a semilattice of abelian groups, called components. That is, briefly, a collection composed of abelian groups and parametrized by a meet semilattice, where the operations of group multiplication are compatible with the partial order. Subsequently, a cohomology theory based on partial actions was developed in [10,11,12]. Notice that partial cohomology turned out to be useful to deal with the ideal separation property of (global) reduced C*-crossed products in [25], where partial 2-cocycles appear as factor sets (twists) of partial projective representations (called twisted partial representations in [25]) naturally related to (global) C*-dynamical systems.
With respect to the partial Schur multiplier, it was shown in [10, Theorem 2.14] that each component is a union of cohomology groups with values in not necessarily trivial partial modules. Furthermore, in the case of an algebraically closed field K, it was proved in [15, Theorem 5.9] that each component of the partial Schur multiplier is an epimorphic image of a direct product of copies of K×. However, computations of the partial Schur multiplier of concrete groups show that each component is, in fact, isomorphic to a direct product of copies of K× (see [6,15,26,27,28,29]), raising the conjecture that this should be true for all groups.
The present article is inspired by the aforementioned conjecture, to which it provides an affirmative answer for finite groups (Corollary 5.3), and it brings an essential refinement to our understanding of partial projective group representations. To this aim, the theory is reformed to encompass a broader setting, specifically to allow arbitrary coefficients and to establish a partial analogue of Schur's theory of central extensions (in particular, see Theorem 3.3). Influenced by known properties of partial factor sets, the concepts of a pre-cohomology group, of a second partial cohomology group relative to an ideal, and of a second partial cohomology semilattice of groups with values in an arbitrary abelian group are introduced. Then the partial Schur multiplier is shown to coincide with the second partial cohomology semilattice of groups with values in K×. This new level of abstraction brings standard techniques into play, which promptly produce new theorems.
Most remarkably, in the finite case the pre-cohomology groups are completely determined by minor information about the underlying group (Theorem 4.3). Also, taking the ideals into account, a general statement is established for the second partial cohomology groups (Theorem 5.2). Furthermore, considering appropriate partial G-modules, one is able to conclude, in particular, that each component of the partial Schur multiplier is isomorphic to a single partial cohomology group (see Theorem 6.1) in the sense of [10], thus bringing the partial Schur theory closer to the classical one.
The text is structured as follows: in section 2 some fundamental concepts in the theory of partial actions of groups and in Schur's theory will be recalled; in section 3 new concepts will be introduced, and the partial analogue of Schur's theory with arbitrary coefficients will be developed; in sections 4 and 5, respectively, the pre-cohomology groups and the second partial cohomology semilattice of groups will be studied in detail; lastly, in section 6 the consistency with the previous notion of partial cohomology will be proved.
The universal inverse semigroup of a group
Inverse semigroups are the algebraic structures which capture the idea of partial symmetry [21]. In an inverse semigroup S the system of equations x y x = x, y x y = y admits a unique solution for y in terms of x, commonly denoted by y = x⁻¹. This is easily recognized to be a generalization of groups; still, it incorporates the semilattices, which are the semigroups consisting of commuting idempotents. A partial bijection over a set X is any bijection between subsets of X.
Multiplication of partial bijections is defined as follows: if α : A → A_α and β : B → B_β, then β ∘ α is their composition on the largest subset where it makes sense, i.e. α⁻¹(A_α ∩ B). Necessarily, one has to admit the existence of the map ∞ : ∅ → ∅, and this serves as the zero element. As the reader has noticed, we convene to denote by ∞ the zero element of a semigroup, if this ever exists, coherently with the additive notation for abelian groups.
The set of partial bijections over X turns out to be an inverse semigroup in the obvious way, that is by inversion of functions, which is named the symmetric inverse semigroup I(X). We recall that, given an ideal I of a semigroup S, the Rees congruence is defined by setting x ∼ y when either x = y or both x and y belong to I. The corresponding quotient is denoted by S/I. In general, if ϑ : S → T is a homomorphism of semigroups and T admits the zero ∞, then ϑ −1 (∞) is an ideal of S.
Inverse semigroups apply to the theory of groups through the notions of partial actions and partial homomorphisms of groups [17]. A partial action is a map ψ : G → I(X) satisfying the compatibility conditions that ψ(1) = id_X and that the composition ψ(g) ∘ ψ(h) coincides with a restriction of ψ(gh). In turn, this notion turns out to be a special case of the following. A partial homomorphism is a map ψ from a group G into a semigroup M satisfying, for any pair of elements g and h of G, the conditions
ψ(g⁻¹) ψ(g) ψ(h) = ψ(g⁻¹) ψ(gh),   (2)
ψ(g) ψ(h) ψ(h⁻¹) = ψ(gh) ψ(h⁻¹),   (3)
ψ(g) ψ(g⁻¹) ψ(g) = ψ(g).   (4)
These relations, which openly emerge from the compatibility requirement of a partial action, are used to introduce the universal inverse semigroup S(G) associated with G (see [17]). Precisely, S(G) is the abstract semigroup generated by symbols Π(g) subject to relations analogous to (2)-(4), so that the map Π : G → S(G) is an injective partial homomorphism, and this construction is universal in the sense that every partial homomorphism ψ extends uniquely, by means of Π, to a semigroup homomorphism ψ̄. In fact, S(G) is isomorphic to the Birget-Rhodes expansion of G [24] (see also [3,4]). Thus, a generic element of S(G) is identified with a pair (R, g), where R is a finite subset of G containing 1 and g, whereas multiplication obeys the rule (R, g) · (S, h) = (R ∪ gS, gh).
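To make the multiplication rule concrete, here is a small computational sketch of the Birget-Rhodes expansion for the cyclic group Z₆ (written additively), with elements encoded as pairs of a frozenset and a group element. The encoding and function names are our own and purely illustrative; they are not part of the paper's formal development.

```python
# Elements of S(G) for G = Z_6 (additive), encoded as pairs (R, g)
# with R a finite subset of G containing the identity 0 and g.

N = 6

def make(R, g):
    g = g % N
    return (frozenset(R) | {0, g}, g)

def mult(x, y):
    (R, g), (S, h) = x, y
    gS = frozenset((g + s) % N for s in S)
    return make(R | gS, g + h)            # (R, g)(S, h) = (R ∪ gS, gh)

def inv(x):
    (R, g) = x
    return make(frozenset((r - g) % N for r in R), -g)

a = make({0, 2}, 2)
b = make({0, 3}, 3)
print(mult(a, b))                          # (frozenset({0, 2, 5}), 5)
print(mult(mult(a, inv(a)), a) == a)       # True: x x^{-1} x = x
# The ideal N_3 (see below) consists of pairs whose first component has |R| > 3:
print(len(mult(a, b)[0]) > 3)              # False: this product is not in N_3
```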
In addition, with respect to (6), taking y = 1 and either x = g or x = gh in the above characterization (8) yields identities which prove that, given a partial homomorphism ψ, the subset of G × G consisting of those pairs (g, h) for which ψ(g) · ψ(h) = ∞ is invariant under the action of the symmetric group S₃ generated by the transformations
(g, h) ↦ (g⁻¹, gh) and (g, h) ↦ (gh, h⁻¹).   (12)
This action, which was discovered in [13], will be central in the present essay.
Another family of ideals can be described by the simple observation that, for any pair of elements (R, g) and (S, h) of S(G), since |R ∪ gS| ≥ |S| and |R∪gS| ≥ |R|, left and right multiplication are "non-decreasing operations".
Consequently, for any positive integer k, the set N_k = {(R, g) ∈ S(G) : |R| > k} is an ideal of S(G). In the present essay the crucial case is when k = 3. Indeed, in the notation of (5), for any partial homomorphism ψ considered hereby it will be assumed that the ideal I = ψ̄⁻¹(∞) is proper in S(G) and that it contains N₃; the collection of all such ideals forms a semilattice
Λ = {I : I is a proper ideal of S(G) containing N₃}.   (13)
This assumption is motivated by the theory of partial projective representations, where the semilattice plays a prominent role (see [15]). Taking this into account, suppose that Π(x, y, z) is not contained in I, for I = ψ̄⁻¹(∞) lying in Λ. Accordingly with (7), one has |{1, x, xy, xyz}| ≤ 3, which can be rewritten as
1 ∈ {x, y, z, xy, yz, xyz}.   (14)
Therefore, the failure of this condition implies ψ(x) · ψ(y) · ψ(z) = ∞.
A sketch of classical Schur's theory
The ordinary multiplier is the fundamental object in the theory of projective representations and central extensions [22,23,31]. Hereby, few of the very elementary concepts are mentioned in order to fix the notation and facilitate the description of their partial analogue.
A central extension of a group M is a group E endowed with a surjective homomorphism π : E → M such that A = ker π is contained in the center Z(E). This situation is summarized in the diagram (15). The interest is to consider such a central extension together with a group G and a homomorphism ψ : G → M. The archetypes are:
i. When M = G and ψ is the identity, then E is a central extension of G.
ii. When E = GL(n, C) and M = PGL(n, C) for some positive integer n, then ψ is a projective representation of G.
Hence, the choice of a section ϕ : G → E completes the commutative diagram (16). One cannot expect the map ϕ to be a homomorphism. However, the failure for this to happen is encoded in the equation
ϕ(x) ϕ(y) = σ(x, y) ϕ(xy),   (17)
where σ is a function in A^(G×G). Clearly, not any function is admissible: adopting the additive notation for group multiplication in A, associativity of E proves that σ satisfies the equality
σ(x, y) + σ(xy, z) = σ(y, z) + σ(x, yz)
for any x, y and z in G. In connection with cohomology this fact is restated as follows. The coboundary homomorphism δ² : A^(G×G) → A^(G×G×G) is defined by means of the relation
δ²σ(x, y, z) = σ(y, z) − σ(xy, z) + σ(x, yz) − σ(x, y),
and the group of cocycles is the kernel of this homomorphism, Z²(G, A) = ker δ². Thus, the function σ determined by (17) is an element of Z²(G, A). Luckily enough, the process is reversible, and any σ ∈ Z²(G, A) defines a central extension (15) of the group G as follows. The underlying set is A × G and multiplication is given by
(a, x) · (b, y) = (a + b + σ(x, y), xy).   (19)
Therefore, there is a correspondence between the central extensions and the cocycles of G which, however, depends on the choice of section. Nonetheless, any change of section, as can be seen easily, corresponds to the addition of a coboundary δ¹f, where f ∈ A^G and δ¹f(x, y) = f(y) − f(xy) + f(x). Hence, it is of interest to introduce the group B²(G, A) = im δ¹ and to consider the second cohomology group
H²(G, A) = Z²(G, A)/B²(G, A).
The terminology Schur multiplier applies to the case when G is a finite group and A = C×, which answers the original problem of classifying the projective representations. For general coefficients, the facts here discussed provide a one-to-one correspondence between the isomorphism classes of the central extensions of G by A and the elements of H²(G, A). Of course, the above definition is a special case in the theory of group cohomology (see [5]). Indeed, in the present manuscript it is always assumed that the (global) group actions over the coefficient modules are trivial. In view of this, the first cohomology group simply is H¹(G, A) = Hom(G, A).
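As a quick sanity check of this correspondence, the following sketch builds the extension A × G from a 2-cocycle and verifies associativity for a small example (G = Z₂, A = Z₂, with the nontrivial cocycle producing the cyclic group of order 4). This is illustrative code only; it simply implements the formulas displayed above.

```python
# Build the central extension A x G from a 2-cocycle sigma and check
# that associativity holds, as guaranteed by the cocycle identity.
from itertools import product

nG, nA = 2, 2                        # G = Z_2, A = Z_2 (written additively)
G = range(nG)

def sigma(x, y):
    # The nontrivial 2-cocycle on Z_2: sigma(1,1) = 1, else 0.
    return 1 if (x == 1 and y == 1) else 0

def cocycle_identity_holds():
    return all(
        (sigma(y, z) - sigma((x + y) % nG, z)
         + sigma(x, (y + z) % nG) - sigma(x, y)) % nA == 0
        for x, y, z in product(G, repeat=3))

def mult(p, q):
    (a, x), (b, y) = p, q            # (a, x)(b, y) = (a + b + sigma(x, y), xy)
    return ((a + b + sigma(x, y)) % nA, (x + y) % nG)

E = list(product(range(nA), G))
assoc = all(mult(mult(p, q), r) == mult(p, mult(q, r))
            for p, q, r in product(E, repeat=3))
print(cocycle_identity_holds(), assoc)   # True True; here E is cyclic of order 4
```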
The partial analogue notions
The partial counterpart of the theory also originates with the study of partial projective representations in K×-cancellative monoids [13,14,15] (previously named K-cancellative). However, the most general situation can be extended to semigroups and partial homomorphisms.
Firstly, referring to the diagram (15), in the current framework E will be taken as an A-cancellative central extension of a semigroup M by an abelian group A, in the following sense. A central extension is a semigroup E which contains A as a central subgroup, and M is the quotient semigroup E/A with respect to the following congruence: x ∼ y whenever there exist a and b in A such that ax = by. Given a central extension E endowed with zero ∞, this extension is called A-cancellative if, for every a, b in A and every x in E \ {∞}, the equality ax = bx implies a = b. Observe that this condition yields that, for every a in A, the equation ax = ∞ has no solution for x in E other than x = ∞.
In second place, the due modification of the diagram (16) is given by an A-cancellative central extension of M together with a group G and a partial homomorphism ψ : G → M. Notice that, when ∞ ∈ E, then M has at least two elements. In this case, it is convenient to assume that ψ(1) ≠ ∞ since, otherwise, ψ(g) = ∞ for all g. With respect to (3)-(4) and (17), this time the failure of the section ϕ to be a partial homomorphism can be shown to be encoded in the equations
ϕ(g⁻¹) ϕ(g) ϕ(h) = σ(g, h) ϕ(g⁻¹) ϕ(gh),   (21)
ϕ(g) ϕ(h) ϕ(h⁻¹) = σ(g, h) ϕ(gh) ϕ(h⁻¹),   (22)
where σ(g, h) is an element of A. Actually, it is convenient to regard σ(g, h) as an element of the semigroup A∞ = A ∪ {∞}, obtained by adjoining the zero ∞ to the abelian group A, and to set σ(g, h) = ∞ whenever the above equations reduce to the triviality ∞ = ∞.
Similarly to the case of a partial projective representation over a field, the fact that the same function σ appears in both (21) and (22) is not obvious, but the arguments given in the proof of [13, Theorem 3] can be easily adapted to this case. Moreover, also in the current situation the map σ need not satisfy the cocycle identity [13, page 260]. However, by means of (10), associativity of E applied to the relation ϕ(x⁻¹)·ϕ(x)·ϕ(y)·ϕ(z)·ϕ(z⁻¹) shows that δ²σ(x, y, z) = 0 whenever Π(x, y, z) does not belong to the ideal I = ψ̄⁻¹(∞). Accordingly to (14), it is of interest to introduce the following:

Definition 1. A pre-cocycle is a function σ ∈ A^(G×G) such that δ²σ(x, y, z) = 0 whenever (14) holds, that is, whenever 1 ∈ {x, y, z, xy, yz, xyz}. The group of pre-cocycles is denoted by pZ²(G, A), and the quotient pH²(G, A) = pZ²(G, A)/B²(G, A) is the pre-cohomology group of G.
The zero ∞, which does not appear in the above definition, is reintroduced as follows. Any fixed ideal I in the semilattice Λ, introduced in (13), defines an additive characteristic function ǫ_I : G × G → A∞ obeying
ǫ_I(g, h) = ∞ if Π(g, h) ∈ I, and ǫ_I(g, h) = 0 otherwise.
In these terms, the map A^(G×G) → A∞^(G×G) associating α ↦ α + ǫ_I is a homomorphism of semigroups.
Definition 2. In the current notation, define
Z²(G, I; A) = pZ²(G, A) + ǫ_I and B²(G, I; A) = B²(G, A) + ǫ_I.
Then, the quotient
H²(G, I; A) = Z²(G, I; A)/B²(G, I; A)
is the second partial cohomology group relative to I with coefficients in A.
We shall see below (Theorem 6.1) that this definition is coherent with the notion of partial cohomology groups given in [10]. Moreover, this fact justifies considering only ideals containing N₃.
Clearly, the identity element of H²(G, I; A) is the class [ǫ_I]. In addition, in A∞^(G×G) one has ǫ_I + ǫ_J = ǫ_(I∪J). Thus, by allowing external sums, that is [σ + ǫ_I] + [τ + ǫ_J] = [σ + τ + ǫ_(I∪J)], it is possible to join all of these groups into the second partial cohomology semilattice of groups
H²(G, Λ; A) = ⋃_(I∈Λ) H²(G, I; A).
From the above discussion, it follows that a map σ defined by (21)-(22) is an element of Z²(G, I; A) for I = ψ̄⁻¹(∞). The following construction, which is analogous to (19), provides the converse of this statement. Namely, for any ideal I in Λ, every element of Z²(G, I; A) can be obtained as a map σ arising from an A-cancellative central extension together with a partial homomorphism, as in (21)-(22).
Indeed, given σ ∈ Z²(G, I; A), one may take E = (A × (S(G) \ I)) ∪ {∞}, with multiplication (λ, X)·(µ, Y) = (λ + µ + σ(x, y), XY) when XY ∉ I and ∞ otherwise, where x and y denote the group components of X and Y: this is an A-cancellative central extension which is associated with σ.
Proof. To prove the associativity of E, given X = (R, x), Y = (S, y) and Z = (T, z) in S(G) \ I together with λ, µ and ν in A, one considers the two ways of computing the product (λ, X)·(µ, Y)·(ν, Z). There is no loss of generality in assuming that XYZ does not belong to I, since otherwise the product equals ∞. In particular, associativity corresponds to the cocycle identity δ²σ(x, y, z) = 0 under this condition. Write XYZ = (R ∪ xS ∪ xyT, xyz).
It is routine to check that the isomorphism class of the extension E only depends on the partial cohomology class of σ in H²(G, I; A); consequently, the elements of H²(G, I; A) classify the associated A-cancellative central extensions up to isomorphism (Theorem 3.3). In particular, in the case A = K×, any partial cocycle in Z²(G, Λ; K×) defines a K×-cancellative monoid together with a partial projective representation, and vice-versa. Therefore, the partial multiplier pM(G) introduced in [13] turns out to be a partial cohomology semilattice of groups:

Proposition 3.4. Let G be a group and K a field. Then the partial multiplier pM(G) over K coincides with H²(G, Λ; K×).
Pre-cohomology
Lemma 4.1. The pre-cocycles can be characterized in several different ways; for σ ∈ A^(G×G) the following conditions are equivalent:
i. 1 ∈ {x, y, z, xy, yz, xyz} ⇒ δ²σ(x, y, z) = 0;
ii. 1 ∈ {xy, xyz} ⇒ δ²σ(x, y, z) = 0;
iii. for every g, h ∈ G the following holds:
Moreover, if the above properties are satisfied, then:
Proof. Denote by i.x the property [x = 1 ⇒ δ²σ(x, y, z) = 0, ∀y, z], and so on, so that the property i is the conjunction of the properties i = i.x ∧ i.y ∧ i.z ∧ i.xy ∧ i.yz ∧ i.xyz. The proof is divided into eight steps.
The above description of the pre-cocycles allows one to relate these functions with the action of S 3 defined in (12).
Proposition 4.2.
Let G be any group. Introduce on G × G the following equivalence relation: for any element g of G set
(g, 1) ∼ (1, g) ∼ (1, 1) and (g, g⁻¹) ∼ (g⁻¹, g),
and, for any pair of elements g and h of G satisfying g ≠ 1 ≠ h ≠ g⁻¹, set
(g, h) ∼ (g⁻¹, gh) ∼ (gh, h⁻¹) ∼ (h, (gh)⁻¹) ∼ (h⁻¹, g⁻¹) ∼ ((gh)⁻¹, g).
Then the group of pre-cocycles pZ²(G, A) is isomorphic with the group of functions A^Ω over the set of equivalence classes Ω = (G × G)/∼.
Proof. Fix a choice of representatives ∆ : Ω → G × G satisfying, for convenience, ∆((1, 1)∼) = (1, 1). As for the homomorphism pZ²(G, A) → A^Ω, simply take the composition σ ↦ σ ∘ ∆. The homomorphism A^Ω → pZ²(G, A), f ↦ σ, is defined as follows. First, set σ(g, 1) = σ(1, g) = f((1, 1)∼) and σ(g, g⁻¹) = σ(g⁻¹, g) = f((g, g⁻¹)∼) for any g ∈ G. Then, for any other class ω of Ω with ∆(ω) = (g, h), set σ(g, h) = f(ω) and, in accordance with the relations iii.a, iii.b, iii.e, iii.f, iii.g of Lemma 4.1, define the values of σ over the remaining elements of the orbit. It is easy to check that these homomorphisms are inverse of each other.
In the case of a finite group G, the previous result proves that pZ²(G, Z) is a finitely generated abelian group of functions, so that standard techniques can be used (see [19, §97]).

Theorem 4.3. Let G be a group of order n < ∞, and denote by G(k) the set of elements of order k in G. Then, for any abelian group A,
pH²(G, A) ≃ A^(m−n) ⊕ A/λ₁A ⊕ ⋯ ⊕ A/λₙA,
where m = |Ω| = (n² + 2|G(3)| + 3|G(2)| + 5)/6 and λ₁ | λ₂ | ⋯ | λₙ are the invariant factors of the subgroup B²(G, Z) in Z²(G, Z).

Proof. Since G is finite, Ω is also finite and, consequently, the group A^Ω is a finitely generated A-module. Denoting m = |Ω|, the above formula for m in terms of n, |G(2)| and |G(3)| can be easily achieved (a similar computation is given in [6]). Furthermore, if Ω = {ω₁, …, ω_m} then
A^Ω = A h_(ω₁) ⊕ ⋯ ⊕ A h_(ω_m),
where h_(ωᵢ) denotes the characteristic function h_(ωᵢ) : ω_j ↦ δ_ij (the Kronecker symbol). Hence, A^Ω ≃ A^m. By Proposition 4.2, unlike the analogue case in the classical theory, it follows that
pZ²(G, A) ≃ A ⊗ pZ²(G, Z).
In view of this fact, one first considers the case A = Z. In pZ²(G, Z) the subgroup Z²(G, Z) admits a complement: indeed, given any σ ∈ Z^(G×G) and a positive integer q, one has δ²(qσ) = 0 if and only if δ²σ = 0, so that Z²(G, Z) is a pure subgroup and thus, since pZ²(G, Z) is finitely generated, it is complemented [18, Theorem 28.2]. Hence, denoting by r the free-abelian rank of Z²(G, Z), one has pZ²(G, Z) = Z^(m−r) ⊕ Z²(G, Z), and so
pH²(G, Z) = Z^(m−r) ⊕ H²(G, Z).
It is well known that, since G is finite, H²(G, Z) is a finite group (see [5, Proposition 6.1, Corollary 10.2]). Therefore, in order to prove the theorem's claim for A = Z it is left to show that r = n. To see this, it is sufficient to find a finite index subgroup of Z²(G, Z) isomorphic with Z^n, and this goal is achieved by taking B²(G, Z). Indeed, since |H²(G, Z)| < ∞, it has finite index and, since n < ∞ implies that H¹(G, Z) = 0, the desired isomorphism is given by the differential δ¹ : C¹(G, Z) = Z^G → B²(G, Z). Returning to the case of generic coefficients, one has to consider the quotient pZ²(G, A)/B²(G, A) and, more precisely, to describe B²(G, A) as a subgroup of pZ²(G, A). The only difficulty is that, denoting by h_g the characteristic function relative to g in C¹(G, Z) = Z^G, the canonical (finite) set {δ¹h_g | g ∈ G} freely generates B²(G, Z) as a Z-module, but in B²(G, A) the corresponding generating set {1_A ⊗ δ¹h_g | g ∈ G} is not necessarily free. Consequently, a more delicate analysis of pZ²(G, Z) is needed in order to find a basis presenting a good behavior. To this aim, one considers the inclusion B²(G, Z) ⊆ Z²(G, Z) of finitely generated free Z-modules, and uses Smith's normalization theorem to obtain simultaneous bases (see [30, Theorem 8.61]); therefore, there exist elements f₁, …, fₙ of Z²(G, Z) together with nonnegative integers λ₁, …, λₙ, which are uniquely determined by imposing the condition that λᵢ divides λᵢ₊₁ for every i < n, satisfying Z²(G, Z) = Zf₁ ⊕ ⋯ ⊕ Zfₙ, B²(G, Z) = λ₁Zf₁ ⊕ ⋯ ⊕ λₙZfₙ.
As already remarked, there is an isomorphism in cohomology which allows one to compute the integral coefficients as follows:
H²(G, Z) ≃ Z/λ₁Z ⊕ ⋯ ⊕ Z/λₙZ.
At this moment one chooses a basis {g_(n+1), …, g_m} of the complement Z^(m−n) to obtain a decomposition pZ²(G, Z) = Zf₁ ⊕ ⋯ ⊕ Zfₙ ⊕ Zg_(n+1) ⊕ ⋯ ⊕ Zg_m. It is readily seen that 1_A ⊗ f₁, …, 1_A ⊗ g_m freely generate pZ²(G, A) as an A-module. Therefore, denoting by T = λ₁Af₁ ⊕ ⋯ ⊕ λₙAfₙ, it is left to prove that B²(G, A) = T. Since each δ¹h_g ∈ B²(G, Z) can be expressed as an integral combination of the elements λ₁f₁, …, λₙfₙ, it follows that 1_A ⊗ δ¹h_g ∈ T for every g ∈ G, so that B²(G, A) ≤ T. On the other hand, since the elements {δ¹h_g | g ∈ G} constitute a basis of B²(G, Z), every λᵢfᵢ is an integral combination of these elements, proving that λᵢ 1_A ⊗ fᵢ ∈ B²(G, A), so that T ≤ B²(G, A).
Partial cohomology relative to an ideal
In this section the results about the pre-cohomology groups are generalized to the partial cohomology groups relative to ideals I ∈ Λ.
Lemma 5.1. A function σ : G × G → A∞ belongs to Z²(G, I; A) if and only if it satisfies the following two conditions:
i. σ(g, h) = ∞ if and only if Π(g, h) ∈ I;
ii. δ²σ(x, y, z) = 0 whenever Π(x, y, z) ∉ I.
Moreover, the group Z²(G, I; A) is isomorphic with the group of functions A^(Ω_I), where Ω_I denotes the set of classes in Ω admitting a representative ∆(ω) = (g, h) with Π(g, h) ∉ I.
Proof. Clearly, any function of the form σ = σ′ + ǫ_I, where σ′ ∈ pZ²(G, A), satisfies i. To see that σ also satisfies ii, firstly one has to show that no summand of δ²σ is equal to ∞; equivalently, that none of Π(x, y), Π(xy, z), Π(x, yz) and Π(y, z) is contained in I. To this aim, the cases of Π(x, y) and Π(y, z) are trivial; on the other hand, assuming that Π(xy, z) ∈ I yields Π(x⁻¹, x, y, z) = Π(x⁻¹, xy, z) ∈ I, but then also Π(x, y, z) = Π(x, x⁻¹, x, y, z) ∈ I, contradicting the hypothesis; the case of Π(x, yz) is similar. Therefore, when Π(x, y, z) ∉ I, then δ²σ(x, y, z) = δ²σ′(x, y, z). However, since I contains N₃, Π(x, y, z) ∉ I implies that 1 ∈ {x, xy, xyz, yz, z}, so that δ²σ′(x, y, z) = 0. Vice versa, given σ satisfying the properties i and ii, one has to produce a pre-cocycle σ′ for which σ = σ′ + ǫ_I. According to Proposition 4.2, one defines f in A^Ω by setting f(ω) = 0 if σ ∘ ∆(ω) = ∞, and f(ω) = σ ∘ ∆(ω) otherwise. Then, the desired σ′ is obtained by means of the isomorphism of Proposition 4.2. Finally, the above defined function f ∈ A^Ω can be regarded as an element of A^(Ω_I) simply by restriction. Conversely, any function in A^(Ω_I) can be extended to an element f of A^Ω by imposing the value 0 over the complementary set Ω \ Ω_I, resulting as above in an element σ ∈ Z²(G, I; A).
For the general case of partial cohomology groups relative to ideals, one is able to come to a weaker form of Theorem 4.3:

Theorem 5.2. Let G be a finite group. Then
H²(G, I; A) ≃ A^(m_I − n_I) ⊕ A/µ₁A ⊕ ⋯ ⊕ A/µ_(n_I)A,
where m_I = |Ω_I| ≥ n_I = rk_Z B²(G, I; Z), and µ₁, …, µ_(n_I) are positive integers ordered by recursive division. Moreover, B²(G, I; A) = µ₁Af₁ ⊕ ⋯ ⊕ µ_(n_I)Af_(n_I) for a suitable free basis f₁, …, f_(m_I) of Z²(G, I; Z).

Proof. The result follows in a way similar to the proof of Theorem 4.3, with the following minor modification. In this case, using Lemma 5.1, one also has Z²(G, I; A) ≃ A ⊗ Z²(G, I; Z). Then, applying Smith's normalization theorem, one obtains a free basis f₁, …, f_(m_I) of Z²(G, I; Z) together with positive integers µ₁, …, µ_(n_I) such that µ₁f₁, …, µ_(n_I)f_(n_I) is a free basis of B²(G, I; Z). Finally, the fact that B²(G, I; A) = µ₁Af₁ ⊕ ⋯ ⊕ µ_(n_I)Af_(n_I) is proved by considering the canonical generating set {δ¹h_g + ǫ_I | g ∈ G}.
In particular, when A is a divisible group, then for every ideal I in Λ each summand A/µ i A in the decomposition of H 2 (G, I; A) is trivial. This observation confirms the conjecture mentioned in the introduction: Corollary 5.3. If G is finite and K is an algebraically closed field, then any component of the partial multiplier is a finite direct sum of copies of K × .
An example
It can be seen in the proof of Theorem 4.3 that the torsion part of pH²(G, Z) comes from the classical cohomology group H²(G, Z). A similar statement cannot be formulated for a generic ideal I. Moreover, the next example shows that the torsion part of H²(G, I; Z) is not necessarily a quotient of H²(G, Z). The general procedure to compute H²(G, I; Z) explicitly, which can be reproduced for various small groups using GAP [20], is summarized as follows: one determines m = |Ω| together with a set of representatives ∆(ω₁), …, ∆(ω_m) for the classes of Ω; the matrix M = (δ¹h_(gᵢ) ∘ ∆(ω_j))_(i,j) yields the coefficients of the natural basis δ¹h_(g₁), …, δ¹h_(gₙ) of B²(G, Z) in terms of the characteristic functions h_(ω₁), …, h_(ω_m); given an ideal I in Λ, one determines Ω \ Ω_I = {ω_(i₁), …, ω_(i_(m−m_I))} to obtain, removing from M the columns i₁, …, i_(m−m_I), a matrix M_I; finally, the coefficients µ₁, …, µ_(n_I) which describe the torsion part of H²(G, I; Z) in Theorem 5.2 are the nonzero diagonal entries of the Smith normal form of the matrix M_I.

Example 4. Consider the cyclic group G = Z₆ (written additively). It is directly seen that G admits 8 = (6² + 2·2 + 3·1 + 5)/6 classes in Ω, which are represented by the following pairs:
∆(ω₁) = (0, 0), ∆(ω₂) = (1, 5), ∆(ω₃) = (2, 4), ∆(ω₄) = (3, 3),
∆(ω₅) = (1, 1), ∆(ω₆) = (1, 2), ∆(ω₇) = (1, 3), ∆(ω₈) = (2, 2).
Thus, the coefficients of the matrix M = (δ¹h_(gᵢ) ∘ ∆(ω_j))_(i,j) ∈ Mat_(6×8)(Z) can be read in the following table:

         ω₁   ω₂   ω₃   ω₄   ω₅   ω₆   ω₇   ω₈
  g = 0   1   −1   −1   −1    0    0    0    0
  g = 1   0    1    0    0    2    1    1    0
  g = 2   0    0    1    0   −1    1    0    2
  g = 3   0    0    0    2    0   −1    1    0
  g = 4   0    0    1    0    0    0   −1   −1
  g = 5   0    1    0    0    0    0    0    0

It can be checked that Smith's normalization algorithm provides two matrices P ∈ GL(6, Z) and Q ∈ GL(8, Z) such that the diagonal entries of D = PMQ are λ₁ = ⋯ = λ₅ = 1 and λ₆ = 6. In accordance with Theorem 4.3, then pH²(Z₆, Z) ≃ Z ⊕ Z ⊕ Z₆. Consider now I = N₃ ∪ ⟨Π(1, 2), Π(1, 3)⟩_S, so that Ω \ Ω_I = {ω₆, ω₇}. In particular m_I = |Ω_I| = 6, and Smith's normalization for M_I provides a matrix with diagonal entries µ₁ = ⋯ = µ₄ = 1, µ₅ = 2 and µ₆ = 6. Therefore, H²(Z₆, I; Z) ≃ Z₂ ⊕ Z₆ which, as anticipated, is not a quotient of H²(Z₆, Z) = Z₆.

Figure 1: The partial cohomology semilattice of groups H²(Z₆, Λ; Z). The semilattice structure is the same as that of (Λ, ∪). At the vertex corresponding to the ideal I appears H²(Z₆, I; Z). The principal-modulo-N₃ ideals are marked by their generator; thus "ω₅ : Z ⊕ Z₆" stands for H²(G, Π∼(ω₅); Z) ≃ Z ⊕ Z₆.
The full description of H²(Z₆, Λ; Z) is shown in Figure 1.
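The Smith-form step of this procedure is easy to reproduce with a computer algebra system. The sketch below uses SymPy (assuming its smith_normal_form helper, available in recent versions) on the matrix M written out above; the GAP computation referred to in the text follows the same pattern.

```python
# Reproduce the Smith normal forms used in Example 4 (G = Z_6).
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Rows: delta^1 h_g for g = 0..5; columns: the class representatives
# (0,0), (1,5), (2,4), (3,3), (1,1), (1,2), (1,3), (2,2).
M = Matrix([
    [1, -1, -1, -1,  0,  0,  0,  0],
    [0,  1,  0,  0,  2,  1,  1,  0],
    [0,  0,  1,  0, -1,  1,  0,  2],
    [0,  0,  0,  2,  0, -1,  1,  0],
    [0,  0,  1,  0,  0,  0, -1, -1],
    [0,  1,  0,  0,  0,  0,  0,  0],
])

print(smith_normal_form(M, domain=ZZ))      # diagonal entries 1, 1, 1, 1, 1, 6

# Drop the columns of omega_6 = (1,2) and omega_7 = (1,3) for the ideal I:
M_I = M[:, [0, 1, 2, 3, 4, 7]]
print(smith_normal_form(M_I, domain=ZZ))    # diagonal entries 1, 1, 1, 1, 2, 6
```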
Consistency with the previous notion
It is left to show that, for any abelian group of coefficients A and any ideal I in Λ, the group H²(G, I; A) is a partial cohomology group according to [10]. To this aim, one has to associate the pair (I; A) with a unital partial G-module B (that is to say, a commutative monoid together with a unital partial G-action). Moreover, B is a commutative monoid with identity element (0_A, Π(1)), and it is naturally endowed with the following unital partial action: for any g ∈ G, one considers the idempotent
e_g = (0_A, Π(g, g⁻¹)) if Π(g) ∉ I, and e_g = ∞ otherwise,
together with the ideal B_g = Be_g of B, and the partial action ϑ is given by the family of isomorphisms {ϑ_g : B_(g⁻¹) → B_g | g ∈ G}, defined by
ϑ_g(a, ε) = (a, Π(g) ε Π(g⁻¹)) if Π(g) ε Π(g⁻¹) ∉ I, and ϑ_g(a, ε) = ∞ otherwise.
Therefore, taking n = 2, Lemma 5.1 shows that σ ∈ Z²(G, B) if and only if σ̃ ∈ Z²(G, I; A). Finally, one checks the case n = 1 to conclude that σ ∈ B²(G, B) if and only if σ̃ ∈ B²(G, I; A), so that the map σ ↦ σ̃ induces the desired isomorphism of cohomology groups.
Invariant classification of metrics using invariant formalism
Metrics obtained by integrating within the generalised invariant formalism are structured around their intrinsic coordinates and this considerably simplifies their invariant classification and symmetry analysis. We illustrate this by presenting a simple and transparent complete invariant classification of the conformally flat pure radiation metrics (except plane waves) in such intrinsic coordinates. By performing this classification we have corrected and completed statements and results by Edgar and Vickers, and by Skea, about the orders of Cartan invariants at which particular information becomes available.
Introduction
The integration procedure developed in the GHP formalism [1] has recently been generalised to the GIF (generalised invariant formalism [2]) [3], [4], [5], [6]. Compared to the familiar integration procedures in the NP formalism [7], this procedure is much more efficient and avoids detailed, complicated gauge calculations which have the potential for errors. It also supplies the metric in a natural form, with coordinates chosen, as far as possible, in an intrinsic and invariant manner, permitting a comparatively simple invariant classification procedure, equivalence problem and symmetry investigations. The equivalence problem is the problem of determining whether the metrics of two spacetimes are locally equivalent, and the original contribution of Cartan [8] directed attention to the Riemann tensor and its covariant derivatives up to (q + 1)th order, R^(q+1), calculated in a particular frame. In going from R^q to R^(q+1) for a particular spacetime, if there is no new functionally independent Cartan scalar invariant and R^q and R^(q+1) have equal isotropy groups, then all the local information that can be obtained about the spacetime is contained in the set R^(q+1). The elements of the set R^(q+1) are called the Cartan scalar invariants and provide the information for an invariant classification of the spacetime. Two metrics are equivalent and represent the same spacetime if all their respective Cartan scalar invariants in R^(q+1) can be equated consistently. It is important to note that although there will be no new information about essential coordinates in the step from R^q to R^(q+1), there may be other new information, in particular about inconsistencies and also about the nature of apparently non-redundant functions (including constants). A practical method for invariant classification was developed by Karlhede [9], using fixed frames. In this algorithm the number of functionally independent quantities is kept as small as possible at each step by successively putting the curvature and its covariant derivatives into canonical form, and only permitting those frame changes which preserve the canonical form. Although the Karlhede algorithm is more efficient than the original procedure proposed by Cartan, it may need to go as far as R^7 [10], and as a consequence, for some spacetimes, long complicated calculations are required, which usually need computer support, e.g., using the programme CLASSI [11], or the Maple-based GRTensor programme. The Karlhede algorithm can be exploited to determine the structure of the isometry group of the spacetime, as well as subclasses within the spacetime which have additional isometries [12]; more recently the scheme has been the basis for an algorithm which determines whether a spacetime admits a homothetic Killing vector [13].
Invariant classification of Edgar-Ludwig spacetime in intrinsic coordinates from GIF
Edgar and Vickers [3] have rederived all CFPR spacetimes which are not plane waves using GIF, obtaining, in coordinates t, n, a, b, the metric (1), where m(t), e(t), s(t) are non-redundant functions of the coordinate t; this form includes the possibility of any of m(t), e(t) or s(t) being constant. This form represents the most general metric for CFPR spacetimes (with zero cosmological constant). We begin by repeating the zeroth, first and second order invariants quoted in [3]. At zeroth order, there is only the one Cartan spinor invariant. At first order, there are four Cartan spinor invariants; here ι is a second spinor which is generated in the GIF analysis, while p and q are weighted scalar invariants which represent the spin and boost freedom; q is real while the complex p satisfies pp̄ = 1/2. It is easy to see that we may invert (2) and (3) for a, p and q in terms of Cartan spinor invariants, and also for (pι + p̄ῑ); therefore ι is not uniquely determined (and so neither is n), and at this level there would clearly remain the gauge freedom of a one-parameter subgroup of null rotations.
Since new information about the essential coordinates has arisen, we must go to the next order. At second order, a complete set of independent Cartan spinor invariants is
(2/a³)(4p²ι² + 5pp̄ ιῑ + p̄²ῑ²) + (12q³n/a⁴)(pι + p̄ῑ),
together with complex conjugates. The GIF commutator equations enable us to concentrate on this reduced list of independent invariants. We can now invert these equations and obtain explicit expressions for the spinor ι, as well as for n and the scalar combination s(t) − 2am(t) + b²/2, in terms of Cartan invariants. Thus, at second order, if we make this choice of ι as our second spinor, we will have fixed the frame completely (there is no isotropy freedom remaining), and we will also have determined three essential base coordinates. Moreover, making this choice of ι as the second dyad spinor enables us to transfer to the simpler GHP formalism (see [3] for a fuller discussion of when this is possible), since we require only the scalar parts of the remaining non-trivial GHP Cartan invariants. Since we have obtained new information about essential coordinates, we need to go one step further.
At third order, a comparison of the second order expressions with the zeroth and first order ones shows that the only possibly independent new information will come from the GHP Cartan invariants involving m′(t) and s′(t), together with their complex conjugates, where a prime denotes differentiation with respect to t. The GHP commutator equations reduce the number of independent invariants. At fourth order, a comparison of the third order expressions with the zeroth, first and second order ones shows that the only possibly new independent information will come from the operator I_o acting on the scalar X. This gives an expression involving m″(t) and s″(t), the second derivatives with respect to t, and e′(t). Solving for a fourth coordinate is a little more complicated, since we have to go to third order, and the details will depend on the nature of the functions s(t), m(t), e(t).
In summary, we get the following two cases:
• if at least one of the functions m(t), s(t), e(t) is not constant, then all four essential base coordinates are obtained from GHP Cartan invariants at third order, and the procedure will therefore formally terminate at fourth order.
• when all of the functions m(t), s(t), e(t) are constants, then only three essential base coordinates can be obtained from GHP Cartan invariants; these are obtained at second order, and the procedure will therefore formally terminate at third order. Because of the very simple structure of this metric and the close relationship of its coordinates with its GHP Cartan invariants, it has been easy to draw conclusions about its classification from a direct examination of its GHP Cartan invariants. Continuing in this manner, the nature of the three apparently non-redundant functions m(t), s(t), e(t) can likewise be read off from the invariants.
Summary
The simplicity and transparency of this GIF version (1), combined with the fact that we are able to carry it out by hand, gives us a clear, unambiguous overview of the invariant classification of this class of metrics. The results obtained in Section 2 have some minor, but subtle and interesting, disagreements with the conclusions of Edgar and Vickers [3] and of Skea [14]. The traditional CLASSI analysis of the spacetime in the intrinsic coordinates was carried out in [15], confirming the results in Section 2.
In addition, using this version of the spacetime, we were able to obtain trivially the Killing vector properties, as well as the homothetic Killing vector properties by a straightforward application of the Koutras-Skea algorithm [15].
Inhibition of Neuronal p38α, but not p38β MAPK, Provides Neuroprotection Against Three Different Neurotoxic Insults
The p38 mitogen-activated protein kinase (MAPK) pathway plays a key role in pathological glial activation and neuroinflammatory responses. Our previous studies demonstrated that microglial p38α and not the p38β isoform is an important contributor to stressor-induced proinflammatory cytokine upregulation and glia-dependent neurotoxicity. However, the contribution of neuronal p38α and p38β isoforms in responses to neurotoxic agents is less well understood. In the current study, we used cortical neurons from wild-type or p38β knockout mice, and wild-type neurons treated with two highly selective inhibitors of p38α MAPK. Neurons were treated with one of three neurotoxic insults (L-glutamate, sodium nitroprusside, and oxygen-glucose deprivation), and neurotoxicity was assessed. All three stimuli led to neuronal death and neurite degeneration, and the degree of neurotoxicity induced in wild-type and p38β knockout neurons was not significantly different. In contrast, selective inhibition of neuronal p38α was neuroprotective. Our results show that neuronal p38β is not required for neurotoxicity induced by multiple toxic insults, but that p38α in the neuron contributes quantitatively to the neuronal dysfunction responses. These data are consistent with our previous findings of the critical importance of microglia p38α compared to p38β, and continue to support selective targeting of the p38α isoform as a potential therapeutic strategy.
Introduction
Mitogen-activated protein kinase (MAPK) pathways are pivotal in linking stimuli to cellular responses. The involvement of MAPK pathways in many stress- and disease-induced responses throughout the body has heightened interest in developing selective small-molecule kinase inhibitors to modulate these signal transduction pathways. For example, the p38 branch of the MAPK family is a well-established therapeutic target for diseases with inflammation as a common mechanism. In the central nervous system (CNS), most studies of p38 function have focused on p38 in glia and its role in aberrant proinflammatory responses in acute and chronic neurodegenerative conditions (for reviews, see Bachstetter & Van Eldik 2010; Correa & Eales 2012). Much less is known about the relationship between neuronal p38 and CNS pathophysiology. In addition, whether the two major p38 isoforms in the CNS, p38α and p38β, play similar or distinct roles in neuronal responses to pathological stimuli is a major unanswered question.
Investigations to define the relative importance of neuronal p38α and p38β in stress-induced neuronal responses have been hampered by a lack of specific reagents. Mice with a genetic knockout of the p38β gene (p38β knockout (KO)) are healthy and fertile (Beardmore et al. 2005; O'Keefe et al. 2007), and therefore are a useful reagent to test the involvement of the p38β isoform in particular cellular functions. However, a similar approach cannot be taken with p38α knockout mice because these mice are embryonic lethal (Adams et al. 2000; Allen et al. 2000; Mudgett et al. 2000; Tamura et al. 2000). In addition, many small molecule p38 inhibitors such as the commercially available SB203580 compound do not distinguish between p38α and p38β, and actually react with a number of other cellular targets, including thromboxane synthase (Borsch-Haubold et al. 1998), cyclooxygenases (Borsch-Haubold et al. 1998), c-Raf (Hall-Jackson et al. 1999), and other kinases (Clerk & Sugden 1998; Lali et al. 2000; Godl et al. 2003; Bain et al. 2007). While one might assume that the effects of SB203580 are dependent on p38α, this assumption has not been rigorously tested with p38α- and p38β-specific reagents.
We recently reported (Watterson et al. 2013) the development of two highly specific small molecule p38α inhibitors, termed MW-108 and MW-181. The high level of selectivity of the inhibitors was demonstrated by large-scale kinome activity screens, functional GPCR agonist and antagonist assays, and cellular target engagement analyses. MW-108 targets a single kinase, p38α, and does not cross-react with p38β. MW-181 inhibits p38α, and has weaker cross-reactivity with p38β. The availability of these p38α inhibitors, along with the p38β KO mouse, provided us the opportunity to directly test the contribution of neuronal p38α and p38β in neurodegenerative responses to specific toxic stimuli.
The goal of the current study was to determine whether neuronal p38α or p38β is important for neurotoxic responses induced by three clinically relevant insults: L-glutamate (excitotoxicity), sodium nitroprusside (SNP; a nitric oxide donor), and oxygen-glucose deprivation (OGD; hypoxia ischemia). We chose these three neurotoxic insults because there is precedent for p38 playing a role in neurotoxicity responses induced by these agents (Kawasaki et al. 1997;Lin et al. 2001;Legos et al. 2002;Chen et al. 2003;Cao et al. 2004;Pi et al. 2004;Tabakman et al. 2004;Guo & Bhat 2007;Molz et al. 2008;Strassburger et al. 2008;Li et al. 2009;Lu et al. 2011). We used primary cortical neurons from wildtype (WT) and p38β global KO mice to determine if deletion of p38β affected the neuronal damage responses. To test the contribution of p38α to the neurotoxic responses and to determine if targeting a single kinase was neuroprotective, we treated WT mouse neurons with the neurotoxic agents in the presence of our p38α inhibitors MW-181 and MW-108 (Watterson et al. 2013). Consistent with our previous findings of a distinct role for p38α and p38β in microglia upon inflammatory insult (Xing et al. 2011;Xing et al. 2013), we report here that the absence of p38β in cortical neurons does not suppress the neurotoxic responses to any of the three insults. However, selective inhibition of p38α in neurons not only reduces cell death but also reduces the neurite damage in the surviving neurons. These results demonstrate the importance of the neuronal p38α isoform in neurotoxicity induced by multiple disease-relevant insults.
Ethics Statement
All mouse experiments were conducted in accordance with the principles of animal care and experimentation in the Guide for the Care and Use of Laboratory Animals. The Institutional Animal Care and Use Committee of the University of Kentucky approved the use of animals in this study (protocol #2010-0615).
Animals
The p38β global KO mice were generated as described (O'Keefe et al. 2007), and a colony bred and maintained at University of Kentucky. C57BL/6 mice were purchased from Harlan Laboratories. The p38β gene KO was confirmed by Transnetyx, Inc (Cordova, TN, USA).
Determination of p38 Isoform RNA Levels
The levels of expression of p38α, β, δ, and γ RNA were determined as previously described (Xing et al. 2013). Briefly, RNA was isolated from primary cortical neuron cultures using RNeasy minicolumns with on-column DNase treatment (Qiagen), and RNA quantity and quality were determined by measuring the A260/A280 ratio by NanoDrop (Thermo Scientific). Reverse transcription (RT) was done with a High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Cat. no. 4368814), with no-template and no-RT controls included. Real-time PCR was done with the TaqMan Gene Expression assay kit (Applied Biosystems) on a ViiA 7 Real-Time PCR System (Applied Biosystems). The following TaqMan probes (Applied Biosystems) were used: p38α (MAPK14, Mm00442507_m1), p38β (MAPK11, Mm00440955_m1), p38δ (MAPK13, Mm00442488_m1), p38γ (MAPK12, Mm00443518_m1), and 18S rRNA (Hs99999901_s1). Relative gene expression was calculated by the 2^−ΔΔCt method. Levels of p38β expression in WT neurons were normalized to 1.0.
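For readers unfamiliar with the 2^−ΔΔCt method, a minimal Python sketch of the calculation is given below; the Ct values shown are illustrative placeholders, not measured data.

```python
def ddct_fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ddCt method: normalize the target Ct to
    the 18S reference, then to the calibrator (here, p38beta in WT neurons)."""
    dct_sample = ct_target - ct_ref              # dCt of the sample
    dct_calibrator = ct_target_cal - ct_ref_cal  # dCt of the calibrator
    return 2.0 ** -(dct_sample - dct_calibrator)

# Illustrative Ct values only: a p38alpha/18S pair calibrated against a
# p38beta/18S pair of this kind gives roughly a 40-fold difference.
print(ddct_fold_change(22.0, 10.0, 27.3, 10.0))  # ~39.4-fold
```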
Primary Neuronal Culture
Primary neuronal cultures were derived from embryonic day 18 WT or p38β KO mice, as previously described (Xing et al. 2011). Cells were dissociated from dissected cerebral cortices by trypsinization for 20-25 min at 37°C, followed by passing through a 70-μm nylon mesh cell strainer. The cells were seeded at a density of 5×10^4 cells/well onto poly-D-lysine-coated 12-mm glass coverslips for L-glutamate and OGD experiments, or at 2×10^4 cells/well in 24-well plates for SNP experiments. Neurons were grown in neurobasal medium containing 2% B27 supplement (Invitrogen), 0.5 mM L-glutamine, 100 IU/ml penicillin, and 100 μg/ml streptomycin; no serum or mitosis inhibitors were used. Every 3 days, 50% of the media was replenished with fresh medium.
Cell Culture Treatments
Neurons from WT and p38β KO mice were subjected to L-glutamate, SNP, or OGD insults at 7 days in vitro (DIV7), and neurotoxicity was measured at 24 h after insult. For L-glutamate studies, neurobasal/B27 medium was carefully removed from primary neuron cultures and saved. Neurons were then treated with 25 μM L-glutamate for 10 min in CSS buffer (120 mM NaCl, 5.4 mM KCl, 0.8 mM MgCl2, 1.8 mM CaCl2, 20 mM HEPES, and 15 mM glucose) (Schubert & Piasecki 2001). The cells were then washed three times with Hank's balanced salt solution (HBSS) and returned to the original neurobasal/B27 media for 24 h. WT neurons were treated with the p38α inhibitors MW-181 or MW-108 (60 μM) for 60 min before L-glutamate addition. For SNP studies, neurons were treated with 1 mM SNP dissolved in culture medium for 24 h before neurotoxicity assays. MW-181 or MW-108 (60 μM) was added at the same time as the SNP solution. For OGD studies, primary neurons were treated with the p38α inhibitor MW-181 or MW-108 (60 μM) for 60 min prior to OGD. OGD was done for 1 h in an anaerobic chamber saturated with 5% CO2 and 95% N2 in glucose-free DMEM medium. The OGD condition was terminated by switching cells back to normal culture conditions and incubating for 24 h until neurotoxicity assays were done. Control cells were incubated in DMEM with glucose in a normoxic incubator for the same period.
Neuronal Viability Assay
Neuron viability was assayed by trypan blue exclusion (Xie et al. 2004). Neuron-containing coverslips were incubated with 0.2% trypan blue in HBSS for 2 min in a 37°C incubator and then gently rinsed three times with HBSS. Neurons were viewed under bright-field microscopy at ×200 final magnification. Five to eight fields were chosen randomly per coverslip, and a total of 485 to 761 cells were counted per coverslip.
Trypan blue-positive and negative neurons were counted per field and the ratio of positive cells to the total cells was taken as the percent neuronal death.
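The viability readout reduces to a simple ratio; a one-function Python illustration with hypothetical counts:

```python
def percent_neuronal_death(trypan_positive, total_cells):
    """Percent death = trypan blue-positive cells / total counted cells x 100."""
    return 100.0 * trypan_positive / total_cells

# Hypothetical example: 132 positive cells out of 600 counted -> 22.0% death.
print(percent_neuronal_death(132, 600))
```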
Immunocytochemistry
Cells were fixed with 3.7% formaldehyde containing 0.1% Triton X-100 in PBS for 10 min at room temperature. After washing three times with PBS, the coverslips were incubated with blocking buffer (PBS containing 6% goat serum, 3% bovine serum albumin (fraction V), 0.1% Triton X-100) for 30 min at room temperature. Primary chicken anti-MAP2 antibody (1:1,000, Neuromics, Cat. no. CH22103) was diluted in blocking buffer and incubated with the cells at room temperature for 2 h. For detection of MAP2 staining, the cells were incubated with secondary biotin SP-conjugated goat anti-chicken antibody (1:1,000, Jackson ImmunoResearch) for 1 h, followed by incubation with streptavidin-conjugated Alexa Fluor® 488 (1:1,000, Invitrogen) in blocking buffer at room temperature for 1 h. Wide-field fluorescent photomicrographs were obtained using a Nikon Eclipse Ti microscope with an Axiocam MRc5 digital camera (Carl Zeiss).
Semi-automated Sholl Analysis
The semi-automated Sholl assay was used to measure the neurite degeneration of MAP2-labeled neurons, essentially as we previously described with a manual Sholl analysis (Xing et al. 2011). The original images were binarized and thresholded using NIH ImageJ. The semi-automated Sholl analysis program was loaded from the ImageJ plugins (http://imagej.nih.gov/ij/plugins/). The central point on the soma of each neuron was selected, and a series of concentric circles was drawn automatically, with the radius of the smallest sampling circle at 8 μm from the central point and the radius of the largest sampling circle at 50 μm, with a radius step size of 0.167 μm. The Sholl analysis then determined how many times the neurites intersected the sampling circles, and measured the average intersections over the whole area occupied by the neurite per neuron. The mean of average intersections of 107-188 neurons per group was calculated, and the mean from the control group was normalized to 0% damage.
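A simplified re-implementation of the intersection-counting idea (not the ImageJ plugin itself) may help clarify the readout; the binary image, soma center, and pixel calibration are placeholder inputs.

```python
import numpy as np

def sholl_mean_intersections(binary_img, center, r_min_um=8.0, r_max_um=50.0,
                             step_um=0.167, px_per_um=1.0, n_theta=720):
    """Simplified Sholl analysis on a binarized (0/1) neuron image: for each
    concentric sampling circle, count distinct neurite crossings (runs of
    foreground pixels along the circle), then average over all circles."""
    cy, cx = center
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    counts = []
    for r_um in np.arange(r_min_um, r_max_um, step_um):
        r = r_um * px_per_um
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, binary_img.shape[0] - 1)
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, binary_img.shape[1] - 1)
        ring = binary_img[ys, xs].astype(int)
        # one intersection = one 0 -> 1 transition along the circle (wrap-aware)
        counts.append(int(np.sum((ring - np.roll(ring, 1)) == 1)))
    return float(np.mean(counts))
```

Averaging the crossing counts over all sampling circles yields the per-neuron statistic, which is then normalized so that the control group corresponds to 0% damage.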
Statistics
Statistical analysis was conducted using GraphPad prism software V.6 (GraphPad Software). Unless otherwise indicated, values are expressed as mean±SEM. Groups of two were compared by unpaired t test. One-way ANOVA followed by Bonferroni's multiple comparison test was used for comparisons among three or more groups. Statistical significance was defined as p<0.05.
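A minimal SciPy sketch of the described workflow (one-way ANOVA followed by pairwise comparisons with a hand-rolled Bonferroni correction, rather than GraphPad Prism); the group values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical percent-death values per coverslip for three groups.
control   = np.array([2.1, 3.0, 2.5, 2.8])
insult    = np.array([21.5, 23.0, 22.1, 22.6])
inhibited = np.array([12.0, 13.5, 11.2, 12.9])

f_stat, p_omnibus = stats.f_oneway(control, insult, inhibited)  # one-way ANOVA

# Bonferroni: multiply each pairwise p value by the number of comparisons.
pairs = [(control, insult), (control, inhibited), (insult, inhibited)]
p_adj = [min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs)) for a, b in pairs]
print(f"ANOVA p = {p_omnibus:.2e}; Bonferroni-adjusted pairwise p = {p_adj}")
```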
Validation of p38β KO in Primary Cortical Neurons
As a first step, it was important to confirm the deletion of p38β in primary cortical neurons from the p38β KO mouse and verify that significant compensatory changes in the p38α, p38δ, and p38γ isoforms were not present. RNA was prepared from primary cortical neuron cultures derived from WT or p38β KO mouse fetuses, and the expression levels of the p38 isoforms were determined by qPCR. As expected, p38β mRNA was readily measurable in WT mice but was not detected in the p38β KO mice (Fig. 1). The mRNA level of p38α in both WT and p38β KO neurons was ~40-fold higher than that of p38β in WT neurons, but there was no significant difference between the p38α levels in the WT compared to the p38β KO mice. The levels of p38δ and p38γ mRNA were similar and very low in both WT and p38β KO mice (data not shown). Altogether, the data verify that, as expected, p38β is deficient in neurons from the p38β KO mice and there are no significant compensatory changes in any of the other p38 isoforms.
Neurotoxicity Induced by L-Glutamate

L-glutamate is a standard neurotoxic stimulus that is a model of excitotoxic cell death (Choi et al. 1987), and p38 has been reported to be involved in excitotoxic pathways leading to neuron damage/death (Kawasaki et al. 1997; Chen et al. 2003; Pi et al. 2004; Chaparro-Huerta et al. 2008; Molz et al. 2008; Bakuridze et al. 2009; Izumi et al. 2009). Therefore, we compared the degree of neuron death and neurite degeneration induced by L-glutamate in primary cortical neurons derived from WT and p38β KO mice. Under the culture conditions used, L-glutamate induced ~22% neuron death as measured by trypan blue assay (Fig. 2a). L-glutamate also induced significant (22-25%) neurite damage in the surviving neurons as measured by Sholl analysis (Fig. 2b), where the percentage of average intersections over the whole area occupied by the neurite is determined. L-glutamate treatment resulted in extensive neurite fragmentation, swelling, and blebbing (Fig. 2c). The degree of neuron death/neurite damage was not significantly different between WT and p38β KO neurons. In contrast, inhibition of neuronal p38α by MW-181 or MW-108 1 h prior to L-glutamate treatment significantly reduced both the neuron death and the neurite degeneration (Fig. 2a, b). As shown in Fig. 2c, the neurons treated with MW-181 or MW-108 showed less fragmentation and blebbing of the neurites.
Neurotoxicity Induced by SNP
To determine whether the findings with L-glutamate implicating p38α but not p38β in neurotoxicity were generalizable to a different neurotoxic insult, we tested the effect of SNP on neuron death and neurite damage. SNP is a nitric oxide donor commonly used to induce neuronal apoptosis, and p38 activation has previously been implicated in promoting nitric oxide-induced neuronal damage (Ghatan et al. 2000; Lin et al. 2001). SNP (1 mM) treatment for 24 h killed 32% of WT neurons and 28% of p38β KO neurons (Fig. 3a) and induced 27-32% neurite damage in both groups (Fig. 3b, c). Although the KO neurons appeared to be slightly less susceptible to SNP toxicity compared to WT neurons, the levels of neuron death/neurite damage between WT and p38β KO neurons were not significantly different. Similar to the findings with L-glutamate, inhibition of p38α by MW-181 or MW-108 treatments of WT neurons significantly reduced SNP-induced neuronal death (Fig. 3a), and protected neurons against neurite degeneration (Fig. 3b, c).
Neurotoxicity Induced by OGD
We also tested the relative contribution of p38α and p38β to neurotoxic responses induced by OGD, a model of ischemic injury (Kaku et al. 1991; Legos et al. 2002). Treatment with OGD for 1 h induced 45-50% neuron death measured at 24 h after insult in both WT and p38β KO groups (Fig. 4a), and again no significant difference in the degree of cell death was found between these two groups. OGD treatment induced 30-34% neurite damage in both groups (Fig. 4b, c), and there was no significant difference in the degree of neurite degeneration between WT and p38β KO neurons. Similar to the results with L-glutamate and SNP, treatment of WT neurons with MW-181 or MW-108 led to a significant reduction in the neuronal death (Fig. 4a) and neurite degeneration (Fig. 4b) induced by OGD. Again, the neurites in the compound-treated cultures appeared smoother and had more neurite branches compared to OGD treatment in the absence of compounds (Fig. 4c).

Fig. 1 Verification of p38β KO in neurons. Primary cortical neurons from WT and p38β KO mice were prepared as described in the "Methods" section and plated at 5×10^4 cells/well in 24-well plates. Total RNA was isolated from neuronal cultures derived from WT (black bars) or p38β KO (white bars) mice, and the mRNA levels of different p38 MAPK isoforms were determined by qPCR. The result shows that p38β mRNA was readily measurable in WT mice but was not detected in the p38β KO mice. The p38α MAPK isoform in both WT and p38β KO neurons was expressed at much higher levels compared to p38β, but there was no significant difference between the levels of p38α in WT and p38β KO mice. The levels of p38δ and p38γ mRNA were very low to undetectable in both WT and p38β KO mice (data not shown). Results are expressed as fold change compared to p38β expression levels in WT neurons, and represent the mean ± SEM of four to eight determinations.
Discussion
In this study, we tested the respective contributions of the p38α and p38β MAPK isoforms to the neurodegeneration induced by three neurotoxic insults, and addressed the question of whether targeting a single kinase is sufficient to provide neuroprotective effects. Our results demonstrate that targeting p38α MAPK in neurons provides significant protection against three different neurotoxic insults, while loss of neuronal p38β MAPK does not affect the neurodegenerative responses to any of the three insults. These findings complement and extend our previous studies (Xing et al. 2011; Xing et al. 2013) that documented the importance of glial p38α MAPK in stressor-induced proinflammatory cytokine production and microglia-mediated neuron death. Altogether, our data demonstrate key roles of p38α MAPK signaling in both glial and neuronal responses that are linked to neuronal dysfunction, and continue to indicate the potential of this kinase as a CNS drug discovery target.
A number of previous studies have suggested that activation of p38 MAPK signaling in neurons in response to disease-relevant cellular stressors contributes to neuron dysfunction and neuron death, and that inhibition of p38 MAPK in the neuron is neuroprotective. For example, the p38 MAPK pathway has been implicated in neuron death induced by a number of agents, including excitotoxic stimuli (Cao et al. 2004; Semenova et al. 2007; Chaparro-Huerta et al. 2008), nerve injury (Wang et al. 2005; Wittmack et al. 2005), hypoxia/ischemia (Wang et al. 2002; Guo & Bhat 2007), and potassium deprivation (Yeste-Velasco et al. 2009). Neuronal p38 MAPK has also been reported to be involved in diabetic neuropathy (Sweitzer et al. 2004), hyperpolarization-activated and voltage-gated channel activation after injury (Wittmack et al. 2005; Wynne 2006), neurofilament pathology in amyotrophic lateral sclerosis (Ackerley et al. 2004), hyperalgesia and spinal pain (Svensson et al. 2005), activity-induced dendritic spine reduction (Sugiura et al. 2009), kainate-induced seizures and neuronal damage (Namiki et al. 2007), presynaptic serotonin transporter activity (Zhu et al. 2006), and various cytokine-mediated neuronal damage responses (Li et al. 2003; Wang et al. 2005; Chaparro-Huerta et al. 2008; Xing et al. 2011). Almost all the mechanistic data supporting the role of p38 in neuron dysfunction have been generated using small molecule p38 inhibitors such as SB203580. The commercial availability of SB203580 has led to its widespread use; however, SB203580 is not selective for the p38α versus p38β isoform, or even for the p38 family alone. SB203580 and second-generation SB compounds such as SB202190 inhibit multiple other kinases, including casein kinase-1 delta, glycogen synthase kinase-3beta, protein kinase A, receptor interacting protein-2, and cyclin G-associated kinase (Clerk & Sugden 1998; Lali et al. 2000; Godl et al. 2003; Bain et al. 2007). Thus, despite the extensive evidence provided by work using SB compounds that inhibiting p38 is neuroprotective, the relative role of p38α and p38β in the neuroprotective responses, and whether targeting a single kinase (p38α or p38β) is sufficient to exert the neuroprotective effects, had not been tested. To address these important questions, we utilized our recently developed, highly selective p38α inhibitors, MW-181 and MW-108 (Watterson et al. 2013), as well as a global p38β knockout mouse. The use of these reagents in primary cortical neuron cultures allowed us to directly demonstrate for the first time the involvement of neuronal p38α, and not p38β, in the neurotoxic responses to glutamate, SNP, and OGD.

Fig. 2 p38α inhibition but not p38β KO protects neurons against L-glutamate insult. WT or p38β KO mouse primary cortical neurons were plated on cover slips at 5×10^4 cells/well and grown for 7 days in vitro (DIV7). After 1 h pretreatment of WT neurons with 60 μM MW-181 or MW-108, the media was removed and saved, then WT and p38β KO neurons were treated for 10 min with culture medium alone, L-glutamate (25 μM) alone, or L-glutamate plus 60 μM MW-181 or MW-108. After 10 min of incubation, cells were washed three times with HBSS, and the original culture media was added back into the appropriate wells. Trypan blue exclusion assay for neurotoxicity and Sholl analysis for neurite damage were performed after 24 h. a L-glutamate induced ~22% neuronal death in both p38β KO and WT neurons. In contrast, p38α inhibition by MW-181 or MW-108 significantly reduced the neuron death after L-glutamate insult. b Similarly, L-glutamate induced neurite fragmentation and blebbing in both p38β KO and WT neurons, with no significant difference between the two groups. In contrast, inhibition of p38α MAPK by MW-181 or MW-108 significantly protected neurites against L-glutamate-induced damage. c Representative photomicrographs of MAP2 immunocytochemistry show the morphology of neurons after 24 h. Arrows point to the appearance of damaged neurites after L-glutamate insult in both p38β KO and WT neurons (****p<0.0001 vs. control; #p<0.05 vs. L-glutamate treatment; ###p<0.001 vs. L-glutamate treatment; ####p<0.0001 vs. L-glutamate treatment, Bonferroni's multiple comparison test). Data are from three independent experiments. Scale bar 10 μm.

Fig. 3 p38α inhibition but not p38β KO protects neurons against SNP insult. DIV7 neurons on coverslips were treated with culture medium alone, SNP (1 mM) alone, or SNP plus 60 μM MW-181 or MW-108 for 24 h, followed by trypan blue exclusion assay and Sholl analysis. a SNP induced ~28-32% neuronal death in both p38β KO and WT neurons, with no significant differences between the genotypes. In contrast, p38α inhibition by MW-181 or MW-108 significantly reduced the neuron death induced by SNP. b SNP induced a similar degree of neurite damage in both p38β KO and WT neurons. In contrast, WT neurons treated with SNP in the presence of the p38α inhibitors showed reduced levels of neurite degeneration. c Representative photomicrographs of MAP2 immunocytochemistry show the morphology of neurons after 24 h. Arrows point to the appearance of damaged neurites induced by SNP treatment in both p38β KO and WT neurons (****p<0.0001 vs. control; ##p<0.01 vs. SNP; ####p<0.0001 vs. SNP, Bonferroni's multiple comparison test). Data are from three independent experiments. Scale bar 10 μm.

Glutamate is a major CNS excitatory neurotransmitter, but excessive glutamate release and overstimulation of glutamate receptors can induce excitotoxic neuron death. Activation of neuronal p38 MAPK signaling is a well-characterized response to glutamate insult, but few previous studies have explored the importance of p38α versus p38β in excitotoxic neuron death. One relevant study (Cao et al. 2004) implicated p38α in glutamate-induced damage of primary cerebellar granule neurons in culture through the use of a dominant-negative p38α construct, but did not explore p38β involvement because no p38β was detected in the cultured neurons. Our results demonstrating the involvement of p38α in primary cortical neurons are consistent with this study, and also show that p38β is not required for glutamate-induced neuron death.
Nitric oxide overproduction has been linked to neuron death in acute and chronic neurological disorders (Lee et al. 1999; Sattler et al. 1999; Arundine & Tymianski 2004). Several studies have utilized nitric oxide donors as a neurotoxic stimulus and p38 MAPK inhibitors such as SB203580 to explore the role of p38 MAPK in mediating neurodegenerative responses of cultured neurons to nitrosative stress. In general, these studies have demonstrated neuroprotection against nitric oxide insult, through several proposed mechanisms including reduced mitochondrial dysfunction and inhibition of peroxynitrite/reactive oxygen species formation (Ghatan et al. 2000; Lin et al. 2001; Bossy-Wetzel et al. 2004; Thomas et al. 2008; Nashida et al. 2011). However, as far as we are aware, no previous study tested specific isoforms of p38 MAPK in the neurotoxic responses.
We also investigated the role of p38 MAPK in neurotoxicity induced by OGD, a model of hypoxia-ischemia. Several previous reports have implicated p38 MAPK signaling in OGD-induced neurotoxicity through the use of the multikinase SB family of inhibitors. For example, SB239063 protects neuron-enriched forebrain cultures against OGD insult (Legos et al. 2002), SB203580 reduces OGD-induced death in PC12 cells (Li et al. 2009), and SB203580 or expression of an antisense p38 MAPK construct only in neuronal cells reduces oxidative stress and neuron death in hippocampal slice cultures (Lu et al. 2011). Importantly, a seminal paper (Guo & Bhat 2007) used p38 isoform-specific siRNAs to show that p38α and not p38β was a major contributor to OGD-induced death in the NSC34 motoneuron cell line. Our results reported here using highly specific p38α inhibitors in WT primary cortical neuron cultures and using neurons cultured from the p38β global knockout mouse are consistent with that study, and extend the results to primary neurons.

Fig. 4 p38α inhibition but not p38β KO protects neurons against OGD insult. DIV7 neurons on coverslips were pretreated for 1 h with either 60 μM MW-181 or MW-108, and the medium was removed and saved. After 1 h OGD treatment, the old culture media was then added back into appropriate wells for 24 h, followed by measurement of neuronal survival and neurite damage. a OGD induced ~50% neuronal death in both p38β KO and WT neurons, with no significant differences between the two groups. In contrast, p38α inhibition by MW-181 or MW-108 significantly reduced the neuronal death after OGD insult. b OGD induced a similar degree of neurite damage (~33%) in both p38β KO and WT neurons. In contrast, p38α inhibition by MW-181 or MW-108 significantly protected neurites against OGD-induced damage. c Representative photomicrographs of MAP2 immunocytochemistry show the morphology of neurons after 24 h. Arrows point to the appearance of damaged neurites induced by OGD in both p38β KO and WT neurons (****p<0.0001 vs. control; #p<0.05 vs. OGD; ##p<0.01 vs. OGD, Bonferroni's multiple comparison test). Data are from three independent experiments. Scale bar 10 μm.
Although the dispensable role of p38β MAPK in cortical neurons in our study might be attributed to its relatively low expression in these cells compared to the expression of p38α MAPK, the data are consistent with our previous study showing that p38β KO microglia did not provide neuroprotection for co-cultured WT neurons upon lipopolysaccharide treatment (Xing et al. 2013). We did not explore other potential mechanisms or cell types where p38β MAPK may contribute. However, some studies have suggested that p38β MAPK may be more important in glial cells, rather than neurons. For example, studies using transient global ischemia, transient focal ischemia, and kainic acid-induced seizure models all showed a delayed activation of astrocytes with p38β MAPK immunoreactivity, but not p38α (Che et al. 2001; Piao et al. 2002; Piao et al. 2003). In addition, p38β was upregulated after injury in different cell types with different temporal profiles, with an early and transient induction of p38β in neurons, followed by a later and prolonged induction in astrocytes (Piao et al. 2003). Furthermore, the strong substrate preference of ATF2 by p38β compared to p38α and differential regulation by upstream kinases (Jiang et al. 1996) also suggest that the two kinases may act on different downstream targets and exert different functions in response to injury. The available data suggest a more restricted repertoire of functions of p38β MAPK that might be cell-specific and signaling-specific temporally and spatially in the CNS.
It should be noted that our understanding of the role of neuronal p38α and p38β MAPK signaling in neurotoxic responses is in its infancy. From the literature, it appears that the quantitative importance of the p38 MAPK pathway relative to other stress-induced signaling pathways can vary depending on the cell type, developmental status, toxic stimulus, timing of activation, and cell-cell interactions. For example, even in the same neuronal cell type at the same developmental stage, the involvement of p38 can be dependent on the neurotoxic stimulus. Specifically, p38 was reported to be involved in glutamate-induced death of cerebellar granule neurons, whereas death induced by withdrawal of trophic support involved JNK but not p38 (Cao et al. 2004). It is also clear that multiple signaling pathways can be induced in response to specific stimuli, and therefore the importance of one particular pathway may depend on the time points analyzed. Another important consideration is that glial p38 signaling in response to toxic stimuli can affect neuronal viability (Izumi et al. 2009; Xing et al. 2011), which can complicate the interpretation of results in slice cultures or in vivo models. Finally, discrepant results could also be due to technical issues, such as different culture conditions, animal strains, type or age of neurons, and/or stimulus paradigm. For example, the expression of the glutamate NMDA receptor subunit NR1 in neurons cultured for DIV7 is less than that in DIV11 neurons (Schubert & Piasecki 2001), the neuronal death induced by SNP is increased in DIV21 versus DIV14 neurons (Dawson et al. 1993), and hippocampal neurons are more vulnerable than cortical neurons to OGD treatment. Nevertheless, even with the above caveats, our results using three different neurotoxic insults in the same type (primary cortical neurons) and age (DIV7) of cultures clearly document that suppression of p38α with highly specific kinase inhibitors provides neuroprotection whereas lack of p38β in the knockout mouse has no effect. The availability of these reagents should allow future exploration of the importance of p38 MAPK signaling in other models of neuronal death.
Conclusions
Activation of neuronal p38 MAPK occurs in response to a number of disease-relevant stressors, and pharmacological inhibition of p38 MAPK is neuroprotective in both cell and animal models. However, the relative contribution of neuronal p38α and p38β to neurodegenerative responses had not been addressed previously. In this study, we used p38α- and p38β-specific reagents to demonstrate that inhibition of neuronal p38α provides significant neuroprotection against three different toxic insults, but that loss of neuronal p38β has no effect. These results demonstrate isoform-specific functions of these p38 kinases in the neuron, and support an important role of the p38α isoform in neurodegenerative responses to injury.
Protocol for quantifying the in vivo rate of protein degradation in mice using a pulse-chase technique
Summary The ability to measure the in vivo rate of protein degradation is a major limitation in numerous fields of biology. Here, we present a protocol for quantifying this rate in mice using a pulse-chase technique that utilizes an azide-bearing non-canonical amino acid called azidohomoalanine (AHA). We describe steps for using chow containing AHA to pulse-label the animal’s proteome. We then detail the quantification of AHA-labeled proteins in whole-tissue lysates or histological sections using a copper-catalyzed azide-alkyne cycloaddition ‘click’ reaction. For complete details on the use and execution of this protocol, please refer to Steinert et al. (2023).1
AHA) should be employed. At a minimum, two experimental time points must be assessed, including an immediate post-pulse group (Day 0) and at least one chase group (e.g., Day 3 or Day 7 following the pulse). There should also be an animal of the same age, genetic background, etc. that is placed on the control chow during the pulse period, as tissues from this animal will be used as a negative control for the experimental procedures.
Note: Store the unopened chow at 4°C or lower. Chow should be used within 6 months of the manufacture date.
Make the embedding mold that will be used for IHC muscle freezing

Timing: 15-30 min

3. To make an embedding mold that can be used to freeze muscles in optimal cutting temperature compound (OCT), cut off the bottom 3 cm (tapered end) of a 15 mL conical tube (Figure 1A).
a. Cut the bottom section longitudinally in half (Figure 1B).
b. Remove a small portion of the rounded bottom tip so that when the OCT is frozen, forceps can be used to push the block out of the large end of the embedding mold (Figure 1C).
MATERIALS AND EQUIPMENT
Buffer A: dissolve 0.5% Triton X-100 and 0.5% BSA in DPBS.
Buffer can be stored at 4°C for up to 3 days.

5% milk in Tris-Buffered Saline with 0.1% Tween-20 (TBST): dissolve 5 g of powdered non-fat dry milk in 90 mL of TBST. Adjust total volume to 100 mL with TBST.

Buffer can be stored at 4°C for up to 3 days.

Buffer can be stored at 20°C-22°C for up to one year.
Urea/Tris Lysis Buffer: combine 4.8 g urea and 0.5 mL of 1 M Tris (pH 8.0) and adjust the total volume to 10 mL with diH2O.
Make fresh buffer for every use. The dissolution of urea is an endothermic reaction, so the solution will become cold. The solution can be gently heated to 20-25°C to aid complete dissolution. Refrain from heating above 30°C because cyanates that are detrimental to proteins may form.2 It may take 30-45 min of agitation to fully dissolve the urea.
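As a sanity check on this recipe, the final composition works out to roughly 8 M urea and 50 mM Tris; a quick verification, assuming a urea molecular weight of 60.06 g/mol (our assumption for illustration, not a value stated in the protocol):

```python
UREA_MW = 60.06                   # g/mol
urea_M = (4.8 / UREA_MW) / 0.010  # 4.8 g brought to a 10 mL final volume
tris_M = (0.5e-3 * 1.0) / 0.010   # 0.5 mL of 1 M Tris stock into 10 mL
print(f"{urea_M:.1f} M urea, {tris_M * 1000:.0f} mM Tris")  # ~8.0 M urea, 50 mM Tris
```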
Note: On average, TA muscles are approximately 40 mg. This mass will vary based on age, strain, and interventions.
CRITICAL: Add PMSF last and use buffer within 15 min of this addition due to the short half-life of PMSF in an aqueous solution.
Note: Reagents should be added in the order listed with vortexing between each addition.
Note: It has been recommended that fresh TCEP be used to maximize 'click' reaction efficiency and avoid precipitation of the copper during the reaction.3 However, we have successfully completed numerous 'click' reactions with aliquots of stock TCEP that had been stored at −20°C for several weeks.
CRITICAL: DTT should be added immediately before use. Any remaining buffer containing DTT should be discarded and a fresh batch should be made prior to the next use. The first three ingredients can also be combined at higher concentrations to make a 10x stock solution, and this solution can be stored at 4°C.
STEP-BY-STEP METHOD DETAILS
Label proteins with low-methionine and AHA-infused chow

Timing: 11-18 days

In this step, animals are put on a low-methionine chow to deplete methionine prior to the AHA ''pulse'' labeling period. Following the labeling period, regular control chow is reintroduced to eliminate any further incorporation of AHA. Figure 2 is a visual representation of the experimental timeline.
1. Switch chow in cages of experimental animals to low-methionine pellets. Maintain control animals on regular chow.
a. Transfer mice to a clean cage prior to beginning the low-methionine chow to control for any leftover chow that may be buried in the bedding material.
b. Provide the low-methionine chow and water ad libitum for 7 days.
Note: Approximately 3 grams of chow per mouse per day is sufficient. Use this information as a guide to avoid having to discard large quantities of partially eaten chow.
2. After 7 days of low-methionine chow, switch chow in the cages of the experimental animals to the AHA-infused pellets for the ''pulse'' labeling period. Maintain control animals on regular chow.
a. Transfer mice to a clean cage prior to beginning AHA chow to control for any leftover chow that may be buried in the bedding material.
b. Provide the AHA-infused chow and water ad libitum for 4 days.
3. After 4 days of AHA-infused chow, switch chow in cages to regular control pellets for the 3-7 day ''chase'' period.
a. Transfer mice to a clean cage prior to beginning regular control chow to control for any leftover chow that may be buried in the bedding material.
b. Provide the regular control chow and water ad libitum for 3-7 days.
Note: The rationale for selecting the indicated pulse and chase durations can be found in the parent manuscript, Steinert et al. (2023).1

Harvesting muscles for WB-QUAD and IHC-QUAD

Timing: 15-20 min per mouse

Muscles that will be used for WB-QUAD and IHC-QUAD are collected and flash-frozen in liquid nitrogen or liquid nitrogen-chilled isopentane, respectively.

4. Prepare the surgery area for muscle harvesting.
a. Label 1.5 mL microfuge tubes for all muscles being harvested for WB-QUAD.
b. Fill a small Dewar with liquid nitrogen to freeze down muscles following harvest.
c. Prepare the chamber for freezing down IHC-QUAD muscles.
i. Place a small volume (50-70 mL) of isopentane in a 250-300 mL plastic beaker with a lid. A foam ice bucket can serve as an excellent chamber for this procedure.
ii. Chill the isopentane by adding liquid nitrogen to the chamber containing the plastic beaker.
Note: Be sure that the liquid nitrogen level is at least half of the level of the isopentane to ensure adequate cooling (Figure 3A).

Note: Exercise caution during the excision of the TA muscle so that unnecessarily large amounts of force are not placed on the tissue.
8. Harvest the contralateral TA muscle for IHC-QUAD.
a. Before excising the TA muscle, check the isopentane for the presence of a layer of frozen isopentane with a pool of liquid isopentane in the center of the plastic beaker.
i. If the isopentane has frozen completely, remove the beaker from the liquid nitrogen and allow some of the frozen isopentane to return to its liquid state.
Note: There must be enough liquid isopentane in the beaker to fully submerge the embedding mold (Figure 3A).
b. Open the skin on the anterior aspect of the lower limb and expose the TA muscle.
c. Carefully excise the TA muscle from the mouse.
d. Place the muscle on a moist paper towel, very slightly stretch the muscle out, and then allow it to rebound to its resting length. Measure and record the length of the muscle (Figure 3B).

Note: Dragging the muscle through the OCT will help to keep it on a straight axis and at/near its measured resting length (Figure 3C).
f. Use long forceps to place the embedding mold into the liquid nitrogen-chilled isopentane to freeze the muscle.
i. Keep the embedding mold submerged in the isopentane for at least 30 s to ensure that the muscle is fully frozen, thereby avoiding freeze damage.
g. Remove the frozen block of OCT from the embedding mold, package it in labeled aluminum foil, and transfer it to a storage vessel in a freezer set at −80°C (Figure 3D).
Pause point: Muscles can be stored at −80°C for at least 6 months.
Homogenize muscles for WB-QUAD

Timing: 30-45 min

Whole muscle homogenates are used to determine the rate of protein degradation with western blotting.
9. Prepare materials for the homogenization process. It is important to work quickly once homogenization has started, so preparing all necessary materials in advance will aid in a smooth process.
a. Label one 14 mL snap-cap Falcon round-bottom test tube for each sample that will be homogenized.
b. Fill an insulated container with ice to hold the labeled Falcon tubes.
c. Aliquot the amount of Urea/Tris lysis buffer necessary for the volume of Urea/Tris lysis buffer + inhibitors that will be used to homogenize all of the samples, and chill on ice.
Note: Do not chill the buffer for too long or the urea will come out of solution. If you notice that the urea has come out of solution, warm the solution slightly and it will go back into solution.
d. Fill a small Dewar with liquid nitrogen.
e. Remove frozen muscle samples from the −80°C freezer and transfer them to the Dewar filled with liquid nitrogen.
f. Make Urea/Tris lysis buffer + inhibitors, and immediately following the addition of PMSF add 0.75 mL of the buffer to each of the labeled Falcon tubes.
CRITICAL: The Urea/Tris lysis buffer + inhibitors should be used within 15 min of adding the PMSF to the solution.
Note: The volume of buffer needed to homogenize the samples will depend on the size and subsequent protein concentration of the muscle. For example, our lab typically uses 0.75 mL for the TA muscle, which will produce a protein concentration of ~11 mg/mL.
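That figure can be checked with rough numbers, assuming skeletal muscle is on the order of 20% protein by wet mass (an assumption for illustration, not a value from the protocol):

```python
ta_mass_mg = 40.0        # typical TA wet mass quoted in the note above
protein_fraction = 0.20  # assumed protein content of muscle tissue
buffer_mL = 0.75         # homogenization volume used above
print(ta_mass_mg * protein_fraction / buffer_mL)  # ~10.7 mg/mL, consistent with ~11
```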
10. Homogenize samples.
a. Remove one 1.5 mL microfuge tube containing a single sample from the liquid nitrogen with long forceps and then quickly place the sample in the corresponding Falcon tube that already contains cold Urea/Tris lysis buffer + inhibitors.
b. Using a Polytron homogenizer at full speed, homogenize the sample, taking care to move the tube both vertically and horizontally to break up the whole sample, leaving behind no large chunks.
Note: This step should take no longer than 15-20 s.
c. Replace the cap and return the Falcon tube to the ice.
d. Repeat steps 10.a-10.c with the remaining samples until all samples have been homogenized.
Note: The probe of the Polytron should be washed thoroughly with deionized water between samples. Flush the inside of the probe and visually inspect for any unhomogenized sample. Make sure the probe is dry before homogenizing the next sample.
Note: Samples should be homogenized in batches with approximately 6 samples per batch. Fresh Urea/Tris lysis buffer + inhibitors should be made for each batch to restart the 15-min interval during which the PMSF is viable. Samples from all experimental groups should be present in each batch to ensure that any potential batch variance is distributed across all of the groups.
e. Centrifuge the homogenized samples at 6000 × g for 2 min at 20°C-22°C. This will result in a layer of dense foam on the top of the supernatant.
i. Check for any remaining chunks in the pellet.
ii. If there are unhomogenized pieces of muscle remaining, repeat step 10.b.
f. Vortex the homogenized samples to break up the layer of foam.
g. Centrifuge at 6000 × g again for 2 min.
i. Repeat steps 10.f and 10.g until the foam layer is gone and has become incorporated into the homogenate. This may take as many as 6 repetitions.
Prepare samples for the 'click' reaction

Timing: 30-45 min
When preparing samples for the 'click' reaction it is important to standardize the protein concentration for all of the samples so that each 'click' reaction for each sample contains the same amount of Urea/Tris lysis buffer + inhibitors.
11. Use 1.5 μL of the homogenate to measure the protein concentration of each sample using the BioRad DC protein assay with BSA standards.
12. Add 1.5 μL Urea/Tris stock solution to each of the BSA standards and also use this solution as the blank on the plate.
a. While the protein assay is incubating, label two additional microfuge tubes for each sample.
Note: The accuracy of the protein concentration measurements is critical when calculating the total amount of AHA-labeled proteins per sample. Thus, to minimize variance, all samples should be measured in triplicate and, whenever possible, all samples for a given experiment should be measured within a single assay.
13. Once the protein concentrations have been determined for each sample, transfer 100 μL of each sample to one of the newly labeled microfuge tubes.
14. Dilute each sample to a final concentration of 8 mg/mL using the leftover Urea/Tris lysis buffer + inhibitors.
15. In the other labeled tube, take 80 μL of the 8 mg/mL stock prepared in step 14 and dilute it to a final concentration of 1.15 mg/mL by adding 476.5 μL of DPBS.
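The dilution volumes in steps 14 and 15 follow from C1·V1 = C2·(V1 + V_added); a quick check of the step 15 numbers:

```python
def diluent_volume(c1, v1, c2):
    """Volume of diluent to add so that c1 * v1 = c2 * (v1 + v_added)."""
    return v1 * (c1 / c2 - 1.0)

# 80 uL of the 8 mg/mL stock diluted to 1.15 mg/mL:
print(diluent_volume(8.0, 80.0, 1.15))  # ~476.5 uL of DPBS, as in step 15
```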
Perform the 'click' reaction on the sample homogenates
Timing: 75-90 min

The 'click' reaction will covalently bind a biotin tag to the AHA on the AHA-labeled proteins, as explained in Figure 4.
16. Perform the 'click' reaction.
a. Label a new microfuge tube for each sample that will be subjected to the WB-QUAD 'click' reaction.
b. Make fresh WB-QUAD 'click' reaction master mix in a volume suitable for the number of samples being used.
c. For each sample, combine 27 μL of the WB-QUAD 'click' reaction master mix with 173 μL of the diluted (1.15 mg/mL) sample.
d. Vortex each sample thoroughly.
Note: The 'click' reactions should be processed in batches with approximately 6 samples per batch. Samples from all experimental groups should be present in each batch to ensure that any potential batch variance is distributed across all of the groups.

17. Incubate samples for 1 h on a 25 RPM nutating rotator at 20°C-22°C, protected from light.
Pause point: At the end of the incubation period the samples can be flash frozen, returned to the freezer at −80°C, and stored for a short time (~1 week).
Run the western blot for WB-QUAD

Timing: 2 days
The Western blot protocol allows for quantification of the amount of total protein and the amount of AHA-labeled proteins in each lane.With these values, the AHA-labeled protein to total protein ratio can be calculated for each sample.With samples that were collected immediately after the AHA ''pulse'' period (i.e., Day 0), as well as samples that were collected at 3 and 7 days into the chase period, one can calculate the time-dependent loss of AHA-labeled proteins (i.e., the rate of protein degradation).
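The protocol itself does not prescribe a kinetic model, but one common way to convert the Day 0/3/7 ratios into a degradation rate is a first-order decay fit; a minimal sketch with illustrative numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

days   = np.array([0.0, 3.0, 7.0])     # end of pulse and the two chase time points
ratios = np.array([1.00, 0.62, 0.31])  # illustrative AHA:total-protein ratios

decay = lambda t, a0, k: a0 * np.exp(-k * t)  # first-order loss of labeled protein
(a0, k), _ = curve_fit(decay, days, ratios, p0=(1.0, 0.1))
print(f"degradation rate k = {k:.3f}/day; half-life = {np.log(2) / k:.1f} days")
```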
Pause point: A separating gel can be prepared and stored at 4°C the day before the western blot is run. Be sure to bring the gel to 20°C-22°C before adding the stacking gel.

In WB-QUAD, an alkyne-bearing probe that is conjugated to biotin is used to tag the AHA, and then streptavidin conjugated to horseradish peroxidase, along with enhanced chemiluminescence, is used to visualize and quantify the amount of AHA-labeled proteins in tissue lysates. Similarly, with IHC-QUAD, an alkyne-bearing probe that is conjugated to a fluorophore (e.g., AZDye 594) is used to tag the AHA-labeled proteins, and then a fluorescence microscope is used to visualize and quantify the amount of AHA-labeled proteins in histological sections.
21. Transfer proteins to a PVDF membrane.
a. Wet the PVDF membrane in a chamber containing methanol for 5 min with continuous agitation on an orbital shaker at 60 RPM.
Note: All washes and blocking should be performed at 20°C-22°C.
b. After 5 min, replace the methanol in the chamber with transfer buffer and then return the chamber to the orbital shaker at 60 RPM.
c. Wet the transfer filter paper and the sponges with transfer buffer.
d. Gently remove the gel from the running apparatus and build the sandwich of materials for transfer as follows: sponge, filter paper, gel, PVDF membrane, filter paper, sponge.
e. Place the transfer sandwich into the transfer apparatus with the gel oriented closest to the cathode and the membrane oriented closest to the anode.
f. Perform a wet transfer at 300 milliamps for 1 h and 45 min in transfer buffer.
g. Once the transfer is complete, wash the PVDF membrane for 3 × 5 min in diH2O at 200 RPM on an orbital shaker.
22. Visualize the total protein content in each lane using the ''no-stain'' labeling protocol.
a. Follow the manufacturer's instructions to complete the ''no-stain'' labeling procedure.
b. Image the labeled membrane with an appropriate imaging system.
Note: The ''no-stain'' fluorescent signal has an emission maximum at ~590 nm. The fluorophore can be excited using a UV light transilluminator or a blue or green light source.
c. Capture the fluorescent signal with an appropriate emission filter while ensuring that the captured image(s) are not overexposed.
23. Continue running the western blot to visualize the AHA-labeled proteins.
a. Wash the membrane for 3 × 5 min in diH2O at 200 RPM on an orbital shaker.
b. Block the membrane with 5% milk in TBST for 1 h at 60 RPM on an orbital shaker.
c. Wash the membrane 3 × 10 min with TBST at 200 RPM on an orbital shaker.
d. Incubate the membrane with peroxidase-conjugated streptavidin diluted at 1:100,000 in 1% BSA-TBST for 12-16 h on a 25 RPM nutating rotator at 4°C.
e. Wash the membrane for 2 × 5 min then 2 × 10 min in TBST at 200 RPM on an orbital shaker.
f. Use enhanced chemiluminescence along with an appropriate imaging system to capture image(s) of the membrane and ensure that the captured image(s) are not overexposed.
Perform IHC-QUAD on histological sections

Timing: 6-8 h
Using IHC analysis will allow for the quantification of AHA at the whole muscle section and myofiber-specific levels. By having samples collected immediately after the AHA ''pulse'' period, as well as samples that were collected at 3 and 7 days into the chase period, one can calculate the time-dependent loss of AHA-labeled proteins (i.e., the rate of protein degradation). The samples should be processed in batches with approximately 6 samples per batch. Samples from all experimental groups should be present in each batch to ensure that any potential batch variance is distributed across all of the groups.
24. Prepare the slides.
a. Using a cryostat set to −20°C, cut 10 μm thick sections from the mid-belly (determined using the recorded muscle length) of the muscles that were frozen in OCT.
b. Mount the sections on microscope slides with one section per slide.
25. Complete the immunohistochemical staining procedure.
a. Fix sections in acetone for 10 min.
i. Prior to fixing the sections, chill the acetone to −20°C in a Coplin staining jar.
ii. When the acetone is chilled, add the slides to the vessel and fix for 10 min.
iii. After fixation, allow the sections to dry and warm to 20°C-22°C for 5 min.
iv. Use a PAP pen to draw a hydrophobic circle around the sample.
Alternatives: Paraformaldehyde (PFA) fixation can also be used and is compatible with the 'click' reactions outlined in these methods.
b. Wash the sections for 3 × 5 min with DPBS at 20°C-22°C on an orbital shaker at 50 RPM.
Note: All subsequent washes and incubations should be performed at 20°C-22°C on an orbital shaker at 50 RPM.
c. Incubate the sections with Buffer A for 30 min.
d. Wash the sections 3 × 10 min with DPBS. Prepare the IHC-QUAD 'click' reaction master mix during the last wash.
Note: When preparing the IHC-QUAD 'click' reaction master mix, vortex the solution after the addition of each component (i.e., add TBTA then vortex, add copper then vortex, etc.).
e. Immediately after the final vortexing, add 50 μL of the IHC-QUAD 'click' reaction master mix to each sample and incubate for 1 h, protected from light.
Note: At this point, look at the sample(s) with a light microscope. Fine precipitates should be visible.

Note: At this point, look at the sample(s) with a light microscope. The precipitates that were previously visible should be gone.
i. Incubate the sections with 50 μL of rabbit anti-laminin primary antibody diluted at 1:500 in Buffer A for 1 h.
j. Wash the sections for 3 × 5 min with Buffer A.
k. Incubate the sections with 50 μL of Alexa Fluor 488 goat anti-rabbit IgG secondary antibody diluted at 1:5,000 in Buffer A for 1 h.
l. Wash the sections for 3 × 5 min with Buffer A.
m. Capture images of the signals for the Alexa Fluor 488 and AZDye 594 on an epifluorescence microscope and ensure that the captured images are not overexposed.
CRITICAL: Images for all of the samples from all of the batches should be captured with identical settings on the microscope (e.g., magnification, exposure time, excitation light intensity, etc.).
EXPECTED OUTCOMES
The purpose of the methodologies outlined in this manuscript is to enable visualization and quantification of the in vivo rate of protein degradation in whole tissue lysates with WB-QUAD or at the single myofiber level with IHC-QUAD. As shown in Figure 5A, the 'no stain' procedure allows for the quantification of total protein from the same Western blot membranes that are used to quantify the amount of AHA-labeled proteins. These values can be used to determine the AHA : protein ratio (Figure 5B), and from this value the total amount of AHA-labeled proteins per muscle can be determined. Insight into the rate of protein degradation can then be gained by assessing how the total amount of AHA-labeled proteins per muscle changes during the chase period. The same principle applies to IHC-QUAD (Figures 5C and 5D), but in this case, the total amount of AHA-labeled protein per cross-section or per myofiber is used to assess the rate of protein degradation.
As shown in Figure 5, it is important to include a negative control sample when running WB-QUAD and IHC-QUAD. The negative control sample should be from the same tissue used in the experimental samples and obtained from an animal placed on the regular control chow during the pulse period. The negative control sample is important because it provides assurance that the bulk of the signal being observed/quantified is coming from AHA-labeled proteins rather than non-specific background, and it is needed for an appropriate correction of the background signal.
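For readers who prefer to script the downstream arithmetic, the sketch below goes from background-corrected lane intensities to an AHA : protein ratio and a simple first-order estimate of the degradation rate across the chase time points. The intensity values and variable names are hypothetical, and the exponential-decay model is one plausible choice rather than a prescribed part of this protocol.

```python
import numpy as np

# Hypothetical mean lane intensities from ImageJ (arbitrary units).
background = np.mean([14.8, 15.1, 15.3])       # three blank-membrane regions
aha_signal = np.array([120.0, 78.0, 52.0])     # chase days 0, 3, 7 (streptavidin blot)
total_protein = np.array([95.0, 96.0, 94.0])   # same lanes on the 'no stain' image
neg_control = 16.0                             # lane from the AHA-free chow animal

# Background-correct, then subtract the negative-control signal so that
# only AHA-specific signal contributes to the ratio.
aha_corr = (aha_signal - background) - (neg_control - background)
ratio = aha_corr / (total_protein - background)

# First-order decay fit: ln(ratio) = ln(ratio_0) - k * t.
t = np.array([0.0, 3.0, 7.0])                  # chase time (days)
k = -np.polyfit(t, np.log(ratio), 1)[0]
print("relative AHA:protein ratios:", np.round(ratio / ratio[0], 2))
print(f"estimated degradation rate k = {k:.3f} per day "
      f"(half-life {np.log(2) / k:.1f} days)")
```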
QUANTIFICATION AND STATISTICAL ANALYSIS
Timing: 10-20 min per sample

1. Images obtained from WB-QUAD can be analyzed with ImageJ software.
a. Load the raw image into ImageJ.
b. Use the rectangle tool to draw a rectangle that captures one full lane. Check to be sure that the size of the rectangle is large enough to capture each of the individual lanes on the blot, as this will ensure that the same total area is quantified for each lane.
Note: If all of the lanes in the blot do not fit well within the rectangle as initially drawn, the rotation tool can be used to manipulate the shape of the rectangle to better accommodate the lane while preserving the total area.
c. Use the measurement tool to record the mean intensity of each lane.
d. Measure the amount of total protein.
i. Take at least three measurements of the background (i.e., regions on the membrane that do not contain samples).
ii. Average the values obtained from the three background measurements.

LIMITATIONS

One limitation of this methodology is that the proteins that are labeled with AHA during the 4-day pulse are proteins that have a high turnover rate. To overcome this limitation, the duration of the AHA-labeling period could be increased. Alternatively, it may also be possible to overcome this limitation by measuring protein-specific turnover rates. For example, immunoprecipitation of myosin followed by WB-QUAD would enable one to obtain degradation rates that are specific to myosin.
Another limitation of this methodology is that there is no differentiation between the types of protein degradation involved in this process (e.g., ubiquitin-proteasome system (UPS) vs. autophagy). To overcome this limitation, animals could be treated with UPS- or autophagy-specific inhibitors during the chase period.
Lastly, the cost of the two custom rodent chows could be a limitation, especially if the technique is being used in larger animals such as rats. Specifically, 1 kg of each of the custom chows used in this protocol was purchased from Cambridge Isotope Laboratories at a combined cost of over $3000. A total of 1 kg of each chow is enough for the habituation and subsequent "pulse" labeling for approximately 70 mice. However, since the time of the initial experiments, we have discovered alternative distributors of AHA (e.g., Iris Biotech) and companies (e.g., Envigo Teklad) that can produce the two custom chows at approximately half the cost. It is our hope that, with the emergence of alternative sources for producing the custom chow and AHA, the WB-QUAD and IHC-QUAD methods of measuring protein degradation will stand as a simple and cost-effective means for visualizing and quantifying the rate of protein degradation.

TROUBLESHOOTING

Problem 1: Low/no AHA signal following WB-QUAD and/or IHC-QUAD
If, upon the completion of steps 22 and 23 (WB-QUAD) or step 24 (IHC-QUAD), there is little or no signal, this may be an indication that there was a problem running the Western blot or a problem with the 'click' reaction.
Potential solution
The no-stain image should provide validation that there was protein run in each lane and that it was effectively transferred to the membrane. If there is a poor signal from the no-stain, use Coomassie blue to stain the membrane. If no signal is observed with either method, then there was likely a problem with the transfer step. The use of a negative control sample is necessary to verify that even a low signal is, indeed, from the AHA-labeled proteins and not simply the result of non-specific background. If the Day 0 samples and the negative control sample have the same signal intensity, then it can be inferred that there was either a problem with the 'click' reaction or a problem with the AHA pulse labeling. The AHA-infused chow contains a blue dye and, therefore, blue coloration of the feces can be used to verify that the animals are consuming the AHA-infused chow. A robust signal-to-noise ratio (e.g., the difference in signal between a Day 0 sample and the negative control) with IHC-QUAD can be used to verify that there was effective AHA labeling and therefore provide insight into whether a low Western blot signal is due to a problem with the WB-QUAD 'click' conditions and/or the detection of the biotin tag (i.e., the peroxidase-conjugated streptavidin). The inverse logic applies if there is an issue with the signal in IHC-QUAD.

As mentioned in the materials and equipment section, it has been recommended that fresh TCEP be used to make the WB-QUAD 'click' reaction master mix to maximize the efficiency of the 'click' reaction. Our lab has had success using aliquots frozen at −20°C, but if little or no AHA signal is detected and fresh TCEP was not used, it might be beneficial to repeat the experiment using fresh TCEP.
Problem 2: High AHA signal following WB-QUAD and/or IHC-QUAD, but a low signal-to-noise ratio
If, upon the completion of steps 22 and 23 (WB-QUAD) or step 24 (IHC-QUAD), there is a high AHA signal but a low signal-to-noise ratio, this may be an indication that there was a problem with the 'click' reaction and further optimization is needed. The most likely problem is a sub-optimal concentration of the alkyne probe.
Figure 1. Construction of an embedding mold for freezing muscles in OCT

d. Prepare a surface to measure the length of the muscle. A paper towel wetted with deionized water works well.
5. Anesthetize the mouse.
a. Prime the induction chamber using an isoflurane anesthesia apparatus; vapor setting 2.5-4% and oxygen flow rate of 1 L/min.
b. Place the mouse in the induction chamber and wait until the mouse is fully anesthetized and unresponsive to a pedal pinch.
6. Prepare the animal for the surgical procedure.
a. Transfer the mouse from the induction chamber to the nose cone to maintain anesthesia.
b. Shave the fur from the lower leg.
7. Harvest the TA muscle for WB-QUAD.
a. Open the skin on the anterior aspect of the lower limb and expose the TA muscle.
b. Carefully excise the TA muscle from the mouse.
Optional: Weigh the muscle and record the mass.
c. Quickly transfer the TA muscle to the labeled 1.5 mL microfuge tube and drop the closed microfuge tube into the Dewar of liquid nitrogen to flash freeze the sample.
d. Upon completion of the muscle harvest, transfer the samples to a storage vessel in a freezer set at −80°C.
Figure 2. Timeline and experimental design of the pulse-chase labeling strategy for WB-QUAD and IHC-QUAD
Figure 3. Illustration of the setup and procedure for freezing muscles for IHC-QUAD
(A) Chamber for freezing muscles in OCT for IHC-QUAD. The plastic beaker is filled with a small volume of isopentane. Liquid nitrogen has been added to the bottom of the chamber surrounding the plastic beaker to chill the isopentane. Note the layer of frozen isopentane that is visible on the interior periphery of the beaker, whereas the center contains isopentane in a liquid state.
(B) TA muscle that has been excised at resting length.
(C) TA muscle in 20°C-22°C OCT immediately before the embedding mold is placed in the chilled isopentane. Note that the TA is fully submerged in the OCT and is oriented such that the long axis of the muscle is parallel to the side walls of the embedding mold.
(D) Block of frozen OCT containing the TA muscle that has been removed from the embedding mold and placed on aluminum foil prior to storage at −80°C.
19. Prepare the samples.
a. Combine 26.25 µL of a thawed 'click' reaction aliquot with 8.75 µL of 4× Laemmli buffer that contains the freshly added DTT.
b. Heat the samples in boiling water for 5 min.
20. Separate proteins using SDS-PAGE.
a. Load 30 µL of each of the prepared samples into designated wells on the gel.
b. Place the loaded gel into the electrophoresis apparatus and fill it with Tris-Glycine running buffer as instructed by the manufacturer.
c. Separate samples by running at 100 V for 1.5 h or until the dye front has reached the bottom of the gel.
Figure 4. Visualization of AHA incorporation using a 'click' reaction
AHA is a methionine analog that possesses a reactive azide group. During translation, AHA can be incorporated into newly synthesized proteins. The newly synthesized proteins can contain multiple AHAs, and each AHA can be covalently bound to an alkyne-bearing probe via a copper-catalyzed azide-alkyne cycloaddition reaction (i.e., 'click' reaction). In WB-QUAD, an alkyne-bearing probe that is conjugated to biotin is used to tag the AHA, and then streptavidin conjugated to horseradish peroxidase along with enhanced chemiluminescence is used to visualize and quantify the amount of AHA-labeled proteins in tissue lysates. Similarly, with IHC-QUAD, an alkyne-bearing probe that is conjugated to a fluorophore (e.g., AZDye 594) is used to tag the AHA-labeled proteins, and then a fluorescence microscope is used to visualize and quantify the amount of AHA-labeled proteins in histological sections.
Figure 5. Illustration of the expected outcomes
(A) Representative Western blot of the AHA-labeled proteins from muscles collected at each of the indicated time points (note: the X condition indicates the negative control sample).
(B) Quantitative analysis of the AHA : protein ratio at each time point.
(C and D) Representative images and quantification of the AHA-labeled proteins from muscle cross-sections collected at the indicated time points. The values in B and D are reported as the group mean + SEM and are expressed relative to chase day 0, n = 4-5 muscles/group. Statistical significance was determined using a one-way ANOVA (B) or an unpaired t-test (D). # indicates a significant difference from chase day 0, $ from chase day 3, P < 0.05. Scale bars = 500 µm. Adapted from Steinert et al. (2023).
Preserved boreal zone forest massif mass estimation during fire extinguishing by liquid aerosol
Abstract. This study presents the results of experimental studies establishing the principles governing the decrease in mass of a model fire source when the ground cover of deciduous and mixed forests is extinguished by liquid aerosol. The experiments were carried out with typical forest fuels: birch leaves and mixed non-living components of temperate forest. The densities of the forest fuel samples in the model fire source were varied within ranges corresponding to real practice: 20.26-54.70 kg/m³ for birch leaves and 27.54-72.18 kg/m³ for a mixed forest fuel. The sizes of the droplets generated by the nozzle were 0.01-0.12 mm, consistent with modern aerosol fire extinguishing systems. The dependence of the remaining mass after completion of the pyrolysis reaction on the initial forest fuel sample mass was established.
Introduction
Forest fires are considered to be spontaneous fire spread in forest zones with homogeneous and mixed vegetation [1]. As a result of forest fires in Russia, thousands of hectares of forest are damaged and killed every year, forest animals and birds die, huge quantities of carbon dioxide and smoke are emitted into the atmosphere, residential settlements are destroyed, and millions of rubles are spent from the state budget to extinguish forest fires and restore the territories affected by fire. The optimal way to eliminate a fire zone is a local discharge of quenching liquid from an aircraft into the combustion zone [2]. Studies have shown [3] that with this approach to fire extinguishing, most of the discharged liquid is absorbed into the ground. As a rule, it does not suppress the pyrolysis reaction in the soil covering of the boreal zone. Only 5-7% of the total liquid volume intended for reducing the temperature during the combustion of forest fuel (FF) evaporates in the course of terminating combustion and soil-covering pyrolysis [3]. It is therefore important to estimate the mass of FF that can be preserved when eliminating ground fire in the boreal zone.
The aim of this study is to quantify the mass of the forest massif preserved when extinguishing a forest zone with liquid aerosol.
Experimental setup and procedure
Figure 1 shows the experimental setup used in the experimental studies. Type-K needle thermocouples 8 (temperature measurement range 223-1473 K, accuracy ±3 K, thermal lag no more than 0.1 s) were used to measure the temperature of the pervasive combustion process. Temperature readings were recorded on a multichannel recorder 9. Quenching was realized using spray nozzle 2 (droplet size R_d = 0.01-0.12 mm; irrigation density ξ_f = 0.014-0.016 L/(m²·s); water consumption μ_w = 0.00063 L/s). Liquid was supplied from the tank 4. The speed of the droplet flow was determined using the high-speed video camera 1 and the "Actual Flow" software for processing the results of the experiments by the optical "Particle Image Velocimetry" (PIV) diagnostic method. The dimensions of the dispersed stream droplets were calculated by the IPI method [5].
Two groups of samples were used in the studies. The first group included only birch leaves; the second group consisted of the mixed non-living components of temperate forests in the following mass ratio: 25% birch leaves, 15% pine needles and 60% branches of hardwoods. We used distilled water (GOST 6709-72) for extinguishing the standardized fire. At the first stage of the experiment, the FF sample was weighed on the analytical microbalance ViBRA HT 84RCE 13 with an accuracy of 10⁻⁵ g (mass m_f0 was determined). After that, the forest fuel was placed at the bottom of the cylinder. The initial weights and densities of the samples are shown in figure 2. The initial weight of the forest fuel was chosen so that its density varied within a narrow range from one experiment to another. Ignition of the samples was carried out uniformly over the entire area of the open FF surface, using three piezoelectric gas burners simultaneously. For each model fire source we conducted from 15 to 20 experiments. The standardized fires were extinguished by a liquid aerosol with a droplet radius R_d = 0.1-0.12 mm. Spraying continued until the complete suppression of the thermal decomposition process. The ending of this process was recorded visually and using the thermocouple readings. After suppression of the pyrolysis reaction, the fraction (residue) of unreacted FF (m_r) was dried for 24 hours at room temperature (figure 3) and weighed on the laboratory microbalance.

An analysis of the location of the approximation curves in figure 4 leads to the conclusion that the mass loss when extinguishing mixed FF is greater than for birch leaves (by 0.21-0.54 g). This effect may be due to the structure of the forest fuel covering. The area of birch leaves is larger than the area of needles and twigs. During extinguishing by a liquid aerosol, we observed water droplets accumulating on the surface of the birch leaves over a period of time. As a consequence, two extinguishing mechanisms were realized: blocking the oxidant from entering the reaction zone, and directly reducing the temperature of the decomposing FF bed of birch leaves. Because of the differences in structure between the samples with mixed FF and those with birch leaves, water drops did not accumulate on the surface of the mixed samples but penetrated through the sample bed. The temperature of the mixed forest fuel bed was reduced by the evaporation of the water droplets. As the water drops penetrated into the pores of the sample and evaporated during the interaction of the thermal decomposition products with the droplet stream, a cloud of water vapor was formed that displaced the pyrolysis products.
Results and discussion
Figure 5 shows the dependencies of the remaining mass of FF (m_r) on the initial mass (m_f0). Comparing the initial mass of the sample with the remainder after extinguishing with a liquid aerosol (figure 4), it was established that a significant portion of the sample remained:
• for birch leaves, 53-83% of the initial sample mass (m_f0);
• for a mixed FF, 54-85% of the initial sample mass (m_f0).
Figure 5 shows that the remaining mass increases with increasing initial mass of FF. For birch leaves and mixed forest fuel, m_r increases as the initial mass is varied by 3.7 and 5.2 g, respectively. This effect may be explained by the heat content of the FF sample. During the experiments, the time needed to suppress the thermal decomposition reaction of the forest fuel also increased with increasing brazier area.
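As a rough illustration of how the reported quantities relate, the sketch below computes the preserved mass fraction m_r/m_f0 and fits a linear trend of m_r against m_f0. The sample masses are hypothetical values chosen to fall inside the ranges reported above, not measurements from this study.

```python
import numpy as np

# Hypothetical (m_f0, m_r) pairs in grams, chosen so the preserved
# fractions lie within the 53-83% range reported for birch leaves.
m_f0 = np.array([10.0, 11.2, 12.5, 13.7])   # initial sample mass
m_r  = np.array([ 5.6,  7.1,  8.9, 10.8])   # mass remaining after quenching

preserved = m_r / m_f0                       # preserved mass fraction
delta_m = m_f0 - m_r                         # mass loss during extinguishing

# Linear approximation m_r = a * m_f0 + b, analogous to the
# approximation curves of figures 4 and 5.
a, b = np.polyfit(m_f0, m_r, 1)
print("preserved fractions:", np.round(preserved, 2))
print("mass loss (g):", np.round(delta_m, 2))
print(f"linear fit: m_r = {a:.2f} * m_f0 + {b:.2f}")
```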
Conclusion
The obtained results lead to a conclusion about the effectiveness of extinguishing forest fires with a liquid aerosol. The experimental studies establish that up to 80% of the initial mass of the soil covering in the boreal zone was preserved. The established principles reflect the positive outcome of forest fire extinguishing.

The investigations of heat and mass transfer processes in the suppression of forest fires were supported by the Russian Science Foundation (project 14-39-00003). Experiments to determine the specific differences between the processes of thermal decomposition and combustion of various types of forest fuel were carried out within the framework of the President of the Russian Federation grant MK-1684.2017.8.
Figure 4. Dependencies of the mass loss (Δm = m_f0 − m_r) on the initial mass of the sample during extinguishing.
Viscosupplementation as a treatment of internal derangements of the temporomandibular joint: retrospective study
BACKGROUND AND OBJECTIVES: There are many non-invasive treatment modalities for internal temporomandibular joint derangements described in the literature, including counseling, drug therapy, physical therapy and interocclusal devices. However, some patients become refractory to conservative treatments, and procedures such as arthrocentesis, arthroscopy and temporomandibular joint surgery are indicated. Viscosupplementation is a less invasive, low-cost approach with good short- and long-term results. This study aimed at discussing viscosupplementation to treat internal temporomandibular joint alterations, with results after four months of follow-up.
METHODS: Fifty-five patients with reducing and non-reducing disc displacement and osteoarthritis refractory to conservative treatments participated in the study and were submitted to sodium hyaluronate infiltrations.
RESULTS: There was statistically significant pain improvement in all groups. Patients with non-reducing disc displacement and osteoarthritis had significant mouth opening improvement. These results were constant along the four months of follow-up.
CONCLUSION: Viscosupplementation with sodium hyaluronate may be considered a good alternative to functionally reestablish the temporomandibular joint in the short term in patients with internal alterations refractory to conservative treatments.
INTRODUCTION
Among temporomandibular disorders (TMD), derangement of the condyle-disc complex derives from the collapse of the normal rotational function of the disc on the condyle. Usually this situation occurs with elongation of the discal collateral ligaments and the inferior retrodiscal lamina. This group of articular TMD includes reducing and non-reducing disk displacement. These disorders are often associated with inflammatory alterations such as synovitis, capsulitis and retrodiscitis, or with degenerative alterations like osteoarthrosis and osteoarthritis 1. Generally the primary protocol to control TMD prioritizes the simplest measures, which are reversible and less invasive 1. However, since intracapsular dysfunctions are often a result of pathologies of the articular surface, that is, of existing structural alterations, conservative treatment sometimes proves to be ineffective. Several forms of treatment for internal dysfunctions of the temporomandibular joint (TMJ) are supported by the literature: functional rest, non-steroidal anti-inflammatory drugs, oral splints, physical therapy support exercises, intra-articular corticosteroid injection, arthrocentesis, arthroscopy and open joint surgery of the TMJ, among others.

Viscosupplementation with intra-articular injection of sodium hyaluronate (SH) - the sodium salt of hyaluronic acid (HA) - was first used as a treatment for traumatic arthritis in racehorses 2 and was subsequently used in humans to treat osteoarthritis in large joints such as the knees, hips and shoulders. In 1979, sodium hyaluronate started to be indicated for internal TMJ alterations 3, and since then some studies have tried to assess the effectiveness of the technique, as well as to establish a protocol for its utilization.

A multicenter randomized double-blind and placebo-controlled study with 121 patients presented promising results 4. A group of 80 patients received SH injections (35 had reducing disk displacement (RDD), 8 presented non-reducing disk displacement (NRDD) and 37 had degenerative alterations of the TMJ), while 41 patients received injections of saline solution (15 with RDD, 6 with NRDD and 20 with degenerative alterations of the TMJ). Results showed that for patients with RDD, joint sounds were subjectively reduced in both groups, without statistically significant difference, but the degree and importance of the mandibular deviation improved significantly in the SH group. Patients with NRDD treated with SH presented an improvement in mouth opening in the first five weeks when compared to the group treated with placebo; however, the difference was not statistically significant. Regarding pain assessment with a visual analogue scale (VAS), results indicated that the group treated with SH obtained significant improvement in comparison to the placebo group.

Another retrospective study 5 compared the effectiveness of intra-articular injection of SH to the absence of treatment in patients with NRDD. A group of 60 patients with NRDD received a weekly injection of 1 mL of SH for 5 weeks. A second group of 76 patients diagnosed with NRDD was only monitored, without receiving any treatment (control group). During a period of two years, patients were examined monthly regarding mandibular movement range and joint pain. After this period, 82.3% of patients from the SH group presented an improvement (defined by the authors as a mouth opening range over 35 mm and absence of joint pain) against 64.7% from the control group, indicating a statistically significant difference. Furthermore, it was observed that patients from the SH group presented faster remission of the symptomatology when compared to untreated patients, leading to the conclusion that SH appears to be an effective method for the treatment of NRDD.

Other authors assessed the efficacy of intra-articular injection of SH in 38 patients presenting RDD in a randomized placebo-controlled clinical trial 6. Patients from the SH group received two SH injections in the upper compartment of the affected TMJ, while control group patients received saline solution injections. The SH group presented statistically significant improvement in all evaluated aspects, while the placebo group presented significant improvement only for pain. It was concluded that SH injection is an efficient therapeutic option for the treatment of RDD over a six-month period 6.

A comparison of injections of corticosteroid (CO) and sodium hyaluronate in 33 patients with arthralgia and RDD unresponsive to conservative treatments was also performed. In that controlled double-blind study, 18 patients received two infiltrations of 0.5 mL of 1% SH with a two-week interval, while 15 patients received a corticosteroid injection (0.5 mL of betamethasone). Evolution was assessed using a questionnaire regarding pain, functional limitation, articular sounds and persistence of symptoms, together with a clinical evaluation. VAS indicated significant improvement, with a reduction of the initial algic condition of 30% for the SH group and of 40% for the CO group 7.

A randomized controlled clinical trial with 67 patients with RDD, NRDD or degenerative alteration of the TMJ compared injections of sodium hyaluronate and corticosteroid. The work group received 0.5 mL of SH associated with 1 mL of 1% bupivacaine once a week, totaling three to four injections. The control group received 0.5 mL of 2.5% prednisolone with 1 mL of 1% bupivacaine once a week, for a total of three or four injections. During a 5-week monitoring period, both groups presented significant improvement in pain and function, with no statistically significant difference between them 8.

A randomized double-blind clinical trial evaluated 41 patients with rheumatoid arthritis in the temporomandibular joints, dividing them into three groups: 14 individuals were treated with intra-articular SH injections, 14 with corticosteroid injections and 13 with saline solution injections. The monitoring period was four weeks, and an improvement in symptoms and in the clinical indexes of dysfunction was observed in all groups. Better results were observed in the groups treated with SH and CO 9.

One study reported that 6 patients (7.5% of the sample) presented reactions such as discomfort and edema at the injection site 4. Another author reported that 13 patients (37.1% of the sample) who received SH injections complained of pain during the procedure and that, within three days, 3 patients (8.5% of the sample) presented acute malocclusion on the injection side and reduced muscular strength 8.

This study aimed at discussing and evaluating the viscosupplementation technique with SH injection as an alternative for the treatment of internal TMJ alterations by means of 55 case reports.
METHODS
This is a retrospective study performed by assessment of the medical records of patients treated at the TMD and Orofacial Pain Clinic of the Federal University of Paraná (UFPR) and at a private clinic. Fifty-five patients with a diagnosis of articular TMD - RDD, NRDD or osteoarthritis (OA) - who were unresponsive to conservative treatments, such as occlusal splints and mandible exercises, received viscosupplementation of the affected TMJ. The diagnostic clinical criteria of the RDC/TMD were applied by two specialists in TMD and orofacial pain throughout the assessment of all patients. The technique described by Bonotto, Custodio and Cunali 10 for infiltration of 1 mL of sodium hyaluronate was used in every procedure. Two specialists in TMD and orofacial pain performed all procedures during the period from February 2006 to March 2011. Both examiners were calibrated for data collection and execution of the technique. All patients received from one to three infiltrations of SH with at least 10 days between them. Throughout the postoperative period, patients were instructed to continue with routine conservative treatment, oral splint and/or mandibular exercises. A non-steroidal anti-inflammatory drug was prescribed for the three days following the procedure. Patients' evolution regarding temporomandibular pain complaints was assessed using a VAS before and 4 months after the procedure. To evaluate alterations in mandibular function, measurements of interincisal opening were also performed at the same interval. The same professionals who had executed the previous procedures performed all assessments. In order to compare average mouth opening and average VAS indexes before and after treatment, the Wilcoxon signed rank test was applied, with a 95% confidence interval. This study was approved by the Ethics Committee of the Federal University of Paraná under number 1245.170.11.2010.
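For readers who wish to reproduce the statistical comparison, the snippet below runs a paired Wilcoxon signed-rank test on pre- and post-treatment VAS scores. The scores are hypothetical and SciPy is assumed to be available; the study's actual per-patient data are not reproduced here.

```python
from scipy.stats import wilcoxon

# Hypothetical paired VAS pain scores (0-10) for one diagnostic group,
# recorded before and 4 months after viscosupplementation.
vas_pre  = [8, 7, 9, 6, 8, 7, 9, 8, 6, 7, 8, 9]
vas_post = [3, 4, 2, 3, 5, 2, 4, 3, 2, 3, 4, 3]

# Non-parametric paired test, matching the Wilcoxon signed rank test
# used in the study (significance threshold 0.05).
stat, p_value = wilcoxon(vas_pre, vas_post)
print(f"W = {stat}, p = {p_value:.4f}")  # p < 0.05 -> significant pain improvement
```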
RESULTS
Of the 55 evaluated patients, 46 were female (83.64%) and 9 were male (16.36%). The average age was 32.98 ± 15.84 years. After clinical assessment using the RDC/TMD criteria, the diagnosis was RDD in 21.8% (12 patients), NRDD in 54.5% (30 patients) and OA in 23.6% (13 patients). Table 1 shows the mouth opening and VAS data at baseline. Figure 1 indicates the patients' evolution in mandibular function before and after the viscosupplementation treatment. Statistically significant improvement of mandibular function was observed in patients with NRDD and OA (p < 0.001). Figure 2 shows the improvement in the temporomandibular disorder pain complaint, which was statistically significant for all groups: RDD, NRDD and OA. Complaints of mild discomfort in the first 48 hours were reported by 9% of patients, while 7.2% of them experienced an open bite on the injection side.
DISCUSSION
There is no precise indication for viscosupplementation in the literature; however, there seems to be a consensus on its utilization in cases of internal symptomatic alteration of the TMJ, especially in the presence of a limited range of movement. During the monitoring of the reported cases, viscosupplementation demonstrated to be an efficient treatment to control pain in patients with RDD, NRDD and OA. This result is consonant with those presented by several authors 4,6-9. Furthermore, viscosupplementation improved the mandibular function of patients with limited mouth opening caused by NRDD and OA, corroborating others 5,9.
Results regarding improvement of mandibular function observed in this study may be considered expressive, since they refer to patients who did not respond to conservative treatment. However, it must be highlighted that this is a retrospective study based on chart review with short-term follow-up, and that the same examiners performed both the treatment and the postoperative assessment procedures. Therefore some bias must be considered in the interpretation of the results. Maintenance of conservative treatment during the follow-up period may have contributed to the improvement of patients. However, these patients were refractory to conservative therapy alone. Furthermore, in clinical practice viscosupplementation should be associated with conservative treatment.

HA is a mucopolysaccharide acid and an essential component of animal tissues. HA is composed of multiple alternating units of D-glucuronic acid and N-acetylglucosamine, forming a highly viscous gelatinous solution due to its elevated hydrophilicity 11. It is the major component of the synovial fluid and plays an important role in the lubrication of articular tissues due to its high molecular weight 11. Inflammatory and degenerative alterations of the joints reduce the concentration and molecular weight of HA 11,12. SH injection increases the concentration and molecular weight of HA in the synovial fluid, which is associated with relief of pain 13. By clearing the adherence zones between the articular disk and the mandibular fossa, articular mobility is enlarged, allowing better circulation of the synovial fluid. The presence of prostaglandin E2 and leukotriene B4 has been verified in the synovial fluid of patients with TMD, suggesting that these mediators are among the factors able to generate joint pain 14. It is also suggested that the analgesic effect of viscosupplementation may occur by blocking receptors and endogenous algic substances in the synovial tissues. A strictly mechanical mechanism, through the interruption of trauma caused by mechanical block of the disk or of both adherence zones, was also suggested 4, which could explain the effects of therapy in the medium and long term, because although the injected HA is kept in the joint for only a few days, the results last for months 15,16.
Only two articles reported side effects of the technique, which seem to be brief and self-limiting 4,8. During the follow-up of the cases reported in this study, no severe side effect was observed. The most common complaints were mild soreness, edema and open bite on the injection side. However, in all cases the side effects were self-limiting, confirming other authors' findings 8.
According to the results of this study, viscosupplementation can be considered an efficient alternative for the management of pain and the improvement of function in patients with RDD, NRDD and OA refractory to conservative treatments.
CONCLUSION
After the monitoring of these clinical cases, it is possible to conclude that viscosupplementation with SH may be an interesting proposal to reduce TMJ pain and improve mouth opening. Controlled clinical trials with significant samples and longer monitoring periods are required to evaluate the real effectiveness of the viscosupplementation technique and to establish an objective protocol.
Figure 1. Interincisal opening average pre- and post-treatment for the three groups. RDD: reducing disc displacement; NRDD: non-reducing disc displacement; OA: osteoarthritis. *Statistical difference was observed in patients with NRDD and OA using the Wilcoxon signed rank test (p < 0.001).
Myosteatosis in Cirrhosis: A Review of Diagnosis, Pathophysiological Mechanisms and Potential Interventions
Myosteatosis, or pathological excess fat accumulation in muscle, has been widely defined as a lower mean skeletal muscle radiodensity on computed tomography (CT). It is reported in more than half of patients with cirrhosis, and preliminary studies have shown a possible association with reduced survival and increased risk of portal hypertension complications. Despite the clinical implications in cirrhosis, a standardized definition for myosteatosis has not yet been established. Currently, little data exist on the mechanisms by which excess lipid accumulates within the muscle in individuals with cirrhosis. Hyperammonemia may play an important role in the pathophysiology of myosteatosis in this setting. Insulin resistance, impaired mitochondrial oxidative phosphorylation, diminished lipid oxidation in muscle and age-related differentiation of muscle stem cells into adipocytes have also been suggested as potential mechanisms contributing to myosteatosis. The metabolic consequence of ammonia-lowering treatments and omega-3 polyunsaturated fatty acids in reversing myosteatosis in cirrhosis remains uncertain. Factors including the population of interest, design and sample size, single/combined treatment, dosing and duration of treatment are important considerations for future trials aiming to prevent or treat myosteatosis in individuals with cirrhosis.
Introduction
Radiologically identified skeletal muscle abnormalities, including sarcopenia (low muscle mass) and myosteatosis (pathological fat accumulation in muscle), are common in patients with cirrhosis. Although extensive research over the last decade has demonstrated sarcopenia to have an independent association with poor prognosis in cirrhosis [1][2][3], little is known about the clinical implications of myosteatosis, an indicator of poor muscle quality. Muscle quality is defined by the ratio of muscle strength to mass, which is affected by changes in muscle composition [4]. In muscle, lipids can accumulate as intermuscular adipose tissue (IMAT, fat beneath the deep fascia and between adjacent muscle groups), intramuscular adipose tissue (fat between and/or within muscle fibers) and intramyocellular lipids in the form of lipid droplets [5][6][7]. Intramuscular fat may reduce muscle quality by disrupting muscle fiber alignment, thus weakening mechanical action [8].
The gold-standard technique for assessing fat infiltration into muscle is biopsy; however, limited data are available due to the invasiveness of tissue sampling. Cross-sectional imaging with computed tomography (CT) and magnetic resonance imaging (MRI) has facilitated comprehensive analysis of muscle quality. Despite the lack of ionizing radiation, MRI is expensive and time-consuming, and consistent protocols for the scanning process and software are lacking. Protocols using 3D volumetric MRI coupled with advanced image processing techniques have been recently developed, enabling assessment of muscle composition [9]. However, this technique needs to be validated and standardized in patients with cirrhosis.
In cirrhosis, CT has become a common modality for radiographic assessment of myosteatosis, mostly available as part of standard care [10]. Alteration in muscle quality by excess fat accumulation is recognized as a lower mean skeletal muscle radiodensity on CT [11]. Muscle radiodensity can be objectively assessed by CT in Hounsfield units (HU), where 0 and −1000 HU are the radiodensity of distilled water and air at standard pressure and temperature [12], respectively. The predefined CT radiodensity threshold value for demarcating muscle is in the range of −29 to 150 HU [13]. Fat infiltration into the muscle reduces the radiodensity measured in HU on CT; however, controversies remain regarding the definition of low-radiodensity muscle, as ranges from 0 to 29 HU and −29 to 30 HU have been applied as low-radiodensity muscle in previous studies [11].
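To make the HU bookkeeping concrete, the sketch below computes the mean skeletal muscle radiodensity and the fraction of low-radiodensity muscle from a segmented CT slice. The HU array and mask are hypothetical stand-ins for real segmentation output; only the −29 to 150 HU muscle window and the 29 HU low-radiodensity threshold are taken from the text.

```python
import numpy as np

# Hypothetical CT slice (HU values) and a binary mask of the L3
# skeletal muscle compartment produced by prior segmentation.
rng = np.random.default_rng(0)
hu = rng.normal(loc=35, scale=25, size=(256, 256))   # fake HU image
muscle_mask = np.zeros((256, 256), dtype=bool)
muscle_mask[100:160, 60:200] = True

# Restrict to the predefined muscle radiodensity window (-29 to 150 HU).
muscle_hu = hu[muscle_mask]
muscle_hu = muscle_hu[(muscle_hu >= -29) & (muscle_hu <= 150)]

mean_radiodensity = muscle_hu.mean()
low_rd_fraction = np.mean(muscle_hu < 29)            # share of low-radiodensity muscle
print(f"mean muscle radiodensity: {mean_radiodensity:.1f} HU")
print(f"fraction below 29 HU: {low_rd_fraction:.2f}")
```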
Myosteatosis may not necessarily occur at the same time as the loss of muscle mass. It remains unclear whether pathological fat accumulation in muscle results from the loss of muscle mass or whether it occurs prior to alterations in muscle mass [14]. Although myosteatosis may be a feature of muscle loss, data regarding various interactions between myosteatosis and sarcopenia are still controversial. Myosteatosis was reported in 93% of sarcopenic patients with chronic liver disease [15], but no interaction effect of sarcopenia and myosteatosis on cirrhosis complications was reported [16].
Despite the significant clinical implications of myosteatosis in cirrhosis, a standardized definition for myosteatosis and the exact mechanisms associated with its development have not been thoroughly characterized. The elucidation of abnormalities in muscle physiology and underlying molecular mechanisms is essential in developing reversal agents. Therefore, the goal of this review is to summarize important results from the literature regarding the diagnosis of myosteatosis, pathogenic mechanisms and potential therapeutic targets. We will also outline important considerations for the design of future clinical trials in cirrhosis.
Search Strategy
A literature search was performed in MEDLINE (OvidSP) until October 2021 using the subject heading terms "myosteatosis", "muscle radiodensity", "muscle attenuation", "muscle fat" and "intramuscular fat", combined using the Boolean operator "AND" with the search terms "cirrhosis", "chronic liver disease", "end-stage liver disease" and "liver transplant". The search was restricted to full-text papers published in English. A manual review of the literature was conducted to include papers discussing diagnosis, outcomes, pathophysiological mechanisms and potential interventions.
The preliminary search yielded 45 potentially relevant articles. After screening titles and abstracts, 28 papers were excluded, and therefore, 17 full-text articles were reviewed. Of those 17 articles, five were excluded because they did not include patients with cirrhosis or because no explanation for myosteatosis determination was provided. Figure 1 presents a detailed flow chart of the selection of the 12 studies included in this review. Study eligibility was assessed independently by M.E. and A.M.L., and any inconsistencies were resolved by a consensus of the reviewers. Relevant articles are discussed below.
Diagnosing Myosteatosis
Optimal cutoff values to define normal and low-radiodensity muscle in relation to adverse outcomes have not been established in patients with cirrhosis. Myosteatosis in cirrhosis has been defined using IMAT cross-sectional areas or mean muscle radiodensity. IMAT represents, in particular, intermuscular adipose tissue areas, whereas the low mean muscle radiodensity represents poor-quality muscle with areas containing intramuscular adipose tissue and intramyocellular lipids [17]. Figure 2 illustrates the IMAT cross-sectional areas and total skeletal muscle radiodensity estimation at L3 in two patients with cirrhosis. Mean skeletal muscle radiodensity was 19 HU in a patient with myosteatosis and 48 HU in a patient with normal muscle radiodensity. Areas composed of low-radiodensity muscle (<29 HU) are predominant in a patient with myosteatosis, whereas areas of normal-radiodensity muscle (≥29 HU) are prevalent in patients without myosteatosis. The IMAT cross-sectional area was 4 cm²/m² and 6 cm²/m² in the patient with and without myosteatosis, respectively.
Figure 2. Radiodensity ranges used for the analysis of normal-radiodensity (red) and low-radiodensity (dark blue) muscle and intermuscular adipose tissue (IMAT; green) are shown.
Myosteatosis has been widely diagnosed based on the mean radiodensity (HU) value of the entire or partial muscle cross-sectional areas on CT. Values derived from cancer populations have been commonly applied to predict clinical outcomes in patients with cirrhosis. A mean third lumbar vertebra (L3) muscle radiodensity cutoff of <33 HU in patients with a BMI ≥25 kg/m² and <41 HU in those with a BMI <25 kg/m² was established in cancer patients to predict mortality [18]. Using these cutoffs for myosteatosis, 52% of patients with cirrhosis met criteria for myosteatosis [19]. Given fluid retention in a majority of patients with cirrhosis, the applicability of these BMI-dependent cutoffs is questionable. Moreover, the higher lipid storage capacity of skeletal muscle in females compared to males [20] requires sex-specific cutoffs for myosteatosis to be defined. In patients with cirrhosis, the optimal cutoff to predict 12-month mortality was determined using the psoas muscle radiodensity at the level of the fourth to fifth vertebra. Psoas muscle radiodensity below 43.14 HU was associated with higher 12-month mortality after adjusting for age, sex and Child-Pugh score [21]. When the predictive performance of psoas muscle radiodensity in predicting short- and long-term outcomes after deceased donor LT was compared to the performance of mean radiodensity of L3 skeletal muscle, better performance was observed using the latter in predicting post-LT mortality [22]. However, no sex-specific cutoffs using the mean radiodensity of L3 skeletal muscle have been established in patients with cirrhosis to predict pre-liver transplant (LT) mortality. Limited predictive accuracy of psoas muscle compared to whole muscle at L3 has been reported in other studies [23], highlighting the importance of establishing optimal cutoffs in diagnosing myosteatosis.
Myosteatosis has also been defined based on IMAT-normalized radiodensity or cross-sectional areas [24,25]. CT radiodensity of the multifidus muscles (HU) was divided by the radiodensity of subcutaneous adipose tissue to determine IMAT. L3-IMAT values >−0.44 in male and >−0.37 in female patients with cirrhosis have been used as cut-offs to delineate myosteatosis [24]. To determine IMAT, predefined radiodensity ranges of −190 to −30 HU [26] were applied to identify the cross-sectional areas of IMAT within the L3 muscle areas [25].
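The two IMAT-based definitions described above lend themselves to a short computation. The sketch below derives both the IMAT-normalized radiodensity (multifidus HU divided by subcutaneous fat HU) and an IMAT cross-sectional area index from hypothetical values; only the −190 to −30 HU window and the sex-specific ratio cutoffs are taken from the text.

```python
import numpy as np

# --- Definition 1: IMAT-normalized radiodensity ------------------------
multifidus_hu = 32.0        # hypothetical mean multifidus radiodensity (HU)
subcut_fat_hu = -95.0       # hypothetical mean subcutaneous fat radiodensity (HU)
imat_ratio = multifidus_hu / subcut_fat_hu
# Sex-specific cutoffs from the text: > -0.44 (male), > -0.37 (female).
myosteatosis_male = imat_ratio > -0.44
print(f"IMAT-normalized radiodensity: {imat_ratio:.2f} "
      f"(myosteatosis if male: {myosteatosis_male})")

# --- Definition 2: IMAT cross-sectional area ---------------------------
rng = np.random.default_rng(1)
l3_hu = rng.normal(loc=20, scale=60, size=(256, 256))  # fake L3 muscle-region HU
pixel_area_cm2 = 0.01                                  # hypothetical pixel size
height_m = 1.75                                        # hypothetical patient height
imat_pixels = (l3_hu >= -190) & (l3_hu <= -30)         # predefined IMAT HU range
imat_index = imat_pixels.sum() * pixel_area_cm2 / height_m**2
print(f"IMAT index: {imat_index:.1f} cm^2/m^2")
```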
Assessment of myosteatosis in previous studies was performed using CT image analysis. MRI also has the capability of measuring fat infiltration into organs and muscles. Recently, MRI-derived fat fraction of erector spinae muscles was measured at the point of the highest muscle volume, with myosteatosis established as a fat fraction below 0.8 in LT recipients [27].
Clinical Significance of Myosteatosis
Reduced muscle radiodensity negatively correlates with clinical outcomes (Table 1). It is an independent predictor of increased pre-LT mortality in cirrhosis (adjusted for severity of the liver disease) and may be related to deterioration in physical conditioning [19,25]. Myosteatosis has also been shown to be associated with complications of cirrhosis, such as hepatic encephalopathy (HE), with a prevalence of 70% in patients with overt HE, as compared to 45% in patients without HE [16]. Independent of sarcopenia, myosteatosis is associated with the presence of minimal HE and the development of overt HE in patients with cirrhosis [28]. Significant improvement in CT-measured muscle radiodensity following transjugular intrahepatic portosystemic shunt was reported in previous studies [29,30].
When LT is a competing event, myosteatosis, sarcopenia, MELD and HE are independently associated with mortality in cirrhotic patients evaluated for LT. In an attempt to improve prognostication, the MELD-Sarco-Myo-HE score was developed, which improved MELD accuracy in predicting 3-and 6-month mortality. By removing myosteatosis from the score, a significant loss in accuracy in comparison to the MELD-Sarco-Myo score was noticed, suggesting the clinical importance of myosteatosis in predicting pre-LT mortality [31].
When myosteatosis was defined based on IMAT-normalized radiodensity, it was associated with multidimensional frailty in hospitalized male patients with cirrhosis but not in female patients. A correlation between IMAT and frailty index (r = 0.238, p = 0.018) and higher frequency of myosteatosis was only observed in frail male patients (62.5 vs. 15.8%, p = 0.001) compared to the non-frail group [24]. In another study, mean muscle radiodensity <34 HU was associated with higher risk of waitlist mortality (HR 8.88, 95% CI 1.95-40.41, p = 0.005), whereas no association between IMAT cross-sectional areas and waitlist mortality was confirmed in multivariate sensitivity analysis of 261 patients with cirrhosis listed for LT. It was speculated that CT-determined IMAT might be less accurate than mean muscle radiodensity because it constitutes only a small portion of CT images and therefore is subject to interobserver bias [25].
Preoperative myosteatosis, assessed using the fat fraction of MRI, in patients who underwent LT was associated with increased length of hospital stay post-LT. There was also a trend toward higher risk of graft loss (adjusted HR, 2.07; 95% CI, 0.92-4.64; p = 0.08) and mortality (adjusted HR 2.24, 95% CI, 0.93-5.41; p = 0.07) [27]. Without adjusting for MELD, sex, race, BMI and weight at LT, and donor and recipient age and etiology, the association between myosteatosis and graft survival was significant (HR 2.08, 95% CI 1.04-4.15, p = 0.037). However, given the small number of outcomes in this study, there remains a possibility of multivariate model overfitting.
The impact of low skeletal muscle radiodensity on perioperative outcomes remains controversial. In patients receiving deceased donor orthotopic LT, mortality and complication rates over the first 3 months, length of intensive care unit (ICU) and hospital stay, and procedural costs were higher in patients with myosteatosis. There were no differences in long-term graft and patient survival between groups [32,33], suggesting myosteatosis is a key factor in predicting short-term outcomes following LT [22]. In another study, despite longer ICU stay in patients with cirrhosis and myosteatosis, no difference in the hospital length of stay or bacterial infections was seen in the first 90 days post-LT between the groups [19]. Inclusion of myosteatosis within the balance-of-risk (BAR) score, a well-established score for identification of high-risk recipients and/or donor-recipient combinations, improved outcome prediction following orthotopic LT, indicating the possible further importance of myosteatosis as a predictor of immediate outcomes [32]. Applied cutoffs identified patients at risk for inferior short-but not long-term graft and patient outcomes after LT.
In 106 LT recipients, myosteatosis was associated with higher risk of post-LT adverse outcomes, including mortality at 1 year (HR, 3.3; 95% CI, 1.00-11.13; p = 0.049), allograft failure (HR, 4.1; 95% CI, 1.2-13.5; p = 0.021) and longer hospital and intensive care unit stays. Myosteatosis was determined using unenhanced abdominal CT images taken 6 months before or 1 month post-LT [34]. Given the catabolic stress of LT, potential discrepancies between applying pre-and post-LT CTs in determining body composition features have not been clarified.
Mechanisms of Myosteatosis in Cirrhosis
Emerging evidence suggests the preferential storage of extramyocellular lipids in muscles under a dietary lipid overload of short duration [35]. During the early phase of lipid overload, oxidative muscles may resist intramyocellular lipid accumulation through augmented β-oxidation capacity; however, this capability may fluctuate depending on muscle type [35]. Following excess exposure to fatty acids, oxidative muscles dispose of free fatty acids by oxidation, whereas re-esterification of free fatty acids to triglycerides occurs in glycolytic muscles due to their lower mitochondrial oxidative phosphorylation [36,37]. This suggests that the pathophysiological adaptation to lipotoxicity differs by muscle type. Therefore, differences between muscle types and sex-dependent differences in muscle metabolism are important considerations. In general, higher levels of lipids accumulate in oxidative rather than glycolytic muscles, and lipid oxidation is the favored energy source in oxidative muscles. This demonstrates muscle-specific responses in intramyocellular fatty acid metabolism [38].
Excessive fat accumulation in muscle may impact muscle fiber orientation and is associated with muscle inflammation, reduced muscle strength and reduced physical performance [39,40]. An early adaptation response to this lipotoxicity is a transition of muscle fibers from fast to slow (type II to type I), which leads to enhanced muscle oxidative capacity [38]. A chronic adaptation response may be associated with higher amounts of glycolytic metabolites concurrent with a reduction in mitochondrial lipid oxidation. This indicates that muscle fibers are using lower levels of lipids for oxidation in myosteatosis [5]. Impaired muscle oxidative capacity in turn triggers muscle fiber atrophy. Fiber-type-specific lipid accumulation has not yet been defined in cirrhosis.
In vitro and in vivo experimental models have been used to explore the mechanisms of myosteatosis. Animal studies of myosteatosis are mainly diet-induced obesity models with increased intramyocellular lipid accumulation and decreased oxidation of lipids in muscle fibers. However, a muscle-specific pattern was reported for high-fat-induced myosteatosis [41]. Culturing human muscle cells isolated from a vastus lateralis muscle biopsy with conditioned medium from human obese subcutaneous adipose tissue impaired myogenesis and promoted intramyocellular lipid accumulation in myotubes [42]. Using C2C12 cells, a recognized in vitro model of skeletal muscle cells, an inverse association between skeletal muscle lipid accumulation and AMPK (AMP-activated protein kinase) activity was found [43]. In palmitate-treated C2C12 myotubes, leucine reduced lipid accumulation through regulation of mitochondrial function in an mTORC1-independent manner. Palmitate-treated C2C12 myotubes have been used to imitate in vivo lipid accumulation in obese skeletal muscle [44].
Data regarding the mechanisms by which excess lipid accumulates within muscle in cirrhosis are sparse, but these mechanisms might be related to the metabolic abnormalities associated with liver failure (Figure 3). Hyperammonemia may play an important role in the pathophysiology of myosteatosis in cirrhosis. Increased skeletal muscle ammonia uptake induces skeletal muscle mitochondrial dysfunction via cataplerosis of α-ketoglutarate [45], which subsequently results in impaired mitochondrial oxidative phosphorylation and diminished lipid oxidation in muscle [46]. In an experimental model of myosteatosis, increased lipid storage in the supraspinatus muscle of Sprague-Dawley rats was associated with a reduction in the expression of genes involved in the uptake of fatty acids, their transportation and β-oxidation within the mitochondria. These changes probably lead to lipid-induced inflammation and increased generation of reactive oxygen species (ROS) in the muscle [5]. Hyperammonemia, insulin resistance, mitochondrial dysfunction, diminished lipid storage capacity of subcutaneous adipose tissue and age-related differentiation of muscle stem cells into adipocytes have also been suggested as potential mechanisms contributing to myosteatosis.
Mitochondrial dysfunction has been suggested as a putative contributor to skeletal muscle insulin resistance [47], as the accumulation of triacylglycerol and lipid molecules, such as diacylglycerol and ceramide, interrupts GLUT-4 translocation and triggers insulin resistance [48]. Considering the importance of mitochondrial oxidative phosphorylation in ATP production in skeletal muscle, mitochondrial dysfunction and low cellular levels of ATP in myosteatosis may impair protein synthesis via reduced insulin-stimulated ATP synthesis, ultimately leading to reduced muscle mass [47]. Whether reversing myosteatosis by improving mitochondrial function is accompanied by improved muscle mass requires further investigation.
Myosteatosis was the first muscle alteration identified in both early and fibrosing preclinical models of non-alcoholic steatohepatitis (NASH). The degree of fat infiltration into muscle was correlated with the severity of liver disease and inflammation rather than insulin resistance or visceral fat accumulation [49]. In line with this finding, a reduction in myosteatosis degree was associated with a reduction in liver stiffness, independent of weight loss, in 48 obese patients with metabolic-dysfunction-associated fatty liver disease (MAFLD) [50]. Lipotoxicity associated with myosteatosis impacts muscle secretome, which may cause subsequent alterations in muscle mass and function [51].
Although obesity and insulin resistance are two main conditions associated with myosteatosis, insufficient storage of lipids in subcutaneous adipose tissue has also been recognized as a potential contributor to myosteatosis [6,42]. A decreased ability of subcutaneous adipose tissue to store lipids and ectopic fat accumulation in other locations, such as visceral adipose tissue, liver and muscle, are linked with inflammation and insulin resistance [52]. Excessive lipid availability and flux into muscle are determinant factors in skeletal muscle lipid deposition and the accumulation of lipotoxic intermediates [53]. Improving the lipid storage capacity of subcutaneous adipose tissue with a peroxisome proliferator-activated receptor gamma (PPAR-gamma) agonist enhanced insulin sensitivity [54]. Therefore, improved storage capacity of adipose tissue and elevated muscle fatty acid oxidation can contribute to lower circulating lipid levels and consequently lower lipotoxicity [55]. Whether myosteatosis in cirrhosis is associated with metabolic disorders related to high adiposity or with impaired storage of lipids in subcutaneous adipose tissue remains unknown. Lastly, age-related differentiation of muscle stem cells into adipocytes [56] has also been suggested as a potential mechanism contributing to myosteatosis.
Results of studies investigating the mechanisms underlying myosteatosis in cirrhosis should be interpreted with caution, as the majority of data on lipid accumulation in muscle comes from experimental studies (in males) in which the impact of a high-fat diet or skeletal muscle injury on lipid accumulation in muscle was investigated. Although animal models are required to promote our understanding of myosteatosis in cirrhosis, they may not necessarily represent the clinical course of cirrhosis. Therefore, studies assessing cirrhosis-associated myosteatosis are needed.
Potential Pharmaceutical Targets Based on Pathogenic Pathways
Nutritional and pharmacological interventions that influence ammonia metabolism have attracted interest as potential means of improving myosteatosis in cirrhosis. Other agents that deserve future investigation are the long-chain n-3 polyunsaturated fatty acids (PUFAs), which may improve mitochondrial oxidative phosphorylation capacity. Determining the efficacy of these agents will require a careful definition of endpoints and detailed information on patients' actual physical activity levels and nutritional intake.
Ammonia-Lowering Treatments
Excessive amounts of ammonia delivered to muscle are now well recognized as the key multipotent metabolic contributor to loss of muscle quantity and quality in cirrhosis. However, data exclusively assessing the impact of ammonia-lowering strategies on myosteatosis in patients with cirrhosis are lacking. Agents used to lower ammonia levels and improve mitochondrial function may have a beneficial role in myosteatosis treatment or prevention. A reduction in plasma ammonia levels can be achieved using long-term therapeutic nutritional supplementation with branched-chain amino acid (BCAA) mixtures [57]; L-ornithine L-aspartate mixture with rifaximin [58]; or leucine, which increases mitochondrial oxidation in hyperammonemic states [59]. Improved mitochondrial function and reduced ammonia levels have also been observed with the use of l-carnitine in a dose-dependent manner in patients with cirrhosis [60]. L-ornithine and L-aspartate, in combination with rifaximin, decreased ammonia levels in plasma and muscle of an experimental model of hyperammonemia and led to improved muscle protein synthesis and function. A significant increase in type II fiber size and reduction in the expression of myostatin and autophagy markers was noticed using these ammonia-lowering agents [58]. Considering the contribution of hyperammonemia in the pathogenesis of myosteatosis in cirrhosis, the potential ability of L-ornithine and L-aspartate to effectively lower blood ammonia and subsequently improve myosteatosis requires further investigation.
Long-Chain n-3 Polyunsaturated Fatty Acids
Evidence suggests that omega-3 PUFAs improve mitochondrial oxidative phosphorylation capacity in human skeletal muscle and therefore may be pivotal in myosteatosis treatment [61]. The ability of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) to prevent tumor-associated myosteatosis was found in a preclinical model of colon cancer [62]. The metabolic consequence of omega-3 polyunsaturated fatty acids in reversing myosteatosis in cirrhosis remains to be investigated.
Considerations for Future Research Trial Designs
Myosteatosis is an endpoint in trials aiming to improve skeletal muscle quality. When designing an intervention that targets a specific underlying mechanism, maximizing potential benefit and minimizing possible toxicity are important determinants of trial efficacy [14]. Practical considerations for the implementation of clinical trials include the population of interest, design and sample size, single/combined treatment, duration and dosing of the intervention and treatment endpoints.
Preventive or treatment trials for skeletal muscle abnormalities require a valid assessment of the abnormality. The diagnosis of myosteatosis for admission of patients into clinical trials, and the effective measurement of myosteatosis reversal over time, should be established based on CT-measured muscle radiodensity [3]. CT may be the only option in trials assessing muscle quality by measuring muscle radiodensity. Besides the reliability and sensitivity of modalities to capture changes in muscle radiodensity, the cost, availability and feasibility of appropriate techniques to quantify changes should be considered, particularly in large or longitudinal studies.
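As a rough illustration of how CT-based muscle radiodensity is typically quantified, the sketch below computes the mean attenuation (in Hounsfield units) of voxels inside a skeletal muscle segmentation mask on a single axial slice. The arrays are synthetic, and the −29 to +150 HU muscle window is a common convention in the body-composition literature rather than a value endorsed by this review; this is a minimal sketch, not a validated pipeline.

```python
import numpy as np

def mean_muscle_radiodensity(ct_slice_hu: np.ndarray, muscle_mask: np.ndarray) -> float:
    """Mean attenuation (HU) of voxels labeled as skeletal muscle.

    ct_slice_hu : 2-D array of Hounsfield units for one axial CT slice.
    muscle_mask : boolean array of the same shape marking muscle voxels.
    """
    # Restrict to a conventional skeletal-muscle attenuation window
    # (an assumption here; exact thresholds vary between studies).
    in_window = (ct_slice_hu >= -29) & (ct_slice_hu <= 150)
    voxels = ct_slice_hu[muscle_mask & in_window]
    return float(voxels.mean())

# Toy example with synthetic data (not real patient values).
rng = np.random.default_rng(0)
ct = rng.normal(loc=35.0, scale=15.0, size=(512, 512))  # fake HU map
mask = np.zeros_like(ct, dtype=bool)
mask[200:300, 150:350] = True  # pretend this region is muscle
print(f"mean muscle radiodensity: {mean_muscle_radiodensity(ct, mask):.1f} HU")
```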
The population of interest is another important consideration in order to identify the group that may benefit the most from the intervention. Patients with mild-to-moderate-severity myosteatosis may be the most responsive to therapy. The lack of myosteatosis stages in cirrhosis makes it difficult to identify the best target population and generalize the findings, as the preventative approach in early-stage patients might differ from treatment strategies in patients with moderate-to-severe myosteatosis.
For trials assessing the impact of pharmacological interventions on muscle quality, complementary therapies such as exercise or nutritional support might result in differences between responders and non-responders, and therefore detailed information on these therapies needs to be accounted for. The intervention should last long enough for a clinically meaningful change in the treatment endpoint to develop, ensuring that the observed outcome is not just normal variation. The accepted change for muscle cross-sectional area on CT is a difference greater than 2%; any change between −2% and +2% is not clinically meaningful and may reflect tissue maintenance [63]. However, such a threshold has not been identified for changes in muscle radiodensity.
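A minimal sketch of the ±2% decision rule described above, applied to serial CT muscle cross-sectional area measurements; the function name and example values are hypothetical.

```python
def classify_area_change(baseline_cm2: float, followup_cm2: float) -> str:
    """Classify change in CT muscle cross-sectional area using the >2% rule.

    Changes between -2% and +2% are treated as tissue maintenance
    (not clinically meaningful), per the threshold cited in the text.
    """
    pct_change = 100.0 * (followup_cm2 - baseline_cm2) / baseline_cm2
    if pct_change > 2.0:
        return f"gain ({pct_change:+.1f}%)"
    if pct_change < -2.0:
        return f"loss ({pct_change:+.1f}%)"
    return f"maintenance ({pct_change:+.1f}%)"

print(classify_area_change(120.0, 116.0))  # loss (-3.3%)
```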
Factors such as treatment endpoint, population of interest, annual rate of muscle change and severity of liver diseases are important considerations to identify the length of the trial and should be taken into account in designing trials in cirrhosis [14,64]. Compliance with intervention is a significant determinant of clinical trial outcomes. Poor patient adherence is a frequently described drawback of clinical trials, which could be improved through optimal design, dosing and the appropriate type of supplement [65]. Lastly, features such as time point of the disease's trajectory, sex, age, race and medications may also act as confounding variables, and therefore, a well-adjusted distribution using design and analytic strategies should include a homogeneous population, especially in small trials [14]. The importance of sample size calculation to determine the optimal number of patients to detect a clinically meaningful difference is another consideration for designing trials [66]. Employing the aforementioned considerations in future large trials investigating both prevention and treatment of skeletal muscle abnormalities in patients with cirrhosis may ensure positive outcomes.
Conclusions
Among cross-sectional imaging techniques to diagnose myosteatosis in patients with cirrhosis, abdominal CT constitutes the most studied technique. Myosteatosis is mainly defined as a low muscle radiodensity on CT. It is associated with a poor prognosis, including mortality, and complications such as HE in cirrhosis. Although myosteatosis confers prognostic value in cirrhosis, it is not included in conventional scores for prognosis, such as the MELD or Child-Pugh scores. This may need to be evaluated in future studies. Identifying the efficacy of a potential intervention for myosteatosis necessitates an accurate validated definition, which is currently lacking in cirrhosis. Standard modalities and definitions of myosteatosis (including sex-based cutoffs) in cirrhosis should be established and validated for use in clinical practice. It is possible that myosteatosis may also contribute to sarcopenia. Future research should also investigate the impact of the concurrent presence of muscle abnormalities, i.e., sarcopenia and myosteatosis, in such patients. The use of noncontrast versus contrast-enhanced CT scans should be reported in studies of myosteatosis, given the higher muscle radiodensity in the arterial and portal venous phase compared to non-contrast-phase CTs [67]. Although myosteatosis has been defined as pathological lipid accumulation in muscle, the composition of lipids seems to play an important role in the pathogenesis of myosteatosis rather than the total amount of lipids per se [11]. Future studies need to investigate an association between the composition of lipids stored in muscle and the presence of myosteatosis. Although myosteatosis is associated with worse outcomes in cirrhosis, no standard treatments are available. A better understanding of the mechanisms underlying myosteatosis is key to planning clinical trials with the aim of reversing this skeletal muscle abnormality. In summary, this review emphasizes the need for prospective studies with a larger number of patients to develop our existing knowledge of the predictive value of myosteatosis in cirrhosis and trial new treatments.
Crossing the Aisle: Unveiling Partisan and Counter-Partisan Events in News Reporting
News media are expected to uphold unbiased reporting. Yet they may still affect public opinion by selectively including or omitting events that support or contradict their ideological positions. Prior work in NLP has only studied media bias via linguistic style and word usage. In this paper, we study to what degree media balance news reporting and affect consumers through event inclusion or omission. We first introduce the task of detecting both partisan and counter-partisan events: events that support or oppose the author's political ideology. To conduct our study, we annotate a high-quality dataset, PAC, containing 8,511 (counter-)partisan event annotations in 304 news articles from ideologically diverse media outlets. We benchmark PAC to highlight the challenges of this task. Our findings highlight both the ways in which the news subtly shapes opinion and the need for large language models that better understand events within a broader context. Our dataset can be found at https://github.com/launchnlp/Partisan-Event-Dataset.
Introduction
Political opinion and behavior are significantly affected by the news that individuals consume. There is now extensive literature examining how journalists and media outlets promote their ideologies via moral or political language, tone, or issue framing (de Vreese, 2004; DellaVigna and Gentzkow, 2009; Shen et al., 2014; Perse and Lambe, 2016). However, in addition to this more overt and superficial presentation bias, even neutrally written, broadly framed news reporting, which appears both "objective" and relatively moderate, may shape public opinion through a more invisible process of selection bias, where factual elements that are included or omitted themselves have ideological effects (Gentzkow and Shapiro, 2006; D'Alessio and Allen, 2006; Groeling, 2013).
Figure 1: Excerpt from a news story reporting on the Heartbeat Bill. Blue indicates events favoring left-leaning entities and disfavoring right-leaning entities; vice versa for Red.
Story Title: Texas Governor Signs 'Heartbeat Bill' Banning Abortion
National Review (Right): Texas Governor Greg Abbott signed a bill on Wednesday barring abortions • • • "Our creator endowed us with the right to life and yet millions of children lose their right to life every year because of abortion," Abbott, a Republican, said during a bill signing ceremony. • • •
Event #1 (sign): pos->Heartbeat Bill; Event #2 (lose): pos->Heartbeat Bill, neg->abortion
The Guardian (Left): The Texas Republican governor Greg Abbott has signed into law one of the most extreme six-week abortion bans in the US, despite strong opposition from the medical and legal communities • • •. "This bill ensures that every unborn child who has a heartbeat will be saved from the ravages of abortion," said Abbott • • •.
Event #3 (signed): neg->Heartbeat Bill; Event #4 (opposition): neg->Heartbeat Bill; Event #2 (saved): pos->Heartbeat Bill, neg->abortion
Although both media outlets report the event Greg Abbott signed Heartbeat Bill, The Guardian selects the additional event opposition to attack Heartbeat Bill. Interestingly, both media outlets include quotes from Greg Abbott but for different purposes: one for supporting the bill and the other for balanced reporting.
Existing work in NLP has only studied bias at the token- or sentence-level, particularly examining how language is phrased (Greene and Resnik, 2009; Yano et al., 2010; Recasens et al., 2013; Lim et al., 2020; Spinde et al., 2021). This type of bias does not rely on the context outside of any individual sentence, and can be altered simply by using different words and sentence structures. Only a few studies have focused on bias that depends on broader contexts within a news article (Fan et al., 2019; van den Berg and Markert, 2020) or across articles on the same newsworthy event (Liu et al., 2022; Qiu et al., 2022). However, these studies are limited to token- or span-level bias, which is less structured, and fail to consider the more complex interactions among news entities.
To understand more complex content selection and organization within news articles, we scrutinize how media outlets include and organize the fundamental unit of news, events, to subtly reflect their ideology while maintaining seemingly balanced reporting. Events are the foundational unit in the storytelling process (Schank and Abelson, 1977), and the way they are selected and arranged affects how the audience perceives the news story (Shen et al., 2014; Entman, 2007). Inspired by previous work on selection bias and presentation bias (Groeling, 2013; D'Alessio and Allen, 2006), we study two types of events: (i) partisan events, which we define as events that are purposefully included to advance the media outlet's ideological allies' interests or suppress the beliefs of its ideological enemies; and (ii) counter-partisan events, which we define as events purposefully included to mitigate the intended bias or create a story acceptable to the media industry's market. Figure 1 shows examples of partisan and counter-partisan events.
To support our study, we first collect and label PAC, a dataset of 8,511 PArtisan and Counter-partisan events in 304 news articles. Focusing on the partisan nature of media, PAC is built from 152 sets of news stories, each containing two articles with distinct ideologies. Analysis of PAC reveals that partisan entities tend to receive more positive sentiments and counter-partisan entities more negative ones. We further propose and test three hypotheses to explain the inclusion of counter-partisan events, considering factors of newsworthiness, market breadth, and emotional engagement.
We then investigate the challenges of partisan event detection by experimenting on PAC. Results show that even using carefully constructed prompts with demonstrations, ChatGPT performs only better than a random baseline, demonstrating the difficulty of the task and suggesting future directions for enabling models to better understand the broader context of news stories.
Related Work
Prior work has studied media bias primarily at the word level (Greene and Resnik, 2009; Recasens et al., 2013) and sentence level (Yano et al., 2010; Lim et al., 2020; Spinde et al., 2021). Similar to our work is informational bias (Fan et al., 2019), defined as "tangential, speculative, or background information that tries to sway readers' opinions towards entities in the news." However, that work focuses on span-level bias, which does not necessarily contain any salient events. In contrast, our work considers bias at the event level, which is neither "tangential" to the news nor at the token level. Importantly, we examine both partisan and counter-partisan events in order to study how these core, higher-level units produce ideological effects while maintaining an appearance of objectivity.
Our work is also in line with a broad range of research on framing (Entman, 1993; Card et al., 2015), in which news media select and emphasize some aspects of a subject to promote a particular interpretation of it. Partisan events can be considered one type of framing that focuses on fine-grained content selection, as writers include and present specific "facts" to support their preferred ideology. Moreover, our work relates to research on the selection or omission of news items that explicitly favor one party over the other (Entman, 2007; Gentzkow and Shapiro, 2006; Prat and Strömberg, 2013), or selection of items that create more memorable stories (Mullainathan and Shleifer, 2005; van Dalen, 2012). In contrast, we focus on core news events, those that may not explicitly favor a side but are nevertheless ideological in their effect.
Finally, our research is most similar to another recent study on partisan event detection (Liu et al., 2023), but that study only investigates partisan events and focuses on developing computational tools to detect them. In contrast, our work also incorporates counter-partisan events, enabling a broader and deeper understanding of how media balance impartial news coverage against promoting their own stances. We also construct a significantly larger dataset than the evaluation set curated in Liu et al. (2023), enhancing its utility for model training.
Partisan Event Annotation
PAC contains articles from two sources. We first sample 57 sets of news stories published between 2012-2022 from SEESAW (Zhang et al., 2022). Each news story set contains three articles on the same story from outlets with different ideologies. Here we take out the articles labeled with a Center ideology and only keep stories with two news articles from opposite ideologies. To increase the diversity of topics in our dataset, we further collect 95 sets of news stories from www.allsides.com, covering topics such as abortion, gun control, climate change, etc. We manually inspect each story and keep the ones where the two articles are labeled with left and right ideologies. Next, we follow the definition of events from TimeML (Pustejovsky et al., 2003), i.e., a cover term for situations that happen or occur, and train a RoBERTa-Large model on MATRES (Ning et al., 2018) for event detection. Our event detector achieves an F1 score of 89.31 and is run on PAC to extract events. Partisan events are then annotated based on the following process. For each pair of articles in a story, an annotator is asked to first read both articles to get a balanced view of the story. Then, at the article level, the annotator determines the relative ideological ordering, i.e., which article falls more on the left (and the other more on the right) of the political spectrum. The annotator then estimates each article's absolute ideology on a 5-point scale, with 1 being far left and 5 far right.
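The event detection step described above uses a fine-tuned token classifier. A minimal inference sketch with the Hugging Face transformers API is shown below; the checkpoint name is hypothetical (the paper's MATRES-trained detector is not specified as a released model), and the EVENT/O label convention is our assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical checkpoint standing in for a RoBERTa-Large model
# fine-tuned on MATRES for event trigger detection.
CKPT = "your-org/roberta-large-matres-events"  # assumption, not a real model id

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForTokenClassification.from_pretrained(CKPT)
model.eval()

sentence = "Greg Abbott signed the bill despite strong opposition."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, label_id in zip(tokens, pred_ids):
    label = model.config.id2label[label_id]    # e.g., "EVENT" or "O" (assumed)
    if label != "O":
        print(tok, label)                      # expect triggers like "signed"
```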
For each event in an article, annotators first identify its participating entities, i.e., who enables the action and who is affected by the event, assign them an entity ideology when appropriate, and estimate the sentiments they receive, if any. Using the story context, the article's ideology, and the information about the participating entities, annotators label each event as partisan, counter-partisan, or neutral relative to the article's ideology, based on the definitions given in the introduction. If an event is labeled as non-neutral, annotators further mark its intensity to indicate how strongly the event supports the article's ideology. A complete annotation guideline can be found in Appendix A. The annotation quality control process and inter-annotator agreement are described in Appendix B. We also discuss disagreement resolution in Appendix C. The final dataset statistics are listed in Table 1.
Partisan event frequency. More partisan news outlets include many more partisan events than counter-partisan events, whereas more moderate news outlets tend to include a more equal mix (Figure 2).
Partisan sentiment. News media also reveal their ideology through the partisan entities they discuss, via the sentiments associated with those entities: partisan entities tend to have positive associations and counter-partisan entities negative ones (Groeling, 2013; Zhang et al., 2022). In Figure 3, we find support for this expectation. We also find that left entities generally receive more exposure in articles from both sides.
Partisan event placement. Figure 4 shows that for both left and right media outlets, partisan events appear a bit earlier in news articles. For counter-partisan events, left-leaning articles also place more counter-partisan events at the beginning, while right-leaning articles place more counter-partisan events towards the end. This asymmetry suggests that right-leaning outlets are more sensitive to driving away readers with counter-partisan events, thus placing them at the end of articles to avoid that.

Explaining Partisan and Counter-Partisan Event Usage

In this section, we investigate a number of hypotheses about why media outlets include both partisan and counter-partisan events. It is intuitive to understand why partisan events are incorporated into the news storytelling process, yet it is unclear why counter-partisan events that portray members of one's own group negatively or members of another group favorably are reported. Specifically, we establish and test three hypotheses for why an outlet would include counter-partisan news, similar to some of the theories articulated in Groeling (2013): (1) newsworthiness, (2) market breadth, and (3) emotional engagement.
Hypothesis 1: Newsworthiness
This hypothesis suggests that a primary goal of mainstream media is to report newsworthy content, even if it is counter-partisan. In Figure 5, we find that counter-partisan events are more likely to be reported by both sides (which is not tautological, because the ideology of events is not simply inferred from article ideology). However, we find a striking asymmetry: the left appears to report mainly counter-partisan events that were also reported on by the right, but the counter-partisan events reported by the right are not as common on the left. This suggests that the left may be motivated by newsworthiness more.
Hypothesis 2: Market Breadth
Our second hypothesis is that media may seek to preserve a reputation of moderation, potentially in order not to drive away a large segment of their potential audience (Hamilton, 2006). One implication of this hypothesis is that larger media either grew through putting this into practice or seek to maintain their size by not losing audience, while smaller media can focus on more narrowly partisan audiences. To test this implication, we collected the monthly website traffic of each media outlet with more than one news article in our dataset and computed the average ratio of partisan to counter-partisan events, calculated per article and then averaged over each outlet. In Figure 6, we plot the average partisan ratio against the logged monthly website traffic. The correlation coefficient of -0.35 supports the hypothesis that larger outlets produce a more bipartisan account of news stories.
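A minimal sketch of the correlation computed here, using synthetic numbers in place of the paper's per-outlet statistics (the real traffic figures and partisan ratios are not reproduced in this excerpt):

```python
import numpy as np

# Hypothetical per-outlet data: monthly website traffic and the average
# ratio of partisan to counter-partisan events per article.
traffic = np.array([2e5, 8e5, 3e6, 1.2e7, 4.5e7, 9e7])
partisan_ratio = np.array([3.1, 2.8, 2.2, 1.9, 1.4, 1.2])

log_traffic = np.log10(traffic)
r = np.corrcoef(log_traffic, partisan_ratio)[0, 1]  # Pearson correlation
print(f"correlation(log traffic, partisan ratio) = {r:.2f}")  # negative, as in Figure 6
```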
Hypothesis 3: Emotional Engagement
Our third hypothesis is that outlets will include counter-partisan content if its benefits in terms of emotional audience engagement outweigh its ideological costs (Gentzkow and Shapiro, 2006). This implies that the emotional intensity of counter-partisan events should be higher than that of partisan events (since higher intensity is required to offset ideological costs). We employ VADER (Hutto and Gilbert, 2014), a lexicon- and rule-based sentiment analysis tool, on each event to compute its sentiment intensity. Figure 7 shows that both partisan and counter-partisan events have stronger sentiments than non-partisan events, but we find no strong support for our hypothesis that counter-partisan events will be strongest. If anything, right-leaning events are more intense when reported on by either left or right media, but this difference is not statistically significant.
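For illustration, the snippet below scores two event-bearing sentences with the VADER analyzer from the vaderSentiment package; the sentences are invented examples, and treating the absolute value of the compound score as "intensity" is our assumption about one reasonable reading of this setup.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

events = [
    "The governor signed the bill on Wednesday.",           # neutral trigger
    "Critics were horrified by the extreme abortion ban.",  # strong trigger
]
for text in events:
    scores = analyzer.polarity_scores(text)
    intensity = abs(scores["compound"])  # compound ranges from -1 to +1
    print(f"{intensity:.3f}  {text}")
```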
Experiments
We experiment on PAC with two tasks. Partisan Event Detection: given all events in a news article, classify each event as partisan, counter-partisan, or neutral. Ideology Prediction: predict the political leaning of a news article as left or right.
Models
We experiment with the following models for the two tasks. We first compare with a random baseline, which assigns an article's ideology and an event's partisan class based on their distribution in the training set. Next, we compare to RoBERTa-base (Liu et al., 2019) and POLITICS (Liu et al., 2022), a RoBERTa-base model adapted to political text, continually trained on 3 million news articles with a triplet loss objective. We further design joint models that are trained to predict both partisan events and article ideology. Finally, given the emerging research area of large language models (LLMs), we further prompt ChatGPT to detect events with a five-sentence context size. Appendix F contains an analysis of experiments with different context sizes and numbers of shots for prompting ChatGPT.
Results
For the Partisan Event Detection task, we report macro F1 on each category of partisan events and on all categories in Table 2. For the Ideology Prediction task, we use the macro F1 score over the left and right ideologies, as reported in Table 3. First, both RoBERTa and POLITICS improve performance over the random baseline, where joint training yields further improvement for POLITICS on ideology prediction and a slight improvement for RoBERTa on event detection. Moreover, it is worth noting that partisan events are more easily detected than counter-partisan ones, which are usually implicitly signaled in the text and thus require more complex reasoning to uncover. Finally, though ChatGPT has obtained impressive performance on many natural language understanding tasks, its performance here is only better than the random baseline. This suggests that large language models still fall short on reasoning over political relations and ideology analysis that requires understanding implicit sentiments and broader context.
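As a reminder of the metric, macro F1 averages per-class F1 scores with equal weight, which matters here because neutral events vastly outnumber (counter-)partisan ones. A small sketch with scikit-learn, using made-up labels:

```python
from sklearn.metrics import f1_score

LABELS = ["partisan", "counter-partisan", "neutral"]

# Hypothetical gold and predicted labels for eight events.
y_true = ["partisan", "neutral", "neutral", "counter-partisan",
          "neutral", "partisan", "neutral", "counter-partisan"]
y_pred = ["partisan", "neutral", "partisan", "neutral",
          "neutral", "partisan", "neutral", "counter-partisan"]

per_class = f1_score(y_true, y_pred, labels=LABELS, average=None)
macro = f1_score(y_true, y_pred, labels=LABELS, average="macro")
for label, score in zip(LABELS, per_class):
    print(f"{label:>17}: {score:.2f}")
print(f"{'macro F1':>17}: {macro:.2f}")
```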
Error Analysis
We further conduct an error analysis on event predictions by the RoBERTa model. We discover that it fails to predict events with implicit sentiments and cannot distinguish the differences between partisan and counter-partisan events. To solve these two problems, future work may consider a broader context from the article, other articles on the same story, and the media outlet itself, and leverage entity coreference and entity knowledge in general. More details on the error analysis can be found in Appendix E.
Conclusion
We conducted a novel study of partisan and counter-partisan event reporting in news articles across ideologically varied media outlets. Our newly annotated dataset, PAC, illustrates clear partisan bias in event selection even among ostensibly mainstream news outlets, where counter-partisan event inclusion appears to be due to a combination of newsworthiness, market breadth, and emotional engagement. Experiments on partisan event detection with various models demonstrate the task's difficulty and show that contextual information is important for models to understand media bias.
Limitations
Our study only focuses on American politics and the unidimensional left-right ideological spectrum, but other ideological differences may operate outside of this linear spectrum. Although our dataset already contains a diverse set of topics, other topics may become important in the future, and we will need to update our dataset. The conclusions we draw from the dataset may not be generalizable to other news media outlets. In future work, we plan to apply our annotated dataset to infer events in a larger corpus of articles for better generalizability. The event detection model does not have perfect performance and may falsely classify content as biased without any justification, which can cause harm if people trust the model blindly. We encourage people to consider these aspects when using our dataset and models.
A Annotation Guidelines
Below we include the full instructions for the annotators. A Javascript annotation interface is used to aid annotators during the process. Given a pair of news articles, you need to first read the two news articles and label their relative ideologies and absolute ideologies. Then, for each event, you need to follow these steps to label its partisanship (partisan, counter-partisan, or neutral):
• Identify the agent entity and the patient entity for each event and other salient entities. These entities can be a political group, politicians, bills, legislation, political movements, or anything related to the topic of the article.
• Label each entity as left, neutral, or right based on the article context or additional information online.
• Estimate sentiments the author tries to convey toward each entity by reporting the events.
• Based on each entity, its ideology, and sentiments, you can decide whether an event supports or opposes the article's ideology. If it supports, label it as partisan. Otherwise, label it as counter-partisan. For example, in a right-leaning article, if a left entity is attacked or a right entity is praised by the author, you should label the event as a partisan event. If a left entity is praised or a right entity is attacked by the author, you should label the event as counter-partisan.
B Annotation Quality
We collect stories from Allsides, a website presenting news stories from different media outlets. Its editorial team inspects news articles from different sources and pairs them together as a triplet.
One of the authors, with sufficient background in American politics, manually inspected each story by following the steps below:
• Read the summary from Allsides, which includes the story context and comments from the editors on how each news article covers the story differently.
• Read each article carefully and compare them.
• Pick the two news articles with significant differences in their ideologies.
We hired six college students who major in political science, communication and media, and related fields to annotate our dataset. Three are native English speakers from the US, and the other three are international students with high English proficiency who have lived in the US for more than five years. All annotators were highly familiar with American politics. To further ensure the quality of the annotation, before the process began, we hosted a two-week training session, which required each annotator to complete pilot annotations for eight news articles and revise them based on feedback. After the training session, we held individual weekly meetings with each annotator to provide further personalized feedback and revise the annotation guidelines if there was ambiguity. Each article is annotated by two students.
We calculate inter-annotator agreement (IAA) levels on the articles' relative ideologies, their absolute ideologies, and the events. The IAA on the articles' relative ideologies between two annotators was 90%, while the agreement on the articles' absolute ideologies was 84%. The higher agreement on the articles' relative ideologies demonstrates the usefulness of treating a story as one unit for annotation. For stories with conflicting relative ideologies or articles with a difference greater than 1 in their absolute ideologies, a third annotator resolved all conflicts and corrected any mistakes. Despite the subjective nature of this task and the large number of events in each article, Cohen's kappa on event labels is 0.32, which indicates fair agreement. When calculating agreement on whether a sentence contains a partisan or counter-partisan event when one exists, the score increases to 0.43, which is moderate agreement.
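Cohen's kappa corrects raw agreement for chance, which is why it is an appropriate statistic for a three-way, heavily imbalanced labeling task like this one. A toy computation with scikit-learn (labels invented for illustration):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical event labels from two annotators over ten events.
annotator_a = ["neutral", "partisan", "neutral", "counter-partisan", "neutral",
               "partisan", "neutral", "neutral", "partisan", "neutral"]
annotator_b = ["neutral", "partisan", "partisan", "neutral", "neutral",
               "partisan", "neutral", "counter-partisan", "partisan", "neutral"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.21-0.40 is conventionally 'fair'
```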
Our dataset covers diverse topics, including but not limited to immigration, abortion, guns, elections, healthcare, racism, energy, climate change, tax, federal budget, and LGBT.
C Annotation Disagreement
In total, the dataset contains 304 news articles covering 152 news stories. All news stories are annotated by at least two annotators: 5 stories were annotated by one annotator and revised by another to add any missing labels and correct mistakes, while 147 stories were annotated by two annotators. Of the news stories annotated by two people, a third annotator manually merged 54 news articles to correct errors and resolve any conflicts. For the rest of the news stories, we combined annotations from the two annotators and had a third annotator resolve only conflicting labels. During the merging process, we also discovered three types of common annotation disagreements:
• Events with very weak intensity: some events are only annotated by one annotator; typically, these events have low partisan intensity or are not relevant enough, so the other annotator skips them.
• Different events labeled within the same sentence: this happened most frequently because when news articles report an event, they describe it with a cluster of smaller and related events. Two annotators may perceive differently which event(s) is partisan.
• Events perceived differently by the two annotators: one may think an event is partisan, and the other may think it is counter-partisan. Usually, both interpretations are valid, and we have a third annotator decide which interpretation should be kept.
D Ideology Prediction
Table 3 shows the ideology prediction performance of different models.
E Error Analysis
We perform a detailed examination of 100 event predictions generated by our RoBERTa model. We discover that sentiment intensity closely correlates with the model's performance. Specifically, when the model classifies events as either partisan or counter-partisan, 70% of these events feature strong/explicit event triggers like "opposing" or "deceived". The remaining events use more neutral triggers such as "said" or "passed". Our model demonstrates higher accuracy in predicting events that contain strong or explicit sentiments. However, it fails to predict events with implicit sentiments and cannot distinguish the differences between partisan and counter-partisan events.
E.1 Events with Implicit Sentiments
The first example in Figure 8 is from a news article about the climate emergency declared by Joe Biden after Congress failed the negotiation. The model fails to predict "give" as a partisan event. This is primarily because the term itself does not exhibit explicit sentiment and the model does not link "him" to Joe Biden. However, when contextualized within the broader scope of the article, it becomes evident that the author includes this event to bolster the argument for a climate emergency by highlighting its positive impact. To predict this type of event correctly, the model needs to understand the context surrounding the event and how each entity is portrayed and linked.
E.2 Counter-partisan Events
The second example in Figure 8 is from a right-leaning news article about the lawsuit by Martha's Vineyard migrants against Ron DeSantis. The model incorrectly categorizes the event "horrified" as partisan due to the strong sentiment conveyed in the text. However, when placed in the broader context of the article, which defends Ron DeSantis and criticizes Democrats for politicizing migrants, this event should be more accurately classified as a counter-partisan event. The author includes it specifically to showcase the response from Democrats. The model seems to have limited capability to differentiate between partisan and counter-partisan events, possibly because of the similar language used to express both and the difficulty of recognizing the overall slant of news articles.
F ChatGPT Prompts
We use five different context sizes for our ChatGPT prompts: a story with two articles, a single article, 10 sentences, 5 sentences, and 3 sentences. An example prompt with sentences as context can be viewed in Table 5.
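The paper's exact prompt lives in Table 5 (not reproduced in this excerpt); the sketch below only illustrates the general shape of a windowed classification prompt, with all wording our own assumption.

```python
def build_event_prompt(sentences: list[str], target_idx: int,
                       event_trigger: str, window: int = 5) -> str:
    """Assemble a context window of `window` sentences around the target
    sentence and ask for a partisan / counter-partisan / neutral label.
    The instruction wording is illustrative, not the paper's Table 5 prompt.
    """
    half = window // 2
    lo, hi = max(0, target_idx - half), min(len(sentences), target_idx + half + 1)
    context = " ".join(sentences[lo:hi])
    return (
        "You are given an excerpt from a news article.\n"
        f"Excerpt: {context}\n"
        f'Classify the event "{event_trigger}" in the excerpt as '
        "partisan, counter-partisan, or neutral with respect to the "
        "article's ideology. Answer with a single label."
    )

doc = ["S1.", "S2.", "Abbott signed the bill.", "S4.", "S5.", "S6."]
print(build_event_prompt(doc, target_idx=2, event_trigger="signed"))
```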
Context Size vs. Number of Shots. Since the context window size of ChatGPT is fixed, we explore prompts with different context window sizes and investigate the trade-off between context window size and the number of shots. We try out five window sizes on our development set: 3 sentences, 5 sentences, 10 sentences, a single article, and a story with two articles.
Figure 2: Average percentage of partisan and counter-partisan events reported across articles for media with different ideologies. More moderate news outlets tend to include a more equal mix of events.
Figure 4: Distribution of partisan and counter-partisan events in each quartile per news article. Shaded area shows the 95% confidence level for both left and right.
Figure 6: The average ratio of partisan to counter-partisan events by media outlet versus logged website traffic. The size of each dot represents the outlet's number of articles in PAC. Larger media tend to be more balanced.
Table 1: Statistics of PAC.
Table 3: Model performance on ideology prediction, with the best results in bold. Average of 5 random seeds.
Cellular and molecular architecture of hematopoietic stem cells and progenitors in genetic models of bone marrow failure
1 Genetics & Genome Biology Program and 2 Marrow Failure and Myelodysplasia (Pre-leukemia) Program, Division of Hematology/Oncology, Department of Pediatrics, The Hospital for Sick Children, Toronto, Ontario, Canada. 3 Princess Margaret Cancer Centre, Toronto, Ontario, Canada. 4 Department of Pediatrics, Children's Hospital of Eastern Ontario, Ottawa, Ontario, Canada. 5 Hematology-Oncology, Montreal Children's Hospital, Montreal, Quebec, Canada. 6 Division of Hematology, Oncology & Bone Marrow Transplant, University of British Columbia and British Columbia Children's Hospital, Vancouver, British Columbia, Canada. 7 Department of Pediatrics, McMaster University, Hamilton, Ontario, Canada. 8 Institute of Medical Science and 9 Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada.

Inherited bone marrow failure syndromes, such as Fanconi anemia (FA) and Shwachman-Diamond syndrome (SDS), feature progressive cytopenia and a risk of acute myeloid leukemia (AML). Using deep phenotypic analysis of early progenitors in FA/SDS bone marrow samples, we revealed selective survival of progenitors that phenotypically resembled granulocyte-monocyte progenitors (GMPs). Whole-exome and targeted sequencing of GMP-like cells in leukemia-free patients revealed a higher mutation load than in healthy controls and molecular changes that are characteristic of AML: increased G>A/C>T variants, decreased A>G/T>C variants, increased trinucleotide mutations at Xp(C>T)pT, decreased mutation rates at Xp(C>T)pG sites compared with other Xp(C>T)pX sites, and enrichment for Cancer Signature 1 (X indicates any nucleotide). Potential preleukemic targets in the GMP-like cells from patients with FA/SDS included SYNE1, DST, HUWE1, LRP2, NOTCH2, and TP53. Serial analysis of GMPs from an SDS patient who progressed to leukemia revealed a gradual increase in mutational burden, enrichment of the G>A/C>T signature, and emergence of new clones. Interestingly, the molecular signature of marrow cells from 2 FA/SDS patients with leukemia was similar to that of FA/SDS patients without transformation. The predicted founding clones in SDS-derived AML harbored mutations in several genes, including TP53, while in FA-derived AML the mutated genes included ARID1B and SFPQ. We describe an architectural change in the hematopoietic hierarchy of FA/SDS with remarkable preservation of GMP-like populations harboring unique mutation signatures. GMP-like cells might represent a cellular reservoir for clonal evolution.

Introduction

Myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) comprise a spectrum of hematopoietic disorders. Despite intensive chemotherapy and hematopoietic stem cell (HSC) transplantation, the overall survival of advanced MDS/AML remains low, approximately 60% in children and approximately 30% in adults (1). The outcome is further compromised by treatment-related, long-term adverse events (2).

Hematopoiesis is a complex developmental system that is organized as a hierarchy sustained by multipotent HSCs. Although typically depicted with increasingly restricted oligopotent and unipotent progenitors downstream of HSCs, recent studies demonstrate a reshaping of the architecture of the human hematopoietic hierarchy between in utero fetal liver and adulthood time points (3)(4)(5). Transcriptional and functional analysis suggests that by adulthood, there is predominantly a 2-tier hierarchy of multipotent and unipotent hematopoietic stem and progenitor cells (HSPCs) (5).

AML is a heterogeneous disorder that derives from early HSPCs, which undergo malignant transformation to leukemic blasts and clonal expansion. Deep sequencing of leukemic samples extrapolated the
existence of founding clones and derived subclones (6). AML is sometimes preceded by MDS. MDS is a clonal preleukemic disease state with cytopenia due to underproduction, abnormal differentiation, increased apoptosis, and varying degrees of leukemic blasts and carries a high risk of progression to leukemia. The incidence of both MDS and AML increases with age (7), but both can present in early childhood (8,9).
Although rare, inherited bone marrow failure syndromes (IBMFSs) provide an opportunity to study AML evolution and progression because of a high risk of MDS/AML (11,12) and stepwise progression from nonmalignant hematopoietic phase, to MDS (13), and on to AML (14)(15)(16). We previously showed that by the age of 18 years, patients with the common IBMFSs Fanconi anemia (FA) and Shwachman-Diamond syndrome (SDS) have a 75% and 25% risk, respectively, of developing marrow cytogenetic abnormalities, MDS, or AML (11). AML secondary to MDS has a particularly poor outcome. Only a few studies that focused on clonal hematopoiesis in IBMFSs have been published. TP53 mutations were identified in some SDS patients with (17) or without MDS/AML (18). RUNX1 mutations have been detected in whole marrow cells from several patients with FA without transformation (19). CSFR3 (18,20,21) and RUNX1 (22) mutations have been detected in whole marrow cells from severe congenital neutropenia patients with and without MDS/AML. Further studies are necessary to decipher the cells that initiate transformation and why they abnormally accumulate mutations.
In this study, we aimed to discover cellular and molecular signatures underlying early clonal evolution when no clinical signs of MDS/AML are detected in 2 relatively prevalent IBMFSs that feature an initial marrow failure phase and frequently progress to MDS/AML: FA and SDS. FA is caused by germline mutations in 1 of 23 DNA repair genes collectively referred to as the FA pathway (23), and SDS is caused by germline mutations in genes that are involved in the late stage of 60S ribosome subunit maturation, SBDS (24), DNAJC21 (25), and EFL1 (26), but also in SRP54 (27), which is involved in the cotranslational protein-targeting pathway. We found that the granulocyte-monocyte progenitor-like (GMP-like) population is relatively preserved compared with marked exhaustion of other cell populations and carries a high mutation load and a unique trinucleotide mutation signature, suggesting that GMP-like cells are a reservoir for clonal evolution.
Results
HSCs and multipotent progenitors are markedly reduced in FA/SDS. We and others showed global reduction in hematopoietic cells and in CD34+ cells in bone marrow from patients with FA (28) and SDS (29). We hypothesized that in both disorders defects begin within the earliest hematopoietic cells and applied a 12-parameter deep immunophenotyping profiling methodology based on recently developed approaches (refs. 5, 30, and Figure 1A). Cell numbers were normalized to the viable (propidium iodide-negative) cells in the sample. Within the CD34+CD38− primitive progenitor compartment and compared with healthy controls, the relative numbers of CD90+CD45RA− HSCs were reduced 14.1- and 4.6-fold in FA and SDS, respectively, and the CD90−CD45RA− multipotent progenitors (MPPs) were reduced 17.7- and 7.8-fold in FA and SDS, respectively (Figure 1, B and C). Because most patients with FA/SDS included in this study had hypocellular bone marrow specimens (Supplemental Table 1; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.131018DS1), we suggest that the average fold decrease in absolute numbers of patients' HSPCs compared with healthy controls is likely greater than that of the above relative numbers.
FA and SDS are characterized by variable levels of oligopotent hematopoietic progenitor loss. CD34+CD38+ progenitors include the common myeloid progenitors (CMPs), megakaryocyte-erythroid progenitors (MEPs), and GMPs. CMPs and MEPs were markedly and significantly reduced in the patients. CMPs were reduced 8.1- and 3.5-fold in FA and SDS, respectively. MEPs were reduced 12.3- and 15.5-fold in FA and SDS, respectively (Figure 1, D and E).
Unexpectedly, the reduction of HSCs did not result in a universal reduction of all their downstream progenies. In SDS, MEPs represented the most affected population compared with CMPs or GMPs. In FA, MEPs and CMPs were markedly reduced compared with GMPs. Furthermore, in both SDS and FA, GMPs (CD34+/CD38+/FLT3+/CD45RA+) were least affected and relatively preserved, with only a 1.5-fold reduction in SDS and a 2.3-fold reduction in FA. In SDS, the percentages of GMPs were not significantly different from controls (Figure 1, D and E). Remarkably, when HSPC frequencies were normalized to the total number of CD34+ cells in the respective samples, the average percentage of SDS GMPs was a modest 1.56-fold higher than the average percentage of healthy controls' GMPs (P = 0.03). In FA, the average percentage of GMPs was 1.15 times higher than that of controls, but the difference did not reach statistical significance (Supplemental Figure 1). These data about FA/SDS GMPs were surprising for both disorders, but particularly for SDS, because granulopoiesis is the most affected hematopoietic process in SDS (29,31).
FA and SDS feature an abnormally high frequency of somatic variants in GMPs. The IBMFSs are difficult to study genetically because there is a paucity of cells to work with. Therefore, we undertook genetic analysis to gain insight into the mutations present within the GMP population that seemed to be persisting more extensively than other progenitors. In addition, because of the relative abundance of GMP-like cells, we reasoned that they are more likely to carry mutations that confer a growth advantage than other progenitors that were markedly reduced.
We analyzed somatic tier 1 and 2 variants in GMP-like cells, as described in Methods. Bone marrow fibroblasts were used as a surrogate germline tissue. The cogency of variant detection was supported by a high congruence of mapped reads across the genome (Supplemental Figure 2) and per chromosome (Supplemental Figure 3). Analysis of a marrow fibroblast sample demonstrated that this congruence was seen between amplified and unamplified DNA before whole-exome sequencing (WES). Importantly, we consistently saw lower variant numbers when GMPs were compared to self fibroblasts versus fibroblasts from other subjects, which is expected given normal genomic variation between individuals (Supplemental Figure 4). There was a consistently higher number of variants in patients versus controls who were processed and analyzed in an identical fashion (see below). Detection of calls by MuTect2 and by other mutation caller software programs (Strelka and VarScan) was also highly congruent (data not shown). In addition, there was no correlation between gene size and number of variants detected, which would be expected from random mutations along the genome. Also, we found no aberrantly high rates of C>T (G>A) errors in analysis of a GMP DNA sample compared with a blood DNA sample amplified by the single-cell REPLI-g whole-genome amplification kit and called by VarScan (data not shown). Last, detecting variants by WES and the cancer gene panel showed high congruence (Supplemental Table 2).
Figure 1 (caption fragment): Student's t test was used to compare patients and controls. The same control data in C and E are also presented in B and D, respectively.
The numbers of somatic variants among FA patients (mean 111) and SDS patients (mean 108) were remarkably higher than among control subjects (mean 25), whose samples were processed in the same way (P = 0.04 and P = 0.02, respectively) (Figure 2A). All variants were rare (minor allele frequency ≤ 1%) or absent in general-population databases (data not shown). There was no significant age difference between FA/SDS patients and controls (P = 0.34 and P = 0.41, respectively). Interestingly, the frequency of variants in FA was not statistically different from that in SDS (Figure 2A).
Figure 2 (caption fragment): The groups were compared using the Wilcoxon signed-rank test. *P < 0.0001. The y axis represents the variant frequency and the x axis represents the variants arranged from the highest allele frequency to the lowest; in each group, each number may represent a different variant.
The total numbers of variants in each subject according to age at sampling are in Figure 2, B-D. A statistically significant correlation between mutation burden and age could not be accurately determined because a larger number of subjects in each group is required for this analysis. Importantly, the variants in SDS/FA appeared in significantly higher allele frequencies compared with those of controls (P < 0.0001) ( Figure 2E).
Types of nucleotide change across patients. Because of their AML predisposition, we reasoned that mutations in FA/SDS GMP-like cells are characterized by previously published AML mutational patterns. Therefore, we used multiple analytical techniques to understand the mutational processes and patterns underlying the high mutational load in FA/SDS. First, we determined the variants underlying transition changes (interchanges between purine bases or between pyrimidine bases; Figure 3A) and transversion changes (interchanges between purine and pyrimidine bases; Supplemental Figure 5). We found that the most abundant single nucleotide variants (SNVs) in all groups (FA, SDS, and controls) were, as seen in AML (32), G>A/C>T transitions, followed by A>G/T>C transitions and G>T/C>A, C>G/G>C transversions. Nevertheless, the proportions of G>A/C>T transitions in FA/SDS were significantly higher than those of control subjects (P < 0.05).
Figure 3 (caption fragments): (B) Heatmap of specific trinucleotide variants (each SNV including the bases immediately 3′ and 5′ to the SNV site); the 5′ base is shown on the y axis and the 3′ base on the x axis. Z scores of the log-transformed values from 0 to 2 were used; to generate the heatmap, the number of each variant plus 1 was log-transformed. (C) Percentage of SNVs and indels according to their damaging effects on the protein in each study group. (D) Mean number of mutated genes in FA subjects, SDS subjects, and controls with SEM; Student's t test compared each patient group to controls; P = 0.069 by Kruskal-Wallis test across the 3 groups. Box plots depict the minimum and maximum values (whiskers), the upper and lower quartiles, and the median; the box length represents the interquartile range.
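The heatmap normalization described in the Figure 3 caption (log of count + 1, then z-scoring) can be reproduced in a few lines; the count matrix below is synthetic, and whether z-scoring is global or per row is not specified in the caption (global shown).

```python
import numpy as np

# Hypothetical matrix of trinucleotide-variant counts
# (rows: 5' base, columns: 3' base), as in the Figure 3B heatmap.
counts = np.array([[12, 3, 40, 7],
                   [ 5, 9, 22, 4],
                   [18, 6, 55, 9],
                   [ 2, 1, 15, 3]], dtype=float)

log_counts = np.log(counts + 1.0)  # "number of each variant plus 1" then log
z = (log_counts - log_counts.mean()) / log_counts.std()  # z score over the matrix
print(np.round(z, 2))
```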
To gain further insight into the mutational processes in FA/SDS, we analyzed variants in the context of a trinucleotide change: the 6 options of nucleotide substitution and the 16 combinations of bases immediately 3′ and 5′ to the variant. Overall, this resulted in a mutational signature comprising 96 trinucleotide frames for each subject, displayed in a heatmap in Figure 3B. All subject groups showed a high C>T mutation rate regardless of the flanking 5′ and 3′ nucleotides, that is, at Xp(C>T)pX sites. However, this propensity was much more prominent in patients with FA (P = 0.04) and SDS (P = 0.02) than in controls. The vertical rows on the heatmap suggest that the 3′ base has a greater influence on the mutational pattern. The vertical rows seen within the C>T region indicate that most patients have lower mutation rates at Xp(C>T)pG sites (arrows in Figure 3B) compared with other Xp(C>T)pX sites. This pattern was less prominent in healthy control subjects. The low number of mutations seen at Xp(C>T)pG sites may be attributed to the relatively low number of CpG sites in the genome and could be the result of deamination of methylated cytosines (33). Last, there was a modestly increased mutational load at T>C sites in FA/SDS. Different cancers generate mutations through distinct processes and leave their mark on the genome through a unique mutational signature (34). To identify the specific cancer trinucleotide signature of GMP-like cells from each subject, we first normalized variants to the relative contribution of each trinucleotide in the exome region using the DeconstructSigs R package and then compared our results to those in the Catalogue of Somatic Mutations in Cancer (COSMIC) database. Normalization entails determining the amount of a certain trinucleotide variant relative to the amount of native trinucleotides occurring within the respective genome. De novo AML has previously been characterized in the COSMIC database by a trinucleotide pattern contributed by Signatures 1 (spontaneous deamination of 5-methylcytosine) and 5 (transcriptional strand bias for T>C substitutions in the ApTpX context). Because of a minimum 50-variant criterion for analysis, cancer signatures could be constructed for 9 of the 14 FA/SDS GMP-like cell samples but for none of the control subjects (Supplemental Figures 6-14). Importantly, AML Signature 1 was more frequent (8 of the 9 patients) and more often the dominant signature (4 of the 9 patients) than other signatures (Supplemental Table 3).
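To make the 96-frame signature concrete, the sketch below tallies trinucleotide contexts for a list of SNVs against a reference sequence, collapsing purine-reference variants onto their pyrimidine complements, which is the usual convention behind tools such as DeconstructSigs; the sequence and variants are toy data, not from this study.

```python
from collections import Counter

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def trinucleotide_context(ref_seq: str, pos: int, ref: str, alt: str) -> str:
    """Return a context key like 'A[C>T]G' for an SNV at 0-based `pos`.

    Variants whose reference base is a purine (A/G) are reverse-complemented
    so every key is expressed on the pyrimidine (C/T) strand -- the standard
    96-class convention used in mutational-signature analysis.
    """
    assert ref_seq[pos] == ref
    left, right = ref_seq[pos - 1], ref_seq[pos + 1]
    if ref in "AG":  # flip to the pyrimidine strand
        left, right = COMPLEMENT[right], COMPLEMENT[left]
        ref, alt = COMPLEMENT[ref], COMPLEMENT[alt]
    return f"{left}[{ref}>{alt}]{right}"

# Toy reference and SNV calls (position, ref, alt).
reference = "TACGTACGGT"
snvs = [(2, "C", "T"), (6, "C", "T"), (4, "T", "C")]

signature = Counter(trinucleotide_context(reference, p, r, a) for p, r, a in snvs)
print(dict(signature))  # {'A[C>T]G': 2, 'G[T>C]A': 1}
```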
The analysis of tier 1 and 2 SNVs and indels predicted varying degrees of damage to the encoded protein from stop-gain, frameshift, start-loss, splicing, and missense alterations to potentially less severe effects of 3′ UTR, 5′ UTR, and synonymous changes ( Figure 3C). The distribution of mutation types for patients was similar to controls although the rates of mutations were higher.
Compilation of a dominant mutational tree in samples without clinical evidence of transformation was performed as described previously (35) in all FA (Supplemental Figure 16, A-F) and SDS samples (Supplemental Figure 17, A-H). The specific genes and variants in each clone are listed in Supplemental Table 5. In all samples there were mutations in known MDS/AML genes and in other cancer-related genes that have not previously been reported in MDS/AML to our knowledge. Interestingly, in 2 FA samples the founding clones harbored somatic mutations in MDS/AML-related genes (KDM6A in FA3 and FANCE in FA5), while in the rest of the FA samples, the founding clones harbored cancer-related genes that have not been previously associated with MDS/AML to our knowledge. TP53 mutations were part of the founding clones in 2 SDS patients (SDS1 and SDS5) (Supplemental Table 5) but in none of the samples of FA patients without leukemia. Other MDS/AML-related genes were identified in the founding clones in 3 other patients with SDS (Supplemental Table 5).
Analysis of MDS/AML-related gene pathways showed high rates of mutations in the transcription factors/regulation pathway, DNA repair/checkpoint gene pathway, and activated signaling molecules pathway in FA/SDS (Supplemental Figure 18).
Clonal landscape of AML samples in FA/SDS. To gain insight into the relevance of variants and mutated genes detected in samples without transformation, we analyzed leukemic cells from 1 FA patient (FA7) with AML and 1 SDS patient (SDS7) with AML. Although only 2 AML cases from these rare disorders were available for the study, these anecdotes provide a unique opportunity to observe processes that appeared at 2 stages: before any clinical and standard laboratory evidence of transformation and at an ultimate catastrophic phase of leukemia. Blast cell samples were paired with marrow fibroblasts or T cells from the same subject, and somatic variants in blasts were analyzed as described in Methods. The mutation rate in SDS-derived AML (SDS/AML) blasts was slightly higher than the rates in all other SDS samples without transformation, but the number of variants in FA-derived AML (FA/AML) blasts was within the range of those in untransformed FA samples ( Figure 4A). Similar to our findings in non-AML samples, both FA/SDS samples showed higher G>A/C>T transition rates than controls, the predominant mutation type in de novo AML ( Figure 4B and ref. 34). The number of transversions was low (Supplemental Figure 19), and meaningful comparison between transformed and untransformed samples was impossible.
The trinucleotide heatmap depicting the variant change and adjacent 5′ and 3′ bases in non-AML and AML patients is in Figure 4C. All samples, including AML blasts and GMPs from subjects without transformation, featured high mutation rates at Xp(T>C)pX sites. Importantly, AML Signature 1 was the predominant trinucleotide signature in FA/AML blasts (64%) and also comprised a substantial fraction in SDS/AML blasts (22%) (Supplemental Figures 20 and 21).
Analysis of the potential impact of mutations on the protein showed a generally similar pattern in FA/SDS samples with AML compared with those without AML (Figure 4D).
Cancer-associated genes with moderate- to high-impact mutations in AML samples are listed in Table 2. The genes with the highest VAF are in Figure 5, A and B. Several genes harbored variants with high frequency in FA/AML and were predicted to be part of the founding clone by mutational tree analysis (Supplemental Figure 22A and Supplemental Table 5). These genes were ARID1B, SFPQ, PCDH15, EPPK1, and MAP2K1. The founding clone gave rise to 3 subclones that included mutations in the MDS/AML genes NUP98, PML/BRCA1, and TP53/BRCA2, respectively. The first of these subclones gave rise to an additional clone with a mutation in the MDS/AML-associated gene CREBBP.
The genes that appeared at the highest allele frequency in SDS/AML included MYH1, TP53, FLT4, LPHN3, and DICER1 (Table 2). These genes were predicted to be part of the founding clone, which gave rise to 2 subclones (Supplemental Figure 22B and Supplemental Table 5). The mutated genes in 1 of the subclones included the MDS/AML gene PTPRD and other cancer genes (e.g., JAK1 and an additional mutation in DICER1). This subclone gave rise to additional clones harboring mutations in MDS/AML genes, such as BRAF and SETD2. The second subclone featured a mutation in SFPQ, and subsequent clones included mutations in NCOR1, SMAD4, NF1, and BRCA1. Similar to samples without AML (Supplemental Figure 18), in both FA/AML and SDS/AML, commonly mutated pathways included transcription factor or transcription factor regulation and DNA repair (Figure 5C). Last, we evaluated whether genes with high- or moderate-impact mutations that appeared in patients without transformation were also mutated in the AML phase. In FA, 18 of the 255 genes that were part of clonal hematopoiesis in patients without MDS/AML also appeared in the AML blasts (Supplemental Figure 23 and Supplemental Table 6). In SDS, 52 of the 282 genes that were part of clonal hematopoiesis in patients without transformation also appeared in the AML blasts (Supplemental Figure 23 and Supplemental Table 6).
Clonal evolution and progression observed in sequential samples. From the patient with SDS who developed leukemia, 2 additional samples, obtained 36 months and 25 months before the development of AML, were available. The number of mutations grew prominently from stage to stage (Pearson's r = 0.99) (Figure 6A). The growth was more prominent than the age-related mutation increment we found in our SDS patient cohort (Figure 2C). Interestingly, there was a gradual increase in G>A (r = 0.99) and C>T transitions (r = 0.99938) but not in A>G or T>C transitions (Figure 6B). There was also a gradual accentuation of the trinucleotide signature (heatmap in Figure 6C). The number of transversions was low (Supplemental Figure 24) and did not show a conclusive pattern.
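The trend statistic quoted above can be reproduced in a few lines; this sketch uses placeholder mutation counts rather than the study's per-sample data:

```python
# Correlation between sampling order and mutation burden across the three
# sequential samples (36 months before AML, 25 months before, at AML stage).
from scipy.stats import pearsonr

sample_order = [1, 2, 3]
mutation_counts = [150, 320, 510]   # hypothetical, monotonically increasing counts
r, p = pearsonr(sample_order, mutation_counts)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```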
Construction of trinucleotide cancer signatures using the COSMIC database was feasible for the last 2 sequential samples. Interestingly, Signature 1 accounted for 9.2% of the mutational signature in the second sequential sample (Supplemental Figure 14) and increased to 22.2% at the stage of AML (Supplemental Figure 21).
Similar to the variant numbers, there was also a gradual increase in the number of genes with moderate- or high-impact mutations in each sequential sample: 15, 65, and 103, respectively (Table 3) (r = 0.945). Of the 15 genes with mutations in the first sequential sample, 2 were also mutated in the second and third samples. Of the 65 mutated genes in the second sequential sample, 10 were also mutated in the third sample.
In each of the sequential samples, a dominant mutational tree could be constructed. However, as seen with bone marrow cytogenetic abnormalities in FA (36) and SDS (37), the dominant tree may arise and regress, and in each sequential sample a different dominant tree was apparent. The founding clone in sequential sample 1 harbored 13 genes with high- or moderate-impact mutations, including ARHGEF12 and NOTCH2; in sequential sample 2 there were 28 such genes, including IDH2 and MYH2; and in the third sample (AML) there were 6 such genes, including TP53. The known pathogenic mutation in TP53 (c.742C>T; p.Arg248Trp) was dominant in the AML stage (52%). It is noteworthy that with progression from sequential samples 1 to 3, the proportion of mutations in transcription factors, transcription factor regulation, activated signaling molecules, and DNA repair and checkpoint molecule pathways increased (Figure 6D).
Discussion
The present study focused on evaluating the cellular and molecular events before overt leukemia develops and their potential impact on malignant transformation. We report for the first time to our knowledge a detailed analysis of the very early hematopoietic cells (HSCs, MPPs) and subsequent progenitors (CMPs, MEPs, GMPs) in FA and SDS. Most HSPCs were markedly reduced except for GMPs, which were much more frequently preserved. Molecular analysis of phenotypically GMP cells revealed a high number of somatic mutations compared with control subjects and genetic signatures that resembled those seen in AML. Using sequential SDS samples before and at the AML stage, we were able to show that somatic nucleotide-level mutations develop and disappear very rapidly in this disorder, resembling observations related to some large clonal marrow cytogenetic abnormalities (36,37). The reconstructed founding clone at the AML stage harbored mutations in several genes, including TP53. The overrepresentation of immunophenotypic GMPs versus other myeloid progenitors in patients with FA/SDS suggests that these cells feature higher survival or growth properties and possibly harbor some of the initial transformational events that lead to MDS/AML. We cannot rule out the possibility that relative preservation of GMP-like cells reflects a general compensatory mechanism for bone marrow failure unrelated to leukemia risk. Although possible, it would be surprising for a compensatory mechanism to target GMPs regardless of whether the mostly affected lineage is granulocytic (SDS) or megakaryocytic/erythrocytic (FA). It is noteworthy that the initiating events may occur in earlier HSPCs, which then acquire the immunophenotype of GMPs. The markedly elevated somatic variant burden in FA/SDS GMP-like cells is in keeping with this hypothesis. It is possible that some of these mutations enhance proliferation or inhibit cell death, thereby conferring a growth advantage to these progenitors. For example, the TP53 mutation p.Arg248Trp seen in patients with SDS inactivates the protein and its proliferation-regulating properties. Future studies are necessary to decipher the mechanism underlying the relative preservation of GMP-like cells in FA/SDS bone marrow and whether it is related to increased proliferation, decreased apoptosis, or self-renewal.
Interestingly, despite different functions of FA genes from SDS genes, in both conditions GMPs were relatively more preserved, and there were no significant differences in the average number of somatic mutations. This raises the possibility that, at least in part, clonal evolution in bone marrow failure disorders does not depend on the direct biochemical sequela of the germline mutation and might be related to the consequent growth disadvantage of bone marrow cells, mitotic stress, and a drive for survival through growth-promoting somatic mutations.
The cause of an increased propensity for MDS/AML in IBMFSs and the mechanisms of leukemogenesis are unclear, and several hypotheses have been proposed (38,39). Our findings of an increased mutation rate in GMP-like cells and their relative preservation provide a groundwork for research focusing on these questions. Several pathological processes have been identified in FA/SDS and may be considered when trying to explain an increased risk of somatic mutations. FA proteins are involved in the correction of interstrand DNA cross-links (40) and in telomere length maintenance (41), and their deficiency leads to chromosomal instability. There is also evidence for short telomeres and genomic instability in SDS (42,43). These pathologies may lead to somatic structural chromosomal abnormalities that are commonly seen in SDS (37,43,44) and in FA (45,46); however, they may not directly explain the increased numbers of SNVs seen in our study. In FA, DNA interstrand cross-links may lead to DNA double-strand breaks due to prolonged stalling or collapse of the replication fork. This may eventually lead to errors during repair or replication. Oxidative stress has been implicated in DNA damage and cancer development (47,48) and is increased in both FA (49)(50)(51) and SDS (52,53). In addition, the accelerated cell death and slow-growing cells in FA (49,50,54) and SDS (55-57) may lead to replicative stress, which can consequently increase the rate of randomly occurring mutations. Interestingly, it has been suggested that the slow-growing HSPCs in bone marrow failure disorders are under selective pressure for mutations that reverse their growth defect and ameliorate the restraints on proliferation (58,59). Last, similar to AML (60) and MDS (61), SDS bone marrow stroma features increased angiogenesis (62). SDS bone marrow stroma has also been shown to be functionally impaired in humans (29) and in mice (63). In the latter study, deletion of Sbds in mouse mesenchymal stem cells resulted in DNA damage in HSPCs and in a proinflammatory response that was shown to contribute to leukemic transformation (63).
To our knowledge, there are no published data about the rate, type, and signature of somatic variants in GMPs from inherited leukemia predisposition syndromes, and only limited information is available about somatic mutations in bone marrow samples from patients with FA (19) and SDS (17,18). An explicit comparison between results from the present work and those from previously published studies on IBMFSs is challenging because of different methodologies and analytic approaches. Nonetheless, the number of variants in our study might differ from that reported in the few published papers on FA/SDS, and there are several possible explanations for that. First, mutation rates in GMP-like cells have not previously been published. GMP-like cells were relatively preserved in FA/SDS, which might be attributed to a higher rate of somatic mutations that confer growth advantage. Second, published studies focused on mutations with high allele frequency. For example, in the study on somatic mutations in FA patients (19), mainly Sanger sequencing was used; the technique typically detects variants with allele frequencies over 10% to 20%. In the published WES data on 2 patients with SDS (18), few variants were reported; however, the authors focused on variants at the expected binomial distribution around 50%. Because of the analysis of highly purified progenitors and the limited number of progenitors in FA/SDS, we used amplified DNA. Quality assessment of the data, paired analysis of amplified and unamplified DNA from control marrow fibroblasts across the genome (described in the Results section), and our internal robust methodology suggest that the trends seen herein are real and that significant bias by DNA amplification is unlikely.
The molecular changes found herein in FA/SDS GMP-like cells are reminiscent of those seen in AML, for example, an abundance of G>A/C>T and G>T changes (32). G>A/C>T hypermutations have been attributed to the endogenous process of deamination at methylated cytosine sites (32). Importantly, this pattern was also dominant in FA/SDS samples with AML and steadily increased in sequential samples from a patient with SDS who eventually developed AML. Studies of sequential samples from additional cases are needed to determine whether gradual acquisition of this pattern is indeed part of the transformational process in FA/SDS.
In most samples we were able to reconstruct a dominant mutational tree. However, most mutations were not part of the dominant mutational tree, suggesting that FA/SDS marrows contain multiple unrelated clones. Further, we cannot rule out the possibility that at the stage of AML, additional smaller, unrelated AML clones coexisted. Importantly, using sequential samples, we found that similar to large cytogenetic abnormalities that may appear and disappear with time in FA (36) and SDS (37), including del(20q10-11) and i(7q), SNVs may also appear and disappear, as described in 1 patient with severe congenital neutropenia (21). Our study further shows that most clones do not culminate in leukemia evolution; despite a burst of evolving clones, most of them disappear and become outnumbered by new clones. This process probably continues until a combination of critical mutations appears in the same clone and drives progression toward MDS/AML.
It is noteworthy that the frequency of mutations in genes that are commonly mutated in de novo MDS/AML (e.g., DNMT3A, TET2 and SF3B1) was low in patients with FA/SDS, particularly in the ones who developed AML, suggesting that transformation in FA/SDS may use novel mechanisms. PCDH15 was mutated in FA/AML with high VAF (69%) and was predicted to be part of the founding clone of the dominant mutational tree. PCDH15 is a member of the cadherin superfamily, whose members encode integral membrane proteins that mediate calcium-dependent cell-cell adhesion. It is mutated in several cancers, including breast cancer, glioma, and lymphoma (64)(65)(66). The finding of mutations in this gene in 2 SDS patients without AML as well (1 of them in the founding clone) suggests a potential pathogenic role.
It is noteworthy that SFPQ was mutated in both our patients with AML, in the founding clone in FA, and in a subclone in SDS. To our knowledge, SFPQ was previously reported to be mutated in only 1 subject with AML (67). A recent study suggested downregulation of SFPQ by miRNA-1296 in colorectal cancer as a mechanism for cell proliferation (68). The published mutation in a patient with AML was described as nonsynonymous without further details. The mutation in our patient with FA/AML was a missense variant in the N-terminal domain (p.Gly14Ser). The mutation in the patient with SDS/AML was a missense variant in the C-terminal domain (p.Glu699Lys). It is possible that loss of SFPQ or aberrant SFPQ alters spliceosome function and drives MDS/AML. Further studies are necessary to determine whether SFPQ mutations are more common in IBMFS-associated MDS/AML than in de novo MDS/AML and whether there is synergism between HSC loss and SFPQ mutations in developing leukemia.
It is important to note that TP53 was mutated herein mainly in patients with SDS. It was mutated in the SDS/AML founding clone and in 2 SDS patients without transformation, indicating that it is indeed an early transformational event. It is interesting that in sequential samples, the TP53 mutation p.Arg248Trp (previously reported as pathogenic) was detected in the founding clone of SDS/AML but not in the founding clones of previous samples. This information supports the notion that early hematopoietic cells in IBMFSs have a heightened tendency for clonal evolution, but most clones eventually subside and do not progress.
In summary, FA and SDS are characterized by a burst of clonal evolution. Although the molecular changes largely follow AML features, most hematopoietic clones do not progress, and at a leukemic stage only a few clones become predominant. The differences between clones that progress to leukemia and those that do not remain to be elucidated. Future studies should also evaluate the prognostic value of the molecular changes identified in this study and their potential use for early detection of irreversible transformation or as therapeutic targets in FA and SDS. Last, because AML blasts from only 2 patients with FA/SDS were available for this study, the molecular data at the AML stage are anecdotal, and multicenter, collaborative efforts are required to collect a larger number of AML samples from these rare disorders to validate our observations.
Methods
Flow cytometry. Bone marrow HSPC population sizes were evaluated by multiparametric immunophenotyping ( Figure 1A), as previously described (5). Cell frequencies were normalized as previously described to the total bone marrow mononuclear cells (5,69) and to total bone marrow CD34 + cells (70,71).
DNA preparation for genomic studies. To identify the spectrum of somatic mutations and affected genes, we analyzed DNA from phenotypically sorted GMP cells. DNA samples from 200 to 965 sorted GMPs were amplified by whole-genome amplification (REPLI-g Mini Kit, QIAGEN) for 16 hours with adjustment of reagents to cell number as per the manufacturer's instructions and as previously described (72)(73)(74)(75).
To eliminate germline variants, we paired each subject's data with his or her marrow fibroblast genome as a source of nonhematopoietic DNA. We enriched marrow fibroblasts by culturing marrow cells, removing floating hematopoietic cells, and passaging 3 to 5 times. Because of poor growth of passaged patient cells, DNA of marrow fibroblasts from close to half of the patients (and 1 healthy subject for quality control) was amplified, with no apparent effect on the number of filtered somatic variants (Supplemental Table 7) and no apparent bias toward specific nucleotide change (Supplemental Table 8). Furthermore, matched amplified and unamplified DNA from fibroblasts showed a high congruence of mapped reads across the genome and per chromosome (Supplemental Figures 2 and 3).
To study molecular events in AML samples, we sorted blast cells. In the case of the SDS patient with AML, amplified DNA from marrow myeloblasts underwent paired analysis with DNA from marrow fibroblasts. For the FA patient with AML, a peripheral blood sample was available, and amplified DNA from myeloblasts underwent paired analysis with amplified DNA from T cells.
WES. DNA underwent exome enrichment with the SureSelect 50 Mb Human All Exon capture kit (Agilent Technologies) according to the manufacturer's instructions and sequencing on the Illumina HiSeq 2500 at The Centre for Applied Genomics (The Hospital for Sick Children) as previously described (25). The average read depth per nucleotide among the analyzed subjects was 146 (range 116-189).
Next-generation sequencing cancer gene panel assay. To augment mutation discovery by deep variant analysis and to validate variants in cancer-related genes found by WES, we used a deep sequencing panel of 877 genes, which either were known cancer-related genes from the COSMIC database or are hypothesized to play a role in cancer (based on published expression in tumors, known function, or constitutive mutation in cancer predisposition syndromes). The total number of bases for nonoverlapping exons covered by the panel ± 10 bp is 3,012,823 bp. The panel was developed by our group as previously described (76). The average read depth per nucleotide among the analyzed subjects was 1216 (range 775-2098).
GMP and marrow fibroblast FASTQ files were aligned and mapped separately to the reference genome to create binary alignment map (BAM) files, and both BAM files were then processed using MuTect v1.1.5 (http://www.broadinstitute.org/cancer/cga/mutect) for somatic point mutations and indels.
Variants from GMP WES and cancer panel sequencing were selected as true somatic variants if (a) they appeared in GMPs from both WES and the cancer panel, (b) the variant frequency in marrow fibroblasts was 0, (c) the variant comprised over 7% of the total reads for the respective nucleotides in GMPs (using this threshold, over 90% of the variants fulfilled all criteria in both WES and cancer panel) (Supplemental Table 2), and (d) the read depths by the cancer panel in GMPs and in marrow fibroblasts were over 50.
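Expressed as code, the four criteria translate into a simple predicate over per-variant records; the following Python sketch uses hypothetical field names rather than the pipeline's actual data structures:

```python
# Direct sketch of the four somatic-variant inclusion criteria listed above.
def is_true_somatic(v):
    return (v["in_wes"] and v["in_panel"]            # (a) detected by both assays
            and v["fibroblast_vaf"] == 0             # (b) absent in fibroblasts
            and v["gmp_vaf"] > 0.07                  # (c) >7% of GMP reads
            and v["gmp_panel_depth"] > 50            # (d) panel read depth > 50 ...
            and v["fibroblast_panel_depth"] > 50)    #     ... in both tissues

variants = [
    {"in_wes": True, "in_panel": True, "fibroblast_vaf": 0,
     "gmp_vaf": 0.31, "gmp_panel_depth": 812, "fibroblast_panel_depth": 640},
    {"in_wes": True, "in_panel": False, "fibroblast_vaf": 0,
     "gmp_vaf": 0.12, "gmp_panel_depth": 301, "fibroblast_panel_depth": 288},
]
print([is_true_somatic(v) for v in variants])  # [True, False]
```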
Analysis of somatic variants. Somatic variants were classified into tiers as described (77). As conventionally done in cancer genomics analysis, we used only tier 1 and 2 variants, which are more likely to have a pathogenic effect than tier 3 and 4 variants.
The R package deconstructSigs (http://github.com/raerose01/deconstructSigs) was used to construct tumor signatures from somatic variants, to normalize signatures according to variant frequencies, and to compare them to known tumor signatures in COSMIC. A mutation signature was determined by comparing the total variant profile of a patient to the known variant profile of different cancers. For this analysis a minimum of 50 somatic variants per sample was required to construct a signature. ComplexHeatmap (http://bioconductor.org) was used to create a sample heatmap of somatic variants. Variant Effect Predictor (http://grch37.ensembl.org/info/docs/tools/vep/index.html) was used to annotate the mutations for functional consequence.
Mutational trees were reconstructed with the PhyloWGS software program developed by Quaid Morris's group (35) (http://github.com/morrislab/phylowgs). The program can reconstruct related clonal subpopulations in tumor samples from whole-genome sequencing/WES data. It is based on the VAFs of the mutations and uses a Markov chain Monte Carlo procedure. It can construct mutational trees with or without data about copy number variants (78). Using this software, we designated marrow fibroblast cells as molecular group 0. Subsequent clones were ordered and numbered by the software program.
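As a toy illustration of the intuition behind VAF-based clone ordering (PhyloWGS itself samples full trees by MCMC; this sketch shows only the nesting heuristic that a subclone's mutations cannot exceed the VAF of its ancestor's mutations):

```python
# Hypothetical VAFs; sorting by decreasing VAF suggests the ancestral order.
clone_vafs = {"TP53": 0.52, "SFPQ": 0.31, "JAK1": 0.12}
ancestral_order = sorted(clone_vafs, key=clone_vafs.get, reverse=True)
print(" -> ".join(ancestral_order))  # TP53 -> SFPQ -> JAK1
```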
Statistics. Descriptive analysis was used to characterize groups. Two-tailed Student's t test was used to determine the statistical significance of differences between 2 means. To determine significant differences between multiple means, the nonparametric Kruskal-Wallis test was performed followed by Dunn's post hoc test. Wilcoxon's signed-rank test was used for testing whether 3 samples have different VAF distributions. P < 0.05 was considered significant. The statistical analyses were performed using Microsoft Excel, XLSTAT Version 2019.1.2 (Addinsoft), and GraphPad Prism v8. The bioinformatics software programs used in this study are described with the respective analyses in the Methods and Results sections.
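For readers who prefer a scriptable equivalent of the multi-group comparison, the following hedged Python sketch (the authors used XLSTAT and Prism; the VAF values here are placeholders) runs a Kruskal-Wallis test followed by Dunn's post hoc test via the scikit-posthocs package:

```python
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

fa = [0.12, 0.20, 0.15, 0.31]
sds = [0.22, 0.18, 0.35, 0.27]
control = [0.05, 0.09, 0.07, 0.04]

h, p = kruskal(fa, sds, control)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.3f}")

df = pd.DataFrame({"vaf": fa + sds + control,
                   "group": ["FA"] * 4 + ["SDS"] * 4 + ["control"] * 4})
dunn = sp.posthoc_dunn(df, val_col="vaf", group_col="group", p_adjust="bonferroni")
print(dunn)  # pairwise adjusted p-values between the three groups
```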
Study approval. Patients with SDS were eligible for the study if they fulfilled the international consensus diagnostic criteria (79) and had biallelic SBDS mutations. Patients with FA were eligible if they had a clinical diagnosis of FA and positive chromosome fragility testing. At the time of testing, most patients without leukemia had cytopenia and hypocellular bone marrow (Supplemental Table 1); no patient had clonal marrow cytogenetic abnormalities. Healthy control subjects were donors for bone marrow transplantation. The study was approved by the Research Ethics Board at The Hospital for Sick Children, and informed written consent was obtained from all enrolled subjects. Usage of a sample that had been cryopreserved in the Tissue Bank at The Hospital for Sick Children was done according to the Research Ethics Board's regulations and approval. A total of 7 FA, 8 SDS, and 8 healthy control subjects were studied. The list of subjects and samples is in Supplemental Table 7.
Author contributions
SH contributed to study design, acquisition of data, and analysis and interpretation of data and assisted in writing the manuscript. BB contributed to study design, acquisition of data, and analysis and interpretation of data and drafted the article and revised it for important intellectual content. SZ contributed to study design, acquisition of data, and analysis and interpretation of data and assisted in writing the manuscript. HL contributed to study design, acquisition of data, and analysis and interpretation of data and assisted in writing the manuscript. S Abelson contributed to analysis and interpretation of data and reviewed/revised the manuscript for important intellectual content. RJK heads 1 of the Canadian Inherited Marrow Failure Registry site research teams that contributed acquisition of vital data and interpretation of data and reviewed/ revised the manuscript for important intellectual content. S Abish heads 1 of the Canadian Inherited Marrow Failure Registry site research teams that contributed acquisition of vital data and interpretation of data and reviewed/revised the manuscript for important intellectual content. MR heads 1 of the Canadian Inherited Marrow Failure Registry site research teams that contributed acquisition of vital data and interpretation of data and reviewed/revised the manuscript for important intellectual content. VRB heads 1 of the Canadian Inherited Marrow Failure Registry site research teams that contributed acquisition of vital data and interpretation of data and reviewed/revised the manuscript for important intellectual content. RDB contributed to study design and analysis and interpretation of data. HM contributed to analysis, generation of figures, and interpretation of data. SD contributed to study design, interpretation of data, and review of the manuscript. AS developed the cancer panel used in this study, contributed to study design and analysis and interpretation of data, and assisted in writing the manuscript. JED contributed to study design and analysis and interpretation of data and assisted in writing the manuscript. YD contributed to study conception and design, acquisition of data, and analysis and interpretation of data and drafted and revised the article.
A simple and rapid method to characterize lipid fate in skeletal muscle
Background Elevated fatty acids contribute to the development of type 2 diabetes and affect skeletal muscle insulin sensitivity. Since elevated intramuscular lipids and insulin resistance are strongly correlated, aberrant lipid storage or lipid intermediates may be involved in diabetes pathogenesis. The aim of this study was to develop a method to determine the dynamic metabolic fate of lipids in primary human skeletal muscle cells and in intact mouse skeletal muscle. We report a simple and fast method to characterize lipid profiles in skeletal muscle using thin layer chromatography. Findings The described method was specifically developed to assess lipid utilization in cultured and intact skeletal muscle. We determined the effect of a pan-diacylglycerol kinase (DGK) class I inhibitor (R59949) on lipid metabolism to validate the method. In human skeletal muscle cells, DGK inhibition impaired diacylglycerol (DAG) conversion to phosphatidic acid and increased triglyceride synthesis. In intact glycolytic mouse skeletal muscle, DGK inhibition triggered the accumulation of DAG species. Conversely, the DGK inhibitor did not affect DAG content in oxidative muscle. Conclusion This simple assay detects rapid changes in the lipid species composition of skeletal muscle with high sensitivity and specificity. Determination of lipid metabolism in skeletal muscle may further elucidate the mechanisms contributing to the pathogenesis of insulin resistance in type 2 diabetes or obesity.
Background
Dysregulation of lipid metabolism, leading to modification of lipid content as well as production of second messengers, contributes to the pathogenesis of insulin resistance in type 2 diabetes and obesity. Following uptake in skeletal muscle, free fatty acids (FFA) are converted to long-chain fatty acyl-CoAs (LCACoAs), which can undergo several fates. LCACoAs can be imported into the mitochondria and used as substrates for β-oxidation, incorporated into triglycerides, or serve as a source of second messengers, such as diacylglycerol (DAG). In skeletal muscle from obese humans [1,2], FFA oxidation capacity is reduced, thereby leading to intramuscular triacylglycerol accumulation [3,4]. In addition, an accumulation of secondary messenger lipid species such as DAG or ceramide also contributes to muscle insulin resistance, thereby exacerbating the severity of type 2 diabetes [5][6][7]. For example, DAG accumulation activates specific protein kinase C isoform activity and impairs insulin-stimulated glucose transport through enhanced IRS-1 serine phosphorylation [8]. Understanding how FFA levels impact glucose metabolism may elucidate the role of lipids in the pathogenesis of insulin resistance in type 2 diabetes or obesity. Here, we present a simple and fast method to characterize the metabolic fate of lipids in skeletal muscle using a thin layer chromatography (TLC) system.
Primary human skeletal muscle cell culture
Satellite cells were isolated from vastus lateralis skeletal muscle biopsies derived from healthy volunteers by trypsin-EDTA digestion, as previously described [9]. All participants provided written informed consent and all protocols were approved by the Karolinska Institutet ethics committee. Myoblasts were propagated in growth medium (F12/DMEM, 20% FBS, 1% PeSt and 1% fungizone (Invitrogen, Sweden)), and differentiated at >80% confluence in low-serum medium (DMEM containing 1 g/l glucose, 2% FBS, 1% PeSt and 1% fungizone). Experiments were performed on differentiated myotubes cultured in 6-well plates. Final experiments were conducted 7 days after differentiation was induced.
Cultured primary human skeletal muscle cells were incubated with 0.2 μCi/ml [¹⁴C(U)]palmitate (Perkin Elmer, CA, USA) plus non-radioactive palmitate (25 nM) for 6 hours in the presence or absence of the DAG kinase inhibitor (R59949, Calbiochem, Merck AB, Sweden). Following the incubation step, cells were washed 3 times with cold PBS in order to remove the free and membrane-bound radioactive palmitate.
Mouse muscle incubation
Male C57BL/6 mice were purchased from Charles River (Germany). Mice were housed on a 12 hour light/dark cycle and received ad libitum standard rodent chow. Experiments were approved by the Regional Animal Ethical Committee (Stockholm, Sweden).
Mice (12-14 weeks old) were fasted for 4 hours prior to the study. Mice were anesthetized intraperitoneally with Avertin (2,2,2-tribromoethanol and tertiary amyl alcohol) at a volume of 10 μl/g body weight. Extensor digitorum longus (EDL) and soleus muscles were carefully dissected without stretching and gently removed with tendons intact. Muscles were incubated for 30 minutes at 30°C in vials containing pre-oxygenated (95% O₂, 5% CO₂) Krebs-Henseleit buffer (KHB) supplemented with 15 mM mannitol, 5 mM glucose, 3.5% fatty acid-free bovine serum albumin and 0.3 mM palmitate. Muscles were then transferred to new vials containing fresh pre-gassed KHB, supplemented as described above and containing 2.5 μCi/ml of [¹⁴C(U)]palmitate, and incubated for 120 min in the presence or absence of 25 μM DAG kinase inhibitor (R59949). At the end of the incubation, tendons were removed from muscles, which were rapidly weighed, immediately frozen in liquid nitrogen, and stored at −80°C.

Figure 1: Schematic representation of lipid extraction and TLC development. The outlined protocol was established to extract, separate and identify lipid species in primary human skeletal muscle cells and intact murine skeletal muscles.
Lipid extraction
Cultured cells were scraped directly from plates in 300 μl of an isopropanol/0.1% acetic acid mixture. Frozen muscles were disrupted in the same buffer using the TissueLyser II (Qiagen). The samples were incubated overnight at room temperature with slight shaking to allow lipids to diffuse into the solvent. Next, 600 μl of hexane and 150 μl of 1 M KCl were added to each sample. The hexane-isopropanol system is particularly suitable for extraction of hydrophobic lipids, such as free fatty acids, triglycerides and cholesterol esters [10]. Addition of KCl is designed to improve the removal of non-lipid contaminants [11], including proteins and amino acids. Samples were then rotated for 10 minutes at room temperature. Tubes were stored upright for 5 minutes to induce phase separation. The organic phase (upper phase) was collected (~600 μl) and transferred to a new tube. The organic phase was dried using a vacuum pump for 1 hour. Alternatively, a nitrogen stream could be used to dry the lipids. The lipid pellet was eluted in 50 μl of 1:1 methanol:chloroform.
Detection of lipid species with thin layer chromatography

TLC plates that contain a concentration zone and are channeled were selected to facilitate the loading of lipid extracts (Silica Gel G 250 μm 20×20 cm, Analtech, DE, USA). One hour before the development of the TLC plate, the developing chamber was filled with 100 ml of a hexane:diethylether:acetic acid mixture (80:20:3). The lid of the chamber was then sealed using high-vacuum grease (Corning, NY, USA) to allow vapor to accumulate in the chamber. The lipid suspension was then applied to the TLC plate at ~2 cm from the bottom (on the preadsorbent zone) and separated in the hexane:diethylether:acetic acid system for 30 minutes. Lipid standards, including 1,2-dioctanoyl-sn-glycerol, were run in parallel with the samples. After development, the plate was dried and wrapped in plastic foil. The wrapped plate was transferred to an exposure cassette (GE Healthcare) and exposed to an X-ray film. The cassette was also wrapped with plastic in order to avoid moisture damage and was stored at −80°C overnight.
Quantification
After overnight incubation, the cassette was removed from −80˚C and allowed to stand at room temperature for 1 hour. The X-ray film was developed using an X-ray developer machine. Quantification by densitometry was performed using Quantity One software (Biorad).
Detection of fatty acid oxidation with ³H-palmitate
In addition to lipid intermediates, the described protocol can also be used to simultaneously detect fatty acid oxidation, through the addition of ³H-palmitate along with the ¹⁴C-palmitate. For this, the TLC plate should be sprayed with an autoradiographic enhancer spray (EN³HANCE, Perkin Elmer) prior to X-ray film exposure. Application of EN³HANCE should be repeated 3 times for 10 seconds, with complete drying of the TLC plate following each round. This spray allows the detection of low-abundance species and can reduce the time necessary for sufficient exposure. Fatty acid oxidation is measured by the release of ³H₂O into the media [12].
Results and discussion
A schematic representation of the protocol used to identify lipid species in skeletal muscle by TLC is shown in Figure 2. Phosphatidic acid does not migrate in this solvent system and appears at the origin, while 1,2-DAG migrated a few millimeters above the origin. The two distinct bands of 1,3-DAG are visible at 2 and 3 cm above the origin, owing to the conformation of the lateral chains. Palmitic acid and triglycerides migrated to 6 cm and 9 cm, respectively, from the origin. The characterization of the lipid standard profile is crucial, since standards can be pooled together in one lane to allow more samples to be run on the same plate.
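Migration distances of this kind are conventionally summarized as retention factors (Rf, the migration distance of the compound divided by that of the solvent front); the short sketch below assumes a hypothetical 15 cm solvent front, since the text reports migration distances only:

```python
# Retention factors for the migration distances quoted above.
solvent_front_cm = 15.0          # assumed; not stated in the text
migration_cm = {
    "phosphatidic acid": 0.0,    # remains at the origin
    "1,2-DAG": 0.5,              # a few millimeters above the origin
    "1,3-DAG": 2.5,              # midpoint of the two bands at 2 and 3 cm
    "palmitic acid": 6.0,
    "triglycerides": 9.0,
}
rf = {lipid: round(d / solvent_front_cm, 2) for lipid, d in migration_cm.items()}
print(rf)
```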
Exposure of primary human muscle cells to ¹⁴C-palmitate for 6 hours produced the lipid migration pattern shown in Figure 3A. To further validate the sensitivity and specificity of this system, primary human muscle cells were treated with a specific DAG kinase inhibitor (R59949; Figure 3B-D). DGK inhibition increased 1,3-DAG by 31% (Figure 3C) and triglyceride species by 30% (Figure 3B) compared with the control in primary human myotubes, which reflects an impaired DAG conversion to phosphatidic acid. This result is consistent with previously published work [13], confirming the sensitivity of the assay. Different exposure times can be easily selected to quantify different lipid species according to their abundance.
The metabolic fate of free fatty acids was also determined in intact mouse skeletal muscle (Figure 4A). Treatment of mouse glycolytic EDL skeletal muscle with a DGK inhibitor led to an accumulation of 1,3-DAG species (Figure 4D). However, the DGK inhibitor did not alter 1,3-DAG species in oxidative soleus muscle. In contrast to human cells, triglyceride synthesis in mouse EDL muscle remained unchanged after incubation with the DGK inhibitor (Figure 4C), possibly due to the shorter incubation time.
Conclusion
Different systems exist to explore cellular lipid profiles in various biological systems, including high-performance liquid chromatography, gas chromatography, and mass spectrometry. TLC is a convenient system that allows simple and easy determination and quantification of lipid species. The equipment required for TLC is quite inexpensive and the experimental procedure can be quickly established in most laboratory environments. The advantages of TLC and different applications have been reviewed earlier [14,15]. Lipid extraction using isopropanol/hexane was first used by Hara and Radin [11] as an alternative to chloroform/methanol extraction and extracts a high percentage of the lipids with low protein contamination. Moreover, use of plastic materials is possible with this extraction procedure. The possibility to use alternative TLC buffer systems allows the user to specifically separate the lipid species of interest [16,17]. Finally, the use of a single mobile phase permits simultaneous comparisons of multiple samples. Hence, this method provides an easy and rapid evaluation of how exogenous compounds influence lipid metabolism.
In developing the current TLC protocol, radioactive palmitate was used as a convenient, inexpensive, and available physiological substrate. Due to the great diversity in fatty acid structure, with varying chain length or degree of saturation, different substrates (e.g. oleate, myristate, laurate) can assume distinct fates. Therefore, different radiolabeled fatty acids can be used in the same TLC system in order to examine the effects of exogenous compounds on lipid abundance. In conclusion, this method allows for the easy, fast and efficient detection of changes in lipid metabolism in both cultured and intact skeletal muscle. Moreover, this method can readily be extrapolated to other cell types and tissues such as brain, heart, liver or smooth muscle.
A new efficient method of adaptive filter using the Galois field arithmetic
This paper proposes an efficient implementation of the Multi-Delay Filter Block Recursive Least Squares (MDF-BRLS) algorithm. This implementation uses a particular transform that is defined on a finite ring of integers with arithmetic carried out modulo Fermat numbers. In terms of performance, this Fermat Number Transform (FNT) is ideally suited to digital computation, requiring approximately N·log2(N) additions, subtractions and bit shifts, but no multiplications, unlike the fast Fourier transform (FFT). FNT implementation results confirm that the MDF-BRLS adaptive filter can be implemented with a lower computational complexity than an implementation using the Fourier transform.
Introduction
Adaptive filters are widely used in the area of signal processing, for applications such as echo cancellation, noise cancellation, and channel equalization for communications and networking systems. The need for adaptive filter implementations is growing in many fields. An adaptive filter is a filter that self-adjusts its transfer function according to an optimizing algorithm [1,2]. Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing and adapt their performance based on the input signal. Adaptive filters are time-varying since their parameters are continually changing in order to meet a performance requirement [3,4]. The implementation process requires various performance characteristics, such as good convergence, low execution time and reduced computational complexity. However, it is difficult to satisfy these characteristics simultaneously, so efficient algorithms and efficient architectures are desired.
The design of the adaptive filtering algorithm is an important part of the design of an adaptive filter. The fast Multi-Delay Filter Block Recursive Least Squares (MDF-BRLS) algorithm [5] is an efficient adaptive algorithm, since the filtering and adaptation are carried out in the frequency domain without overlap, using the smallest block size with the FFT method [6][7][8].
The linear convolution used by the MDF-BRLS algorithm is one of the most important digital signal processing operations. It can be implemented more efficiently with the Fermat number transform (FNT) than with the fast Fourier transform (FFT), since the FNT possesses the cyclic convolution property and low computational complexity. The expensive multiplications in the FFT and its inverse (IFFT) can be replaced by bit shifts in the FNT and its inverse (IFNT), with the transform kernel 2 or an integer power of 2 [9,10]. The principal objective of our study is to reduce the computational complexity of the MDF-BRLS algorithm. To this end, we examined the mathematical bases of the Number Theoretic Transform (NTT), and in particular the Fermat Number Transform (FNT), compared with the Fast Fourier Transform (FFT), which eliminates many of the multiplications that are necessary for operations such as convolution products.
The rest of the paper is organized as follows. The second section presents the MDF-BRLS algorithm. The third section presents the number theoretic transform and the implementation technique of the MDF-BRLS algorithm, in particular the calculation of the circular convolution via a particular transform called the Extended-FNT. Simulation results and a computational complexity estimation of the MDF-BRLS algorithm using the Extended-FNT are then presented in the fourth section and compared with those obtained using the FFT.
Block recursive least squares (BRLS)
The weight vector ŵ_k of the BRLS algorithm is given in [5]. The different parameters and vectors are defined as follows: M represents the lowest power of 2 larger than or equal to N+L−1, k_k represents the Kalman gain, and * denotes the linear convolution.
Multi-Delay Filter BRLS (MDF-BRLS)
The key idea of the MDF-BRLS algorithm [8], in contrast to the preceding MDF algorithms [6,7], is the decomposition of the two vectors ŵ_k and x_k without overlap. The novelty of our choice lies in the problem posed by the matrix multiplication between the two sets of coefficients and in how it can be solved. In fact, a computational problem is presented by the matrix of dimensions M × M that appears whenever the filter coefficients are updated through the MDF-BRLS algorithm. There is essentially an N−1 sample difference between the two sequences, so the FFT of the two sequences gives a result that differs from the one attained by the BRLS algorithm. Therefore, the MDF-BRLS algorithm decomposes this matrix into K' blocks of size L'. Likewise, the inverse correlation matrix P_k^{−1} is decomposed into K' temporary matrices, from which the scalar q_k is calculated.
The scalar and matrix quantities obtained in this way are related by two update equations. Based on these equations, the updated weight equation of the L'-point MDF-BRLS algorithm is obtained; after the overlapping terms are excluded, the weight vector ŵ_k of the MDF-BRLS algorithm takes its concluding form. On the other hand, to update the inverse correlation matrix P_k, a decomposition similar to that of the previous equations must be respected, from which an auxiliary vector is defined in terms of the matrix P_k.
The MDF-BRLS algorithm processes the adaptive filter using the smallest block size. Consequently, it reduces, as much as possible, the computational complexity and the execution time of this algorithm.
The Fermat number transform (FNT)
In an FNT over Z_{F_t}, where F_t = 2^{2^t} + 1 is a Fermat number, all arithmetic is carried out modulo F_t; the values of M and of the kernel α associated with an FNT are given in Table 1. The extended FNT (EFNT) and its inverse are defined by the following transform pair (equations 24 and 25):

X(k) = Σ_{n=0}^{M−1} x(n) α^{nk} mod F_t, k = 0, 1, …, M−1 (24)

x(n) = M^{−1} Σ_{k=0}^{M−1} X(k) α^{−nk} mod F_t, n = 0, 1, …, M−1 (25)

where α is the transform kernel; the extended transform uses α = √2 ≡ 2^{b/4}(2^{b/2} − 1) mod F_t (with b = 2^t), which doubles the achievable transform length compared with α = 2.
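A minimal Python sketch of this transform pair may help fix ideas; it uses the prime Fermat number F_4 = 2^16 + 1 with the basic kernel α = 2 (multiplicative order 32, hence M = 32) rather than the extended √2 kernel, and verifies the cyclic-convolution property against a direct computation:

```python
# Minimal FNT sketch over F_4 = 2^16 + 1 (a prime); not the paper's EFNT code.
F = 2**16 + 1          # Fermat number F_4; all arithmetic is mod F
alpha = 2              # transform kernel; 2 has multiplicative order 32 mod F_4
M = 32                 # transform length supported by kernel 2

def fnt(x, root):
    """Naive O(M^2) transform: X[k] = sum_n x[n] * root^(n*k) mod F."""
    return [sum(v * pow(root, n * k, F) for n, v in enumerate(x)) % F
            for k in range(M)]

def ifnt(X):
    """Inverse: x[n] = M^(-1) * sum_k X[k] * alpha^(-n*k) mod F."""
    m_inv = pow(M, F - 2, F)        # modular inverse via Fermat's little theorem
    a_inv = pow(alpha, F - 2, F)    # alpha^(-1) mod F
    return [m_inv * s % F for s in fnt(X, a_inv)]

def cyclic_convolution(x, h):
    """Length-M cyclic convolution via the FNT convolution property."""
    return ifnt([(a * b) % F for a, b in zip(fnt(x, alpha), fnt(h, alpha))])

# Sanity check against direct cyclic convolution (inputs small, no wraparound).
x = [1, 2, 3, 4] + [0] * (M - 4)
h = [5, 6, 7] + [0] * (M - 3)
direct = [sum(x[m] * h[(n - m) % M] for m in range(M)) % F for n in range(M)]
assert cyclic_convolution(x, h) == direct
print("FNT cyclic convolution verified")
```

Because the kernel is a power of 2, the powers pow(root, n*k, F) reduce to bit shifts followed by a modular fold in a fixed-point implementation, which is the source of the multiplication-free property discussed above.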
Simulation results
The performance of the MDF-BRLS algorithm implemented with the FFT and Extended-FNT transforms is evaluated by computer simulation using Matlab. In these simulations, the MDF-BRLS algorithm is considered in a single-talk situation, with the parameters listed in Table 2.
The size L of the impulse response is 128, which corresponds to a delay of 16 ms at a sampling rate of 8 kHz. In the single-talk state, the acoustic echo cancellation system should reduce the echo by around 24 dB for delays of less than 25 ms and by around 40 dB for delays beyond 25 ms.
Based on the equation of the weight error convergence, the performance of the MDF-BRLS algorithm is simulated and presented in Figure 2. The result shows a perfect reconstruction of the echo path by the MDF-BRLS algorithm (convergence is around −78 dB) using both the FFT and the Extended-FNT transforms. The figure also shows that the two transforms do not reveal any significant difference in terms of convergence.
The impulse response of the echo path calculated by both transforms, the FFT and the Extended-FNT, is shown in Figure 3.
This result, obtained with the MDF-BRLS algorithm analyzed with both transforms, the FFT and the Extended-FNT, reveals no difference in terms of reconstruction of the echo path (the error difference is around 10⁻¹⁶ dB). Furthermore, the result shows a perfect reconstruction of the echo path, which means that the residual echo is inaudible at the output of the echo cancellation system using either method of implementation.
Based on the obtained values, shown in Table 3, the differences between the two methods of implementation reside mainly in the computational complexity of the filter.
Although the FFT and the Extended-FNT fundamentally share the same implementation structure, so that our EFNT-based block adaptive filtering applies the same convolution steps as the FFT procedure, one difference that distinguishes the Extended-FNT computation from other techniques arises from the use of finite arithmetic modulo the Fermat number.
Effect of online game policy on smartphone game play time, addiction, and emotion in rural adolescents of China
Background Smartphone game addiction has emerged as a major public health problem in China and worldwide. In November 2019 and August 2021, the National Press and Publication Administration of China implemented two increasingly strict policies as a means of preventing smartphone game addiction in adolescents aged 18 or below. This study aimed to analyze the effect of the policies on smartphone game play time, addiction, and emotion among rural adolescents in China. Methods We sent the questionnaire to rural adolescents through the online survey tool Questionnaire Star, a professional online survey evaluation platform. The questionnaire included demographic data, a smartphone use survey, and smartphone game addiction and emotion evaluation scales. The Smartphone Addiction Scale-Short Version (SAS-SV) measured adolescents' smartphone game addiction. The Short Version of the UPPS-P Impulsive Behavior Scale (SUPPS-P) and the Social Anxiety Scale for Children (SASC) measured emotion. According to the SAS-SV score, the enrolled rural adolescents were divided into an addiction group and a non-addiction group. The t-test, Chi-square test, and repeated measures ANOVA assessed the effect of the policies on adolescents' smartphone game addiction and emotion. Results Among the 459 enrolled rural adolescents with a mean age of 14.36 ± 1.37 years, 151 (32.90%) were in the addiction group and 308 (67.10%) were in the non-addiction group. Adolescents in the addiction group were older, more often male, and in higher grades. There were time and group effects between the two groups in play time. After a year of policy implementation, the weekly game time dropped from 3.52 ± 1.89 h to 2.63 ± 1.93 h in the addiction group and from 2.71 ± 1.75 h to 2.36 ± 1.73 h in the non-addiction group. There were also time and group effects in SAS-SV and SASC scores, but not in the SUPPS-P score. In the addiction group, the SAS-SV score dropped from 41.44 ± 7.45 to 29.58 ± 12.43, which was below the cut-off value for addiction, and the level of social anxiety was consistently higher than in the non-addiction group. Conclusions The play time of rural adolescents spent on smartphone games decreased significantly due to the restriction of the policies rather than a lack of addiction or social anxiety. The policies had practically significant effects in reducing smartphone game play time for rural adolescents in China.
Background
With the rapid development of technology, the Internet has gradually penetrated all aspects of people's work and life. According to China Internet Network Information Center (CNNIC) data, as of June 2022, the number of Chinese netizens was 1.051 billion, with 10-19-year-old netizens accounting for 13.5%, and the average weekly Internet time of netizens was 29.5 h [1]. The Internet provides great convenience for learning and leisure, but smartphone overuse or gaming disorder has significant consequences for adolescents. Prolonged use of electronic devices has been found to be related to physical discomfort, including eye discomfort, musculoskeletal discomfort (wrist, neck, shoulder and back), obesity, sleep deprivation and insufficient physical activity [2][3][4][5][6]. Gaming disorder may lead to several negative mental health problems such as depression, social anxiety, stress, suicide ideation and substance abuse [2,6]. In addition, studies have reported that students with gaming disorder had poorer academic achievement. This may be explained by poor time management resulting in most of the time being spent on games and by poor sleep quality leading to a lack of in-class concentration [7][8][9].
At present, whether smartphone addiction meets the criteria for addiction is controversial [10], while the International Classification of Diseases (11th Revision) formally defined gaming disorder as an addictive behavior in May 2019 [11]. The problem of adolescents addicted to online games has emerged as a serious public health problem. However, the etiology of gaming disorder is not fully understood. Some researchers have suggested that the incidence of addictive behaviors in adolescents increases due to the immaturity of cognition and brain development [12]. Multiple protective and risk factors have been considered to be associated with gaming disorder [13]. Self-control, a positive parent-adolescent relationship, and high levels of school connectedness are protective factors [13,14]. Impulsivity, maladaptive cognitions and motivations, hostility, deviant peer affiliation, family conflicts, and school bullying are positively correlated with gaming disorder [2,[13][14][15].
The prevalence of gaming disorder varies widely due to the lack of a standard definition and heterogeneity in demographics and research methodology. It was reported that the overall prevalence of gaming disorder was 3.3% in general populations [16], while in adolescents it was 4.6% [17], indicating that the prevalence of gaming disorder among adolescents is higher. Gaming disorder is more prevalent in Asian countries; a meta-analysis conducted in 2022 calculated the prevalence of gaming addiction in East Asia as 12% [18]. A survey of participants from 34 provinces in China showed that the prevalence of gaming disorder among adolescents was 17.0% [2], and the prevalence was higher among adolescent males than females (19.2% versus 7.8%) [19]. It was reported that there was a gap between urban and rural Internet use among Chinese minors [20,21]. Compared with urban areas, the proportion of mobile phone dependence was higher among rural students, while the proportion of Internet time restricted by parents was lower. The reported prevalence of Internet addiction among left-behind children was higher than that of non-left-behind children due to the lack of parent-child communication and parental supervision [22].
In response, governments around the world have taken regulatory measures to reduce the time children and adolescents spend on video games, and those measures depend on the values and policy goals of the various governmental departments [23][24][25]. In the Western world, video game-related regulations are mainly limited to rating systems evaluating content and age-appropriateness, such as the Pan European Game Information (PEGI) rating system used in Europe and the Entertainment Software Rating Board (ESRB) used in North America [23]. Compared with the policies implemented by Western countries, Asian countries have clear regulations aimed mainly at adolescents [23,24]. Policies limiting the availability of games include the shutdown systems implemented by Thailand, Vietnam, South Korea and China, the Selective Shutdown Policy in South Korea, the anti-online game addiction system in China, etc. More specifically, for instance, the 'Juvenile Protection Act', also known as the Cinderella Law, prohibited individuals under the age of 16 from playing games between 12 midnight and 6 am in South Korea in 2011; it was formally abolished in 2021 because it was outdated and basically ineffective [25,26].
In mainland China, governmental regulation of play time is consistent. Drawing on international experience and based on China's national conditions, the Chinese government has adopted a series of policies to prevent minors from becoming addicted to online games. The National Press and Publication Administration of China (NPPA), one of the agencies directly under the State Council, is in charge of the administration of press, publication and copyright throughout the country. Since 2005, NPPA has organized and formulated the Development Standards and the Real-Name Authentication Scheme of the Online Game Anti-Addiction System [27]. In November 2019, NPPA issued a notice to prevent adolescents aged 18 or below from becoming addicted to online games. The policy emphasized the strict implementation of real-name registration and logins, and online game companies could provide no more than one and a half hours of service to minors on ordinary days, with the limit set at no more than three hours on official holidays [28]. The Law of the People's Republic of China on the Protection of Minors (2020 Revision) put forward protection measures against minors' Internet addiction; for example, online game service providers shall, in accordance with the relevant regulations and standards of the State, classify game products, make age-appropriate warnings, and take technical measures to prevent minors from accessing inappropriate games or game functions (Article 75, paragraph 3) [29]. In December 2020, the Online Game Age-Appropriate Tip officially entered the trial stage and provided three different age marks: green 8+, blue 12+ and yellow 16+. Different game systems, play times, game payments and other operations are implemented for the different age levels [30]. In August 2021, NPPA issued stricter regulations to prevent gaming addiction. The new regulations required online game companies to allow minors to play only from 8 pm to 9 pm on Fridays, weekends, and official holidays. Press and publication administrations at all levels shall strengthen supervision and deal with companies that fail to put measures in place [31].
There are a large number of left-behind children in rural China, and their guardians cannot effectively restrict their use of smartphones, which aggravates smartphone addiction. At present, smartphone management is a common problem. There are few studies on smartphone game addiction among children in rural areas, and it is imperative to know whether anti-addiction policies reduce these children's smartphone use. The main objective of this study was to explore the relationship between anti-addiction policies and smartphone gameplay time, addiction, and emotion among rural adolescents in China.
Study participants and data collection
A questionnaire was distributed to rural adolescents through the online survey tool Questionnaire Star starting in September 2021 to collect relevant data, and follow-up visits were conducted in March and September 2022, respectively. The questionnaire includes demographic information, smartphone use characteristics, and scale assessments. Informed consent was obtained from the legal guardians of the minors involved in the study. Adolescents aged 10-18 who had played smartphone games once or more were included in the study. Exclusion criteria were never having played smartphone games, being under 10 or over 18 years old, and unwillingness to participate in the study. This study was approved by the ethical review board of Chaohu Hospital of Anhui Medical University and conformed to the principles embodied in the Declaration of Helsinki.
Assessment instruments
Smartphone addiction scale-short version (SAS-SV)
The SAS-SV is a short version that contains only 10 items rated on a 6-point Likert scale (1: ''strongly disagree'' to 6: ''strongly agree'') to evaluate smartphone addiction by self-report. Compared with the SAS, the SAS-SV provides a cut-off value to evaluate the level of addiction and treatment effect, making it an appropriate tool for evaluating smartphone addiction in adolescents. The Cronbach's alpha of the SAS-SV is 0.91 [32].
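As a concrete illustration of the scoring just described, the following minimal sketch in Python sums the 10 Likert items and applies the sex-specific cut-off values used later in this study (over 31 points for males, over 33 for females); the function names and data layout are illustrative assumptions, not part of the published instrument.

def sas_sv_total(responses):
    # Sum the 10 SAS-SV items, each rated 1 ("strongly disagree") to 6 ("strongly agree").
    assert len(responses) == 10 and all(1 <= r <= 6 for r in responses)
    return sum(responses)  # totals range from 10 to 60

def is_addicted(total, sex):
    # Sex-specific cut-offs as used in this study: 31 for males, 33 for females.
    cutoff = 31 if sex == "male" else 33
    return total > cutoff

For example, is_addicted(sas_sv_total([4, 3, 5, 2, 4, 3, 4, 5, 2, 3]), "male") returns True, since the total of 35 exceeds the male cut-off of 31.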
Short version of UPPS-P impulsive behavior scale (SUPPS-P)
The SUPPS-P consists of 20 items assessing five distinct facets of impulsivity, with 4 questions in each dimension: negative urgency (α = 0.78), lack of premeditation (α = 0.85), lack of perseverance (α = 0.79), sensation seeking (α = 0.74), and positive urgency (α = 0.78). A 4-point Likert scale is used for scoring, and some items are coded in reverse, with higher total scores indicating higher impulsivity. The SUPPS-P is considered a valid and reliable alternative to the original UPPS-P [33].
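The reverse coding mentioned above follows the usual Likert convention: on a k-point scale, a reverse-keyed raw response x is recoded as (k + 1) - x, so on the 4-point SUPPS-P scale a raw 1 becomes 4 and a raw 4 becomes 1. A minimal sketch in Python is shown below; the set of reversed item indices is a placeholder, since the scale manual, not this paper, specifies which items are reverse-keyed.

REVERSED_ITEMS = {2, 5, 11}  # hypothetical indices; consult the SUPPS-P manual

def recode(item_index, raw, scale_points=4):
    # Return (k + 1) - x for reverse-keyed items, x unchanged otherwise.
    return (scale_points + 1) - raw if item_index in REVERSED_ITEMS else raw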
Social anxiety scale for children (SASC)
The SASC is a 10-item self-report measure with two factors. Factor 1 is labeled fear of negative evaluation (FNE), with 6 items, and factor 2 is labeled social avoidance and distress (SAD), with 4 items. Anxiety is significantly correlated with both the FNE and SAD factors. Scores range from never true (0) to sometimes true (1) to always true (2); for several items, lower scores indicate higher levels of anxiety. The standardized alpha reliability coefficient of the SASC is 0.76, and its test-retest reliability is 0.67 [34].
Statistical analysis
The mean and standard deviation (SD) for quantitative variables and percentages for categorical variables were used to describe the characteristics of participants. The t-test, Chi-square test, and repeated-measures ANOVA were used to compare differences between the two groups and to assess the effect of the policies on adolescents' smartphone game addiction and emotions. All statistical analyses were performed using SPSS software (version 16.0), and statistical significance was set at a two-sided p < 0.05.
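For readers without SPSS, the sketch below reproduces the same family of tests in Python; the file name and column names (group, sex, age, playtime_t0, and so on) are assumptions about how the survey data might be laid out, not the study's actual variable names.

import pandas as pd
from scipy.stats import ttest_ind, chi2_contingency
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("survey.csv")  # hypothetical file with one row per participant

# Between-group comparison of a continuous variable (e.g., age).
t, p_t = ttest_ind(df.loc[df.group == "addiction", "age"],
                   df.loc[df.group == "non_addiction", "age"])

# Association between group membership and a categorical variable.
chi2, p_chi, dof, expected = chi2_contingency(pd.crosstab(df.group, df.sex))

# Repeated-measures ANOVA on play time across the three survey waves,
# after reshaping to long format (one row per participant per wave).
long_df = df.melt(id_vars=["id", "group"],
                  value_vars=["playtime_t0", "playtime_t1", "playtime_t2"],
                  var_name="wave", value_name="playtime")
print(AnovaRM(long_df, depvar="playtime", subject="id", within=["wave"]).fit())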
Demographic characteristics of all enrolled rural adolescents
A total of 459 adolescents completed the study, with an average age of 14.36 ± 1.37. Participants were divided into addiction and non-addiction groups based on SAS-SV scores (over 31 points for males, over 33 points for females). There were 151 (32.9%) participants in the addiction group, with an average age of 14.58 ± 1.26 (minimum 12, maximum 17), and 308 (67.1%) in the non-addiction group, with an average age of 14.26 ± 1.40 (minimum 11, maximum 18). There were statistical differences in gender and grade distribution between the two groups. Most adolescents came from two-parent families (86.27%), and the proportion of only children was low (20.92%). No significant difference was found in who they lived with or in their parents' educational background between the two groups. The main characteristics of participants are presented in Table 1.
Comparison of smartphone use habits between the two groups
The survey found that 54.90% of rural teenagers had their own mobile phones. There were no significant differences in smartphone use habits between the two groups. They often used mobile phones to watch videos, make calls, study, play games, and so on. 87.15% of the adolescents had played mobile games, and more than half of them played mobile games for the first time in grades 3-6 of primary school. Table 2 shows the specific smartphone usage habits of the two groups of participants.
Effects of the policies on playtime and emotion for the two groups
Table 3 summarizes the changes in smartphone use time and mood of the two groups before and after the implementation of the policies. The total number of participants who completed all assessments was 459. There were time and group effects between the two groups in play time. A year ago, the addiction group spent more time on smartphone games than the non-addiction group. With the implementation of the policy, the time spent on smartphone games decreased significantly in both groups, and there is now no significant difference in play time between the two groups. There were also time and group effects between the two groups in the SAS-SV and SASC scores. The SAS-SV scores of the addiction group were significantly higher than those of the non-addiction group, and the scores of the addiction group decreased significantly after the policy was implemented. The SASC score of the addiction group was always higher than that of the non-addiction group, and only the decrease in the non-addiction group was significant. Based on the SUPPS-P, no statistical difference in impulsivity was found between the two groups.
Discussion
The time rural adolescents spent on smartphone games decreased significantly after the implementation of the policies. It was reported that the prevalence of gaming disorder among adolescents increased during the COVID-19 pandemic [35,36]. The data showed that the prevalence of minors playing mobile games in China was 53.2% in 2021, down 3.2 percentage points from 2020 [21]. In 2022, the weekly gaming time of minors was further reduced after the policies were implemented in China: 75.49% of minors played less than 3 h per week in 2022, compared with 67.76% in 2021 [37]. The participants' mean SAS-SV score was below the cut-off value for addiction after the new regulations had been implemented for one year.
It is therefore reasonable to conclude that the policies have had practically significant effects in reducing smartphone gameplay time for rural adolescents in China.
The results showed that adolescents in the addiction group were older, more often male, and in higher grades than those in the non-addiction group, which is consistent with previous research [6,38]. Boys spent the most time on gaming, while girls were more likely to engage with social media [39]. One study found that excessive smart-device use among adolescents was more prevalent for leisure than for learning [3], and minors tend to devote their time to short videos after gaming is restricted [37]. Maladaptive cognitions, psychological features, and relevant brain areas have been found to be associated with sexual dimorphism and gaming disorder [19,40]. Such sex differences should be noted in future studies. Some external factors are associated with a higher risk of gaming disorder, including single-parent families, low socio-economic status, low maternal education level, poor family relationships, excessive use of video games by parents, and physical or verbal abuse exerted by parents [41,42]. Our study did not find any correlation between family background and gaming disorder, which may be related to the fact that the sample came from rural children in the same area.
However, the results showed that 45.03% of the adolescents in the addiction group did not have their own smartphone. We know that a proportion of adolescents use family members' mobile phones or borrow others' phones to play games. Grandparents, out of coddling their grandchildren, may give mobile phones to children without restraint. In addition, adolescents may increase their smartphone game time through circumvention methods, including registering with the real name on a parent's ID card, fraudulently obtaining facial recognition from family members, using the identity information of other relatives, friends, or other adults, and borrowing or buying game accounts to bypass the supervision of the anti-addiction system. At present, accurate identification of minors online is a challenging task. High impulsivity and low self-control are key risk factors for gaming disorder, and adolescents with high impulsivity may exhibit heightened spontaneous responses to behavioral cues to games, especially among male adolescents [15,38]. The current study did not find a statistical difference in impulsivity between the two groups; one reason may be that the sample came from a non-clinical population. Social anxiety is reportedly associated with behavioral addictions [43,44]. Individuals with gaming disorder tend to have less face-to-face interaction because they spend most of their time playing games. Studies showed that social anxiety was lower when interacting online than offline, that socially anxious gamers believe online communication can avoid the distress of face-to-face social interactions, and that negative metacognitions about online gaming played a mediating role in the relationship between social anxiety and gaming disorder [44,45]. Social anxiety would elevate the gratification of Internet gaming, which has also been suggested to result in gaming disorder [46].
The prevention of adolescents' addiction to online games requires the cooperation of the government, schools, families, and enterprises. China is one of the few countries in the world with national measures to prevent the public health hazards of online games. In practice, genuine implementation is conducive to the prevention and treatment of gaming disorder. Internet usage reportedly declined only in the first two years after the Cinderella Law was implemented in South Korea [47], suggesting the importance of translating policies into action and keeping up with the times. In the future, the research and development of protection procedures should be strengthened, and anti-addiction systems should be continuously upgraded to prevent the bypassing of supervision. Game companies and platforms should continue to improve their social responsibility, strictly implement anti-addiction policies, and provide high-quality and healthy game products. Schools should strengthen publicity and education, carry out various activities, strictly standardize terminal equipment management, promote parents' performance of monitoring responsibilities, and ensure that primary and secondary school students grow up healthily in a good network environment. Parents should establish a harmonious parent-child relationship with their children and increase their companionship. Additionally, parents should play an exemplary role, guide their children to use the network correctly, and limit play time reasonably, because one study showed that restrictive parental mediation produced a boomerang effect, increasing child-parent conflict and possibly exacerbating addictive use [48].
The study has several limitations. First, the generalization of the results to all adolescents may be limited because our study was conducted among rural adolescents; more research based on large samples integrating urban and rural populations should be conducted to verify these figures. Second, participants' smartphone use time was based on self-reporting without standard records; time-management software would allow a more accurate understanding of smartphone usage. Third, the effect of anti-addiction policies is presented indirectly because it is affected by multiple potentially confounding factors. However, it is reasonable to interpret the effectiveness of the policies in the context of domestic longitudinal survey reports on Internet use among minors. Taking the Cinderella Law in South Korea as an example, further research should be conducted on whether such policies are directly effective and effective in the long term. Additionally, this study only explored the effectiveness of the policies and did not further explore the reasons for the high prevalence of gaming disorder among rural adolescents, which is very important for preventing and reducing excessive gaming. Last, common Internet-based addictive behaviors include Internet addiction, online gaming disorder, online gambling disorder, pornography use, and smartphone use disorder; research targeting a specific behavior may be conducive to understanding behavioral addiction.
Conclusions
After the policies firmly preventing minors from becoming addicted to online games were implemented, smartphone gameplay time decreased significantly among rural adolescents in China. Given the high prevalence and adverse effects of gaming disorder, and for the healthy growth of adolescents, the anti-addiction system still needs to be upgraded, and parents should strengthen their supervision of adolescents' mobile phone use. Adolescents with gaming disorder commonly have emotional problems, so healthcare providers should offer psychological intervention. Given the lack of evidence-based clinical interventions, prevention-oriented and comprehensive intervention are important principles in dealing with gaming disorder.
Table 1. Demographic data of all enrolled rural adolescents
Table 2. Comparison of smartphone use habits between the two groups
Table 3. Play time and emotion evaluation of all participants. Abbreviations: SAS-SV, Smartphone addiction scale-short version; SUPPS-P, Short version of UPPS-P impulsive behavior scale; SASC, Social anxiety scale for children. Compared with the addiction group, **
Challenges in conserving ethnic culture in urban spaces: Case of Ako Dhong village (Vietnam)
Abstract Most studies of ethnic heritage in Vietnam have paid much attention to those who live in remote and mountainous areas. Little attention has been given to how indigenous people in the cities have been conserving their cultural heritage. Based on a long ethnography at Ako Dhong village, an Ede community located in Buon Ma Thuot city in the Central Highlands of Vietnam, conducted from 2010 to 2020, and utilising the two conceptual frameworks of political ecology and authorities versus minorities, this article seeks to analyse the complicated challenges facing the community. Unlike previous investigations focusing on external influences such as state policies, migration, or religions like Protestantism and Catholicism, this research provides a multifaceted picture of the internal and external factors affecting the community's cultural heritage. We argue that local crony capitalism, represented by collusion between the state and businesses, the state's top-down cultural approach, and community rifts threaten the community's efforts to protect cultural heritage.
PUBLIC INTEREST STATEMENT
Based on a long ethnography at Ako Dhong village, an Ede community located in Buon Ma Thuot city in the Central Highlands of Vietnam, and the use of political ecology and the authorities-versus-minorities framework, this study uncovers some of the challenges facing Ako Dhong's urban ethnic group in conserving their cultural heritage. Empirical data were gathered through in-depth interviews, participant observations, group discussions, and document reviews. The research findings reveal that the heritage preservation achievements of local people have been undermined by external factors such as the "handshake" between local government and businesses for forest encroachment, the lack of efficient planning measures, and top-down policy implementation, as well as by internal community conflicts. This paper suggests that the Buon Ma Thuot city government needs to change its viewpoint on heritage values and implement measures to protect the community's cultural heritage.
Introduction
In the late spring of 2010, we started a customary law project funded by the United Nations Development Program (UNDP) at Ako Dhong village, Buon Ma Thuot city, in the Central Highlands region of Vietnam. This region is a plateau bordering the lower parts of Laos and the northeast of Cambodia, so it plays a strategically important military, economic, and political role in Vietnam. In the late 1950s, Ede or Rade people reclaimed abandoned land to form Ako Dhong village, followed by the arrival of Kinh people. Whilst the Kinh are the largest ethnic group at just over 80% of the total population, the Ede group comprises less than 1% of the Vietnamese population. In the Central Highlands of Vietnam, the Ede are the second largest indigenous group behind the Jarai people, with approximately 400,000 people (General Statistics Office, 2019). The culture of the Ede people is primarily characterised by the Ede language, part of the Chamic languages, and a matriarchal society (Dang, 2019a). While walking through the village, we were taken by surprise by the local scenery and landscape. Many beautiful gardens have nearly identical balanced, square shapes, fenced with green plants, with spacious yards and flowers. Many stilt houses stand out since they are built to symbolise the Ede's matriarchal society. In terms of housing layout, the stilt house is in front of the garden, and the modern one is behind; this layout has been adopted by most of the local households. It is in stark contrast to what is going on in other areas of Buon Ma Thuot city. According to a report released by the People's Committee of Buon Ma Thuot (2010), most Ede families sold their stilt houses and then built concrete ones. Additionally, Ako Dhong has a wide and wet valley in which the hustle and bustle of city life gives way to the sound of flowing streams, birdsong, and the shade of old forest trees. This is impressive since most of the villages in the Central Highlands of Vietnam have had no forest due to the Vietnamese nationalisation of forest land since 1975 (Dang, 2019b). Worse, this region is also one of the deforestation hotspots in Vietnam, prompting the Vietnamese Prime Minister to order the closure of natural forests (Tuan, 2016).
Once the customary law project ended, we travelled back and forth between Hanoi and Buon Ma Thuot during 2011-2020 to understand how Ako Dhong and two other Ede villages changed under the impact of substantial urbanisation (Dang, 2019a). Notably, these field trips allowed us to observe Ako Dhong's significant changes, such as the popularity of concrete and modern houses, the degradation of the community forest, and the rise of crime, all of which pose serious issues for Ako Dhong's cultural heritage management. As a result, several key questions arise: what challenges does the Ako Dhong community face in protecting its culture, and how do these challenges affect Ako Dhong people's cultural assets?
Studies on the challenges of preserving ethnic cultures worldwide can be divided into three groups. The first group criticises development policies and programs of states and international organisations, since they have negative impacts on local communities' socio-cultural life in Southern Hemisphere countries (Biersack et al., 2006; Blaikie et al., 2015; Clarke, 2001; Paulson et al., 2003; Peet et al., 2010; Perreault et al., 2015; Robbins, 2019). The second group considers discourse and development practices as profoundly influencing the lives of third-world ethnic communities (Duncan, 2008; Escobar, 2011; Ferguson, 1994; Pigg, 1992). The third group focuses on changes in mentality and cultural practices in indigenous communities and on religious conversion (Hiebert, 2008; Robbins, 2004; Schiller, 1997; Thong et al., 2023).
Similarly, these three approaches are seen in various studies on Vietnam's ethnic cultures. For the first group, the study "The Development Crisis in the Uplands of Vietnam" conducted by Jamieson and colleagues in 1998 claims that the government's mountainous development programs caused four major challenges for mountainous areas: (1) poverty; (2) population pressure; (3) environmental degradation; and (4) the dependence of ethnic minorities on external systems and the "marginalisation" of the ethnic minorities' economy (Jamieson et al., 1998, p. 15). Other authors argue that under the influence of the state's development policies, the village space of indigenous communities is disrupted and traditional cultural practices are broken (Mai, 2011; Nguyen, 2008; Salemink, 2000), so the locals have had to look to external religions as a new spiritual fulcrum (Salemink, 2003). The reasons stem from the lack of understanding, and the misunderstanding, of mountainous regions (Jamieson et al., 1998; Nguyen, 2008), knowledge based on Kinh people's perspectives (Evans, 2018; Hoang & Pham, 2012), the ignorance of local communities' cultural contexts (Dang, 2014; Le, 2019; Mai, 2011), and the "heritagisation" of culture by the state (Bui & Lee, 2015; Salemink, 2016). In addition, the impact of discourse and development practices on ethnic minorities is examined in studies undertaken by Jamieson et al. (1998), Hoang and Pham (2012), and Hoang (2018). Finally, the link between religious conversion and ethnic cultural change is shown in the studies of Nguyen (2004), Ngo (2015, 2016), and Nguyen and Pham (2013).
In the Vietnamese context, as Peters and Andersen commented, most studies on ethnic cultural preservation focus on communities in remote areas (Peters & Andersen, 2013). A few studies touch upon minorities in urban spaces by highlighting social relations and social networks (Nguyen & Tran, 2021), the transformation of village space (Dang, 2015, 2019b), and social security (Pham, 2010), particularly the challenges to the cultural conservation of H'Mong villages at Sapa in the context of growing tourism and urbanisation (Dao, 2016; Quan, 2022; Srikham, 2019; Tran, 2006). Overall, three subjects, the state, global institutions, and the community, have been substantially studied. Notably, none has considered the link between the state and local enterprises and its impacts on communities' efforts to protect cultural heritage. These studies have analysed external factors but ignored internal reasons such as individualism or community rifts.
Since debates over the preservation of ethnic heritage in urban settings have been scarcely documented, this paper aspires to explore the challenges to preserving the ethnic culture of an indigenous community in Buon Ma Thuot. To achieve this, the conceptual frameworks of political ecology and authorities versus minorities are utilised. The subsequent section introduces the brief history and heritage conservation achievements of the Ako Dhong people, followed by the research method section. Next, the paper presents the main research findings by highlighting the challenges facing Ako Dhong's heritage. The discussion section puts Ako Dhong into a broader context before the study concludes. This article reveals a complicated relationship among actors in heritage management and conservation and explains the difficulty of preserving ethnic cultural heritage in this highland city of Vietnam.
Political ecology
In the 1970s, dissatisfied with cultural ecology, which focused on interactions between humans and the ecological environment at the local level, researchers in anthropology and related disciplines started examining the cultural and environmental change of communities in developing countries within a broader socio-economic and political context. This approach is called political ecology. In Bryant's words, political ecology raises concerns over the interaction of diverse socio-economic and political forces with ecological change (Bryant, 1992, p. 14). Meanwhile, Roberts (2020) suggests that political ecology highlights the role of economic and state forces in the appropriation and disruption of the local environment. For example, through Political Ecology of Soil Erosion in Developing Countries (Blaikie, 2016) and Land Degradation and Society (Blaikie & Brookfield, 1987), the authors argue that land degradation is not a result of poverty, ignorance, overpopulation, or irrational livelihood practices of local people, but of external socio-economic-political pressures which force farmers to exploit the land against their will. In addition, researchers have expressed concerns about the ecological consequences of state development programs and of conflicts between indigenous communities and resource management government agencies within the context of "internal territorialisation" (Rasmussen & Lund, 2018; Vandergeest & Peluso, 1995; Wadley, 2003). Some researchers have also examined the multi-dimensional relationships between the government, the people, and social classes in rural areas in competitions over natural resources. For instance, through The Political Ecology of Forestry in Burma, 1824-1994 (Bryant, 1997) and Rich Forests, Poor People: Resource Control and Resistance in Java (Peluso, 1992), the authors investigated resource competitions between people and colonial and postcolonial states in Burma and Indonesia. Escobar adopted a new approach to political ecology when he considered discourse a factor in shaping knowledge and conflicts related to nature (Peet & Watts, 1996). In the political-cultural context of third-world countries, in Escobar's view, conflicts over resources are not merely competition in the field of production, but also reflect the conflict of symbols and cultural meanings between different subjects (Hoang & Pham, 2012). This framework provides a lens for understanding how the local authority's policy towards community forests has posed barriers to Ako Dhong's efforts to conserve their cultural heritage.
Authorities versus minorities
Minority groups are considered "metaphors and reminders of the betrayal of the classical national project. This betrayal, which was rooted in the failure of the nation-state to preserve its promise to be the guarantor of national sovereignty, underwrites the worldwide impulse to extrude or eliminate minorities" (Appadurai, 2006, p. 43). Regardless of the size of the minority and the degree of cultural difference, the cultural gap between the minority and the majority might cause friction resulting in ethnic violence, even ethnic cleansing. Many countries are composed of various ethnic, racial, and cultural groups (Harrison, 2010). The study conducted by Silverman and Ruggles (2007) contends that conflicts can occur over matters of indigenous land and cultural property rights to manage and conserve the cultural heritage of the minority. Central to this is the question of who defines and takes control of cultural heritage. Therefore, various heritage scholars and researchers have sought to acknowledge the motives behind heritage interventions versus political goals and identity (Logan, 2012). For instance, stewardship over heritage in Tibet is inextricably linked to China's invasion of Tibet after the 1949 revolution. In the case of Indonesia, the growth of cultural heritage for tourism features prominently on the national agenda, prioritising national unity over cultural pluralism (Silverman & Ruggles, 2007). This theory contributes to explaining the opposition between the local government and the people of Ako Dhong over the way ethnic culture is preserved, which is reflected in the inadequacies of the local government's policy towards this community.
Ako Dhong: a typical case study
Ako Dhong is one of the 33 Ede villages in Buon Ma Thuot city. In the Ede language, Ako Dhong means watershed, because the village is located at the upper part of the Ea Nuol stream, the largest stream in Buon Ma Thuot. Two European nuns named Colomban and Boniface set foot in Buon Ma Thuot in 1954. Two years later, they were given a 45-hectare wasteland by the local government for coffee cultivation. In the coffee plantations, Y-Diem Niê (Ama H'Rin) was considered a leader among the indigenous workers because of his hard work and intelligence. In 1966, both nuns relocated from the Buon Ma Thuot Diocese to live with the locals on the plantation. After 1975, the Vietnamese government issued various policies for the Central Highlands provinces, including nationalising forest land resources, collectivising agricultural land, migration, and sedentarisation. Indigenous people therefore lost their ownership of and access to forest resources, and their farms were placed under the management of agricultural cooperatives. There were also waves of migration by Kinh and other ethnic groups into the Central Highlands, disrupting the Ede's traditional farming practices and lifestyle (Dang, 2014). Fortunately, Ako Dhong residents have persistently protected their cultural heritage under the village elders' leadership, particularly that of Ama H'Rin.
In a group discussion with some Ako Dhong villagers, locals were asked to share their thoughts about Ako Dhong's cultural heritage. Two key features were identified from these discussions: typical values and assets handed down by previous generations. Specifically, some crucial elements are considered cultural heritage: land, long stilt houses, and forests. Land, including residential and farm land, supplies people with daily food and a place of residence. It is also bestowed by the ancestors, so it is recognised as "the back of the ancestors". The long stilt house not only provides the locals with a place to live but also serves as a place for religious practices and social activities among Ede people. Finally, the forest has been an indispensable part of Ede culture, since the Ede are believed to be born and to die in the forest. The forest also provides a space for cultivating rice and vegetables, materials for long stilt house construction and firewood, and a venue for agricultural rituals.
Based on the Ako Dhong villagers' awareness as mentioned earlier, various measures have been put in place to protect their cultural heritage. Every household in Ako Dhong village has both residential and farm land. Once the price of property increased significantly in the 2000s, many residents converted farm land into residential land and then sold it at high prices. The money was used to buy farms in rural districts at reasonable prices, to fix or reconstruct stilt houses, and to collect traditional Ede objects such as kpan benches, gongs, and jars. In addition, all Ako Dhong households have both a stilt house and a concrete house. The concrete house, at the back of the garden, is used for modern daily purposes, so the façade is reserved for the stilt house. Crucially, while the number of stilt houses dropped significantly in many Ede villages, the opposite has been seen at Ako Dhong: during the survey, the number of Ako Dhong's stilt houses was 24 (2014) and has since increased to 30. Traditional practices such as can drinking, gong beating, khan telling, and folk songs have been well maintained in various communal events (funerals, weddings, and church services). Besides, Ako Dhong is known to be the only village in the Central Highlands with a community forest, which covers approximately one hectare. Nowadays, the forest is used as a common space for walking, running, and playing by many Ako Dhong residents. It is a space for young people to share and learn traditional gong songs. Also, while he was alive, the late village elder Ama H'Rin came to the forest and encouraged young people to protect it.
Research Method
A qualitative case study approach was employed, beginning with the first field trip to Ako Dhong in 2010 and ending in 2020 (Yin, 2018). We visited Ako Dhong once per year; the longest trip lasted three weeks and the shortest a week. Empirical data were gathered through in-depth interviews, participant observations, group discussions, and document review. We conducted fifty in-depth interviews with various stakeholders, including men and women of various ages and social backgrounds (Figure 1). We first met with some village elders before reaching out to locals and others. We conducted ten in-depth interviews with five officials (Official 1 to Official 5) of Buon Ma Thuot city's People's Committee. These interviewees work for the Department of Culture and Information, the Department of Resource and Environment, the Department of Urban Management, and the Department of Internal Affairs. Through these interviews, we could understand Buon Ma Thuot city's planning and cultural preservation policies and the conflicts between different agencies. Thirty-seven interviews with local people were conducted, including eight with two village elders (Village Elder 1 and Village Elder 2) and twenty-nine with eleven locals (Local 1 to Local 11). The expert was interviewed twice, and one interview was conducted with a local businessman. The names of the informants were changed to ensure their anonymity. In general, these in-depth interviews helped us understand the different perspectives of stakeholders on the achievements and difficulties of cultural heritage management and conservation in Ako Dhong village.
We also participated in two community meetings at the communal house of Ako Dhong to listen to villagers' opinions about whether they should maintain the forest. We learnt about the social cohesion and cultural diversity of Ako Dhong by attending the funeral ceremony for the late village elder Ama H'Rin in 2012. Besides, we held four group discussions to understand the Ako Dhong people's viewpoints about the government's cultural, political, and economic policies. Moreover, various reports for the period 2010-2020 from the People's Committee of Tan Loi ward and the People's Committee of Buon Ma Thuot City were gathered to understand the socioeconomic context of Ako Dhong village, including planning and cultural preservation projects. Additionally, some online materials about other urban ethnic groups in Vietnam, such as Cat Cat (a H'Mong village) in Sapa and B'Nơr C (a K'Ho village) in Lac Duong, were examined so that a comparison between Ako Dhong and those villages could be undertaken.
Collusion between local agencies and businesses
According to Ede customary law, Ako Dhong's forest was owned by its community, so nobody had the right to possess it. Once the policy of nationalising forests was issued in 1975, the government became the owner of the Ako Dhong forest. However, this change was met with dissent from Ako Dhong residents, who considered the forest a sacred property to be maintained and protected for future generations. The local authorities were aware of the locals' attitude, so they acknowledged the Ako Dhong community as the forest custodians. In the 1990s, deforestation took place at a substantial pace throughout the Central Highlands, and the late village elder Ama H'Rin wanted the government to grant forest management rights to the Ako Dhong community. In addition, amendments to the Forestry Law of Vietnam created ideal conditions for Ako Dhong to become the forest protection agent. Article 29 of the 2004 Law on Forest Protection and Development stipulates the assignment of forests to village communities: The conditions for assignment of forests to village communities are prescribed as follows: (1) The village communities have the same customs, practices, and traditions of close community association with forests in their production, life, culture, and belief; can manage forests; have demand and file applications for forest assignment.
(2) The assignment of forests to village communities must be in line with the approved forest protection and development plannings and plans; and match the capacity of the local forest funds.
Village population communities shall be assigned the following forests: (1) Forests which they are managing or using efficiently.
(2) Forests that hold water sources in direct service of the communities or other common communal interests cannot be assigned to organisations, households, or individuals (Vietnamese government, 2004).
The revised Law on Forest Preservation and Development of 2019 retains the regulation mentioned above. During 2004-2019, Ako Dhong villagers submitted multiple applications requesting that the Buon Ma Thuot government recognise them as forest managers. However, the City People's Committee did not respond to the community's demand. Local 2 shared: We have asked the City People's Committee to issue the village a certificate of forest use and management rights, but they only promise. When we ask why it takes so long, they reply that it is in progress. I think the Committee is delaying the procedures to favour businesses using the forest.
What Local 2 said was based on reliable grounds. During 2009-2010, the city government held several meetings to introduce new planning projects, through which the village's cemetery and forest would be converted into a supermarket. This project was integrated as part of "Green Stream", an urban area project running along the Ea Nuol stream invested in by Trung Nguyen Coffee Group. Local 2 remembered: Various leaders of the province and the city made trips to the village to introduce the planning and discuss it with us. However, we rejected it. The late village elder Ama H'Rin held that it was not wrong, legally or ethically, for Ako Dhong to maintain the forest. Fortunately, the supermarket project was cancelled, and Trung Nguyen's "Green Stream" urban area project went ahead without violating the village's forest.
The locals' efforts in forest protection were confronted with further challenges five years later, as the male Local 1 explained: A local official took a male stranger to my house. The stranger, who is said to be a director of a large construction company in Dak Lak, wanted to help Ako Dhong rebuild a village road. I felt happy because the road had been seriously degraded for a long time. However, he mentioned that the road would be fixed on the condition that the community sell the forest to him. I was shocked by his intention of seizing the forest. I asked him: how much would you buy it for? He replied 9 billion dongs. He also suggested that if the money were divided equally, every household could receive a big portion. I was speechless, so the two left.
According to what Local 1 shared, Buon Ma Thuot officials intentionally helped businesses bargain with the locals for the forest; the road was used as a tool of exchange for the forest.
The negotiations between the businessman and Local 1 continued, as Local 1 revealed: He invited me out for a drink. I did not want to go at first. I then agreed as he promised to tell me something new. He increased the price to 11 billion dongs, but I did not change my mind. My decision was supported by all villagers at the communal meeting. However, I suspected that something bad could happen.
To gain a deeper understanding of the businessman, some attempts were made to approach him. With the support of Village Elder 2, the businessman agreed to meet us at a local café. He explained why he tried to own the Ako Dhong forest: Ako Dhong is a beautiful village. I think that the city government should support villagers to conserve their traditional culture and develop tourism. However, maintaining the stilt houses and performing gongs are not enough. The forest should be converted into a commercial centre or a luxurious residential area. In this way, villagers will have some benefits, and so will authorities and businesses. Everybody knows this except the Village Elder. I believe that the authorities will agree with my proposal.
The businessman's opinion was borne out by a subsequent decision released by the local authority. Accordingly, the local authority decided to adjust the current planning of the Ako Dhong Valley: this area will be converted into residential land. Although this plan has not yet been approved by the People's Committee of Dak Lak province, the valley area is expected to become expensive real estate for buyers. Official 1 of the People's Committee of Buon Ma Thuot City commented on the prospect: The community will lose the forest if the Ako Dhong Valley is converted into residential land. Ako Dhong's green space will disappear. My colleague mentions that the provincial departments do not want to participate since it may cause some problems and encounter community resistance. Therefore, they are waiting for the provincial chairman's decision.
Apparently, while refusing to grant forest use and management rights to the community, the Buon Ma Thuot city government supported local businesses in privatising the forest resources at Ako Dhong. At this stage, a question must be asked: what will Ako Dhong culture be like if the community forest disappears? We suppose that deforestation will have two negative impacts. First, it will undermine the local residents' momentum for cultural conservation; people will find it challenging to continue their work once the forest, the local cultural symbol, is lost. Second, deforestation might threaten Ako Dhong's traditional cultural practices. The Expert strengthened this point in an interview: The forest is not only a resource, an "environment", and "ecology", but also a source of spiritual life. People may lose their roots without it. The forest is also the cultural source, given the close relationship between people and the forest. If there is no forest, culture will disappear. Therefore, it can be suggested that the loss of the forest would mark a gradual death of Ako Dhong's traditional cultural values.
Conflicts between various local agencies
To protect the Ako Dhong village's green space, the Buon Ma Thuot Department of Planning and Urban Management started the "Architectural landscape of the Ako Dhong village" project in 2012. This project was undertaken with consultations from the village elder Ama H'Rin and some villagers. Accordingly, the minimum residential plot for households in the village is 8 m wide and 30 m long, and a five-metre-long corridor runs from the fence to the yard for planting trees and flowers. The project therefore provided a legal basis for maintaining Ako Dhong's traditional space and architecture. However, the Department of Resource and Environment Management of Buon Ma Thuot City, which is responsible for land management within the city, did not facilitate the project. On the contrary, this department allowed local households to sell their garden land to outsiders through civil agreements. The Kinh people, particularly local officials, had long been waiting for this opportunity to enter Ako Dhong. The disagreement between these two departments was noted by Official 2: By talking with the village elder Ama H'Rin and the villagers, we believe in the "Architectural landscape of the Ako Dhong village" project. The project is meant to support villagers in turning their traditional living space into a cultural highlight for Buon Ma Thuot. Regrettably, the Department of Resource and Environment Management refused to collaborate with the Department of Planning and Urban Management on this project. They consider what they did to be valid according to the law. We disagree with this, but we have no right to stop them.
The disagreement between the local agencies allowed a large number of Kinh people to come and live at Ako Dhong from 2010 onward.
According to Figure 2, the Ede group outnumbered the Kinh group from 2010 to 2012. From 2014 onward, however, the Kinh group surpassed the Ede group, and by 2020 the Kinh group was more than twice the size of the Ede group. The Kinh people's dominance gradually disrupted Ako Dhong's homogeneity, given the cultural differences involved. These differences were highlighted by Local 3: "Ma Phong bought a garden in Ako Dhong in 2009. He is believed to be the director of a forestry enterprise in Dak Lak province. The late village elder Ama H'Rin encouraged him to build a stilt house to protect the village's traditional landscape, but he refused. He built a Kinh-style villa. We felt uncomfortable, but we could not do anything because he had a right to do what he wanted." In addition to Local 3's opinion, Local 5 described some further impacts: Before 2014, our village was so peaceful that we did not worry about thievery. We did not need to lock our doors during the night. However, once the Kinh people became dominant, everything changed. The village has been noisier, and garbage has become more common. Thievery has been a big issue.
Cultural imposition by the local government
The local authority was responsible for constructing the communal house, village gate, and roads. The interview with Local 1 reveals how the village gate was built: The gate was built without the villagers' participation. The city leader went to the village and saw the gate's decoration. The decoration was so plain that he asked me to add some features like rice pounding and gong beating in Ede culture. Nevertheless, there is not much space on the gate, so how can I do it? I heard that the cost of the gate was 400 million dongs. Had they consulted us, we would have recommended something for the gate; they consulted us only once the gate was completed in 2015. I heard that the Vietnamese government gave each Ede village 1 billion dongs. The local authority built the gate since there was nothing else to spend that sum of money on.
From Local 1's account, some issues can be drawn. First, the construction of the gate did not originate from the community's demand but was a way for local officials to utilise the allocated budget. Second, there was no discussion between villagers and local officials in the making of the gate. The gate, with its high roof and two large columns, is not consistent with local cultural standards. Some decorative elements, like the image of the Lac bird, a popular "totem" of the Kinh people, were used, while symbols representing the Ede cultural tradition, including the water station and the long stilt house, were not used at all. The Kinh-style gate at the entrance of the Ede village is not welcomed by the locals, as Local 8, a highly respected figure in Ako Dhong, commented: If you only look at the gate, many people will think that Ako Dhong is a Kinh village because the gate is like a Kinh one. Why is the Ede village's gate designed in the Kinh people's style? Have you seen anything so unusual?
What happened to Ako Dhong's gate reflects a reality of Vietnamese cultural policy. The government and local authorities often overlook local cultural values, so they tend to force indigenous people to conform to the Kinh people's values. Consequently, Ako Dhong people do not feel respected and no longer trust the government's policies. This is consistent with other studies conducted by Culas (2010), Harrell (2011), Hoang and Pham (2012), and Dang (2014). Recently, households in Ako Dhong refused to participate in Plan 130 of the Buon Ma Thuot city government (2019). Under the plan, the city government would support two households in Ako Dhong; each would receive 350 million VND to revitalise the long stilt house space and then participate in the city's homestay tourism program. Woman 2 in Ako Dhong explained why the villagers were not interested in the program: If we join the homestay program, we have to build a standardised long stilt house, but we prefer to do it in our own way. In addition, tourists will eat and stay with the host, thereby affecting our lives. We might lose our privacy. Also, homestay tourism requires a larger space, not just a stilt house. But we are not allowed to convert agricultural land into residential land, so how can we afford it?
Ako Dhong residents thus receive little support from the local authority, so they instead seek alternative measures to protect their cultural heritage.
Community rift
In Ako Dhong village, the late village elder Ama H'Rin was a spiritual leader. However, some problems in his family impacted the Ako Dhong people's efforts to protect their culture. The male Local 5, who is Ama H'Rin's first son-in-law, engaged in forest encroachment. This activity came to light once a villager uncovered it and informed the community, and the community no longer trusted him. When Local 5 was the Ako Dhong village chief, he turned public land into his family's possession with a cadastral surveyor's help. No one in Ako Dhong knew this.
In 2014, when Ama H'Rin passed away, things were brought to light. Local 2 shared: I was informed that some Kinh people were cutting trees in the forest next to Local 5's house. I immediately asked them: Why are you cutting down our trees? They said, "Are you crazy? A man sold the land to us; these trees are ours. We can cut down trees to build houses." I went to see Local 5 to ask for his land registration, but he replied that he had lost it. I supposed that he had occupied about 8 sào in the village, including part of the forest. Ama H'Rin's family and the community disagreed with his behaviour. Some people wanted to file lawsuits against him, but how can they sue when he had been granted the registration? The above-mentioned behaviour leads to some issues. It encourages businesses to take possession of the community forest despite the legal and ethical barriers. Also, the Ama H'Rin family's social stature dropped substantially, leading to a decline in people's trust in the long-recognised figure. This is consistent with what Local 6 in Ako Dhong shared: The village is like an extended family because the households used to live in the same house before 1975. People love and show respect for one another. But the lawsuits related to Local 5 have changed a lot. They affected the reputation of Ama H'Rin's family. He would be so sad in the afterlife because these things happened in the family upon his death. Also, the village is no longer as united as it used to be. Money has changed people. Money is considered more important than forests, stilt houses, gongs, and even people's dignity. Now it is Local 5, but who knows for sure that there will not be another person.
Discussion
Through the case study of Ako Dhong, this article points out the challenges of ethnic culture conservation in the context of Vietnamese urbanisation. In light of the theoretical foundations of political ecology and the opposition between authorities and minorities, we can see some complicated problems undermining the heritage preservation achievements of the Ako Dhong people. These challenges come not only from outside factors such as the local authority, businesses, and migrants, but also from within the community. The nationalisation of the forest initially paved the way for businesses to possess this resource, thereby leading to problems. First, local leaders' benefits from converting the forest to real estate weigh more heavily with them than maintaining and conserving Ako Dhong's cultural heritage. The motive for policy implementation among local leaders is closely linked to their profit, not to the public good. Thus, it can be said that there is a great lag between the perspectives of local urban leaders in Vietnam and the mainstream perspective in the world today. This is clearly stated by the United Nations Educational, Scientific and Cultural Organization (UNESCO, 2014): "cultural heritage plays an important role in creating a sustainable city as an important non-renewable resource of cities, a catalyst for social cohesion, as an element of identity and creativity, as an economic factor that attracts revenue from tourism and as a factor in mitigating climate change". Secondly, while some studies regard companies as outside agencies that obtain profits from revitalisation by UNESCO and Vietnam's government (Salemink, 2016), this research showcases a different angle: how local businesses dispossessed the community of its heritage with the help of the authority. In other words, the case study of Ako Dhong provides new insight into how local crony capitalism has threatened the community's cultural heritage. The literature has fundamentally studied crony capitalism in developing countries from an economic-political lens (Beresford, 2008; Ngo & Tarko, 2018; Pei, 2016; Vu & Nguyen, 2023). For example, Pei argues that the instrumental alliance between capitalists and politicians is the root cause of widespread corruption in China after the 1990s (Pei, 2016). By showing the influence of local crony capitalism in the field of ethnic cultural heritage, this research contributes to enriching knowledge about crony capitalism in developing countries.
The second challenge is that the Buon Ma Thuot city government lacks proper planning to conserve Ako Dhong's residential space. This has enabled the Kinh to migrate en masse to Ako Dhong in recent decades, turning the Ede into an ethnic minority in their own homeland. In the planning project for Buon Ma Thuot city, Dak Lak province (1998), the city did not take into account the planning of the village cultural space of the Ede community, a factor contributing to local cultural diversity and uniqueness (Dang, 2019b). Ten years later, the planning vision remained unchanged. The government's neglect of Ako Dhong, a cultural highlight of Buon Ma Thuot, is a telling example showing that local leaders have not appreciated the Ede villages' cultural space within the overall urban space of Buon Ma Thuot. Meanwhile, recent research conducted by Thai and Phan (2020) indicates that the cultural heritage resource is one of the vital factors increasing Buon Ma Thuot city's tourism competitiveness. This reveals a paradox: while intending to become the coffee city of the world (Nguyen, 2022), the leaders of Buon Ma Thuot are eager to increase the city's attraction to tourists, yet they neglect the factors that make up its identity.
A third challenge is that the government tends to impose Kinh-based cultural values and standards on indigenous communities. This ethnocentric approach has led to serious land and culture conflicts between indigenous groups and Kinh people in the Central Highlands since 1975 (Dang, 2014; Evans, 2018; Mai, 2011; Nguyen, 2008; Salemink, 1997, 2000, 2003; Vu et al., 2000). For example, some large demonstrations led by Central Highland ethnic groups took place in 2001 and 2004 (Nguyen, 2008; Salemink, 2003). What happened with the village gate's construction reveals that the local authorities have simply maintained their old approach to indigenous issues. More critically, the locals' culture and values are not respected at all. On the flip side, by accessing the Internet, particularly YouTube and Facebook, indigenous people have learnt about their civil rights to conserve their traditional cultures. They have actively made comments, raised questions, and critiqued the government's policies. Thus, the top-down approach of the local government only pushes the indigenous villages closer to the church, where they feel more respected (Dang, 2014; Salemink, 2003, 2016). This is why the government has not gained the community's trust even though it has poured a large amount of money into the Central Highlands villages (Dang, 2019b).
The final challenge is the internal rifts in the community. Previous studies state that a community is homogeneous in terms of spatial unit, social structure, history, culture, interests, and demands (Bui, 2012; Ngo, 2002; Nguyen, 2008; Nguyen & Phan, 2021; Phan, 2009). Heritage conservation is often understood as the interaction between the state and the community (Malarney, 1996). This study indicates that in the context of urbanisation, migration, and conversion, the community is no longer a homogeneous group but has developed into several different groups. Local 5's acquisition of the forest represents a strong rise of individualism within a community sharing common values and interests, despite the family's and the community's disagreement. This issue reflects the external environment's effects on the community and the diverse and complicated nature of the community itself. Therefore, while implementing any community project, such as natural resource or cultural resource conservation, national and local policymakers need a proper viewpoint of the community. This is in line with Agrawal and Gibson's research findings that policymakers should focus on the interests and actors within the community and on the internal and external institutions influencing the decision-making process (Agrawal & Gibson, 1999).
In a larger context, some urban ethnic places in Vietnam, such as Cat Cat village (Sapa) and B' Nơr C village (Lac Duong), have maintained their cultural heritage while growing local tourism. Specifically, Cat Cat village was supported tremendously through the Cat Cat Museum Project (2010) and the Plan for Tourism Development of Lao Cai province to 2030, vision to 2050 (Lao Cai People's Committee, 2022). Visitors therefore have the chance to understand local culture through traditional folk dances, weaving, and archery. Tourism generated income for local households of up to 200 million dong annually (equivalent to 8,600 USD) (Hoang, 2022). As for B' Nơr C village, this place has attracted international and domestic tourists by promoting gong performances and traditional handicrafts, such as embroidery, as souvenirs. As a result, Lac Duong district receives roughly 1.5 million visitors every year (Quynh, 2020). Comparing Ako Dhong with those villages reveals a sharp contrast in cultural heritage management and promotion. Unlike the Ede village, the two villages mentioned above have received tremendous support from their local governments for cultural heritage management and conservation. Meanwhile, the local authorities and businesses in Buon Ma Thuot have posed barriers and even aimed to appropriate the community's heritage for their own sake.
Conclusion
By virtue of increasing urbanisation, more and more indigenous people have become urban dwellers across Vietnam. In this context, exploring the challenges of preserving ethnic culture in urban spaces is crucial, yet few studies have been undertaken. The long ethnographic study at Ako Dhong village reveals the huge challenges facing this Ede community regarding heritage conservation. Through the lens of political ecology, and the theory of antagonism between the government of the majority group and ethnic minority groups, this study explores the external and internal challenges facing local communities in their efforts to conserve cultural heritage. Notably, the "handshake" between local government and business over forest resources, the lack of efficient measures to curb mass migration and preserve indigenous habitats, and the top-down policy approach have been the biggest external challenges. Barriers also arise from internal conflicts, as some individuals have broken community rules to possess forest resources, damaging trust and community cohesion.
The research findings provide new perspectives on the stakeholders involved in preserving ethnic heritage: the community, the local authority, and business. The community is not a homogeneous group but has developed into several groups with different views, interests, and ambitions. Through heritage conservation policies, local authorities showcase their support for private businesses in appropriating the community's assets for their own benefit. This reflects two different views of ethnic cultural heritage. On the one hand, the local government and businesses view cultural heritage as a type of real estate with high commercial value. On the other hand, from the community's perspective, cultural heritage is regarded as values generated by previous generations that need to be well protected. Since cultural heritage plays a vital role in creating a sustainable city, the Buon Ma Thuot city government needs to change its view of the heritage values of Ako Dhong village and issue proper measures to support heritage management and conservation together with the community. Notably, the two large land protests led by indigenous people in the Central Highlands in 2001 and 2004 demonstrated that future protests may go beyond the authorities' control and lead to unexpected outcomes (Dang, 2014; Nguyen, 2008). Thus, the Buon Ma Thuot authorities should not forget this lesson while managing Ako Dhong's forest. A limitation of this study is that the authors did not have the opportunity to interview more representatives of private enterprises. Future research might also compare Ako Dhong with other Ede villages' efforts to preserve their culture.
Improving the extraction of complex regulatory events from scientific text by using ontology-based inference
Background: The extraction of complex events from biomedical text is a challenging task and requires in-depth semantic analysis. Previous approaches associate lexical and syntactic resources with ontologies for the semantic analysis, but fall short in testing the benefits from the use of domain knowledge. Results: We developed a system that deduces implicit events from explicitly expressed events by using inference rules that encode domain knowledge. We evaluated the system with the inference module on three tasks: First, when tested against a corpus with manually annotated events, the inference module of our system contributes 53.2% of correct extractions, but does not cause any incorrect results. Second, the system overall reproduces 33.1% of the transcription regulatory events contained in RegulonDB (up to 85.0% precision) and the inference module is required for 93.8% of the reproduced events. Third, we applied the system with minimum adaptations to the identification of cell activity regulation events, confirming that the inference improves the performance of the system also on this task. Conclusions: Our research shows that the inference based on domain knowledge plays a significant role in extracting complex events from text. This approach has great potential in recognizing the complex concepts of such biomedical ontologies as Gene Ontology in the literature.
Background
The task of extracting events from text, called event extraction, is a complex process that requires various semantic resources to decipher the semantic features in the event descriptions. Previous approaches identify and represent the textual semantics of events (e.g. gene regulation, gene-disease relation) by associating lexical and syntactic resources with ontologies [1-5]. We further explore the usage of an ontology for incorporating domain knowledge into an event extraction system.
Events from text that have been hand-curated into relational databases by biologists are actually the products of scientific reasoning supported by the domain knowledge of the biologists. This process of reasoning is based on linguistic evidence of such language patterns as "A regulates B" and "expression of Gene C" which refer to the basic events of regulation and gene expression. These basic events can be combined into an event with the compositional structure "A regulates (the expression of Gene C)", where the parentheses enclose the embedded event. In this paper, we call such an event consisting of multiple basic events a complex event and say that it has a compositional structure. We will show that the use of inference based on domain knowledge supports the extraction of complex events from text.
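To make the compositional structure concrete, the following is a minimal sketch of one possible data model in Python; the class and field names are illustrative assumptions, not the representation used by our system:

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass(frozen=True)
class Entity:
    concept: str  # e.g. "Gene", "Protein" (GRO-style concept labels)
    name: str     # surface name, e.g. "C"

@dataclass(frozen=True)
class Event:
    concept: str                                      # e.g. "RegulatoryProcess"
    agent: Optional[Union["Event", Entity]] = None    # who regulates
    patient: Optional[Union["Event", Entity]] = None  # what is regulated; may itself be an event

# "A regulates (the expression of Gene C)": the embedded GeneExpression
# event fills the patient slot of the outer RegulatoryProcess event.
inner = Event("GeneExpression", patient=Entity("Gene", "C"))
outer = Event("RegulatoryProcess", agent=Entity("Protein", "A"), patient=inner)
print(outer)
```

The nesting is exactly what distinguishes a complex event from a basic one: an event participant may be another, simpler event.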
The previous approaches to extracting complex events combine the basic events into compositional structures according to the syntactic structures of the source sentences. However, there are two open issues in curating the compositional structures into relational databases. First, the event descriptions in scientific papers are so complicated that the compositional structures often need to be transformed into structures compatible with the semantic templates of the target databases. Second, an event can be represented across sentence boundaries, even in multiple sentences which are not linked via anaphoric expressions (e.g. 'it', 'the gene').
Biologists with sufficient domain knowledge have little problem in carrying out the two required tasks of structural transformation and evidence combination. Structural transformation is to find an event that has the same meaning as the original event but with a different structure, while evidence combination is to identify a new event that can be deduced from multiple events. We should encode the domain knowledge into a logical form so that our text mining systems can process the compositional structures of events, which are explicitly expressed in text and can be extracted by language patterns, to deduce the events with alternative structures and those implied by a combination of multiple events. We call the explicitly expressed events explicit events and the deduced events implicit events.
Several text mining systems have employed inference based on domain knowledge to fill in event templates [6-8]. They can also go beyond sentence boundaries and combine into an event frame the event attributes collected from different sentences. However, they do not use an ontology for representing the inference rules. Moreover, they primarily deal with flat-structured event frames whose participants are physical entities (e.g. protein, residue). To address these issues, we present a novel approach that represents events and domain knowledge with an ontology and combines basic events into a compositional structure where an event participant can be another simpler event.
We utilize the Gene Regulation Ontology (GRO), a conceptual model for the domain of gene regulation [9]. The ontology has been designed for representing the compositional semantics of both biomedical text and the referential databases. GRO provides basic concepts and properties of the domain, which are drawn from, and cross-linked to, such biomedical ontologies as Gene Ontology and Sequence Ontology. We use the concepts and properties of GRO to represent the domain knowledge in the form of P → Q implications, which we call inference rules. We also represent explicit events from text with GRO and apply modus ponens to the inference rules and the explicit events to deduce implicit events. We implemented a system of event extraction with the proposed inference module and evaluated it on three tasks, reporting that the inference significantly improves the system performance.
Results
We performed three evaluations to test our system. Each evaluation takes two steps to answer the following two questions, respectively: 1) How well does the system with the inference module extract events from text, and 2) how much does the inference module contribute to the event extraction? First, we ran the system on a manually annotated corpus to estimate the performance of the system. Second, we used the system for a real-world task of populating RegulonDB, the referential database of the E. coli transcription regulatory network, to prove the robustness of the system. The first two evaluations are based on the corpora used for our previously reported experiments [10]. Finally, we applied the system to a related task of extracting regulatory events on cell activities and compared the results with the GOA database [11]. While the first two evaluation tasks focus on E. coli, a prokaryotic model organism, the last task deals with human genes and cells. Table 1 shows the event templates for the evaluations. The first two evaluations are to extract instances of the first three event templates in the table, while the last evaluation is to extract instances of the last two event templates. Our system deals with four properties of events: 1) agents, which bind to gene regulatory regions or control gene expression and cell activities; 2) patients, which are regulated by the agents; 3) polarity, which tells whether the agent regulates the patient positively or negatively; and 4) physical contact, which indicates whether the agent regulates the patient directly by binding or indirectly through other agents. Since the three evaluations only consider the agents and patients, the event templates in Table 1 include only those two properties.
Evaluation against event annotation
We evaluated our system first against a manually annotated corpus. The corpus consists of 209 MEDLINE abstracts that contain at least one E. coli transcription factor (TF) name. Two curators have annotated E. coli gene regulatory events on the corpus and have agreed on the final release of the annotated corpus which is available at http://www.ebi.ac.uk/~kim/eventannotation/ (see [10] for details, including inter-annotator agreement).
We randomly divided the corpus into two sets: One for system development (i.e. training corpus) and the other for system evaluation (i.e. test corpus). The training corpus, consisting of 109 abstracts, has 250 events annotated, while the test corpus, consisting of 100 abstracts, has 375 events annotated. We manually constructed language patterns and inference rules, based on the training corpus and a review paper (see the Methods section for details).
The system successfully extracted 79 events from the test corpus (21.1% recall) and incorrectly produced 15 events (84.0% precision). We consider an extracted event correct if its two participants and their roles (i.e. agent, patient) are correctly identified, following the evaluation criteria of the previous approaches [3,12]. Among the 79 events, the system correctly identified the polarity of 46 events (58.2% precision) and the physical contact of 51 events (64.6% precision), although these two features are not considered when estimating the system performance, again following [3,12]. To understand the contribution of the inference to the system, we ran the system without the inference module. It then extracts only 37 of the successfully extracted 79 events, which indicates that the inference contributes 53.2% of the correct results. In addition, the inference was involved in the extraction of only three of the 15 incorrectly extracted events. This result supports our claim that logical inference can effectively deduce implicit textual semantics from explicit textual semantics. We further focused on the events whose agents are TFs for the purpose of comparing our system with [3,12]. The test corpus has 305 events with TFs as agents. The system successfully extracted 66 events among them (21.6% recall) and incorrectly produced 6 events (91.7% precision). This performance is slightly better than that of [3] (90% precision, ~20% recall) and of [12] (84% precision).
We analyzed the errors of the system as follows: The false positives, in total 15 errors, are mainly due to the inappropriate application of the loose pattern matching method (7 errors) (see the Methods section for details). The other causes include parse errors (2), the neglect of negation (1), and an error in conversion from predicate argument structure to dependency structure (1). These results of error analysis indicate that the three incorrect events, which were extracted by the system with the inference module, are actually due to the incorrect outputs of the prior modules (e.g. pattern matching) passed to the inference module. In short, the inference module caused no incorrect results.
We also analyzed the false negatives. We found that 29.7% of the missing events (88/296) are due to the deficiency of the gene name dictionary and that 30.0% (68/296) are due to the lack of anaphora resolution. The rest of the missing events (40.3%) are thus dependent upon pattern matching and inference. It is hard to distinguish errors by pattern matching from those by the inference, because the inference module takes into consideration all semantics from an entire document (i.e. MEDLINE abstract) for the evidence combination. Therefore, the inference together with the pattern matching affects at most 40% of the false negatives.
Evaluation against RegulonDB
We tested the system against the real-world task of populating RegulonDB with E. coli transcriptional regulatory events from the literature. We used four corpora that are relevant to E. coli transcription regulation [10]: 1) the regulon.abstract corpus with 2,704 MEDLINE abstracts which are references of RegulonDB, 2) the regulon.fulltext corpus with the fulltexts of 436 references in RegulonDB, 3) the ecoli-tf.abstract corpus with 4,347 MEDLINE abstracts that contain at least one E. coli TF name, and 4) the ecoli-tf.fulltext with the fulltexts of 1,812 papers among those in the ecoli-tf.abstract.
We have measured the performance of the system for this evaluation task as follows: The precision is measured as the percentage of events found in RegulonDB among the unique events extracted by the system, while the recall is the percentage of the successfully extracted events among those curated in RegulonDB. The version of RegulonDB used for the evaluation is 6.2, containing 4,579 E. coli genes, 169 TFs, and 3,590 unique gene regulation events. This evaluation only considers events with TFs as agents because of the purpose of populating RegulonDB. The overall performance is as follows: F-score 0.44, precision 66.6%, and recall 33.1%. Table 2 shows the evaluation results over each test corpus, where the performance of the system without the inference is displayed within pairs of parentheses.
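As a small illustration of this evaluation arithmetic, the sketch below computes precision and recall from two sets of unique events; encoding events as (TF, gene) pairs and the example names are assumptions for illustration, not the RegulonDB schema:

```python
def precision_recall(extracted, database):
    """Precision: share of extracted unique events found in the database.
    Recall: share of database events that were extracted."""
    extracted, database = set(extracted), set(database)
    true_positives = len(extracted & database)
    return true_positives / len(extracted), true_positives / len(database)

p, r = precision_recall(
    extracted={("FNR", "narG"), ("CRP", "lacZ"), ("ArcA", "sdhC")},
    database={("FNR", "narG"), ("CRP", "lacZ"), ("Fis", "tyrT")},
)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.67, recall=0.67
```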
Additionally, we analyzed the effect of event types. The precision for the events of the type "regulation of transcription" is 85%, higher than that of [12] (77% precision), while the overall precision (67%) is predictably lower than that since the system of [12] is developed specifically for extracting regulatory events on gene transcription. We included the events of the other two types, which are hypernyms of "regulation of transcription", into the result set for the evaluation, because of the low recall for the events of "regulation of transcription" (5%). The overall recall (33%) is still lower than that of [12] (45% recall) because of the small size of the regulon.fulltext corpus (436 fulltexts). Note that [12] extracted 42% of RegulonDB events from 2,475 fulltexts of RegulonDB references. We plan to analyze a larger number of fulltexts in the future.
It is remarkable that the inference is indispensable for extracting 93.8% of the RegulonDB events that are extracted by our system from the corpora. In contrast, the inference module is involved in the extraction of only 3.2% of the false negative events. The percentage 93.8% is much higher than the 53.2% of the first evaluation. The difference may be due to the fact that this second evaluation only counts unique events, while the first evaluation against the event annotations counts all extracted event instances. If so, these results may indicate that only a small number of well-known events are frequently mentioned in papers in concise language forms, and are thus extracted by language patterns even without the help of inference, and that the rest of the events are expressed in papers together with the detailed procedures of the experiments which led to the discovery of the events.
Adaptation for regulation of cell activities
Rule-based systems are criticized for being too specific to the domains for which they have been developed, so much so that they cannot be straightforwardly adapted for other domains. To prove the adaptability of our system, we have applied it to a related topic: regulation of cell activities. The goal of this new task is to populate the GOA [11], concerning two Gene Ontology (GO) concepts: regulation of cell growth (GO:0001558) (RCG for short) and regulation of cell death (GO:0031341) (RCD for short). GOA is a database which provides GO annotations to proteins. In short, the task is to identify the proteins that can be annotated with the two GO concepts. The semantic templates of the two event types are defined in Table 1.
The adaptation included only the following work: We manually collected keywords for the concepts 'growth' and 'death' from WordNet and constructed 40 patterns for the keywords by using MedEvi [13]. As candidate agents, we collected human gene/protein names from UniProt. We also collected cell type names from MeSH. These are newly built resources that were not required for the first two evaluation tasks. Existing language patterns and inference rules, for example for the concept 'regulation', were reused. We have not used any training corpus to further adjust the system to the new task.
We constructed a test corpus consisting of 13,136 abstracts by querying PubMed with two MeSH terms "Cell Death" and "Cell Enlargement". The system with the inference module extracted 244 unique UniProt proteins associated with RCG events and 266 unique proteins associated with RCD events from the corpus. This evaluation also uses the two measures: Precision, the percentage of unique proteins found in GOA among the extracted proteins, and recall, the percentage of extracted proteins among the protein records in GOA. GOA contains 16 proteins among the 244 proteins of RCG events (6.6% precision) and 100 proteins among the 266 proteins of RCD events (37.6% precision). Currently (2010 July), the GOA has 155 proteins associated with RCG (10.3% recall) and 908 proteins associated with RCD (11.0% recall). These results show that our system can be applied to a related task with minimal adaptations.
We also tested the system without the inference module against the cell corpus. It identifies 193 proteins associated with RCG events and 198 proteins associated with RCD events. GOA contains 13 proteins among the 193 proteins of RCG events (6.7% precision) and 78 proteins among the 198 proteins of RCD events (39.4% precision). The precision barely changes without the inference module, while the recall drops by about 20%. This finding is similar to what we found in the second evaluation, in that the precision is independent of the inference while the recall drops significantly without the inference module. The relatively smaller drop in recall for the new task may indicate that the inference rules developed for the first two evaluations have less effect on the third evaluation than on the other two.
We have manually inspected, for each event type, 20 of the proteins that were extracted by our system but not found in GOA. Among the 20 'false positive' proteins of the RCD concepts, we found evidence that can support the association of 15 proteins with RCD concepts (75%). This means that the real precision may be 80% or higher and, more importantly, that we can identify new protein instances of GO concepts by using our system. Among the 20 'false positive' proteins of the RCG concepts, we located evidence for only 8 proteins (40%). After careful inspection, we realized that the precision of the RCG-related proteins is much lower than that of the RCD-related proteins because the language patterns for RCG events, which we collected from WordNet, are not specific to cell size growth, but may also refer to cell proliferation and development, which should be linked to the other GO concepts "cell proliferation" (GO:0008283) and "cell development" (GO:0048468). The lack of a training corpus led to this problem, and so we plan to extend the experiment to other GO concepts, establishing training corpora for the concept identification in text.
Discussion
As explained in the Introduction, the inference rules we introduce in this paper are to deduce implicit events from explicit events. Note that unless the explicit events contain enough evidence for an implicit event, we cannot make a logical deduction of the implicit event. In other words, the implicit events are alternative representations of the extracted information; the implicit events do not convey 'new' information compared to the explicit events. The performance comparison between the system with the inference and that without the inference is, in a sense, a test of which representations better fit the target templates, where the inference rules are designed to produce results that better match the target templates. Previous systems often embed the linguistic and domain knowledge required for event extraction together in hand-crafted rules or machine learning models, and are thus biased toward target templates. In contrast, our approach of separating the inference rules from the linguistic resources helps us construct language patterns independently of target templates [5]. Considering the compositional aspect of events, it leads us to the development of phrase-level patterns, which are close to lexical semantics, rather than sentence-level patterns [14]. In addition, we may associate the lexical patterns with the well-defined semantic types of an ontology and focus on the semantic types that are related to a given application task, without worrying about the side effects of domain-specific patterns. This makes the patterns highly reusable, as shown in the third test case.
Conclusions
We proposed a novel approach to event extraction, using an ontology to represent the semantics of lexical, syntactic, and pragmatic resources. We focused on extracting regulatory events on gene expression and cell activities, which are very important to molecular biology and disease studies. Our system addresses the full complexity of identifying such complex events in the literature and may guide ontology development toward innovative ways of integrating various knowledge resources.
Methods
Our system first recognizes mentions of individual GRO instances in text, which can be the event components. It then combines them into compositional structures of explicit events by using language patterns. The system performs inference based on domain knowledge to deduce implicit events from the explicit events. It finally extracts the events that match pre-defined event templates. Both explicit and implicit events may fit the database event templates. Figures 1 and 2 show examples of the extracted events. Figure 1 depicts the three types of structures derived from the input text: the dependency structure, explicit events, and implicit events. An arrow between the syntactic and semantic structures indicates a correspondence link between the two structures for a phrase. The explicit event is composed from phrasal structures up to sentential structures by using the patterns in Table 3; the implicit event is deduced from the explicit events by using inference rules 1 to 3 in Table 4 (TFBS stands for TranscriptionFactorBindingSiteOfDNA). Figure 2 shows that the explicit events of two sentences are combined to deduce the implicit event; rule 4 in Table 4 is used for the deduction. The overall workflow of the system is depicted in Figure 3.
Named entity recognition
We have adopted a dictionary-based approach for named entity recognition. The dictionary contains 15,881 gene/protein and operon names of E. coli, including 169 E. coli TF names, collected from RegulonDB and SwissProt. The recognized names are grounded with UniProt identifiers and labeled with the relevant GRO concepts among the following: Gene, Protein, Operon, and TranscriptionFactor.
Parsing
We have utilized Enju, an HPSG parser [15], for the syntactic analysis of sentences. While the Enju parser produces predicate-argument structures, we have developed a module to convert them into dependency structures and selectively merged the predicate-argument structure into the dependency structure. We use the dependency structure for the loose matching of language patterns explained below.
Pattern matching
To identify the explicit events from sentences, the system utilizes syntactic-semantic paired patterns, matching the syntactic patterns to the dependency structures and combining the semantic patterns into a semantic structure.
Each pattern is a pair of a syntactic pattern and a semantic pattern. Syntactic patterns comply with dependency structures. The leftmost item within a pair of parentheses (e.g. cause Verb, lesion Noun) is the head of the other items within the parentheses (e.g. Subject:Agent, Object:Patient). A dependent item may be surrounded by another pair of parentheses, which forms an embedded structure (e.g. Pattern 1, Pattern 2). The lexical items in the syntactic patterns are labeled with part-of-speech (POS) tags (e.g. Verb, Noun, Prep), and should be matched to words with the same POS tags. The dependent items have syntactic constraints that indicate their roles with respect to their head items (e.g. Subject, Object), and should be matched to those with the syntactic roles. The dependent items may have semantic variables (e.g. Agent, Patient, Gene), which indicate the semantics of the dependent items. If the semantic variable of a dependent item is a concept of GRO (e.g. Gene), the variable should match a semantic category that is identical to, or a sub-type of, the specified concept. The semantic pattern expresses the semantics of its corresponding syntactic pattern. The semantic pattern is represented with GRO concepts (e.g. RegulatoryProcess, GeneExpression) and properties (e.g. hasAgent, hasPatient).
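One way such a paired pattern could be encoded is sketched below, using the "A causes B" example from the text; the dictionary layout is an illustrative assumption, not the system's internal pattern format:

```python
# Syntactic side: a head word with its POS tag and role-labeled dependents.
# Semantic side: a GRO concept whose properties are filled from the
# variables bound on the syntactic side.
pattern_cause = {
    "syntax": {
        "head": ("cause", "Verb"),
        "dependents": [
            {"role": "Subject", "variable": "Agent"},
            {"role": "Object", "variable": "Patient"},
        ],
    },
    "semantics": {
        "concept": "RegulatoryProcess",
        "hasAgent": "Agent",     # bound from the Subject dependent
        "hasPatient": "Patient"  # bound from the Object dependent
    },
}
```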
The system tries to match the syntactic patterns to the dependency structures of sentences in a bottom-up way. For example, it matches Patterns 1 to 4 in Table 3 against the dependency structure of example (1) depicted in Figure 1. In the process, it considers the syntactic and semantic constraints of the syntactic patterns. For instance, the item 'cause' of the fourth pattern in Table 3 should match the verb 'cause' that has both a subject and an object.
Once a syntactic pattern is successfully matched to a node of dependency structure, its corresponding semantic pattern is assigned to the node as one of its semantics. If the syntactic pattern has dependent items with semantic variables (e.g. Subject:Agent, Object:Patient), the variables (e.g. Agent, Patient) are replaced with the semantics of the children of the node that have been matched to the dependent items. In this way, the semantics of multiple phrases is combined into sentential semantics. In Figure 1, the small boxes with dashed lines show the semantics assigned to the internal nodes of the example (1), which are later combined into the textual sentential semantics.
Note that the node 'lesions' is assigned two pieces of semantics for the two gene names that are the children of the node (i.e. himA, himD). The explicit textual semantics of Figure 1 is one of the two, while the other is a duplicate of Sem1 except that the gene name 'himA' is replaced with 'himD'.
One important feature of the pattern matching is that we loosely match the syntactic patterns to the dependency structures. For instance, the gene name 'fimA' is not a direct child of the preposition 'of', but is matched to the item Object:Gene of the first pattern in Table 3. We have decided to match a dependent item not only to a direct child of the node matched to the head item, but also to any descendant of that node. The feature is based on two reasons: First, it is practically impossible to construct all potential patterns for the event extraction, though a reasonably large number of patterns for gene regulation have been accumulated; and second, the lexical entries not matched to any of the patterns for gene regulation (e.g. 'sevenfold', 'operon', 'fusion') might not affect the extraction of the events.
This loose matching still works under the following strict conditions: 1) an item with a syntactic role (e.g. Subject) can only be matched to a descendant within the subtree carrying that syntactic role; 2) once an item is matched to a node, it is not further matched to the node's descendants; and 3) the matching does not jump over clausal boundaries (e.g. 'which') or several exceptional words (e.g. 'except').
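A simplified sketch of the loose matching under these conditions is given below; the node fields and the boundary word lists are assumptions for illustration:

```python
CLAUSE_BOUNDARIES = {"which", "that"}  # condition 3: do not cross clauses
EXCEPTION_WORDS = {"except"}

def find_match(node, wanted_concept):
    """Depth-first search for a descendant whose semantic category matches
    wanted_concept; the search stops at the first hit (condition 2) and
    never crosses clausal boundaries or exceptional words (condition 3)."""
    if node["word"] in CLAUSE_BOUNDARIES | EXCEPTION_WORDS:
        return None
    if node.get("concept") == wanted_concept:
        return node
    for child in node.get("children", []):
        hit = find_match(child, wanted_concept)
        if hit is not None:
            return hit
    return None

# "expression of the fimA gene": 'fimA' is not a direct child of 'of',
# but the loose matching still binds it to the Object:Gene item.
subtree_of = {"word": "of", "children": [
    {"word": "gene", "children": [
        {"word": "fimA", "concept": "Gene", "children": []}]}]}
print(find_match(subtree_of, "Gene"))  # -> the 'fimA' node
```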
Inference
The inference step transduces explicit textual semantics (or events) into implicit semantics (or events). It deduces a new, specific event instance, if possible, by combining two or more general events. The inference module takes as input the explicit events from a text (i.e. a MEDLINE abstract, a fulltext) identified by the previous module of pattern matching. It applies to the explicit events the inference rules that reflect common sense knowledge and domain knowledge, as exemplified in Table 4. An inference rule has the propositional logic form P → Q, where P is a set of conditions and Q is the conclusion. It works with the modus ponens rule (i.e. P, P → Q ⊢ Q). That is, if all the conditions P of a rule match some of the identified events from a text, the conclusion Q is instantiated and then added as an additional event of the text. As the input events are represented with GRO, the inference rules and their resultant events are also represented with GRO.
We have constructed 28 inference rules for dealing with the compositional structures of gene regulation events (e.g. Rules 1, 2) and for deducing biological events from the combination of linguistic events (e.g. Rules 3, 4) by consulting the training corpus and the review paper [16] (see Table 4).
For example, Rules 1 and 2 flatten, if possible, the compositional structure of event descriptions. The explicit events in Figure 1 has a cascaded structure with four basic event instances (i.e. three RegulatoryProcess, one GeneExpression) and is transformed by Rules 1 and 2 to fit for the database template that has only two event instances (i.e. RegulationOfGeneExpression, GeneExpression). Rule 3 deduces the specific event type RegulationOfGeneExpression from a general type of event (i.e. RegulatoryProcess).
Rule 4 reflects the domain knowledge that if a transcription factor both binds to the regulatory region of a gene and regulates the gene's expression level, it is the transcriptional regulator of the gene. Note that the two conditions of Rule 4 can be matched to events from any sentence; in other words, Rule 4 can merge multiple pieces of evidence from different sentences into a fact. The function polarity_sum works exactly like the NXOR (not exclusive or) operation in Boolean logic. The rules are applied repeatedly over the explicit events from a given text until no additional event is generated.
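A minimal forward-chaining sketch of this procedure follows; events are flattened to tuples for brevity, the single rule is written in the spirit of Rule 4, and the TF/gene names are merely examples, not extracted results:

```python
def polarity_sum(p1, p2):
    # Behaves like NXOR: two like polarities yield "positive".
    return "positive" if p1 == p2 else "negative"

def rule_transcriptional_regulator(events):
    """If a TF binds the regulatory region of a gene and regulates the
    gene's expression, deduce that it transcriptionally regulates it."""
    deduced = set()
    for (kind1, tf1, gene1) in events:
        for (kind2, tf2, gene2) in events:
            if (kind1, kind2) == ("BindsRegulatoryRegion", "RegulatesExpression") \
                    and tf1 == tf2 and gene1 == gene2:
                deduced.add(("RegulationOfTranscription", tf1, gene1))
    return deduced

def infer(explicit_events, rules):
    """Apply modus ponens over all rules until a fixpoint is reached."""
    events = set(explicit_events)
    while True:
        new = set().union(*(rule(events) for rule in rules)) - events
        if not new:
            return events
        events |= new

facts = {("BindsRegulatoryRegion", "FNR", "narG"),
         ("RegulatesExpression", "FNR", "narG")}
print(infer(facts, [rule_transcriptional_regulator]))
```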
We have implemented a program that converts the inference rules into Prolog code and a Prolog application that executes the rules over input events. We could not use OWL-DL reasoners (e.g. Pellet) because of the DL-safe restriction of the reasoners: DL-safe restriction assumes that all instances of rules, both in conditions and in conclusions, are available in the knowledge base [17]. Unfortunately, however, the rules for the event extraction generate new instances of events and event attributes in their conclusions. Nonetheless, we can still utilize the reasoners to validate the ontology populated with the extracted events.
Extraction
The system finally selects the events that match the given semantic templates among those resulting from either pattern matching or inference. Table 1 shows the event templates. The variables are marked with '?' and are matched to instances of the concepts referred to by the variables. For example, the variable "?Protein" can be matched to a protein name. Non-variable concepts and properties are used as semantic restrictions on the events to be extracted. For example, the last template in Table 1 can be matched to an instance of NegativeRegulation, which is a child of RegulatoryProcess. In addition, the patient of the instance should be an instance of CellDeath, and the agent can be a gene, where Gene is a descendant of MolecularEntity.
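As an illustration, the template check can be reduced to a subsumption test against the concept hierarchy; the miniature ontology below is an assumption standing in for GRO:

```python
# Toy is-a hierarchy (child -> parent); GRO provides the real one.
ONTOLOGY = {"NegativeRegulation": "RegulatoryProcess",
            "Gene": "MolecularEntity"}

def is_a(concept, ancestor):
    """True if concept equals ancestor or is one of its descendants."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ONTOLOGY.get(concept)
    return False

def matches(event, template):
    # '?'-prefixed slots are variables; in this simplification both
    # variables and fixed slots require the filler to be subsumed by
    # the named concept.
    return all(is_a(event[slot], concept.lstrip("?"))
               for slot, concept in template.items())

template = {"type": "RegulatoryProcess", "patient": "CellDeath",
            "agent": "?MolecularEntity"}
event = {"type": "NegativeRegulation", "patient": "CellDeath", "agent": "Gene"}
print(matches(event, template))  # True
```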
Authors' contributions
JJK conceived the study, designed and implemented the system, carried out the evaluations and drafted the manuscript. DRS motivated and coordinated the study and revised the manuscript.
Attitudes toward COVID-19 vaccination during the state of emergency in Osaka, Japan
Background COVID-19 vaccination for general population started on April 12, 2021, in Osaka, Japan. We investigated public attitudes toward vaccination and associated factors of vaccine hesitancy during the third state of emergency. Methods An internet-based, self-reported, cross-sectional survey was conducted in June 2021, using the smartphone health app for residents of Osaka aged ≥18 years. Respondents were asked about their attitudes toward COVID-19 vaccine. Responses “Don’t want to receive vaccines” or “Don’t know” were defined as vaccine hesitancy (vs. “Received [1st dose]”, “Received [2nd dose]”, or “Want to receive vaccines”). Multivariable Poisson regression analysis was conducted to examine the associations between hesitancy and population characteristics. Results 23,214 individuals (8,482 men & 14,732 women) were included in the analysis. Proportions that answered “Received (1st dose)”, “Received (2nd dose)”, “Want to receive vaccines”, “Don’t want to receive vaccines”, “Don’t know”, and “Don’t want to answer” were 14.6%, 3.8%, 70.6%, 4.3%, 6.1%, and 0.5% among men; and 11.3%, 6.0%, 64.9%, 6.2%, 11.0%, and 0.6% among women. Factors associated with vaccine hesitancy included being a woman (aPR = 1.33; 95%CI = 1.23–1.44), age 18–39 (aPR = 7.00; 95%CI = 6.01–8.17) and 40–64 years (aPR = 4.25; 95%CI = 3.71–4.88 vs. 65+ years), living alone (aPR = 1.19; 95%CI = 1.08–1.30 vs. living with 3+ members), non-full-time employment and unemployment (aPRs ranged 1.12 to 1.49 vs. full-time employment), cardiovascular diseases/hypertension (aPR = 0.72; 95%CI = 0.65–0.81), and pregnancy (women of reproductive age only) (aPR = 1.35; 95%CI = 1.03–1.76). Conclusions Most respondents expressed favorable attitudes toward COVID-19 vaccination while hesitancy was disproportionately high in certain populations. Efforts are needed to ensure accessible vaccine information resources and healthcare services.
Introduction
Coronavirus disease 2019 (COVID-19) has evolved into a global public health threat. Besides non-pharmaceutical measures (e.g. social distancing, movement restrictions, promoting personal hygiene), vaccines are expected to play a significant role in ending the COVID-19 pandemic by establishing herd immunity [1,2].
Osaka, the third most populated prefecture (8.8 million) [3] in Japan, experienced one of the worst surges of COVID-19 infections in the nation. During the fourth wave of the pandemic (March-June 2021), daily infections and deaths reached 1,260 and 55, respectively, the prefecture's highest records at the time [4,5]. On April 25, 2021, the Japanese government declared the third state of emergency for Osaka, through June 20, 2021. COVID-19 vaccination of the general population started on April 12, 2021, in several prefectures in Japan including Osaka, and is ongoing throughout the nation as of November 2021 [6].
As getting vaccinated is a personal choice, the decision-making process is affected by a variety of intrapersonal, interpersonal, and social contexts. People's health beliefs, including perceived barriers and perceived benefits, are considered key determinants of COVID-19 vaccine hesitancy, and these are directly associated with modifiable or non-modifiable factors such as gender, education, age, geographical location, occupation, marital status, and race/ethnicity [7]. Vaccine hesitancy, or "delay in acceptance or refusal of vaccination despite availability of vaccination services" [8], is a major challenge to the establishment of herd immunity. A cross-sectional study conducted in February 2021 (when vaccines were available only for medical workers) reported that the vaccine hesitancy rate was 11.3% among Japanese adults, citing fear of side effects as the top reason [9]. As the national vaccination program now covers the general population, monitoring public attitudes toward vaccination is key to planning, implementing, and evaluating the rollout strategies and interventions to ensure equal access to vaccines.
In light of the above, objectives of the present study are to 1) assess public attitudes toward COVID-19 vaccine and 2) examine prevalence and associated factors of vaccine hesitancy using large-scale data.
Data and study population
An internet-based, self-reported, cross-sectional survey was conducted during June 1-20, 2021, using the smartphone health app "Asmile". The app was launched in 2019 as part of the Osaka Health and Fitness Support Project and is available for all residents of Osaka aged 18 years or older [10]. Upon downloading the app, users provide a copy of documents to verify their identity. The users are also asked to provide a web-based informed consent that their personal information will be anonymized and used for the purpose of informing and improving the public health policies administered by the Osaka prefectural government or municipalities.
With the app, the users are encouraged to keep a record of their daily physical activities and respond to surveys to earn points to apply for lotteries. Detailed information about Asmile are reported elsewhere [11].
The study population of the present study was individuals aged 18 years or older residing in Osaka prefecture. The survey questionnaire was developed by the Osaka prefectural government in collaboration with the Osaka International Cancer Institute. Web-based questionnaires were distributed within the smartphone app Asmile to all users to investigate the changes in health behaviors and attitudes during the third state of emergency in Osaka. Of approximately 246,000 users as of June 2021, 23,460 individuals responded to the survey. After excluding 246 individuals whose basic demographic information (e.g. sex, age, place of residence) could not be verified, a total of 23,214 respondents (8,482 men and 14,732 women; age range 18-92 years) were included in the analyses. The study was approved by the Institutional Review Board of the Osaka International Cancer Institute (approval number: 20102). The data were deidentified before use.
Attitudes toward COVID-19 vaccination
Respondents were asked "Free-of-charge COVID-19 vaccine rollout has started. Have you already received the COVID-19 vaccine? If not, do you want to receive it?" Response categories included "Received (1st dose)", "Received (2nd dose)", "Want to receive vaccines", "Don't want to receive vaccines", "Don't know", and "Don't want to answer". Vaccine hesitancy was defined as either of the responses "Don't want to receive vaccines" or "Don't know". Those who answered "Don't know" likely intended to wait for some time to see how the vaccine rollout went, or simply had not considered taking the vaccine at the time of the survey. Thus, they were assumed to delay decision-making and intake of the vaccine, meeting the aforementioned definition of vaccine hesitancy [8].
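A small sketch of this outcome coding, assuming a pandas column holding the questionnaire's response labels (the column name is hypothetical):

```python
import pandas as pd

responses = pd.Series([
    "Received (1st dose)", "Received (2nd dose)", "Want to receive vaccines",
    "Don't want to receive vaccines", "Don't know", "Don't want to answer",
], name="vaccine_attitude")

# Hesitancy = "Don't want to receive vaccines" or "Don't know".
hesitant = responses.isin(["Don't want to receive vaccines", "Don't know"])
print(pd.DataFrame({"response": responses, "hesitant": hesitant.astype(int)}))
```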
Statistical analysis
Descriptive analyses were conducted to assess public attitudes toward COVID-19 vaccination and the prevalence of vaccine hesitancy. Multivariable Poisson regression analysis was conducted to examine the associations between vaccine hesitancy and population characteristics by excluding individuals who answered "Don't want to answer" to the vaccine attitude question (N = 128). All abovementioned variables except pregnancy were included in the main model. A separate model was fitted for women of reproductive age (18-49 years) only to control for pregnancy and all other covariates. Variance inflation factors were computed for all independent variables in the models to examine correlations, and all values were confirmed to be below 5.0. All analyses were performed using R version 4.1.0.
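The analyses themselves were run in R; as a rough Python counterpart, the sketch below fits a Poisson model with robust (HC0) standard errors on simulated data, with variable names assumed for illustration:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "hesitant": rng.integers(0, 2, n),                    # binary outcome
    "female": rng.integers(0, 2, n),
    "age_group": rng.choice(["18-39", "40-64", "65+"], n),
    "lives_alone": rng.integers(0, 2, n),
})

# Poisson regression on a binary outcome with robust variance yields
# adjusted prevalence ratios (aPRs) after exponentiating coefficients.
model = smf.glm(
    "hesitant ~ female + C(age_group, Treatment('65+')) + lives_alone",
    data=df, family=sm.families.Poisson())
fit = model.fit(cov_type="HC0")
print(np.exp(fit.params))  # aPRs (reference group: 65+ years)
```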
Results

Overall, 14.7% of respondents reported vaccine hesitancy, with a higher prevalence among women than men (17.2% vs. 10.4%) (Table 1). By age, the prevalence ranged from 3.9% among those aged 65+ years to 28.1% among younger adults aged 18-39 years. By employment status, the highest prevalence of vaccine hesitancy was observed among contractors (23.5%), followed by the self-employed (18.4%); the lowest prevalence was seen among unemployed individuals (10.2%). According to the reported health conditions, the prevalence of vaccine hesitancy was lowest among those who had diabetes (6.3%) and highest among pregnant women (33.6%).
Note to Table 1: Prevalence ratios in bold type are statistically significantly lower or higher than 1.00. Data collection was conducted during June 1-20, 2021 with residents of Osaka prefecture aged ≥18 years through the smartphone health app "Asmile". Respondents were asked "Free-of-charge COVID-19 vaccine rollout has started. Have you already received the vaccine? If not, do you want to receive it?" Vaccine hesitancy was defined as either of the responses "Don't want to receive vaccines" or "Don't know". Multivariable Poisson regression analysis was conducted to examine the associations between vaccine hesitancy and population characteristics; to examine the association between vaccine hesitancy and pregnancy, a separate model was fitted for women of reproductive age (18-49 years).
Respondents who reported having cardiovascular diseases or hypertension had a lower likelihood of vaccine hesitancy (aPR = 0.73; 95%CI = 0.65-0.81) compared to those who did not have the disease. Among women, pregnant individuals had a 1.35 (95%CI = 1.03-1.76) times higher likelihood of vaccine hesitancy.
Discussion
During June 1-20, 2021, in Osaka, over 80% of respondents of both sexes reported having received at least one dose of COVID-19 vaccine or intending to receive one. Vaccine hesitancy was reported by 14.7% of respondents overall, with a higher likelihood among women and among younger, non-full-time working or unemployed, or pregnant individuals. The present study provides a profile of public attitudes toward COVID-19 vaccination to inform the ongoing rollout strategies. Aligned with findings from previous assessments [9,14-17], we found that attitudes toward COVID-19 vaccination varied across sex and age groups. This disparity may partially be explained by beliefs among younger individuals that they will not get infected or become seriously ill from COVID-19 [9]. The prevalence of vaccine hesitancy found in the present study is comparable to those of other countries (reported to be 13-29%) [17] and to that from the previous study of Japanese adults (11.3%) [9]. While a majority of respondents across Osaka had already received or intended to receive COVID-19 vaccines, efforts are still required to address confidence, convenience, and complacency around the COVID-19 vaccine to achieve the optimal inoculation level for all population groups. Herd immunity is achievable when a majority of the population has gained immunity. Although there are ongoing debates over the required doses and threshold, several studies have suggested that at least 60-70% of the population should be appropriately vaccinated [18-20]. The move to begin administering booster shots (third dose) of COVID-19 vaccine from December 1st, 2021 has been approved by the Japanese health ministry [21]. As the number of individuals who actually receive a COVID-19 vaccine may be lower than the number who claim they intend to do so, continued surveillance is warranted to monitor the progress.
The factors associated with COVID-19 vaccine hesitancy found in the present study were largely consistent with those from previous studies [9,14-16]. Okubo et al. revealed that younger age, female sex, living alone, lower socioeconomic status, and the presence of severe psychological distress were significantly associated with higher vaccine hesitancy rates [9]. Our results add to these findings by showing increased hesitancy among pregnant women. The Japan Society for Infectious Diseases in Obstetrics and Gynecology recommends that pregnant women should not be excluded from vaccination programs [22,23]. While there are currently limited data on the effect of pregnancy on the etiology of COVID-19 or on the long-term safety of COVID-19 vaccines in pregnant women, vaccines are expected to protect expectant mothers from severe illness from COVID-19 [22,23]. As getting vaccinated is a personal choice, it is important to facilitate equal, transparent, and timely dissemination of vaccine information to help all individuals with their decision making.
The present study is subject to several limitations. First, the internet-based sampling led to a biased demographic distribution of the respondents. Although such bias was addressed by stratified analyses and multivariable adjustment, the results may not be representative of all Asmile users or of the general population of Osaka. Specifically, the data collection was conducted on a voluntary basis, which may have biased the estimation of vaccine hesitancy. Given that the respondents were presumably more health-conscious and had higher internet literacy than those who did not participate in the survey, they possibly had keener perceptions of susceptibility, severity, barriers, and benefits regarding the COVID-19 vaccine. Since these perceptions help determine attitudes toward the vaccine [7], the vaccine hesitancy assessed in our study sample might have limited generalizability. Second, we were unable to conduct a nuanced analysis of the associations between socioeconomic status and public attitudes or behaviors regarding COVID-19 vaccination due to the unavailability of such information in the Asmile survey. Continued assessment is warranted to understand the patterns and changes in those aspects to help plan and implement targeted interventions.
Conclusion
During June 1-20, 2021, the majority (over 80%) of respondents of both sexes reported having received the COVID-19 vaccine or intending to receive one. The likelihood of reporting vaccine hesitancy was higher among women and among younger, non-full-time working or unemployed, or pregnant individuals. Coordinated efforts are needed, through effective communications and community-based interventions, to ensure accessible vaccine information resources and healthcare services.
Many Channels Lead to Aldosterone
Commentary
Aldosterone secretion is under the control of potassium, renin and angiotensin II (Ang II). Consequently, concepts to explain autonomous aldosterone secretion as the basis for primary aldosteronism (PA) included the presence of stimulating autoantibodies to the Ang II type 1 receptor (AT1R), gain-of-function mutations in the AT1R, and aberrant expression of G-protein-coupled membrane receptors that are responsive to alternative stimuli and have access to the cellular AT1R signaling apparatus (Luft, 2013; Mazzuco et al., 2010). However, while these ideas were appealing, their power to explain the pathophysiology of PA remained small. The breakthrough came with the systematic clarification of the signaling pathways which control aldosterone secretion, the application of whole exome sequencing to adrenal disease, and the discovery that a mutated channel which is associated with familial and sporadic forms of PA results in an increase in intracellular calcium (Fig. 1) (Choi et al., 2011). The same group also discovered that a mutation in CACNA1H, encoding the voltage-gated T-type calcium channel Cav3.2, is associated with an early-onset form of primary hyperaldosteronism (Scholl et al., 2015). The data in the EBioMedicine paper by Daniil et al. strongly support the disease-driving character of such mutations, which allow calcium to flow more readily into the aldosterone-producing adrenal zona glomerulosa cell. The paper also adds familial and sporadic variants of PA with altered CACNA1H sequence to our knowledge database (Daniil et al., 2016). Systematic clinical work and modern genetic analysis, combined with an elegant set of molecular, cellular and electrophysiological experiments, helped the group to visualize a genotype-phenotype relationship. This relationship was apparent in clinical observations and detectable by means of in vitro investigations into channel properties, calcium signaling, steroidogenic enzyme expression and aldosterone secretion into cell culture supernatants. The subject with the severe CACNA1H mutation had early-onset PA and a multiplex developmental disorder, whereas the patients with mild mutations did not. Interestingly, pathological neurologic features had been reported to occur in patients with PA due to a CACNA1D mutation, which is also known to strongly affect intracellular calcium within zona glomerulosa cells (Scholl et al., 2013). The tumors of such patients are comparably small but show strong expression of aldosterone synthase and suppression of renin. Interestingly, it was suggested that some mutations may severely interfere with the cellular calcium homeostasis and even cause the death of an affected adrenocortical cell, thus preventing the cell from developing hyperplastic or tumorous tissue.
However, less severe aberrations, including some mutations in the G protein-activated inward rectifier potassium channel 4 (GIRK4), seem to be associated with a milder phenotype of PA, larger tumors and expression of aldosterone synthase in the remaining normal zona glomerulosa tissue as a sign of non-(full) suppression of renin and angiotensin. This may explain why, in tumors of patients with malfunctioning GIRK4 channels, the 11beta-hydroxylase is expressed at higher levels within the aldosterone-producing tumors, and why such patients form more so-called "adrenal hybrid steroids" than patients with aldosteronomas due to CACNA1D mutations (Fig. 1) (Williams et al., 2016).
Along these lines, it seems to be very difficult to characterize the point of crossover from a single-nucleotide polymorphism to a mild disease-triggering mutation by means of such studies. An astonishing observation in this context is that mutations which are associated with the formation of aldosterone-producing adenomas were also observed in bilaterally hyperplastic adrenals and may even appear within different nodules in one adrenal gland, although each nodule seems to harbour only one single mutation (Fernandes-Rosa et al., 2015). As such, it remains open how channelopathies associated with PA, whether inborn or acquired, allow the affected adrenal cell to proliferate and break away from aldosterone-producing cell clusters in order to form aldosterone-producing adenomas.
So far, conventional cell culture studies have not provided the data to reach conclusions on how such mutations cause growth and proliferation of adrenal cortical cells. This may be because the influence of corticotropin, adrenal blood flow and tissue gradients also seems to play an important role in organ physiology and cell differentiation (Dringenberg et al., 2013). Therefore, while this study bridged the gap between clinical observations, the molecular background, and its impact on cell physiology and aldosterone secretion, further such studies should address the question of how the molecular changes promote cell proliferation and adrenal tumor formation.
Disclosures
The author declares no competing interest.

Fig. 1. Both corticotropin (ACTH) and angiotensin II (Ang II) stimulate adrenal steroidogenesis via binding to their G-protein coupled receptors, MC2R and AT1R, respectively. Separation of glucocorticoid and mineralocorticoid synthesis occurs through different signaling pathways and suppression of the ACTH-stimulated adenylyl cyclase (AC) and protein kinase A activities when Ang II binds to its receptor. Ang II is generated when renin is secreted, leading to elevated intracellular calcium and expression of the aldosterone synthase (CYP11B2). However, the action of Ang II is bypassed in a state of hyperkalemia, when potassium is prevented from efflux through the GIRK4 channel, causing depolarization of the adrenal zona glomerulosa cell. This mechanism serves to regulate the organism's external potassium balance through an increase in aldosterone. Inherited or sporadic mutations in several ion channels that are employed in the regulation of the intracellular calcium concentration may lead to overactivity of calmodulin kinase and upregulation of CYP11B2, thereby achieving autonomy from control by renin and Ang II: primary aldosteronism. When aldosterone synthesis occurs independently from Ang II, the influence of ACTH on steroidogenesis is conserved. Expression of both aldosterone synthase and 11beta-hydroxylase (CYP11B1) results in generation of so-called "adrenal hybrid steroids", which is dependent on the activity balance in the signaling pathways. This sketch is a further development of illustrations by Choi et al. (2011) and Zennaro et al. (2013).
|
2018-04-03T04:31:37.612Z
|
2016-11-01T00:00:00.000
|
{
"year": 2016,
"sha1": "70caa76913805de16ccfd680bb476b1426435643",
"oa_license": "CCBYNCND",
"oa_url": "http://www.ebiomedicine.com/article/S2352396416305096/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "70caa76913805de16ccfd680bb476b1426435643",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
252471183
|
pes2o/s2orc
|
v3-fos-license
|
Influence of Heat Treatment on Surface, Structural and Optical Properties of Nickel and Copper Phthalocyanines Thin Films
The work presents the effect of annealing on the change between the polycrystalline α and β phases of copper and nickel phthalocyanines. We have found that this process has a great influence on the optical properties of the vapor-deposited layers. The performed measurements showed that for the various forms of MPc, the values of the refractive index and the extinction coefficient increased, and consequently, so did the absorption coefficient. The AFM images showed that the layers before and after heating are morphologically different. Raman measurements showed that the band at about 1526 cm−1 (B1g symmetry) has higher intensity for the α form than for the β form. The intensity of this band is related to the change of the phthalocyanine form from α to β. Our measurements have shown that by changing the annealing temperature of the layers, we change their optical properties. As a consequence, we change their optoelectronic parameters, adjusting them to the requirements of new optoelectronic devices, such as solar cells, sensors, displays and OLEDs.
In recent years, there has been a great deal of interest in phthalocyanines with different metals as the central atom (called metallophthalocyanines, MPcs). Knowledge of a given phthalocyanine crystal structure provides the opportunity to find specific applications. The proper selection of the MPc thin layer orientation, in combination with determining the morphology and polymorphism of the obtained thin film structure, plays a significant role in optical and electrical properties [15-20]. There are many methods of depositing thin layers of phthalocyanine, which are divided into wet and dry. The selection of a specific method results in different physical properties of the obtained layers. Wet methods include drop-casting, dip-coating, spin-casting and spray-coating, whereas dry methods require more advanced equipment because the entire process takes place in a vacuum. The most frequently chosen methods of producing layers using the dry method are sputtering and physical vapor deposition.
In the case of phthalocyanines, we can distinguish several polymorphic phases, which are directly linked to the crystal structure. MPcs films exist in several molecular forms that range from amorphous to highly crystalline. However, it should be mentioned that the α and β forms are the most popular and stable forms at room temperature, which differ in the size of the tilt angle of the molecule within the columns and arrangement of the common columns in the crystalline structure. Moreover, the crystal structure is closely related to the deposition process (i.e., method, technology, evaporation rate) as well as the type, quality, orientation and temperature of the substrate surface. The heating of the obtained MPc layers, after the evaporation process, also affects their crystalline form [21][22][23][24][25][26]. Thus, it can be written that the deposition of layers by different methods and on different substrates determines different crystal structures of MPcs. Due to these conditions, MPcs thin layers have different physicochemical properties.
In this paper, we show how the process of thermal evaporation can determine the different crystalline forms of metallophthalocyanines and, consequently, how this process influences their optical properties as well as their structural and surface properties. We compared the experimental results for the less-studied nickel phthalocyanine (NiPc) with those for copper phthalocyanine (CuPc) [27,28]. We focused our studies on these MPcs because they have a more complex electronic structure, since the 3d-like metal orbital lies between the HOMO and LUMO of the Pc. The optical properties of the α and β forms of CuPc and NiPc were determined using spectroscopic ellipsometry (SE), whereas the structural and surface properties were analyzed by AFM and Raman spectroscopy. Additionally, cyclic voltammetry (CV) measurements have been performed.
The long-range aim of this study is to learn how thermal evaporation and the substitution of different metal atoms into the ring of the phthalocyanines correlate with the surface, structural and optical properties, and how to enhance these properties by controlling the molecular structures so that such layers can be used in solar cells, sensors, displays and OLEDs.
AFM
AFM measurements were used to compare the morphology, surface and roughness of CuPc and NiPc thin films before and after annealing at 473 K. The results obtained for NiPc and CuPc samples before and after annealing are shown in Figure 1. To analyze and characterize the surface, two basic parameters, the root mean square (RMS) roughness and the average grain size, were used. It is visible that the surfaces of the MPcs thin films before and after annealing at 473 K are different. This behavior is caused by obtaining a different crystalline form after the annealing of MPcs thin films at 473 K; that is, the MPcs have the α form before annealing and the β form after annealing. The RMS roughness values for the α and β forms of NiPc were estimated to be 2 nm and 3 nm, respectively. The average grain sizes for the two forms were 40 nm and 50 nm, respectively. In the case of CuPc thin films, we observed similar values. The roughness was 4 nm and 5 nm for the sample before and after annealing, respectively, and the grain size was 40 nm and 50 nm. The measurement results are presented in Figure 1.
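As a minimal sketch of how the RMS roughness parameter quoted above is obtained from raw height data, the following Python snippet computes it for a synthetic height map; the array values are placeholders, not the measured scans.

```python
import numpy as np

def rms_roughness(height_map_nm):
    """Root-mean-square roughness: square root of the mean squared
    deviation of the height values from their mean (input in nm)."""
    z = np.asarray(height_map_nm, dtype=float)
    return np.sqrt(np.mean((z - z.mean()) ** 2))

# Synthetic stand-in for a 2 x 2 um^2 AFM scan on a 256 x 256 grid.
rng = np.random.default_rng(0)
height = rng.normal(loc=0.0, scale=2.0, size=(256, 256))
print(f"RMS roughness: {rms_roughness(height):.2f} nm")  # ~2 nm
```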
Therefore, it shows that the annealing process directly affects the homogeneity of the structure. In the case of unannealed samples, despite the larger grain size, the entire surface is homogeneous but has an aggregate and flake-like structure. The structures annealed at the temperature of 473 K have a slightly lower roughness parameter. A similar phenomenon was also observed for the deposition of thin films on substrates that were heated to temperatures above 400 K.

Raman Spectroscopy

Figure 2 shows the normalized Raman unpolarized spectra of thin layers of copper and nickel phthalocyanines obtained by thermal evaporation in a vacuum. Normalization was completed to the second largest peak, which is shown in Figure S1 (in Supplementary Materials). These spectra were measured for CuPc and NiPc thin films before annealing and after their annealing at 473 K. In the CuPc spectra, we can observe that the band at about 1526 cm−1 (B1g symmetry) has higher intensity for the α form than for the β one. In the case of nickel phthalocyanine, compared with CuPc, the position of this band changes (it is located around 1551 cm−1) and the intensity of this band is also higher for the α form than for the β form. It should be noted that the position of this band is characteristic for phthalocyanines and is related to the displacement of C-N-C bridge bonds of the phthalocyanine macrocycle [29]. This relationship allows determining what metal ion is located in the center of the molecule. Therefore, comparing the intensities of individual spectra, it can be concluded that thin films annealed at the temperature of 473 K have lower intensity compared to unannealed samples. This behavior is related to a polymorphic change of form. The transformation, taking place in the annealing process, changes the angle of the molecule in relation to the substrate, which is directly visible in the Raman spectra.
The peak positions and the assigned vibration modes are summarized in Table 1.
Spectroscopic Ellipsometry Measurements
The experimental ellipsometric azimuths (Ψ and Δ) and the adjustments of the obtained data from the optical model, for α and β forms of NiPc thin films deposited on n-type Si substrates with (100) orientation, are shown in Figure 3. The four-medium optical model of a sample (Si\native SiO2\MPcs\ambient) was used to determine both the thickness of the MPcs film and the optical constants. The fit of the model is well suited to the experimental results. The reduced mean squared error was used to estimate the quality of the fit (χ²) [30,31]:

χ² = 1/(2N − P) · Σ_{j=1}^{N} [((Ψ_j^mod − Ψ_j^exp)/σ_Ψj)² + ((Δ_j^mod − Δ_j^exp)/σ_Δj)²]. (1)

In Equation (1), N and P are the total number of data points and the number of fitted model parameters, respectively. The quantities Ψ_j and Δ_j are experimental ('exp') or obtained from the model ('mod') ellipsometric angles. The quantities σ_Ψj and σ_Δj are standard deviations for the measured Ψ and Δ azimuths. The value of χ² for the fits is established to be in the range from 2.4 to 8.5 (see Table 2). Based on these data, the extinction coefficients (κ) and the refractive indices (n) of the studied MPcs were determined.
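As a sanity check of Equation (1) as reconstructed above, here is a minimal Python sketch; the arrays and the parameter count are illustrative, not the measured data.

```python
import numpy as np

def reduced_mse(psi_exp, psi_mod, delta_exp, delta_mod,
                sigma_psi, sigma_delta, n_params):
    """Reduced mean squared error chi^2 of an ellipsometric fit:
    1/(2N - P) times the sum of sigma-weighted squared residuals
    in the Psi and Delta azimuths, as in Equation (1)."""
    psi_exp = np.asarray(psi_exp, dtype=float)
    n_points = psi_exp.size  # N
    res_psi = (np.asarray(psi_mod) - psi_exp) / np.asarray(sigma_psi)
    res_delta = (np.asarray(delta_mod) - np.asarray(delta_exp)) / np.asarray(sigma_delta)
    return float(np.sum(res_psi**2 + res_delta**2)) / (2 * n_points - n_params)

# Illustrative three-point example (angles in degrees).
print(reduced_mse(psi_exp=[20.1, 25.3, 30.2], psi_mod=[20.0, 25.5, 30.0],
                  delta_exp=[150.2, 140.8, 130.1], delta_mod=[150.0, 141.0, 130.4],
                  sigma_psi=[0.1, 0.1, 0.1], sigma_delta=[0.2, 0.2, 0.2],
                  n_params=2))  # 3.3125
```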
Figure 4 presents the extinction coefficient (κ) of the studied MPcs extracted from SE measurements. The shape of the κ spectra was parameterized using Gaussian oscillators, while the values of the interband transition energies were determined from SE and are summarized in Table 2. Figure 4 illustrates four bands formed under the influence of molecular orbitals in the aromatic 18π electron system and overlapping orbitals bonding to the central metal atom. It should be mentioned that phthalocyanines containing certain transition metals (NiPc, CuPc) have more complex electronic structures because the metal 3d-like orbital lies between the HOMO and LUMO of the Pc. As a result of that, the spectra of these compounds can contain extra features arising from charge transfer transitions [32]. The first band in the range of 250-300 nm is called the C band and is due to d-π* transitions, which imply a broader d-band. Next, there is the N band that arises from the presence of the d-band associated with the central metal atom and results in the d-π* transitions, which have been attributed to the charge transfer transition from the sPz mixing orbital to the electron system of the macrocyclic ring of the phthalocyanine. It should be noticed that the N band is more visible for NiPc compared to CuPc. In the range from 350 to 500 nm, we observe a direct electron transition from the π to π* orbitals. The observed intense transitions, called the Soret band (B band), give the edge of absorption for the studied phthalocyanines in α and β forms [32,33]. The last Q-band is assigned to the first π-π* transitions on the phthalocyanine macrocycle. This band is split into two bands (Davydov splitting). These transitions and shifts are characteristic for phthalocyanines in crystal form, depending on the sample before and after annealing [27,34-36]. From Figure 4, we can note the difference in the shape of the absorption spectra of NiPc and CuPc. This feature may depend on the size of the phthalocyanine cavity and the symmetry of the molecule, which determine the state energies and the oscillator strength values. Metallic copper has an ion size similar to the cavity size of phthalocyanine, so it is accommodated in the cavity without any contraction or expansion of the ring and represents the phthalocyanine ring in its equilibrium state with a cavity diameter of 3.87 Å [29]. As a result, its structure is planar and possesses approximately D4h symmetry. In contrast, nickel phthalocyanine has a smaller cavity with a diameter of 3.66 Å; hence, its structure exhibits a contraction of the ring. Therefore, the four isoindole groups are pulled in toward the nickel to accommodate the smaller metal ion. This gives a smaller cavity diameter but also has an effect on the C-N-C bridge bonds, which are significantly lengthened compared to other phthalocyanine structures by around 0.05 Å, and the angle of the C-N-C bond is reduced by around 4°. Thus, to accommodate this small metal ion, a considerable degree of ring deformation takes place [29].
From Figure 4 and Table 2, it can be seen that the metal ion plays a crucial role in determining the shape and positions of particular bands for studied MPcs thin films. It is caused by different degrees of interaction between the metal ion and the phthalocyanine π system, which can depend on the number of electrons in the outer shell of the central metal [37]. Moreover, the surface morphology also depends on the metal substitution, and it can have a significant influence on the observed extinction coefficient. Additionally, it was found that heat treatment of the surfaces of the produced samples influences the shape of the extinction coefficient. It is also noticeable that the value of the extinction coefficient increases for CuPc annealed at 473 K in the whole of the measured region, which can be a result of the reduction in grains deposited on the substrate by subjecting the sample to the heating process [33,38]. However, for NiPc, this increase is visible only in the range of 250-450 nm.
From the extinction coefficient data, the absorption coefficient was determined using the well-known equation α = 4πκ/λ and is shown in Figure 5. It can be seen that, in general, the values of the absorption coefficient are higher for the β form of the studied MPcs than for the α form. For CuPc in the range of 250-400 nm, this difference is almost twofold. Figure 6 shows the refractive indices (n) of the α and β forms of NiPc and CuPc thin layers, which are among the basic properties of a material used in the design of optoelectronic devices. One can see that an anomalous dispersion in the absorption region and a normal dispersion in the transparent area are visible. The values of the refractive indices for α-NiPc and β-NiPc are close to each other, especially in the region for λ > 800 nm. However, n is higher for the β form of NiPc compared to the α form. In the case of CuPc, greater differences between the values of the refractive index for the α and β forms are apparent. The value of n for the β form increased almost one and a half times compared with the α form.
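For concreteness, the κ-to-absorption-coefficient conversion used above can be written in a couple of lines; the κ value below is illustrative only, not a measured one.

```python
import numpy as np

def absorption_coefficient(kappa, wavelength_nm):
    """alpha = 4*pi*kappa / lambda, returned in cm^-1 for lambda in nm."""
    wavelength_cm = np.asarray(wavelength_nm, dtype=float) * 1e-7  # nm -> cm
    return 4.0 * np.pi * np.asarray(kappa, dtype=float) / wavelength_cm

print(f"{absorption_coefficient(0.5, 620.0):.3e} cm^-1")  # ~1.0e+05 cm^-1
```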
Electrochemical Investigation
Thin films of NiPc and CuPc in α and β forms, deposited onto FTO/glass electrodes, were characterized by means of cyclic voltammetry (CV) starting from the open circuit values up to 1.24 V/(Ag/AgCl). The reverse scan explored the cathodic branch down to −0.8 V/(Ag/AgCl). Figure 7 shows the superimposition of cyclic voltammograms of NiPc in α and β forms. In particular, Figure 7 indicates the first and the second cycle of the two films.
For both of them, the anodic branch of the first cycle does not evidence any redox activity of these films. The current increase at around +0.9 V/(Ag/AgCl) can be assigned to oxygen evolution. By inverting the scan direction, NiPc α and β forms show cathodic activity. The α-NiPc shows a reduction onset at −0.22 V/(Ag/AgCl), while the β form shows one at −0.053 V/(Ag/AgCl). The second scan (see Figure 7) shows a shoulder at around −0.16 V and a reduction peak at −0.21 V/(Ag/AgCl) for α-NiPc and β-NiPc, respectively. According to Ding et al. [39], the cathodic processes at −0.16 V/(Ag/AgCl) and −0.21 V/(Ag/AgCl) can be ascribed to the oxygen reduction, which is more evident in the case of NiPc in the β form. The anodic peaks of the second cycle located at 0.77 V/(Ag/AgCl) and 0.88 V/(Ag/AgCl) can be assigned to the interaction between the phthalocyanine macro-ring and the central metal [40]. It is known that the reduction onset is related to the electron affinity (EA) according to the following equation:

(2) EA = e · E_red^onset + 4.5 eV,
where E_red^onset is the reduction potential onset with respect to NHE [41]. NiPc films annealed at different temperatures show electron affinities (EA) of 4.485 eV and 4.652 eV for the α and β forms, respectively. Figure 8 shows the first and the second cycles of CV of CuPc films annealed differently. In the case of copper phthalocyanines, the anodic branch of the first cycle does not show any process connected to the film itself. Only an increase in the anodic current associated with the oxygen evolution reaction is observed, starting from 0.9 V/(Ag/AgCl). By inverting the potential scan, both α and β forms show cathodic activity. In particular, they show reduction onsets at −0.19 V/(Ag/AgCl) and −0.25 V/(Ag/AgCl), respectively. According to Equation (2), electron affinity values of 4.515 eV and 4.455 eV were calculated.
The second cycles (Figure 8) display anodic activities of CuPc films with a peak at 0.88 V/(Ag/AgCl) in the case of the α form and an anodic wave starting from 0.92 V/(Ag/AgCl) in the case of the β form. The reverse scan shows two reduction peaks at −0.56 V/(Ag/AgCl) and −0.29 V/(Ag/AgCl) for the α and β forms, respectively, which can be associated with the anodic ones, suggesting that quasi-reversible processes take place in CuPc films.
Generally, by comparing the electrochemical responses, it is evident that in both metal phthalocyanine (NiPc and CuPc) compounds, the annealing temperature affects the redox activity of the films. Higher EA values are calculated in the case of the β forms.
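As a cross-check of Equation (2), the sketch below reproduces the four EA values quoted above. Note that the +0.205 V Ag/AgCl-to-NHE conversion and the 4.5 eV absolute potential of NHE are our assumptions, chosen because they reproduce the reported numbers; they are not stated explicitly in the text.

```python
def electron_affinity_eV(onset_vs_agagcl_V, agagcl_vs_nhe_V=0.205):
    """Equation (2): EA = e * E_red_onset(vs NHE) + 4.5 eV.
    The onset is measured vs Ag/AgCl (3 M KCl); the 0.205 V offset to NHE
    and the 4.5 eV absolute NHE potential are assumed conventions."""
    return (onset_vs_agagcl_V + agagcl_vs_nhe_V) + 4.5

for label, onset in [("alpha-NiPc", -0.22), ("beta-NiPc", -0.053),
                     ("alpha-CuPc", -0.19), ("beta-CuPc", -0.25)]:
    print(f"{label}: EA = {electron_affinity_eV(onset):.3f} eV")
# alpha-NiPc: 4.485, beta-NiPc: 4.652, alpha-CuPc: 4.515, beta-CuPc: 4.455
```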
Metallophthalocyanines
Metallophthalocyanines (MPcs) are one of the most important metalloorganic materials used in physics and chemistry. The molecular structure of MPc is shown in Figure 9a. In the structure of phthalocyanines, we can distinguish four isoindole rings, which are connected to each other by azamethine bridges. Their structure resembles porphyrins, in which we can distinguish four pyrrole rings connected with each other by means of methine bridges (=C-). Two hydrogen atoms or a metal cation may be attached to the center of the phthalocyanine ligand. We can regard the coordination compound as consisting of an electron acceptor (a cation or a metal atom, with an energetically low-lying empty orbital) and an electron donor; the donor role is performed by the ligand, because it has unbound lone pairs of electrons and no energetically low-lying empty orbitals. Currently, phthalocyanines may contain non-transition and transition metals in their structure. We can distinguish about 70 elements forming complexes with phthalocyanine.
One of the most interesting properties of phthalocyanines is their polymorphism. We can observe this in both the crystal and the thin layer. Thin layers of phthalocyanines deposited on various substrates can range from amorphous to crystalline. Molecules are usually arranged in columnar piles in crystalline form as they grow on the substrate. The best-studied phthalocyanine in this respect is CuPc, for which nine different polymorphs are known [43]. However, the most interesting and permanent forms are α and β. Figure 9b shows the differences between these two forms of copper phthalocyanine.
Taking into account the differences between the polycrystalline α and β phases, special attention should be paid to the size of the crystallites. In the case of the α form, the crystallites are about 100 Å, while for the β form, they are much larger. The thickness of the obtained layer also affects the grain size, i.e., with increasing thickness, an increase in crystallites is observed [44], which are situated perpendicular to the substrate, and the layer has a poorly packed structure [45]. It is possible to obtain an amorphous phthalocyanine layer by the thermal evaporation method. However, for this purpose, the sublimation process is carried out at a pressure of 0.0013 Pa, and the temperature of the substrate should not exceed 100 K. It is possible to change the amorphous to the crystalline phase. In this case, in order to obtain the α form, it is necessary to anneal the layers at a temperature of 353 K, whereas to obtain the β form, it is necessary to heat the α form at temperatures above 473 K [46]. It should be noted that by changing the polymorphic form, the optical properties also change. The authors of [42] found that θ, the tilt angle of the molecules within the stacking columns, is ≈26.5° for α-NiPc and α-CuPc, and ≈46.5° for β-NiPc and β-CuPc.
Preparation of Thin Films
Thin layers of nickel and copper phthalocyanines were obtained by thermal evaporation in a vacuum (p = 2 × 10−6 Torr). Quartz and n-type silicon with (100) orientation were used as substrates. Each of them was properly cleaned with acetone and ethanol and finally rinsed in deionized water. Afterwards, the substrate and material were placed in a thermal evaporation chamber and initially heated to remove water vapor; then, they were brought to the evaporation temperature (393 K for CuPc and NiPc). The evaporation process was continued until the required layer thickness was achieved. The deposition rate was 0.1 nm/s. With this procedure, thin films of the α phase with a thickness of about 30-40 nm were obtained. The β form was formed as a result of eight hours of annealing of the α form at 473 K [21].
Experimental Methods
The Innova (Bruker) measuring system in tapping mode was used to take AFM images (selected area 2 × 2 µm²). Two-dimensional (2D) image-conversion software was used to estimate the average size of the aggregates and the RMS roughness parameter. Measurements were made at room temperature.
The Raman spectra were recorded using a Raman spectrometer (Senterra by Bruker Optik) in the spectral range of 500-1700 cm−1, where a 532 nm laser (10 mW) was used as the source of excitation [47]. The laser beam was tightly focused on the sample surface through a Leica 20× microscope objective. To prevent any damage to the sample, the excitation power was fixed at 5 mW.
We used spectroscopic ellipsometry (SE) in order to characterize the optical properties, such as the refractive index (n) and extinction coefficient (κ), of the studied films deposited on n-type Si substrates with (100) orientation. The SE measurements were made using a V-VASE ellipsometer (J.A. Woollam Co., Inc., Lincoln, NE, USA) in the range of 250-2000 nm for three angles of incidence (65°, 70°, 75°). The complex dielectric functions (ε = ε1 + iε2, where ε1 = n² − κ² and ε2 = 2nκ are the real and imaginary parts, n and κ = α′λ/(4π) are the refractive index and extinction coefficient, α′ is the absorption coefficient, and λ is the wavelength) for the investigated thin films were calculated directly from the ellipsometric data using the WVASE32 software. These optical constants were parameterized using a Gauss-shape dispersion relation in the absorption regime.
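A two-line sketch of the (n, κ) to (ε1, ε2) conversion quoted above, with purely illustrative input values:

```python
def dielectric_function(n, kappa):
    """eps1 = n^2 - kappa^2 (real part), eps2 = 2*n*kappa (imaginary part)."""
    return n**2 - kappa**2, 2.0 * n * kappa

eps1, eps2 = dielectric_function(1.8, 0.5)  # illustrative n and kappa
print(eps1, eps2)  # approximately 2.99 and 1.8
```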
The electrochemical properties of CuPc and NiPc in α and β forms were investigated by cyclic voltammetry (CV). Thin films of CuPc and NiPc in α and β forms were deposited on ITO/quartz substrates and used as working electrodes in a three-electrode cell. A graphite wire and Ag/AgCl (3.0 M KCl) were used as counter and reference electrodes, respectively. To avoid the dissolution of the metal phthalocyanine films in common organic solvents, the voltammograms were recorded in 0.1 M KCl (Sigma-Aldrich for molecular biology, ≥99.0%) aqueous solution. Cyclic voltammograms were acquired by using an Ivium Vertex Potentiostat/Galvanostat at a scan rate of 200 mVs−1, starting from the open circuit potential in a potential window between 1.24 and −0.8 V/(Ag/AgCl). All the measurements were carried out at room temperature in an aerated solution.
Conclusions
We present the influence of the heat treatment of nickel and copper phthalocyanines (NiPc and CuPc), which leads to the transition of the phthalocyanine from the α to the β form and a rearrangement of the molecular structure, on their structural and optical properties in terms of their use in OLED technology. We found that this change influenced the physical properties of the studied organic materials. We have shown that the physical properties of the studied NiPc and CuPc thin layers are closely related to the polymorphic phase of each of the phthalocyanines.
In our research, we observed a change in the intensity of Raman spectra for the samples before and after annealing at 473 K. It was also found that the heat treatment of the studied MPcs increased the values of the refractive index and the extinction coefficient as well as the absorption coefficient. Moreover, it was shown that the values of the extinction coefficient, the absorption coefficient and the refractive index are higher for α-NiPc than for α-CuPc. In contrast, from electrochemical study, we found that the electron affinity is the highest for β-NiPc.
We noticed that the obtained results show the stability of NiPc and CuPc thin layers after the thermal evaporation process and annealing.
Our results show that the produced layers are suitable for use in solar cells, sensors, displays and OLEDs.
|
2022-09-24T15:11:00.129Z
|
2022-09-21T00:00:00.000
|
{
"year": 2022,
"sha1": "6a9b7d567e3898c214c56e55413a72fdef8109b4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/19/11055/pdf?version=1663750252",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7a8adf07b792575bb333a9de9b19a66a5d5b1076",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
255439681
|
pes2o/s2orc
|
v3-fos-license
|
Dynamic Changes in Circulatory Cytokines and Chemokines Levels in Mild to Severe COVID-19 Patients
Immune dysregulation is a key feature of the coronavirus disease-2019 (COVID-19). However, disparities in responses across ethnic groups are underappreciated. This study aimed to determine the relationship between chemokines and cytokines and the severity of COVID-19. Multiplex magnetic bead-based Luminex-100 was used to assess chemokine and cytokine levels in COVID-19 patients at admission (day 1) and after 4 days. The mean age of the patients recruited was 54.3 years, with 19 (63.3%) males. COVID-19 patients had significantly lower lymphocyte, monocyte, hemoglobin and eosinophil levels than controls (p < 0.05). COVID-19 patients showed significantly higher neutrophil levels than controls (p < 0.05). The baseline levels of IL-2, IL-6, IL-8, IL-10, and IFN-α/γ were significantly increased in COVID-19 patients (p < 0.05). Chemokine levels (IP-10, MCP-1, MIG, and CCL-5) were significantly elevated in COVID-19 patients. IL-8, IP-10, and MIG levels were significantly higher in the patients with severe COVID-19 (p < 0.05). Individuals with mild COVID-19 showed significantly higher levels of IFN-α, IL-2, IL-6, and IL-8, whereas IL-10 levels were significantly lower (p < 0.05). TNF-α levels decreased significantly in individuals with severe COVID-19, whereas IL-6, IL-8, and MIG levels increased (p < 0.05). After 4 days, IFN-α, IL-2, IL-6, IL-8, IP-10, and MIG levels were significantly higher in patients with mild disease, whereas IL-6, MIG, and TNF-α levels were significantly higher in patients with severe disease (p < 0.05). Thus, we conclude that COVID-19 is characterized by IFN-α/γ, IL-6, IL-10, IP-10, MCP-1, MIG, and CCL5 dysregulation. IL-8, MIG, and IP-10 levels distinguish between moderate and severe COVID-19. Changes in IFN-α, IL-2, IL-6, IL-8, IP-10, and MIG levels can be used to monitor disease progression. Supplementary Information The online version contains supplementary material available at 10.1007/s12291-022-01108-x.
Introduction
With variants Alpha and Delta exhibiting higher transmissibility, the severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) has shown a superior ability to adapt to the human host during the pandemic. One of the most critical unknowns in 2022 is whether this development will continue [1]. The Omicron variant demonstrated that the virus adapted. As Delta continuously evolves, variants may become more transmissible, cause more or less severe disease, and/or acquire immune escape mechanisms. SARS-CoV-2 infection increases inflammation owing to the overactive immunological response of the host [2]. When pathogens stimulate human immune cells, they produce cytokines and chemokines or respond via cytokine and chemokine receptors [3,4]. These extracellularly secreted proteins promote leukocyte trafficking and the recruitment of other inflammatory factors, thereby regulating the nature of immune responses and controlling immune cell trafficking and the cellular arrangement of immune organs [5]. Which cytokines are produced in response to an immune insult determines whether an immune response develops and whether that response is cytotoxic, humoral, or cell-mediated. A cascade of reactions can be observed in response to cytokines, and many cytokines are frequently required to act in synergy in order to exhibit maximal function.
Selection of Study Population
In the present study, a total of 60 subjects were enrolled, of which 30 were confirmed (RT-PCR nasal swab positive) cases of COVID-19 admitted to the COVID Ward, Dr. Ram Manohar Lohia Institute of Medical Sciences, Lucknow, India, from December 2020 to May 2021, i.e., during the Alpha variant (B.1.1.7) period. Subsequently, patients were stratified into categories: mild (n = 11), with fever and respiratory symptoms with SpO2 > 95%, and severe (n = 19), with respiratory distress and SpO2 < 90%, requiring ventilator support and intensive care, as per ICMR guidelines [11]. Thirty age- and sex-matched non-COVID-19 volunteers were recruited as controls. Blood samples were collected at the time of admission (day 1), followed by day 4, irrespective of disease progression, improvement in clinical symptoms, laboratory findings, or arterial oxygen saturation. None of the individuals included in the present study had any other infection or an inflammatory disease. The Institutional Ethics Committee approved the research protocol (IEC113/20).
Sampling and Estimation of Cytokines Profiles
Peripheral blood samples were collected in plain vacutainers on days 1 and 4, and serum was separated from the blood and stored at −80 °C.
Data Analysis
Variables with a normal distribution were reported as mean ± standard deviation and compared using the t-test. The chi-square test was used to compare categorical variables represented as percentages. All tests for statistical significance were two-tailed, and a p < 0.05 was considered statistically significant. The Pearson correlation coefficient was used to establish the relationship between the cytokines and chemokines. All statistical analyses were conducted using version 21.0 of the SPSS program (SPSS Inc., Chicago, IL, USA).
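A minimal sketch of the comparisons described above using SciPy; the values are synthetic placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Synthetic serum levels (pg/ml) for illustration only.
il6_patients = np.array([45.2, 80.1, 120.5, 60.3, 95.0])
il6_controls = np.array([5.1, 7.8, 4.3, 6.6, 5.9])
il8_patients = np.array([12.0, 30.5, 48.2, 18.9, 36.7])

t, p = stats.ttest_ind(il6_patients, il6_controls)    # two-tailed t-test
r, p_r = stats.pearsonr(il6_patients, il8_patients)   # cytokine-cytokine correlation
print(f"t = {t:.2f}, p = {p:.4f}; Pearson r = {r:.2f} (p = {p_r:.4f})")
```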
Haematological Parameters Associated with the Severity of COVID-19
Baseline characteristics of the study population are shown in Table 1. The mean ± SD age in the patient and control groups was 54.31 ± 14.66 and 48.85 ± 12.40 years, respectively, with no significant difference (p = 0.18), indicating an adequate match. The lymphocyte, monocyte and eosinophil counts and the hemoglobin levels were significantly lower in COVID-19 patients than in controls (p < 0.05). In contrast, neutrophil levels were significantly higher in COVID-19 patients than in controls (p < 0.001).
Discussion
Most studies investigating the role of cytokines and chemokines in the pathogenesis of COVID-19 have revealed a broad array of elevated inflammatory mediators during the cytokine storm, without specifying the time points of infection. Therefore, it is crucial to analyze temporal changes in cytochemokines to capture the treatment window when designing drugs that target critical immune molecules. COVID-19 patients exhibit multiple haematological and immunological manifestations. We hypothesize that infection with SARS-CoV-2 induces an aberrant cytokine and chemokine response that causes disease progression. Immune activation upregulates the expression of virus receptors on T cells, which further contributes to T cell death, and upregulates cytokines via the cytokine signaling pathway; therefore, all inflammatory markers were increased in severe patients compared to mild patients and healthy controls.
The populations of lymphocytes, monocytes and eosinophils were significantly decreased. In contrast, neutrophil levels were higher in COVID-19 patients than in controls. The results of this study, which are consistent with those of previous studies, revealed low leukocyte and high neutrophil counts in COVID-19-positive patients. Thus, it can be said that leukopenia and neutrophilia may be indicative of COVID-19 disease [12,13]. The effects of viral pneumonia on the immune system include decreased leukocyte and increased neutrophil counts.
Our data revealed that the levels of cytokines, including IFN-α and IFN-γ, IL-2, IL-6, and IL-8, were significantly increased in COVID-19 patients compared to controls. Furthermore, IL-8 and MIG levels were significantly increased, whereas IP-10 levels were decreased, in severe COVID-19 patients. A possible mechanism is that lymphocytes express the ACE-2 receptor, predisposing them to be direct virus target sites. The virus attaches to and attacks lymphocytes, infecting and destroying them [13]. Second, the virus may destroy lymphatic organs such as the thymus and spleen, predisposing to a decrease in lymphocyte production. Third, cytokines such as tumor necrosis factor-alpha (TNF-α) and interleukin (IL)-6 may become disorganized, leading to lymphocyte death. The proliferation of lymphocytes may be reduced in COVID-19 patients who are critically ill due to elevated metabolic parameters, such as lactic acid, which produces hyperlactic acidemia [13]. COVID-19 treatment may exacerbate lymphopenia. Cytokines and their receptors are essential for the pathophysiology of viral infections [14]. In individuals with sepsis, the serum concentrations of proinflammatory cytokines are increased. Some researchers have also hypothesized that cytokine storms contribute to COVID-19 disease [15]. Patients with severe COVID-19 have higher serum levels of proinflammatory cytokines (TNF-α, IL-8, and IL-6) than those with mild disease, similar to severe acute respiratory syndrome (SARS) and the Middle East respiratory syndrome (MERS) [12,16]. In addition, Th1 (T helper 1) cells, natural killer (NK) cells, and CD8+ T cells are the primary sources of IFN-α [13]. In COVID-19 patients, the elevation in IFN-α release implies a Th1 cell response. One of the immune system's tactics for eradicating viral infections is the development of a Th1 response [17]. A robust IFN-α response can improve the prognosis of COVID-19 patients. By negatively regulating Th2 cytokine production, IFN-α alters the Th1/Th2 ratio away from the Th2 response. Interferon and interferon-inducible chemokines are also involved in the host antiviral response by promoting viral clearance before activating the adaptive immune system [18]. IFN-α-inducible protein 10 (IP-10 or CXCL10) is a key antiviral factor, notably in respiratory tract infections [19]. In numerous viral infections, plasma and bronchial alveolar lavage fluid (BALF) levels of CXCL10 have been reported to increase and are linked to illness severity [19,20]. CXCL8, another essential chemokine, acts as a neutrophil trafficking mediator. This chemokine plays a significant role in inflammatory processes, particularly in viral infections. The concentration of CXCL8 in nasal fluid has been reported to correlate with the severity of acute respiratory infections [21].
In addition, CCL5 and CXCL9 (MIG) are involved in the inflammatory state of chronic hepatitis C-infected patients [22]. Mice lacking CCL3 display delayed viral clearance when infected with influenza virus or murine cytomegalovirus [23,24]. The expression of CCL3, CCL4, and CCL5 in HIV-infected patients is linked to the Th1 immune response [25]. CXCR4 and CCR5 are co-receptors for HIV entry, although their ligands, CXCL12 and CCL5, suppress HIV infection [26]. Chemokines are crucial inflammatory mediators in the immune response to eliminate pathogens. Their overproduction is a primary cause of hyperinflammation. In the recent COVID-19 outbreak, chemokines may be the direct cause of acute respiratory illness syndrome, a critical consequence contributing to the mortality of almost 40% of severe patients. All of these studies reported IL-6 levels. Remarkably, all but one study revealed significantly higher IL-6 levels in patients with severe COVID-19 than in those without. The level of circulating IL-6 was also much higher in severely ill patients with SARS (517 ± 769 pg/ml) than in less severely ill patients (163 ± 796 pg/ml) [24]. Moreover, two studies [24,25] that assessed the level of circulating cytokines in severe MERS patients revealed that the level of IL-6 was elevated in severe MERS patients compared to mild groups. In five COVID-19 trials [23, 26-28] and one SARS study [29], the circulating inflammatory chemokine IL-8 levels were recorded. Four of the five COVID-19 investigations [26-28, 30] indicated a significant increase in IL-8 levels in severe COVID-19 patients compared to those in mild groups. In contrast, one study [29] on SARS patients revealed a significant decrease in the level of IL-8 in severe SARS patients (143 ± 41 pg/ml) compared to that in the non-severe group (165 ± 51 pg/ml). Parallel to the above conclusion, a reduction in chemokines and cytokines (IFN-α, IL-2, IL-6, IL-8, IP-10, and MIG) was observed in our investigation. This study was limited by the small sample size, which may lack the statistical power to detect minute variations in mean concentrations of cytokines and chemokines. The subgroups were defined according to age, sex, and comorbidities.
In the present study, we observed changes in cytokine levels between the mild and severe groups from day 1 (admission) to day 4. The dynamic changes were seen both in inflammatory cytokines (IL-6 (Δ = 81.1), IL-8 (Δ = 17.23), and IL-2 (Δ = 2.29)) and chemokines (IP-10 (Δ = 208.67), MCP-1 (Δ = 60.27), and MIG (Δ = 38.7)). All of these proteins are involved in the innate immune response. The interaction between SARS-CoV-2 and the host immune system can cause hyperinflammation in critical cases at the beginning (day 1) of COVID-19. Our results show significant changes in the levels (day 1 to day 4) of inflammatory cytokines and chemokines (IL-6 (Δ = 94.09), IL-8 (Δ = 53.28), IP-10 (Δ = 29.96) and MIG (Δ = 57.23)) in the severe and ICU patient groups. On the one hand, it is thought that elevated serum levels of IL-8 and IL-6 in COVID-19 patients act as anti-inflammatory or immunosuppressive cytokines to prevent hyperinflammation and are induced by the rapid accumulation of proinflammatory cytokines. On the other hand, high levels of IL-8 and IL-6 in severe COVID-19 patients can be a signal of an overactive immune response, which may play a detrimental pathological role in COVID-19 severity.
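The Δ values quoted above are simple day-4 minus day-1 differences; the sketch below illustrates the computation with placeholder baselines chosen only so that the differences echo the reported deltas.

```python
# Placeholder day-1 and day-4 levels (pg/ml); only the differences below
# are meant to echo the deltas quoted in the text.
day1 = {"IL-6": 20.0, "IL-8": 12.0, "IP-10": 150.0}
day4 = {"IL-6": 101.1, "IL-8": 29.23, "IP-10": 358.67}

deltas = {marker: round(day4[marker] - day1[marker], 2) for marker in day1}
print(deltas)  # {'IL-6': 81.1, 'IL-8': 17.23, 'IP-10': 208.67}
```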
The present study focused on the direct measurement of cytokines and chemokines in peripheral blood. However, in the context of the rapidly changing cytokine environment after viral infection, we do not have a well-rounded understanding of the cause of this vigorous inflammatory response. However, this study showed that during the occurrence of pathogenic SARS-CoV-2 infection, a violent cytokine storm causing pathological immune damage may be a real "killer" in critically ill patients. However, the current study is limited and detailed molecular biology principles and broader epidemiology are lacking. Therefore, future studies should focus on identifying specific inflammatory response signalling pathways in patients and animals infected with SARS-CoV-2.
Conclusions
This study revealed that cytochemokines play an essential role in immunity and immunopathology during SARS-CoV-2 infection. Hence, an increase in the circulating levels of IL-8, IP-10, and MIG distinguishes patients with mild and severe COVID-19 infections. Changes in TNF-α, IFN-α, IL-6, IL-8, and MIG levels over time suggest the course of the disease. Therefore, assessment of cytochemokines may indicate disease progression and facilitate the development of more decisive treatment strategies.
|
2023-01-06T05:06:13.696Z
|
2023-01-03T00:00:00.000
|
{
"year": 2023,
"sha1": "12ea6b74d68c32a5fe71f0c0e1b055815a65a674",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12291-022-01108-x.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "691dc9a26ed10803d08b61a0fc0b13c55f49cd6a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
251467810
|
pes2o/s2orc
|
v3-fos-license
|
A dichotomy on the self-similarity of graph-directed attractors
This paper seeks conditions that ensure that the attractor of a graph directed iterated function system (GD-IFS) cannot be realised as the attractor of a standard iterated function system (IFS). For a strongly connected directed graph, it is known that, if all directed circuits go through a vertex, then for any GD-IFS of similarities on $\mathbb{R}$ based on the graph and satisfying the convex open set condition (COSC), its attractor associated with this vertex is also the attractor of a (COSC) standard IFS. In this paper we show the following complementary result. If a directed circuit does not go through a vertex, then there exists a GD-IFS based on the graph such that the attractor associated with this vertex is not the attractor of any standard IFS of similarities. Indeed, we give algebraic conditions for such GD-IFS attractors not to be attractors of standard IFSs, and thus show that `almost-all' COSC GD-IFSs based on the graph have attractors associated with this vertex that are not the attractors of any COSC standard IFS.
Introduction
An iterated function system (IFS) {S_i}_i is a finite set of distinct contracting maps on a complete metric space which we will assume here to be R^n [11]. The attractor of the IFS is the unique nonempty compact set K ⊂ R^n such that

(1.1) K = ⋃_{i=1}^{m} S_i(K).

If these maps are all contracting similarities, we say that this IFS is a standard IFS, and call K a self-similar set. A contracting similarity S(x) on R can be written as S(x) = ρx + b, where ρ ∈ (−1, 1) \ {0} is the contraction ratio. Separation conditions for IFSs are often required to ensure 'not too much overlapping' in the union (1.1). A frequent condition is the open set condition (OSC), meaning that there exists a nonempty open set U ⊆ R^n such that ⋃_{i=1}^{m} S_i(U) ⊆ U with this union disjoint. We say that the IFS satisfies the convex open set condition (COSC) if U can be chosen to be convex, or we can (equivalently) take U = int(conv K), where 'conv' denotes the convex hull and 'int' denotes the interior of a set. We say that the IFS satisfies the convex strong separation condition (CSSC) if we can take U = int(conv K) such that S_i(conv K) ∩ S_j(conv K) = ∅ for any i ≠ j.
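To make the fixed-point equation (1.1) concrete, the following sketch (ours, not part of the paper) iterates the map K ↦ ⋃_i S_i(K) on a finite seed set for the middle-third Cantor IFS S_1(x) = x/3, S_2(x) = x/3 + 2/3:

```python
import numpy as np

# Middle-third Cantor set: S1(x) = x/3, S2(x) = x/3 + 2/3.
maps = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]

def hutchinson(points):
    """One application of K -> S1(K) union S2(K) on a finite point set."""
    return np.unique(np.concatenate([S(points) for S in maps]))

points = np.array([0.0, 1.0])  # any nonempty compact seed converges to K
for _ in range(8):
    points = hutchinson(points)
print(len(points))  # 512 points approximating the Cantor set
```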
We also consider graph-directed IFSs [12] based on a given digraph. A directed graph (or a digraph for brevity), G := (V, E), consists of a finite set of vertices V and a finite set of directed edges E (for brevity we often omit 'directed'), with loops and multiple edges allowed. Let E_uv ⊂ E be the set of edges from the initial vertex u to the terminal vertex v. A graph-directed iterated function system (GD-IFS) on R^n consists of a finite collection of contracting similarities {S_e : e ∈ E_uv} from R^n_v to R^n_u for u, v ∈ V, where R^n_u is a copy of R^n associated with vertex u. We write ρ_e ∈ (−1, 1) \ {0} for the contraction ratio of the similarity S_e in R. We always require that the digraph satisfies d_u ≥ 1 for every u ∈ V ([12], [4, Section 4.3]), where d_u is the out-degree of u (the number of directed edges leaving u). For a GD-IFS (V, E, (S_e)_{e∈E}) based on such a digraph, there exists a unique list of non-empty compact sets (F_u ⊂ R^n_u)_{u∈V} such that, for all u ∈ V,

(1.2) F_u = ⋃_{v∈V} ⋃_{e∈E_uv} S_e(F_v),

see [12] or [4, Theorem 4.3.5 on p.128]. We call the above (F_u)_{u∈V} the (list of) attractors of the GD-IFS, and each F_u is called a GD-attractor. A (finite) directed path e_1 e_2 · · · e_k is a consecutive sequence of directed edges e_i ∈ E (i = 1, · · · , k) for which the terminal vertex of e_i is the initial vertex of e_{i+1} (i = 1, · · · , k − 1). For a directed path e = e_1 e_2 · · · e_k with edges e_i (1 ≤ i ≤ k), the corresponding contractive mapping is given by S_e = S_{e_1} ∘ S_{e_2} ∘ · · · ∘ S_{e_k}, and its contraction ratio along e is ρ_e = ρ_{e_1} ρ_{e_2} · · · ρ_{e_k}. The GD-IFS satisfies the open set condition (OSC) if there exist nonempty open sets (U_u)_{u∈V} such that, for each u ∈ V,

(1.3) ⋃_{v∈V} ⋃_{e∈E_uv} S_e(U_v) ⊆ U_u, with this union disjoint.

The convex open set condition (COSC) means that these (U_u)_{u∈V} can all be chosen to be convex. In the one-dimensional case, one can take

(1.4) (U_u)_{u∈V} = (int(conv F_u))_{u∈V},

since conv F_u ⊂ U_u for each u ∈ V (see Proposition 5.2 in the Appendix). We say that a GD-IFS satisfies the CSSC (convex strong separation condition) if the union

(1.5) ⋃_{v∈V} ⋃_{e∈E_uv} S_e(conv F_v)

(which is contained in conv F_u) is disjoint for each u ∈ V. GD-attractors and GD-IFSs appear naturally in dynamical systems and fractal geometry. For example, certain complex dynamical systems can be regarded as conformal GD-IFSs using a Markov partition, see [7, Section 5.5]. For another occurrence, the orthogonal projection of certain self-similar sets may be GD-attractors [8, Theorem 1.1]. We will work with COSC (including CSSC) GD-IFSs defined on R based on digraphs with d_u ≥ 2 for every vertex u in V throughout this paper.
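Equation (1.2) can likewise be iterated on finite seed sets; the two-vertex digraph and contraction ratios in the sketch below are invented for illustration (and each vertex has out-degree 2, matching the standing assumption d_u ≥ 2):

```python
import numpy as np

# A toy two-vertex GD-IFS on R: each key (u, v) lists the similarities
# S_e, e in E_uv, mapping F_v into F_u, as in equation (1.2).
edges = {
    ("u", "u"): [lambda x: 0.3 * x],
    ("u", "v"): [lambda x: 0.25 * x + 0.75],
    ("v", "u"): [lambda x: 0.2 * x],
    ("v", "v"): [lambda x: 0.4 * x + 0.6],
}
F = {"u": np.array([0.0, 1.0]), "v": np.array([0.0, 1.0])}  # seed sets

for _ in range(10):  # iterate F_u <- union over v, e in E_uv of S_e(F_v)
    F = {u: np.unique(np.concatenate(
            [S(F[v]) for (w, v), Ss in edges.items() if w == u for S in Ss]))
         for u in F}
print(len(F["u"]), len(F["v"]))  # finite approximations of the attractors
```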
We say that a digraph is strongly connected if, for all vertices u, v ∈ V, there is a directed path from u to v (we allow u = v). For brevity, we will assume throughout that a strongly connected digraph always satisfies d_u ≥ 2 for all u ∈ V. This is because, if d_v = 1 (v ∈ V), then F_v is just a scaled copy of another GD-attractor F_w (w ∈ V \ {v}). Then F_v is self-similar (with the COSC) if and only if F_w is self-similar (with the COSC), since if K is the attractor of the IFS {ρ_i x + b_i}_i, then ηK + l is the attractor of the IFS {ρ_i x + ηb_i + (1 − ρ_i)l}_i (η, l ∈ R). We can perform a reduction as in [5, p. 607] on any strongly connected digraph and associated GD-IFS, to obtain a subgraph and a new GD-IFS with d_u ≥ 2 for all u ∈ V such that each attractor is similar to one of the original ones.
A natural question arises: when does a GD-IFS of similarity mappings have attractors which cannot be realised as attractors of any standard IFS? In particular, we seek algebraic conditions involving the parameters underlying the GD-IFS similarities that ensure this is so. Some cases were examined in an earlier paper [3], which showed that, for a class of strongly connected digraphs, it is possible to construct CSSC GD-IFSs on R with attractors that cannot be obtained from a standard IFS, with or without the CSSC. Another paper [2] uses a different argument to construct CSSC GD-IFSs on R with attractors that cannot be obtained from a standard IFS. This paper further investigates this issue for all strongly connected digraphs (and even wider classes of digraphs).
For a strongly connected digraph G, it is known from [2, Lemma 5.1] (see also Theorem 5.4 in the Appendix) that, if all directed circuits in G go through a vertex u ∈ V, then for any (COSC) GD-IFS based on G, its attractor F_u is also the attractor of a (COSC) standard IFS. By way of contrast, we will show that if, for some vertex u ∈ V, not all directed circuits in G go through u, then it is possible to define GD-IFSs of similarities satisfying the COSC so that the corresponding attractor F_u is not the attractor of a standard IFS of similarities satisfying the COSC (Lemma 4.4). Moreover, this is true for 'almost all' choices of similarities in a natural sense (Theorem 4.8). The proof relies on identifying a characteristic of the 'gap length set', where we use a shorter, systematic algebraic argument, 'ratio analysis', rather than the categorising method of [3, Section 6], which only works for certain classes of digraphs. In fact we can relax the strong connectivity of G in this construction (Lemma 4.1), and the 'ratio analysis' method may have further applications to other related problems. We finally apply [2, Theorem 1.4] (see also Theorem 5.6 in the Appendix) to show immediately that there exist GD-IFSs of similarities with the CSSC such that the corresponding attractor F_u is not the attractor of a standard IFS.
GD-IFSs considered in this paper are inhomogeneous, by which we mean GD-IFSs of contracting similarities in which not all contraction ratios are equal. We will require the COSC condition, which is easy to verify from the parameters of a GD-IFS by solving simultaneous linear inequalities. There are difficulties in relaxing this condition to the OSC (even in R), where many problems remain open even for standard IFSs, such as the affine-embedding problem [10, Conjecture 1.1] or the inverse fractal problem (determining the generating IFSs of a standard IFS attractor) [9]. The question considered here can be viewed as an inverse-type problem, where we show certain GD-attractors have no generating standard IFS (with or without the COSC). Previous results on inhomogeneous self-similar sets also require this condition [9, Section 4] or stronger conditions such as the SSC and restrictions on Hausdorff dimension [1,6,10]. Thus one might expect similar difficulties for inhomogeneous GD-attractors. This paper is organised as follows. In Section 2, we first introduce and obtain an expression for the gap length set of COSC GD-attractors, and we then introduce our algebraic method, 'ratio analysis', and derive a key lemma (Lemma 2.9) relating the ratio sets of GD-IFSs and standard IFSs with the COSC. In Section 3 we introduce natural vector sets and construct GD-IFSs satisfying the COSC or the CSSC. In Section 4 we use the GD-IFSs constructed in Section 3 to show that the corresponding GD-attractors are not the attractors of COSC standard IFSs, using both the 'ratio analysis' lemmas and the tool developed in [2]. We provide some examples to illustrate our assertions.
Gap length sets and ratio analysis
2.1. Gap length sets. Let K ⊂ R be a nonempty compact set which is not an interval or a singleton, and let
conv K \ K = ⋃_i U_i
be the unique decomposition into the disjoint non-empty bounded complementary intervals {U_i = (a_i, b_i)}_i (see for example [13, Chapter 2, Theorem 9]), which will be called the gaps of K, numbered by decreasing length (and left to right for intervals of equal length).
Definition 2.1 (Gap length set). Define the gap length set of a compact set K ⊂ R to be
GL(K) := {b_i − a_i : i = 1, 2, ···},
the set of lengths of the gaps of K. For each vertex u ∈ V, we arrange the edges leaving u, denoted by e_u^{(k)} (k = 1, ···, d_u), in the following way. Denote by ω(e) the terminal vertex of an edge e ∈ E; then the interiors of the intervals S_e(conv F_{ω(e)}) are disjoint due to the COSC. We rank these intervals in order from left to right, and denote the kth interval by S_u^{(k)}(conv F_{ω(e_u^{(k)})}), where S_u^{(k)} := S_{e_u^{(k)}}, with the edges (and also the GD-IFS {S_e}_{e∈E}) arranged according to this order.
Definition 2.2 (Basic gaps). With the above notation, for each u ∈ V and 1 ≤ k ≤ d_u − 1 (d_u ≥ 2), let λ_u^{(k)} be the length of the complementary open interval between S_u^{(k)}(conv F_{ω(e_u^{(k)})}) and S_u^{(k+1)}(conv F_{ω(e_u^{(k+1)})}) (possibly λ_u^{(k)} = 0). All such complementary intervals (possibly empty) are called the basic gaps of this ordered COSC GD-IFS. Let
(2.3) Λ_u := {λ_u^{(k)} : 1 ≤ k ≤ d_u − 1, λ_u^{(k)} > 0}
be the set of strictly positive lengths of the basic gaps associated with vertex u ∈ V, see Figure 1. As standard IFSs are one-vertex GD-IFSs, this definition also applies to standard IFSs, in which case we omit the single vertex.
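Computationally, the gaps of a level-m approximation (a finite union of closed intervals) can be read off directly. A minimal sketch (illustration only), using the ordering of Definition 2.1:

```python
# Minimal sketch (illustration only): given a level-m approximation of a
# compact K as a finite union of closed intervals, list the bounded
# complementary gaps in conv(K), numbered by decreasing length as in
# Definition 2.1.

def gap_lengths(intervals):
    """intervals: list of (left, right) pairs covering K at some level."""
    intervals = sorted(intervals)               # left to right
    gaps = []
    for (a1, b1), (a2, b2) in zip(intervals, intervals[1:]):
        if a2 > b1:                             # a gap of positive length
            gaps.append(a2 - b1)
    return sorted(gaps, reverse=True)           # decreasing length

# Level-2 approximation of the middle-third Cantor set:
level2 = [(0, 1/9), (2/9, 1/3), (2/3, 7/9), (8/9, 1)]
print(gap_lengths(level2))                      # [0.333..., 0.111..., 0.111...]
```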
The GD-attractors (F_u)_{u∈V} of any GD-IFS can be determined in the following way, see [12, Equation (15)]. For any list of compact sets (I_u)_{u∈V}, we define
(2.4) I_u^m := ⋃_{e∈E_u^m} S_e(I_{ω(e)}), m ≥ 1,
where E_u^m denotes the set of paths of length m leaving u and ω(e) denotes the terminal vertex of path e. Note that if
(2.5) I_u^1 ⊂ I_u for each u ∈ V,
then the sequence I_u^m decreases in m, in the sense that I_u^{m+1} ⊆ I_u^m for every m ≥ 1, since
(2.6) I_u^{m+1} = ⋃_{e∈E_u^m} S_e(I_{ω(e)}^1) ⊆ ⋃_{e∈E_u^m} S_e(I_{ω(e)}) = I_u^m.
From this, it is known that for each u ∈ V,
(2.8) F_u = ⋂_{m≥1} I_u^m,
provided that (2.5) is satisfied. In particular, taking I_u = conv F_u for each u ∈ V, we see that (2.5) is satisfied, since by (1.2)
(2.9) F_u = ⋃_{v∈V} ⋃_{e∈E_uv} S_e(F_v) ⊆ ⋃_{v∈V} ⋃_{e∈E_uv} S_e(conv F_v) = I_u^1 ⊆ conv F_u = I_u.
In this case, (2.8) holds true. Moreover, by taking convex hulls in (2.9), we know that conv F_u ⊆ conv I_u^1 ⊆ conv conv F_u = conv F_u, which gives that
(2.10) conv I_u^1 = conv F_u = I_u,
meaning that the two endpoints of the interval conv I_u^1 coincide with those of the interval conv F_u = I_u. This fact will be used shortly.
Throughout this paper, the product AB of sets A, B ⊂ R is defined to be AB = {ab : a ∈ A, b ∈ B}; when one factor is a constant, we regard the constant as a singleton set in R. If A is empty then AB is also empty.
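In code, this pointwise product of finite sets is simply the following (illustration only):

```python
# The product AB of two finite subsets of R, as used throughout:
def set_product(A, B):
    return {a * b for a in A for b in B}

print(set_product({2.0, 3.0}, {0.5}))  # a constant is treated as a singleton
```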
The following proposition gives a characterization of the gap length set of an attractor F_u of any COSC GD-IFS, which slightly extends a result in [3, below equation (5.2) in Section 5] to the case when a GD-IFS satisfies the COSC.

Proposition 2.3. Let (V, E) be a digraph with d_u ≥ 2 for all u ∈ V, and let F_u be a GD-attractor of a GD-IFS in R with the COSC based on (V, E). With the above notation, the gap length set GL(F_u) of the attractor F_u is given by
(2.11) GL(F_u) = Λ_u ∪ ⋃_{m≥1} ⋃_{v∈V} Λ_v {|ρ_e| : e is a directed path from u to v with length m}.
When there is no directed path from u to v, the set {|ρ e |} is understood to be empty.
Proof. Suppose first that GL(F_u) = ∅, so that F_u is an interval. For every m ≥ 1 we have F_u = ⋃_{v∈V} ⋃_{e∈E_uv^m} S_e(F_v), where E_uv^m is the collection of paths from vertex u to vertex v with length m. From this and using the COSC, we see that for every v ∈ V and every directed path e of length m from u to v, the set S_e(F_v) can have no gaps interior to S_e(conv F_v), showing that Λ_v = ∅ for all v ∈ V to which a directed path from u exists. Thus (2.11) is trivial in this case.
In the sequel, we assume that GL(F_u) ≠ ∅. Set I_u := conv F_u for each u ∈ V, so that (2.8) holds true by virtue of (2.9). The gaps of F_u are then given by
(2.12) conv F_u \ F_u = ⋃_{m≥0} (I_u^m \ I_u^{m+1}), where I_u^0 := I_u.
The level-0 part I_u \ I_u^1 consists of the basic gaps G_u^{(r)}, 1 ≤ r ≤ d_u − 1, of F_u, whose positive lengths form the set Λ_u, using (2.10) and the property that the two intervals I_u and I_u^1 have the same endpoints (see Figure 1). On the other hand, for any m ≥ 1, due to the COSC, the interiors of the level-m intervals (S_e I_{ω(e)})_{e∈E_u^m} are disjoint. We know by (2.6) that
(2.14) I_u^m \ I_u^{m+1} = ⋃_{e∈E_u^m} S_e(I_{ω(e)} \ I_{ω(e)}^1) (using (2.13)).
The above union consists of the disjoint complementary open intervals S_e(G_{ω(e)}^{(r)}), whose lengths are given by |ρ_e| · λ_{ω(e)}^{(r)}; these form the gap lengths at the mth level for any m ≥ 1. Taking the union over m gives the double union on the right-hand side of (2.11), and so (2.11) follows from (2.12) and the definition of GL(F_u).
2.2. Ratio analysis. We will use "ratio analysis" to analyse sets Θ of positive real numbers in (0, ∞), in terms of strictly decreasing geometric sequences {θ r^k}_{k=0}^∞ that are contained in Θ.

Definition 2.4 (Ratio set). For Θ ⊂ (0, ∞) and θ ∈ Θ, define the ratio set of Θ at θ by
(2.15) R_Θ(θ) := {r ∈ (0, 1) : θ r^k ∈ Θ for all k ∈ Z_+}.

This concept arises quite naturally, as the characteristic set GL(F_u) contains many geometric sequences. The following definition will be used in studying R_{GL(F_u)}(θ) later on.
Definition 2.5 (Product sets). Let A = {a_1, ···, a_n} ⊂ (0, ∞) be a finite set. Define
A_{Z_+}^* := {∏_{i=1}^n a_i^{m_i} : (m_i)_i ∈ (Z_+^n)^*} (resp. A_{Q_+}^*, A_Q^*),
where (Z_+^n)^* (resp. (Q_+^n)^*, (Q^n)^*) denotes the set of non-zero vectors whose entries are nonnegative integers (resp. nonnegative rationals, rationals). Let A_{Z_+} = {1} ∪ A_{Z_+}^*, that is, the union of all products ∏_{i=1}^n a_i^{m_i} where (m_i)_i are nonnegative integer vectors (including the zero vector). Similarly, A_Q = {1} ∪ A_Q^* and A_{Q_+} = {1} ∪ A_{Q_+}^*. We will analyse GL(F_u), given by (2.11), with the following lemma.
Condition (ii) in Lemma 2.6 means that the sets {λ_j A_j}_{j=1}^m are disjoint.

Proof. (i) Let θ ∈ Θ and assume that R_Θ(θ) ≠ ∅. Let r ∈ R_Θ(θ). By (2.15) we have {θ r^k}_{k=0}^∞ ⊂ Θ, so by the pigeonhole principle we can find some λ_l such that {θ r^k}_{k=0}^∞ ∩ λ_l A_l is infinite. Among the exponent vectors of this infinite subsequence we may choose two that are comparable under the partial order defined by inequality of all coordinates (see Proposition 5.1 in the Appendix); that is, there exist k_1 < k_2 with θ r^{k_1} = λ_l ∏_{i=1}^n a_i^{p_i} and θ r^{k_2} = λ_l ∏_{i=1}^n a_i^{q_i}, where (p_i)_i ≤ (q_i)_i coordinatewise. Then r^{k_2 − k_1} = ∏_{i=1}^n a_i^{q_i − p_i} with (q_i − p_i)_i ∈ (Z_+^n)^* (non-zero since r < 1), and it follows that r ∈ A_{Q_+}^* by definition. Therefore R_Θ(θ) ⊆ A_{Q_+}^* for all θ ∈ Θ, thus proving assertion (i).
(ii) For m ≥ 2, suppose that there exist distinct p, q ∈ {1, ···, m} such that θ r^k ∈ λ_p A_p and θ r^j ∈ λ_q A_q for some k, j ∈ Z_+. Writing these two relations out leads to a contradiction with our assumption. Thus, there exists a unique integer l ∈ {1, ···, m} such that {θ r^k}_{k=0}^∞ ⊂ λ_l A_l.
It remains to show (2.16). Indeed, if (2.16) were not true, then θ r^k ∈ λ_t A_t for some integer k ≥ 0 and some t ≠ l, leading to a contradiction as before. The assertion (2.16) follows.
The following corollary will be used to describe a certain 'homogeneity' property of (the gap length sets of) attractors of COSC standard IFSs.
Corollary 2.7. Let X ⊂ (0, 1) and Λ ⊂ (0, ∞) be two finite sets. Then, for every θ ∈ ΛX_{Z_+},
X_{Z_+}^* ⊆ R_{ΛX_{Z_+}}(θ) ⊆ X_{Q_+}^*.

Proof. For any r ∈ X_{Z_+}^* and k ∈ Z_+, we have r^k ∈ X_{Z_+}, and so θ r^k ∈ ΛX_{Z_+} X_{Z_+} ⊆ ΛX_{Z_+}, thus showing that r ∈ R_{ΛX_{Z_+}}(θ) by definition (2.15) with Θ = ΛX_{Z_+}; the first inclusion follows. The second inclusion follows by taking A = X, λ_j ∈ Λ and each A_j = X_{Z_+} in Lemma 2.6(i) (so that Θ = ΛX_{Z_+}).
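A small numerical sketch of ratio analysis (illustration only; the bounded exponent search and the tolerance are ad hoc choices): testing membership in A_{Z_+} lets one check, for finitely many k, that a candidate ratio r keeps θ r^k inside Λ A_{Z_+}, as in the first inclusion of Corollary 2.7.

```python
# Minimal sketch (illustration only): test numerically whether t belongs to
# A_{Z_+} = {prod a_i^{m_i} : m_i nonnegative integers}, by a bounded search
# over exponent vectors; `max_exp` and the tolerance are ad hoc choices.
from itertools import product
from math import prod, isclose

def in_A_Zplus(t, A, max_exp=20, tol=1e-12):
    A = list(A)
    for exps in product(range(max_exp + 1), repeat=len(A)):
        if isclose(prod(a**m for a, m in zip(A, exps)), t, rel_tol=tol):
            return True
    return False

# With X = {1/2} and Lambda = {1}: for theta = 1 and r = 1/2, every
# theta * r^k stays in Lambda * X_{Z_+}, so r lies in the ratio set.
X = {0.5}
print(all(in_A_Zplus(0.5**k, X) for k in range(10)))  # True
```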
As an application of Lemma 2.6 and Corollary 2.7, we derive a key lemma that will be used to distinguish the attractor of a COSC GD-IFS from that of a COSC standard IFS.

Definition 2.8 (Absolute contraction ratio set). The absolute contraction ratio set of a GD-IFS is defined to be the set of the absolute values of the contraction ratios of its similarities, that is, {|ρ_e| : e ∈ E}.

Lemma 2.9. Let (F_v)_{v∈V} be the attractors of a COSC GD-IFS based on a digraph with d_v ≥ 2 for all v ∈ V, with absolute contraction ratio set A. Assume that, for some u, the set F_u is not an interval (or a singleton) and is the attractor of some COSC standard IFS with absolute contraction ratio set X.
The assertion (ii) of Lemma 2.9 gives a necessary condition for a COSC GD-attractor F_u to also be the attractor of some COSC standard IFS, in the following way: if there exist two elements θ_1, θ_2 ∈ GL(F_u) such that (2.21) holds for θ_1 whilst (2.22) holds for θ_2, then F_u is not the attractor of any COSC standard IFS. This assertion will be used in Lemma 4.1 below.
Proof. (i) Let Λ be the set of nonzero basic gap lengths of some COSC standard IFS with attractor F_u, and let X be its absolute contraction ratio set. Regard this standard IFS as a GD-IFS based on ({v}, {e_j}_{j=1}^m), where the e_j are loops at the single vertex v; all directed paths of length k ≥ 1 are then e_{i_1} e_{i_2} ··· e_{i_k} with i_l ∈ {1, 2, ···, m} for all l = 1, 2, ···, k. By (2.11), GL(F_u) = ΛX_{Z_+}. Note that GL(F_u) is non-empty by our assumption that F_u is not an interval or a singleton. On the other hand, Corollary 2.7 implies that X_{Z_+}^* ⊆ R_{GL(F_u)}(θ) ⊆ X_{Q_+}^* for all θ ∈ GL(F_u). Recall that a directed circuit containing u is a directed path from u to u. We write the union given by (2.11) as in (2.24). Since the absolute contraction ratios are all in A (so that |ρ_e| ∈ A_{Z_+}), it follows from Lemma 2.6 that the inclusions in (2.20) hold, as desired.
We will show that (2.22) is true. We first claim that A_{Q_+}^* is one of two disjoint sets whose union is described in (2.25). In fact, if (2.25) were not true, there would exist three elements contradicting the decomposition. We need to show (2.26), where not all p_i, q_j are zero. Thus, if all q_j are zero, then a ∈ A_{Q_+}^*. This proves (2.26) by using (2.25). Using (2.20) and (2.26), we obtain the corresponding containment for R_{GL(F_u)}(θ). We will show the inclusion (2.28). Any element x ∈ X_{Q_+}^* can be written as a product of rational powers of the elements of A. Note that the numbers ∑_{l=1}^k p_{i,l} r_l and ∑_{l=1}^k q_{j,l} r_l all belong to Q_+. Since r_l > 0 for some l while q_{j',l} > 0 for this l and some j', we have ∑_{l=1}^k q_{j',l} r_l > 0 for this j'. Therefore, we obtain (2.28). Finally, by (2.20) and (2.28), we have the required containment for all θ ∈ GL(F_u), from which we easily conclude that (2.22) holds by using (2.25).
Construction of GD-IFSs
We will construct COSC (CSSC) GD-IFSs in terms of vector sets in Euclidean spaces, to analyse the existence and extent of non-trivial GD-IFSs whose attractors are not attractors of any (COSC) standard IFS. Set
(3.1) n := ∑_{i∈V} (2d_i − 1),
so that n ≥ N (recall that d_i denotes the number of the edges leaving vertex i). Define the subset P_0 of the Euclidean space R^n, with n given in (3.1), by
(3.2) P_0 := {x = ((x_i^{(k)})_{i∈V, 1≤k≤d_i}, (ξ_i^{(k)})_{i∈V, 1≤k≤d_i−1}) : x_i^{(k)} ∈ (−1, 1) \ {0}, ξ_i^{(k)} ≥ 0}.
Each vector x in P_0 consists of two kinds of entries: the entries {x_i^{(k)}}_{i∈V, 1≤k≤d_i} all lie in the set (−1, 1) \ {0} and will specify the contraction ratios of the GD-IFSs to be constructed, whilst the other entries {ξ_i^{(k)}}_{i∈V, 1≤k≤d_i−1} are all non-negative and will specify the basic gap lengths.
For vertex i ∈ V, let {e_i(k) : 1 ≤ k ≤ d_i} be the set of edges leaving i, arranged in some order which will henceforth remain fixed. For a point x in P_0, we look at its entries {x_i^{(k)}}_{i∈V, 1≤k≤d_i} and define an N × N matrix M_x(s) for any s > 0 by
(3.3) (M_x(s))_{ij} := ∑_{1≤k≤d_i : ω(e_i(k))=j} |x_i^{(k)}|^s, i, j ∈ V,
where ω(e_i(k)) denotes the terminal vertex of the edge e_i(k) as before. For each edge e_i(k) (1 ≤ k ≤ d_i) leaving vertex i ∈ V, we define the mappings S_{e_i(k)} associated with a point x in P_0 as in (3.7), with contraction ratio x_i^{(k)}.
Note that for any point x ∈ P_0, the mapping S_{e_i(k)} defined as in (3.7) has contraction ratio x_i^{(k)} ∈ (−1, 1) \ {0}; it is therefore a contracting similarity. For any two vectors b, ξ as in (3.5), (3.6) and any point x in P_0, we define the closed intervals (which may be singletons) for each vertex i ∈ V by
(3.10) I_i := [b_i^{(1)}, b_i^{(1)} + l_i].
We will work with a subset P of P_0 defined by
(3.12) P := {x ∈ P_0 : r_σ(M_x(1)) < 1 and ∑_{k=1}^{d_i−1} ξ_i^{(k)} > 0 for each i ∈ V},
where the matrix M_x(1) is defined by (3.3) with s = 1, and r_σ(M) denotes the spectral radius of a matrix M, that is, the largest absolute value (complex modulus) of its eigenvalues.
We show that any point in P gives rise to at least one COSC GD-IFS on G, of the form (3.7), whose contraction ratios are {x_i^{(k)}}_{i∈V, 1≤k≤d_i} and whose attractor F_i at each vertex i has convex hull I_i given by (3.10), with basic gap lengths {ξ_i^{(k)}}_{i∈V, 1≤k≤d_i−1}.

Lemma 3.1. With the same notation as above, let x be any point in P as in (3.12) and b any vector as in (3.5). Let (l_i)_{i∈V} be the vector of real numbers given by
(3.13) (l_1, ···, l_N)^T := (id − M_x(1))^{−1} (∑_{k=1}^{d_1−1} ξ_1^{(k)}, ···, ∑_{k=1}^{d_N−1} ξ_N^{(k)})^T,
where M^T denotes the transpose of a matrix M. Then any GD-IFS (x, b), given by (3.7), (3.9) and (3.13) and having attractors {F_i}_{i∈V}, satisfies the following properties.
(i) For each vertex i ∈ V, we have l_i > 0 and conv F_i = I_i, as in (3.14).
(ii) The GD-IFS (x, b) satisfies the COSC. The basic gaps of the attractor F_i, for i ∈ V, are given by open intervals in R arranged in order from left to right, with corresponding basic gap lengths ξ_i^{(1)}, ···, ξ_i^{(d_i−1)}, as follows by repeatedly using definition (3.8).

Proof. Note that, by using definition (3.12) of P, the matrix (id − M_x(1)) is invertible and can be written as the Neumann series
(id − M_x(1))^{−1} = ∑_{m=0}^∞ M_x(1)^m,
from which it follows by definition (3.13) that
(3.18) l_i > 0 for each i ∈ V,
using the fact that M_x(1) is a nonnegative matrix and that ∑_{k=1}^{d_i−1} ξ_i^{(k)} > 0 by (3.12). We claim that the level-1 intervals assigned to vertex i fill I_i up to the prescribed gaps; combining this with (3.17) gives (3.20), which proves our claim.
We next show that the contracting similarity S_{e_i(k)} associated with the edge e_i(k) satisfies
(3.24) S_{e_i(k)}(I_{ω(e_i(k))}) = I_i(k) := [b_i^{(k)}, b_i^{(k)} + |x_i^{(k)}| l_{ω(e_i(k))}]
for each vertex i ∈ V and each 1 ≤ k ≤ d_i. This is easily seen by looking at the two endpoints of the interval I_{ω(e_i(k))}, depending on whether x_i^{(k)} > 0 or not. Indeed, by definition (3.10) with vertex i replaced by vertex ω(e_i(k)), we have I_{ω(e_i(k))} = [b_{ω(e_i(k))}^{(1)}, b_{ω(e_i(k))}^{(1)} + l_{ω(e_i(k))}]. If x_i^{(k)} > 0, then S_{e_i(k)}(b_{ω(e_i(k))}^{(1)}) = b_i^{(k)}, from which S_{e_i(k)}(I_{ω(e_i(k))}) = [S_{e_i(k)}(b_{ω(e_i(k))}^{(1)}), S_{e_i(k)}(b_{ω(e_i(k))}^{(1)} + l_{ω(e_i(k))})]. If x_i^{(k)} < 0, we similarly have that S_{e_i(k)}(b_{ω(e_i(k))}^{(1)} + l_{ω(e_i(k))}) = b_i^{(k)}, and so S_{e_i(k)}(I_{ω(e_i(k))}) = [S_{e_i(k)}(b_{ω(e_i(k))}^{(1)} + l_{ω(e_i(k))}), S_{e_i(k)}(b_{ω(e_i(k))}^{(1)})]. In both cases (3.24) holds. We then know by (3.24) that the closed intervals {I_i(k) : 1 ≤ k ≤ d_i} are arranged in order from left to right, which together with (3.20) implies that I_i^1 = ⋃_{k=1}^{d_i} I_i(k) ⊂ I_i, with this union disjoint.
We are now in a position to prove the assertions (i), (ii).
(i) We will use (3.24) and definition (3.13) to derive (3.14). Indeed, recall that the intervals I_i are defined in (3.10), and note that l_i > 0 for each i ∈ V by (3.18). As in (2.4), for each vertex i ∈ V we let
I_i^m := ⋃_{e∈E_i^m} S_e(I_{ω(e)}),
where E_i^m is the set of paths of length m leaving vertex i, and ω(e) is the terminal vertex of path e as before. We show that for each vertex i ∈ V and every m ≥ 1,
(3.28) I_i^m ⊆ I_i.
For m = 1 this follows from (3.24); the inductive step is then immediate. Therefore (3.28) holds for all m ≥ 1 by induction.
Since condition (2.5) holds (using that I_i^1 ⊆ I_i), it follows from (2.8) and (2.10) that conv F_i = I_i, showing that (3.14) holds true.
(ii) Applying (3.14) with i replaced by the vertex ω(e_i(k)), the terminal of the edge e_i(k), gives conv F_{ω(e_i(k))} = I_{ω(e_i(k))}, from which it follows by (3.24) that
(3.30) S_{e_i(k)}(conv F_{ω(e_i(k))}) = I_i(k).
We show that (x, b) satisfies the COSC. Taking U_i = int(conv F_i), from (3.14) we have U_i = int(I_i), so that each open set U_i is non-empty as l_i > 0. It follows that
⋃_{k=1}^{d_i} S_{e_i(k)}(U_{ω(e_i(k))}) ⊆ U_i, with this union disjoint,
since, by using (3.30), the intervals I_i(k) and I_i(k + 1) are separated by distance ξ_i^{(k)}, so that their interiors are disjoint.
Remark 3.2. Note that any point x ∈ P_0 belongs to P if
(3.31) ∑_{k=1}^{d_i} |x_i^{(k)}| < 1 for each i ∈ V
(together with the condition ∑_{k=1}^{d_i−1} ξ_i^{(k)} > 0 for each i ∈ V from (3.12)). This is because r_σ(M_x(1)) ≤ max_{i∈V} ∑_{k=1}^{d_i} |x_i^{(k)}|, using the elementary fact that the spectral radius of a nonnegative matrix is no greater than the maximal row sum, see for example [14, Equation (1.9)]. Therefore, every x ∈ P_0 satisfying (3.31) belongs to P, and all the assertions (i), (ii) in Lemma 3.1 hold true, provided that (l_i)_{i∈V} is chosen as in (3.13).
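The membership condition r_σ(M_x(1)) < 1 defining P is straightforward to test numerically. The sketch below (illustration only) assumes the reading of (3.3) in which the (i, j) entry of M_x(1) sums the absolute ratios of the edges from i to j, consistent with the row-sum bound of Remark 3.2:

```python
# Minimal sketch (illustration only): check r_sigma(M_x(1)) < 1 for a
# hypothetical 2-vertex GD-IFS.  We assume M_x(1)[i, j] = sum of |x_i^(k)|
# over edges e_i(k) with terminal vertex j, as in our reading of (3.3).
import numpy as np

# ratios[i][j] lists the absolute contraction ratios of edges from i to j.
ratios = {0: {0: [0.25], 1: [0.25]},
          1: {0: [0.5], 1: [0.3]}}

N = 2
M = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        M[i, j] = sum(ratios[i].get(j, []))

spectral_radius = max(abs(np.linalg.eigvals(M)))
row_sum_bound = M.sum(axis=1).max()          # an upper bound for r_sigma(M)
print(spectral_radius < 1, row_sum_bound)    # True if x qualifies for P
```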
We now look at subsets of P, depending on a number δ > 0, which will give rise to a special class of GD-IFSs satisfying the CSSC, having attractors {F_i}_{i∈V} with the property that conv F_i = [0, 1], and with all the basic gaps of F_i of the same length δ (recall our assumption that the out-degree d_i at vertex i satisfies d_i ≥ 2 for all i). We define a set A(δ) by
(3.34) A(δ) := {x ∈ P_0 : ξ_i^{(k)} = δ for all i ∈ V, 1 ≤ k ≤ d_i − 1, and ∑_{k=1}^{d_i} |x_i^{(k)}| + (d_i − 1)δ = 1 for each i ∈ V}.
Let M_x(1) be the N × N matrix associated with the point x as in (3.3) for s = 1. For each x ∈ A(δ), the spectral radius of the matrix M_x(1) is less than 1, since each row sum equals ∑_{k=1}^{d_i} |x_i^{(k)}| = 1 − (d_i − 1)δ < 1, and hence A(δ) ⊂ P, where the set P is as in (3.12). Let {b_i^{(k)}}_{i∈V, 1≤k≤d_i} be the family of real numbers given by
(3.41) b_i^{(1)} := 0 and b_i^{(k+1)} := |x_i^{(1)}| + ··· + |x_i^{(k)}| + kδ for 1 ≤ k ≤ d_i − 1.
Moreover, the vector of (3.13) satisfies
(3.42) (l_1, l_2, ···, l_N) = (1, 1, ···, 1).
In this situation, for x ∈ A(δ), the contracting similarities defined in (3.7) read, for a variable t ∈ R,
(3.43) S_{e_i(k)}(t) = x_i^{(k)} t + b_i^{(k)} if x_i^{(k)} > 0, and S_{e_i(k)}(t) = x_i^{(k)} t + b_i^{(k)} + |x_i^{(k)}| if x_i^{(k)} < 0,
for i ∈ V, 1 ≤ k ≤ d_i, which gives rise to a GD-IFS satisfying the CSSC. This will be used in Theorem 4.10 below.

Corollary 3.4. Let δ > 0 and x ∈ A(δ), and let (x) be the GD-IFS given by (3.43), with attractors {F_i}_{i∈V}. Then: (i) conv F_i = [0, 1] for each i ∈ V; (ii) the first level-1 cell at vertex i satisfies S_{e_i(1)}(conv F_{ω(e_i(1))}) = [0, |x_i^{(1)}|], so that |x_i^{(1)}| ∈ F_i. The basic gaps of the attractor F_i are given by
(3.45) (|x_i^{(1)}| + ··· + |x_i^{(k)}| + (k − 1)δ, |x_i^{(1)}| + ··· + |x_i^{(k)}| + kδ)
for every 1 ≤ k ≤ d_i − 1, so that the basic gap lengths are all equal to the same number δ. Moreover, the GD-IFS (x) satisfies the CSSC.
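For intuition, the equal-gap layout behind A(δ) can be sketched as follows (illustration only, assuming all ratios at the vertex are positive, so that each level-1 cell is placed left to right with gaps of length δ, as in (3.45)):

```python
# Minimal sketch (illustration only): lay out the level-1 cells of a vertex i
# in the equal-gap construction, assuming all ratios x_i^(k) are positive.
# Cell k occupies [b_k, b_k + x_k] with b_k = x_1 + ... + x_{k-1} + (k-1)*delta,
# and conv F_i = [0, 1] forces sum of ratios + (d_i - 1)*delta = 1.

def level1_cells(x, delta):
    cells, left = [], 0.0
    for r in x:                    # absolute ratios at this vertex
        cells.append((left, left + r))
        left += r + delta          # next cell starts after a gap of length delta
    return cells

x = [0.4, 0.3]                     # d_i = 2 ratios
delta = 1 - sum(x)                 # = 0.3, so the union fills [0, 1]
print(level1_cells(x, delta))      # [(0.0, 0.4), (0.7, 1.0)]
```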
Criteria for graph-directed attractors not to be self-similar sets
In this section we give some sufficient conditions under which GD-attractors cannot be realised as attractors of any standard IFSs with or without the COSC.
For a directed path L, let A(L) (resp. A(L^c)) be the set of the absolute values of the contraction ratios of the similarities associated with the edges in L (resp. not in L). Recall the definition of Λ_u from (2.3).

Lemma 4.1. Assume that (V, E) is a digraph with d_w ≥ 2 for all w ∈ V and that L is a directed circuit that does not go through every vertex in V. Let u be a vertex outside L and v a vertex in L, and assume that there exists a directed path from u to v. Consider a COSC GD-IFS based on this digraph. With the notation above, suppose that the following three conditions hold:
(i) A(L)_{Q^*} ∩ A(L^c)_{Q_+^*} = ∅;
(ii) Λ_u ≠ ∅ and Λ_v ≠ ∅;
(iii) λ_w^{(k)}/λ_z^{(m)} ∉ A_Q for any two distinct basic gaps of positive length λ_w^{(k)} ∈ Λ_w, λ_z^{(m)} ∈ Λ_z (w, z ∈ V).
Then the graph-directed IFS attractor F_u is not the attractor of any COSC standard IFS.
Basically, condition (i) means that linear combinations of the numbers {log |ρ_e| : e ∈ L} over Q^* (that is, sums ∑_e q_e log |ρ_e| with (q_e) a non-zero rational vector) are different from the corresponding combinations of the numbers {log |ρ_e| : e ∉ L} over Q_+^*; condition (ii) means that not all basic gaps associated with u and v are empty; and condition (iii) means that log(λ_w^{(k)}/λ_z^{(m)}), for all pairs of distinct basic gaps of positive lengths, differs from every rational linear combination of the numbers {log |ρ_e| : e ∈ E}. Note that condition (i) requires a certain homogeneity of the ratios of the gap length set of a COSC self-similar GD-attractor, which does not necessarily hold when (ii) and (iii) are satisfied. Note also that, among the three conditions (i), (ii), (iii), no two of them imply the third.
Proof. We show that the strict dichotomy required by Lemma 2.9(ii) for a graph-directed attractor fails for F_u satisfying the conditions of this lemma.
Let u be a vertex outside L and v a vertex in L. For any w ≠ u in V, let R(uw) := {|ρ_e| : e is a directed path from u to w}, and let R(uu) := {1} ∪ {|ρ_e| : e is a directed circuit containing u}. With the above notation, the union (2.24) becomes
(4.1) GL(F_u) = ⋃_{w∈V} Λ_w R(uw).
By condition (ii), we can choose two non-zero basic gap lengths λ_u ∈ Λ_u and λ_v ∈ Λ_v. Since there exists a directed path e from u to v, we can choose the number θ := λ_v |ρ_e| ∈ GL(F_u). Recall that ρ_L denotes the product of the contraction ratios on the edges of L. For each integer k ≥ 0, we define eL^k by eL^0 := e and eL^k := eL···L (L repeated k times) for k ≥ 1, all of which are directed paths from u to v, so that |ρ_{eL^k}| ∈ R(uv). Note that θ |ρ_L|^k = λ_v |ρ_{eL^k}| ∈ GL(F_u) by (4.1), which implies
(4.2) r := |ρ_L| ∈ R_{GL(F_u)}(θ)
by definition (2.15), with θ there replaced by our θ ∈ GL(F_u). Set
(4.3) r = |ρ_L| ∈ R_{GL(F_u)}(θ) ∩ A(L)_{Z_+}^*,
noting that |ρ_L| is a product of ratios of edges of L. Next, let θ' := λ_u ∈ GL(F_u) and let r' ∈ R_{GL(F_u)}(θ') be arbitrary. We claim that
(4.4) {θ' r'^k}_{k=0}^∞ ⊂ λ_u R(uu).
To see this, take the decomposition of Θ = GL(F_u) given by (4.1); the requirements of Lemma 2.6(ii), with λ_j varying in {λ ∈ Λ_w : w ∈ V}, A_j varying in {R(uw) : w ∈ V} and with A = A(L) ∪ A(L^c), are satisfied by assumption (iii). Thus there are a unique w ∈ V and a unique λ ∈ Λ_w such that
(4.5) {θ' r'^k}_{k=0}^∞ ⊂ λ R(uw),
and
(4.6) θ' r'^k ∉ λ'' R(uz) for all (λ'', z) ≠ (λ, w) and all k ≥ 0,
by (2.16). In particular λ_u = θ' ∈ {θ' r'^k}_{k=0}^∞ ⊂ λ R(uw). On the other hand, noting that 1 ∈ R(uu), so that
(4.7) λ_u ∈ λ_u R(uu),
we conclude that λ = λ_u and w = u by (4.6), proving (4.4).
On the other hand, since u is not in the circuit L, any directed circuit L' containing u must also visit some edge outside L, implying that |ρ_{L'}| ∈ A(L)_{Z_+} A(L^c)_{Z_+}^*. Noting that, by assumption (i), no such product can lie in A(L)_{Q_+}^*, and that all the ratios involved are strictly less than 1, we obtain (4.11). Finally, since (4.3) and (4.11) hold simultaneously, Lemma 2.9(ii) implies that F_u cannot be the attractor of any COSC standard IFS.
Note that the assumption 'there exists a directed path from u to v' in Lemma 4.1 is necessary. The following example shows that without this assumption, the GD-attractor may be an attractor of some standard IFS (with or without the COSC). Consider a GD-IFS, based on a digraph with vertices 1, 2, 3, having GD-attractors F_1, F_2, F_3 associated with vertices 1, 2, 3 respectively, in which both edges e, e' leaving vertex 3 return to vertex 3. By (1.2), the set F_3 satisfies F_3 = S_e(F_3) ∪ S_{e'}(F_3), so F_3 is an attractor of the standard IFS {S_e, S_{e'}}. Note that there is no directed path from vertex 3 to the other two vertices 1, 2.
We give an example to illustrate Lemma 4.1. Our example is a digraph that has three vertices and is not strongly connected.
Let L = e_1(1) be a loop (circuit), so that vertex u = 3 is outside L whilst vertex v = 1 is inside L. A directed path from u to v is labelled by e_3(2). Let x be a point whose entries satisfy the constraints of (3.12); the point x belongs to the set P in (3.12) by using (3.31), as the sum of each row of the matrix M_x(1) is bounded by 1. Let b = (0, 0, 0) and let (x) := (x, b) be a GD-IFS constructed as in Lemma 3.1, which is given by (3.8), with (l_1, l_2, l_3) determined by (4.12). By Lemma 3.1, such a GD-IFS (x) satisfies the COSC, and the basic gap length sets at the three vertices are given by (4.13), from which the sets of positive gap lengths at the vertices follow. Since L = e_1(1) is a loop, the sets A(L), A(L^c) are given as in (4.15). To verify condition (i), we need to show that A(L)_{Q^*} ∩ A(L^c)_{Q_+^*} = ∅; otherwise, there would exist some non-zero rational number q yielding a contradiction.

We next consider the digraph with V = {1, 2} and E = {e_1(1), e_1(2), e_2(1), e_2(2)}, so that d_1 = 2, d_2 = 2; see Figure 4. Let {p_j}_{1≤j≤4} be four distinct primes arranged in ascending order, so that 2 ≤ p_j < p_{j+1}, and let p_5 be a positive number such that log p_5 is not a rational linear combination of {log p_j}_{1≤j≤4}. Let λ > 0 be any real number, and let x be a vector whose contraction ratio entries have absolute values {p_i^{−1}}_{i=1}^4. Note that x ∈ P in (3.12) by using (3.31). Let b = (b_1, b_2)^T be given by (3.13). Clearly, such a GD-IFS (x, b) has absolute contraction ratio set A := {p_i^{−1}}_{i=1}^4. Applying Lemma 3.1, (x, b) satisfies the CSSC, and its basic gap length sets are Λ_1 = {ξ_1^{(1)}} = {λ} (at vertex 1) and Λ_2 = {ξ_2^{(1)}} = {λ p_5} (at vertex 2). Let F_1, F_2 be the attractors of (x, b) at vertices 1 and 2.
We will use Lemma 4.4 to show that F 1 (or F 2 ) is not the attractor of any COSC standard IFS, noting that (V, E) contains a directed circuit (loop) not passing through vertex 1 (or through vertex 2).
Condition (i') is clear, since the contraction ratios {p_i^{−1}}_{i=1}^4 are distinct and 1 ∉ A_{Q^*} by Proposition 5.5 in the Appendix. Condition (ii') is trivial, since the basic gap lengths are λ and λ p_5, which are strictly positive. It remains to verify condition (iii), or equivalently to check that p_5 ∉ A_Q. However, this is trivial by noting that p_5 ≠ ∏_{i=1}^4 p_i^{s_i} for any rationals (s_i)_{i=1}^4 (and the same is true for p_5^{−1}), since log p_5 is not a rational linear combination of {log p_j}_{1≤j≤4}. Therefore, all the assumptions (i'), (ii'), (iii) in Lemma 4.4 are satisfied, so the GD-attractor F_1 (or F_2) is not the attractor of any standard IFS with the COSC. We next show that, for n-dimensional Lebesgue almost all vectors in P, all the conditions in Lemma 4.4 hold for their corresponding GD-IFSs. Let P_1 be the subset of P given by
(4.21) P_1 := {x ∈ P : ξ_i^{(k)} > 0 for all i ∈ V, 1 ≤ k ≤ d_i − 1}.

Definition 4.7 (Admissible set). With the notation as above, we say that a point x = (x_1, x_2, ···, x_n) in the set P_1 is admissible if
(4.22) ∏_{i=1}^n |x_i|^{p_i} ≠ ∏_{i=1}^n |x_i|^{q_i}
for any two distinct vectors (p_i)_{i=1}^n and (q_i)_{i=1}^n of nonnegative rationals. The set of all admissible points is denoted by A.
Note that the admissible set A depends only on the number of vertices and their out-degrees, but is independent of any particular vertex and of the order of the edges. If (x_1, x_2, ···, x_n) ∈ A, then for any two distinct indices i, j, taking p_i = 1, p_k = 0 for all k ≠ i and q_j = 1, q_k = 0 for all k ≠ j in (4.22) gives
(4.23) |x_i| ≠ |x_j|;
in particular, the corresponding GD-IFS (x, b), defined in the manner of (3.7) and (3.13) for any b in (3.5), has contraction ratios with pairwise distinct absolute values. The following says that the size of the admissible set A is very large.
Theorem 4.8. Let G = (V, E) be a strongly connected digraph with d_w ≥ 2 for all w ∈ V, containing a vertex u ∈ V outside a directed circuit. With the notation as above, if x ∈ A then the attractor F_u of the corresponding GD-IFS (x, b), defined as in (4.24) for any b, is not the attractor of any COSC standard IFS. Moreover, with n given as in (3.1),
(4.25) L^n(P \ A) = 0,
that is, the complement of the set A in P has n-dimensional Lebesgue measure zero.
Proof. Let b be any vector with entries b_i^{(k)} ∈ R, and let x = (x_1, x_2, ···, x_n) be an admissible point. By Lemma 3.1, the corresponding GD-IFS (x, b) associated with the vectors x, b satisfies the CSSC. We will show that such a GD-IFS (x, b) also satisfies all three conditions (i'), (ii'), (iii) in Lemma 4.4.
Clearly, the GD-IFS (x, b) satisfies condition (ii'), noting that Λ_i ≠ ∅ for each vertex i ∈ V, since the basic gap lengths at vertex i are ξ_i^{(1)}, ξ_i^{(2)}, ···, ξ_i^{(d_i−1)} by Lemma 3.1(ii), and these are strictly positive since the vector x belongs to P_1.
We show condition (i'). Suppose some product of rational powers of the entries of x gives a coincidence as in (4.26); as not all s_i are zero, the corresponding exponent vectors are two distinct nonnegative rational vectors. This contradicts the admissibility of x as defined in (4.22); thus (4.26) holds.
By using (3.7) and (4.23), all the contraction ratios of the COSC GD-IFS (x, b) have different absolute values. Since 1 ∉ A_{Q^*}, as A_{Q^*} ⊂ X_{Q^*}, where A is the absolute contraction ratio set of (x, b), condition (i') is satisfied.
For condition (iii), suppose that there exists some a ∈ A_Q such that a equals a ratio λ_w^{(k)}/λ_z^{(m)} of two distinct positive basic gap lengths; this contradicts (4.26). Thus condition (iii) is also satisfied. Therefore, by applying Lemma 4.4, the attractor F_u of the GD-IFS (x, b) is not the attractor of any COSC standard IFS.
We finally show that L^n(P \ A) = 0. For this, note that
(4.27) L^n(P \ P_1) = 0,
where P_1 is defined as in (4.21), since P \ P_1 lies in the union of the hyperplanes ξ_i^{(k)} = 0. We just need to show L^n(P_1 \ A) = 0. Suppose x ∈ P_1 \ A, so that ∏_{i=1}^n |x_i|^{p_i} = ∏_{i=1}^n |x_i|^{q_i} for some two distinct vectors of nonnegative rationals. As p_i ≠ q_i for some i, say without loss of generality i = 1, we may solve for |x_1| in terms of the remaining coordinates, from which it follows that any vector in P_1 \ A lies in an at most (n − 1)-dimensional manifold. Since there are countably many such equations, the union of countably many such manifolds has n-dimensional Lebesgue measure zero in R^n. There do exist non-admissible points of interest: for the two-vertex digraph considered above, one may have
(4.28) ∏_{i=1}^6 |x_i|^{s_i} = ∏_{i=1}^6 |x_i|^{t_i}
for some two distinct vectors (s_i)_{i=1}^6 and (t_i)_{i=1}^6 of nonnegative rationals. Thus, condition (4.22) fails if λ is chosen as in (4.28). In particular, condition (4.22) fails if λ = 1/√p_5, on taking s_i = t_i for i = 1, 2, 3, 4 whilst s_i = t_i + 1 for i = 5, 6. However, the GD-attractor F_u associated with such a non-admissible point x is not the attractor of any COSC standard IFS by Example 4.6.
We further consider the situation when the 'COSC' is removed. We will apply Corollary 3.4 and Theorem 5.6 in the Appendix.
Theorem 4.10. Let G = (V, E) be a strongly connected digraph with d_j ≥ 2 for every vertex j ∈ V, containing a vertex i ∈ V outside a directed circuit. Let x ∈ A(δ) (see definition (3.34)) be such that, for every vertex j ≠ i in V,
(4.29) |x_i^{(1)}| falls in the m_j-th basic gap of F_j, and
(4.30) 1 − |x_i^{(1)}| falls in the n_j-th basic gap of F_j,
where m_j, n_j ∈ [1, d_j − 1] are integers. Let (x) be the corresponding CSSC GD-IFS constructed as in Corollary 3.4, with GD-attractors (F_j)_{j∈V}. Then F_i is not the attractor of any standard IFS.
Proof. Let x ∈ A(δ). Recall that the corresponding GD-IFS (x) = {S_{e_i(k)}}_{i∈V, 1≤k≤d_i} associated with the point x is given by (3.43), where {b_i^{(k+1)}}_{i∈V, 1≤k≤d_i−1} are real numbers in (0, 1) defined as in (3.41) (with b_i^{(1)} = 0 for every i ∈ V). We apply Theorem 5.6 in the Appendix to prove this theorem. Clearly, conditions (1), (2) in Theorem 5.6 are satisfied. In order to verify condition (3), we need to show that, for every vertex j ≠ i,
(4.31) F_i ⊄ F_j, and
(4.32) 1 − F_i ⊄ F_j.
We first show (4.31). Indeed, note that the point |x_i^{(1)}| belongs to the attractor F_i by Corollary 3.4(ii). However, this point does not belong to any attractor F_j (j ≠ i), since it falls in some basic gap (see formula (3.45)) of F_j by using assumption (4.29).
Similarly, the point 1 − |x_i^{(1)}| belongs to the set 1 − F_i but does not belong to any attractor F_j (j ≠ i), since it also falls in some basic gap of F_j by using assumption (4.30); thus (4.32) is also true, as required.
By the definition of A(δ), any vector x ∈ A(δ) satisfies, for all j ∈ V, ∑_{k=1}^{d_j} |x_j^{(k)}| = 1 − (d_j − 1)δ. One may therefore choose the entries {x_j^{(k)}}_{j∈V} to be any numbers such that (4.34) is satisfied. Such a class of points satisfies condition (4.33), which implies that conditions (4.29), (4.30) are both satisfied.
Appendix
In this appendix we derive some general properties and secondary results that are used in the main part of the paper.
The following proposition on ordering integer lattice points is used in Lemma 2.6. Recall that Z_+ denotes the set of all nonnegative integers.

Proposition 5.1. Any infinite set B ⊂ Z_+^n contains two distinct vectors x⃗ ≤ y⃗ under the partial order defined by inequality of all coordinates.

Proof. We write x⃗ := (x_i)_{i=1}^n ∈ Z_+^n. Consider the set of integers S := {min_i x_i : x⃗ ∈ B}. If S is unbounded, then we are done by fixing some vector x⃗ ∈ B and taking y⃗ ∈ B with min_i y_i larger than max_i x_i. Otherwise S is bounded by an integer N, in which case we prove the proposition by induction on n. When n = 1 it is trivial. Assume that the proposition holds for n − 1. For each 1 ≤ j ≤ n and each α ∈ {0, 1, ···, N}, define B_{j,α} := {x⃗ ∈ B : x_j = α}, a (possibly empty) collection of all vectors in B whose j-th entries equal the same number α and take the smallest value. Since B is infinite, some B_{j,α} is infinite, and applying the inductive hypothesis to the remaining n − 1 coordinates yields the required pair.

The next proposition generalises a well-known result for standard IFSs to GD-IFSs; the conclusion of its proof shows that F_u ⊂ I_u^1 ⊂ U_u by virtue of (2.8). The proof is complete.
The directed paths in GD-IFSs play the same role as the finite-length words in standard IFSs, as the following proposition suggests. We will frequently use the fact that, for any u ∈ V and m ≥ 1,
(5.2) F_u = ⋃_{e∈E_u^m} S_e(F_{ω(e)}),
obtained by repeatedly using definition (1.2) (recall that E_u^m is the totality of all paths of length m leaving u). The following proposition concerns the disjointness of images of components under mappings corresponding to different words.

Proposition 5.3. Let G = (V, E) be a digraph and (F_u)_{u∈V} the GD-attractors of a GD-IFS (V, E, (S_e)_{e∈E}) based on it. Assume that each F_u is not a singleton. Let e', e'' be two directed paths such that neither is an initial subpath of the other (that is, e' ≠ e'' ẽ for any, possibly empty, directed path ẽ when |e''| ≤ |e'|, and vice versa). If the GD-IFS satisfies the COSC on R, then the interiors of S_{e'}(conv F_{ω(e')}) and S_{e''}(conv F_{ω(e'')}) are disjoint. Similarly, if the GD-IFS satisfies the CSSC, then S_{e'}(F_{ω(e')}) and S_{e''}(F_{ω(e'')}) are disjoint.
Assume now that the GD-IFS satisfies the COSC. By (1.4), one can take U_u = int(conv F_u), which is non-empty by our assumption that F_u is not a singleton. For any two paths ee_1, ee_2 with common path e and distinct edges e_1, e_2, the interiors of the two intervals satisfy
(5.4) int(S_{ee_1}(conv F_{ω(e_1)})) ∩ int(S_{ee_2}(conv F_{ω(e_2)})) = ∅
by the COSC, since S_{ee_1}(conv F_{ω(e_1)}) = S_e(S_{e_1}(conv F_{ω(e_1)})) and S_{ee_2}(conv F_{ω(e_2)}) = S_e(S_{e_2}(conv F_{ω(e_2)})), and the interiors of S_{e_1}(conv F_{ω(e_1)}) and S_{e_2}(conv F_{ω(e_2)}) are disjoint, as the edges e_1, e_2 have the same initial vertex, namely the terminal of the path e. Let e be the longest common initial path of e' and e'' (which may be empty). Write e' = ee_1p_1 and e'' = ee_2p_2, where e_1 ≠ e_2 are two distinct edges and p_1, p_2 are paths (possibly empty). By (5.3),
S_{e'}(conv F_{ω(e')}) = S_{ee_1p_1}(conv F_{ω(p_1)}) ⊂ S_{ee_1}(conv F_{ω(e_1)}),
S_{e''}(conv F_{ω(e'')}) = S_{ee_2p_2}(conv F_{ω(p_2)}) ⊂ S_{ee_2}(conv F_{ω(e_2)}),
thus the interiors of S_{e'}(conv F_{ω(e')}) and S_{e''}(conv F_{ω(e'')}) are disjoint by (5.4).
The assertion for the CSSC is similar. The proof is complete.
The following was essentially proved in [2, Lemma 5.1], except that we also consider the COSC case.
Theorem 5.4. Let G = (V, E) be a strongly connected digraph with d v ≥ 2 for all v ∈ V. If every directed circuit goes through a vertex u ∈ V, then for any (resp. COSC) GD-IFS based on G, its attractor F u is also the attractor of a (resp. COSC) standard IFS.
Proof. Set N := #V, the number of vertices in V. Let L(u) be the set of all circuits having u as their initial and terminal vertex and not containing any shorter such circuit, that is,
L(u) := {e_{u v_1 v_2 ··· v_k u} : 0 ≤ k ≤ N − 1, v_1, ···, v_k ∈ V \ {u}},
where the symbol e_{u v_1 v_2 ··· v_k u} = e_{u v_1} e_{v_1 v_2} ··· e_{v_k u} is understood to be a path consisting of consecutive edges. We claim that F_u is the attractor of the standard IFS
(5.6) Φ := {S_e : e ∈ L(u)},
that is, F_u = ⋃_{e∈L(u)} S_e(F_u), by using the fact that every circuit goes through vertex u.
To see this, we have by (5.2) that F_u = ⋃_{e∈E_u^N} S_e(F_{ω(e)}). Since any directed path e in E_u^N can be written as e = e_{u v_1 v_2 ··· v_N}, at least one of the vertices v_1, v_2, ···, v_N must be u; otherwise one of them would appear twice, producing a circuit avoiding u and contradicting the assumption that every directed circuit goes through vertex u. There then exists an index k such that v_k = u and the path visits u for the second time (besides the initial time) at v_k, and e = e_{u v_1 ··· v_{k−1} u v_{k+1} ··· v_N} = e_{u v_1 ··· v_{k−1} u} e_{u v_{k+1} ··· v_N} = e' e'', where e' = e_{u v_1 ··· v_{k−1} u} ∈ L(u) and e'' is a path with initial vertex u, if it exists (possibly e'' is empty, in which case the following argument becomes easier). From this, we know that S_e(F_{ω(e)}) = S_{e'}(S_{e''}(F_{ω(e'')})) ⊂ S_{e'}(F_u), since S_{e''}(F_{ω(e'')}) ⊂ F_u by (5.2). It follows that F_u ⊆ ⋃_{e'∈L(u)} S_{e'}(F_u); the reverse inclusion is immediate from (5.2), proving the claim. If the GD-IFS further satisfies the COSC, we claim that the IFS Φ given by (5.6) also satisfies the COSC. Indeed, by the definition of the COSC and the fact that Φ has attractor F_u, we need only show that the interiors of the two intervals S_{e'}(conv F_u) and S_{e''}(conv F_u) are disjoint whenever e', e'' are distinct elements of L(u). But this assertion immediately follows from Proposition 5.3.
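The set L(u) of first-return circuits at u is finite and easy to enumerate; a minimal sketch (illustration only, on a hypothetical digraph):

```python
# Minimal sketch (illustration only): enumerate the finite set L(u) of
# first-return circuits at u -- directed circuits that start and end at u and
# do not visit u (or repeat any other vertex) in between -- as used to build
# the standard IFS Phi = {S_e : e in L(u)} in the proof of Theorem 5.4.

def first_return_circuits(edges, u):
    """edges: list of (initial, terminal) pairs; returns circuits as edge-index lists."""
    circuits = []

    def extend(vertex, path, visited):
        for k, (a, b) in enumerate(edges):
            if a != vertex:
                continue
            if b == u:
                circuits.append(path + [k])        # returned to u: a circuit
            elif b not in visited:
                extend(b, path + [k], visited | {b})

    extend(u, [], set())
    return circuits

# A digraph (multiple edges allowed) where every circuit passes through 0:
E = [(0, 0), (0, 1), (1, 0), (1, 0)]
print(first_return_circuits(E, 0))   # [[0], [1, 2], [1, 3]]
```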
The following easy property of powers of primes is used in the examples in Section 4.
Proposition 5.5. Let {a_i}_{i=1}^n be distinct positive prime numbers. Then 1 ∉ A_{Q^*} for A := {a_i^{−1}}_{i=1}^n.

Proof. Suppose to the contrary that 1 ∈ A_{Q^*}. Then 1 = ∏_{i=1}^n a_i^{−s_i} for some non-zero vector (s_i)_{i=1}^n of rationals. Let q be the least common denominator of the rationals s_i. Taking the qth power, it follows that
∏_{i=1}^n a_i^{q s_i^+} = ∏_{i=1}^n a_i^{q s_i^-} =: m,
where s_i^+ = max{s_i, 0} and s_i^- = max{−s_i, 0}, so that s_i = s_i^+ − s_i^-. As the s_i are not all zero, the vectors of integers (q s_i^+)_{i=1}^n and (q s_i^-)_{i=1}^n are distinct. By the uniqueness of the prime factorisation of the integer m, we see that (q s_i^+)_{i=1}^n = (q s_i^-)_{i=1}^n, a contradiction.

The following assertion was essentially obtained in [2, Theorem 1.4 and the end of Section 1]. Here we give a simpler proof under the stronger assumptions of conditions (2), (3) in the next theorem.
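Before turning to that theorem, here is a small computational companion to Proposition 5.5 (illustration only): by unique factorisation, a rational number equals 1 exactly when its prime-exponent vector vanishes, which is an exact, integer-arithmetic check.

```python
# Minimal sketch (illustration only): Proposition 5.5 reduces, after clearing
# denominators, to the statement that prod p_i^{n_i} = 1 with integer n_i
# forces every n_i = 0.  Unique factorisation makes this an exact check.
from fractions import Fraction
from sympy import factorint

def exponent_vector(m, primes):
    """Integer exponents of `primes` in the factorisation of a rational m."""
    f = Fraction(m)
    num, den = factorint(f.numerator), factorint(f.denominator)
    return [num.get(p, 0) - den.get(p, 0) for p in primes]

primes = [2, 3, 5, 7]
print(exponent_vector(Fraction(4, 9), primes))   # [2, -2, 0, 0]
# Exponent vectors add under multiplication, and 1 has the zero vector,
# so no non-zero rational combination of distinct primes can reach 1:
print(exponent_vector(1, primes))                # [0, 0, 0, 0]
```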
Theorem 5.6. Let G = (V, E) be a strongly connected digraph with d w ≥ 2 for each w ∈ V. Suppose that a given GD-IFS of similarities based on G satisfies the CSSC, and conv F w = [0, 1] for each w ∈ V. For some vertex u ∈ V, suppose the following conditions hold.
(1) There is a directed circuit that does not pass through u.
(2) All basic gaps of the attractors F_w (w ∈ V) have the same length δ > 0.
(3) For each vertex v ≠ u, we have F_u ⊄ F_v and 1 − F_u ⊄ F_v.
Then F_u is not the attractor of any standard IFS defined on R.
Proof. The proof is divided into two steps.
Step 1. We claim that, for any v ∈ V and any contracting similarity f with f (F u ) ⊂ F v , there exists some path e leaving v with terminal ω(e) = u such that (5.7) f (F u ) ⊂ S e (F u ).
Indeed, as F_v consists of the level-1 cells S_e(F_{ω(e)}) for edges e leaving v by using (1.2), the set f(F_u) must belong to only one of those cells, say
(5.8) f(F_u) ⊂ S_e(F_{ω(e)}) for some edge e leaving v.
Otherwise, two points of f(F_u) would lie in two distinct level-1 cells, and as f(F_u) ⊂ F_v, the set f(F_u) would span a basic gap of F_v, implying that f(F_u) has a gap, containing a basic gap of F_v, whose length is clearly greater than or equal to δ. However, this is impossible, because all gap lengths of F_u do not exceed δ, by assumption (2) and (2.11), so that all the gap lengths of f(F_u) are strictly smaller than δ by the contractivity of f. By (5.2), it follows that
f(F_u) ⊂ ⋃_{e'∈E_v^m} S_{e'}(F_{ω(e')}) for any m ≥ 1,
where E_v^m is the set of all paths leaving v with the same length m, as before. As f(F_u) has fixed diameter and the cells S_{e'}(F_{ω(e')}) have arbitrarily small diameters for m large, we can choose a longest directed path e_1 leaving v, which exists by (5.8) and the fact that distinct cells of the same length are disjoint (see Proposition 5.3), such that
(5.9) f(F_u) ⊂ S_{e_1}(F_{ω(e_1)}).
Step 2. We show that F_u is not the attractor of any standard IFS. Assume to the contrary that there exists a standard IFS {f_i}_i such that F_u = ⋃_i f_i(F_u). As f_i(F_u) ⊂ F_u, using (5.7) with v = u, we know that f_i(F_u) ⊂ S_{e_i}(F_u), and so
(5.11) F_u ⊂ ⋃_i S_{e_i}(F_u),
where each e_i is a directed circuit from initial u to terminal u. By condition (1), there is a vertex w ≠ u contained in a circuit L that does not pass through u. By strong connectivity, we can find a simple path L_1 (i.e. a path visiting each vertex at most once) from u to w. Note that the path L_1 L^m from u to w visits u only once. We can pick an integer m so large that the path length is greater than max_i |e_i|. By (5.2) and (5.11),
(5.12) S_{L_1 L^m}(F_w) ⊂ F_u ⊂ ⋃_i S_{e_i}(F_u).
Note that {S_e} satisfies the CSSC by assumption (2), and so S_{L_1 L^m}(F_w) is disjoint from every set S_{e_i}(F_u) in (5.12) by Proposition 5.3, since the path L_1 L^m does not start with any of the paths e_i; otherwise L_1 L^m would visit u twice. This contradicts (5.12), thus showing that F_u is not self-similar.
Acceptability and Impact of Group Interpersonal Therapy (IPT-G) on Kenyan Adolescent Mothers Living with HIV: A Qualitative Analysis
Background Task shifting is a well-tested implementation strategy, within low- and middle-income countries (LMICs), that addresses the shortage of trained mental health personnel. Task sharing can increase access to care for patients with mental illnesses. In Kenya, community health workers [CHWs are a combination of community health assistants (CHAs) and community health volunteers (CHVs)] have played a crucial role on this front. In our study, we seek to assess the acceptability and feasibility of IPT-G delivered by CHWs among depressed postpartum adolescents (PPAs) living with HIV. Method Twenty-four PPAs were administered IPT-G by trained CHWs from two health centers. A two-arm study design (IPT-G intervention and treatment as usual) with intent-to-treat analysis was used to assess the acceptability and feasibility of IPT-G. Participants who scored >10 on the Edinburgh Postnatal Depression Scale (EPDS) and who were 6-12 weeks postpartum were eligible for the study, using purposeful sampling. Participants were equally distributed into two groups: one group for the intervention and another as a wait-list group. This was achieved by randomly allocating numbers and separating those with odd numbers (intervention group) from those with even numbers (wait-list group). Focus group discussions (FGDs) and in-depth interviews ascertained the experiences and perceptions of the postpartum adolescents and the CHWs. In addition to weekly face-to-face continuous supportive supervision for CHWs, phone calls, short message services, and WhatsApp instant messaging services were also utilized. Results The CHWs found the intervention useful for their own knowledge and skill-set. On participation, 21 out of the 24 adolescents attended all sessions. Most of the adolescents reported improvement in their interpersonal relationships, with reduced distress and lessening of HIV-related stigma. Primary health care workers embraced the intervention by availing space for sessions. Conclusion Our study demonstrates possible benefits of task shifting in addressing mental health problems within low-resource settings in Kenya, and group IPT is demonstrated to be both acceptable and feasible by the health workers and adolescents receiving care. The acceptability and feasibility of IPT-G when delivered by non-medical specialists in routine clinical settings was assessed. We conducted our intervention between August 2018 and July 2019. Twenty-four postpartum adolescents (PPAs) aged 15-24 years and living with HIV were recruited from two sites: Kangemi health center and Kariobangi health center. They were also expected to be 6-12 weeks postpartum. The two study sites were chosen since they are situated at opposite poles of Nairobi city, and both are home to populations of low economic status. CHWs, comprising CHAs (n=2) and CHVs (n=6), delivered IPT-G to PPAs. Both study sites had the same number of CHWs.
Background
Shortage of mental health personnel and associated challenges A chronic shortage of well-trained health personnel in Sub-Saharan Africa (SSA) has necessitated task shifting as a key implementation strategy. Task shifting in SSA health systems typically means that CHWs are trained to help increase the number of health services provided, to reduce cost, and to improve delivery of care (WHO, 2007). The concept of task shifting involves rational redistribution from well-trained to less specialized health workers, or to those who have been trained over a limited time on specific skills in a given area of need (WHO, 2006). Several barriers to task shifting revolve around the need to strengthen health systems by improving systemic and physical structures (Dawson et al., 2014). CHWs undergo challenging experiences while task shifting: undertaking tasks that may drain them emotionally and physically, persistent problems of inadequate training, unstructured supervision, and poor remuneration or, in some instances, a complete lack of reward or entitlement to any form of benefit (Mundeva et al., 2018).
Task shifting is meant to reduce workloads for overburdened specialist health workers and improve patients' linkage to services (Mwai et al., 2013). In Kenya, CHWs are not well compensated and are sometimes expected to work on a voluntary basis with poorly structured responsibilities, causing them to take on roles requiring more skills than they possess (Angwenyi et al., 2013). Embracing task shifting of key preventive and promotive activities in the HIV program using CHWs promises to be a good step towards achieving the 90-90-90 goals. UNAIDS, too, has identified the engagement of community workers as essential in HIV prevention and advocacy (UNAIDS and Stop AIDS Alliance, 2015).
Prevalence of HIV in peripartum adolescents in Kenya
Our study targets one of the most vulnerable youth populations: adolescent mothers living with HIV. In SSA, adolescent girls aged 15-24 years represent 10% of the population and, using 2017 estimates, this group accounts for 25% of new HIV infections (UNAIDS, 2018). The prevalence of HIV in Kenya among female adolescents aged 15-24 is 4% (NACC, 2016), and these young women are two times more likely to contract HIV than their male counterparts (NASCOP - Kenya, 2016).
The overall prevalence of adolescent pregnancy in Africa is 18.8%, and 19.3% for the Sub-Saharan region (Kassa, Arowojolu, Odukogbe, & Yalew, 2018). By the age of 18, 42% of adolescents from SSA living in urban areas will have become pregnant, and more than 50% of their rural counterparts (UNAIDS, 2019). Early pregnancy also increases their chances of HIV infection (Christofides et al., 2014). Most adolescents are infected with HIV by older men in their late 20s and early 30s, who may not even be aware of their status and are thus unlikely to be on anti-retroviral therapy (ART) (de Oliveira et al., 2017).
A study in Malawi found that adolescents living with HIV had a depression prevalence of 18.9% (Kim et al., 2014). A similar study in Kenya focusing on mental health outcomes among adolescents living with HIV documented an almost identical depression prevalence of 17.8% (Kamau et al., 2012). Worldwide, postpartum depression (PPD) prevalence in adolescents is higher than that for adults.
Role of psychosocial interventions for adolescents
Psychological interventions have been recommended for persons living with HIV to mitigate common mental health illnesses, including depressive illnesses (Sherr et al., 2011), with no adverse effects on ART (Cruess et al., 2003). Psychological interventions for adolescents should aim to address issues related to psychosocial development, train social skills including life skills, and shape their behaviors towards a future productive life through livelihood and vocational training (Martinez et al., 2014). It has been found that adolescents carry significant risks of frequent unprotected sex, with an equally high risk of contracting HIV, having an unplanned pregnancy, or both at some time (Schunter et al., 2014). Adolescents engage in risk-taking behaviors, and girls may be less assertive in negotiating condom use; hence, the high chance of unprotected sex is a serious consideration (Januraga et al., 2014) in trying to address the well-being of adolescent girls.
The prevention of mother-to-child transmission (PMTCT) clinic has provided a conducive environment for addressing life challenges associated with HIV infection among perinatal women. In the year 2017, global ART coverage among men aged 15 years and above was 53%, compared with 65% among women of the same age (UNAIDS, 2018). In a recent study from six sub-Saharan countries, HIV-related stigma has been associated with delays in treatment and difficulties in adhering to ART, as shown in systematic review studies (Ammon et al., 2018; Croome et al., 2017). An emerging finding of note is that depression treatment improves adherence to ART, which is key to improved quality of life (Sin & DiMatteo, 2014; Wagner et al., 2020).
A Nigerian study on psychological intervention for adolescents living with HIV utilized support groups on Facebook for five weekly sessions. The ability of adolescents to interact, learn more about HIV, share experiences, and voice their fears on social media was seen to help them cope with their status (Dulli et al., 2018).
Group Interpersonal Psychotherapy (IPT-G) and its relevance for this vulnerable population A study assessed interpersonal relationships between youth and their families and found that poor relationships led to depression (Okawa et al., 2018). Lower caregiver supervision was also associated with higher depression (Bhana et al., 2016). IPT-G is well poised to help address the interpersonal difficulties to which adolescents are predisposed. It is conceptualized around four problem areas: grief and loss, interpersonal role disputes, role transitions, and interpersonal deficits/social isolation. During the adaptation of IPT-G for depressed adolescents, Mufson et al. (2004) reported that the intervention targets an individual's interpersonal relations with other persons in a given family or society, through which therapeutic benefits are achieved (Mufson et al., 2004). For example, when postpartum adolescents living with HIV are brought together in a therapy session, they appreciate that their unpleasant experiences in life also affect other persons in similar situations. This motivates the adolescent to try new interpersonal interactions that enhance better social functioning in society. IPT-G provides adolescents with peers who have similar difficulties and utilizes synergies in groups to understand their interpersonal problems and develop new ways of coping.
Our study is in line with a call by WHO to embrace the strengthening of Universal Health Coverage. Primary health care depends on health system structures that acknowledge various levels of health care services, where specialized health workers are deployed at the referral levels and CHWs are based at the community level, with the two being interdependent (Ministry of Health, 2020; WHO-Unicef, 1978; WHO, 2017).
We aim to assess qualitatively the acceptability and feasibility of IPT-G, delivered by CHWs, for postpartum adolescents living with HIV within routine clinical settings in Nairobi, Kenya.
Screening for depressive symptoms was carried out using the EPDS (Cox et al., 1987) to monitor change in scores following the IPT-G intervention. An interpersonal inventory was used to assess interpersonal relations (emotional attachments and social interaction) among the PPAs. An IPT knowledge test with 15 questions and a maximum score of 30 was used to assess CHWs' skills and performance. If 70% of the items (21 points) are answered correctly, this affirms the competence of an IPT practitioner.
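The two scoring rules above amount to simple threshold checks; a minimal sketch (illustration only, with hypothetical function names):

```python
# Minimal sketch (illustration only) of the two scoring rules described above:
# an EPDS score > 10 marks a participant as screening positive for depressive
# symptoms, and an IPT knowledge test score of at least 70% of 30 points
# (i.e. 21) affirms competence of an IPT practitioner.

def screens_positive(epds_score: int) -> bool:
    return epds_score > 10

def ipt_competent(knowledge_score: int, max_score: int = 30) -> bool:
    return knowledge_score >= 0.7 * max_score

print(screens_positive(12), ipt_competent(21))  # True True
```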
The acceptability and feasibility of IPT-G delivered by CHWs were assessed through FGDs among PPAs (n=19) and CHWs (n=7), in which they shared their experiences and perceptions of IPT-G. In-depth interviews were conducted with the medical staff within the two sites: nursing officer-in-charge (n=2), mentor-mother (n=2), and CHAs (n=2). Audio was recorded for all sessions to enable auditing of each weekly session as it progressed, to improve therapy quality. Field notes were kept during every session to document observations, experiences, and perceptions of CHWs, PPAs, and other health staff within the facility (see Table 1).
Data analytic approach
Transcriptions of audio-recorded qualitative data and field notes were coded to identify common emerging themes highlighting barriers and facilitators of IPT-G. We conducted FGDs for CHWs and PPAs who had participated in the study, and in-depth interviews for CHAs and health care management staff within the two health centers, to understand their experiences and perceptions (McCain, 1988). Finally, we determined feasibility by evaluating the recruitment process, weekly observations during sessions, and the completion rate of the intervention sessions, whereas acceptability was determined by inquiring how individual participants felt about the intervention, as well as participants' knowledge, perceptions, barriers, and gains achieved from the intervention (Sekhon et al., 2017). (See Table 2).
Results
Among the 24 participants, the majority (21, 87.5%) were aged between 21-24 years, were mostly residing with their partners (17, 70.8%), and had a parity of fewer than 2 children (20, 83.3%). About half had attained secondary education or above (despite unemployment ranking high at 79.2%), and most were earning less than 100 USD per month (22, 91.7%) (see Table 1).
Qualitative acceptability and feasibility data
CHWs found the intervention useful in terms of how it built their knowledge and skills, and they successfully delivered IPT-G. During the training of lay health workers, we found that they demonstrated a good understanding of depression and reported competency in delivering IPT-G. The adolescent mothers benefited from IPT-G, narrating how they could now function better, communicate better with their families and partners, interact socially, manage their anger, and even resume work for an income or education (see Table 3).
Our adolescent participants acknowledged that IPT-G helped alleviate social isolation, anger, hopelessness, and low mood, which are typical depressive symptoms. After the IPT-G intervention, our adolescent participants acknowledged having lived with horrible thoughts and a feeling of hopelessness about their past, and felt liberated and at great ease with their new situation. They narrated how they were able to socialize with others and perform their family responsibilities effectively, including looking after the baby without negative self-perceived stigma, despite living with HIV (see Table 4).
The study had a retention and follow-up rate of 21 (87.5%) out of the 24. Among the 8 CHWs recruited to deliver IPT-G, one of the CHVs unfortunately passed away suddenly, though the remaining 7 CHWs who participated successfully completed the delivery of the intervention. One of the CHAs from Kangemi health center stated that the IPT-G delivery process was workable, considering it was held weekly; hence that sort of time allocation was possible (see Table 5).
Capacity building through a collaborative care approach was used, which involved engaging the Director of Mental Health Services and cascading the partnership down the health care management team to the level of CHWs. The entire health management team was very supportive of our intent, issuing us with clearance and linking us to all the relevant clinical staff (see Figure 1).
Pre and post IPT-G changes observed
In both study sites, there was a shortage of trained mental health personnel. Kangemi health center has only one nurse in training, under the sponsorship of a non-governmental organization. In Kariobangi health center, there are only two health care workers with post-diploma training in psychiatry. There was no designated mental health space, and counselors under HIV testing services (HTS) were operating in mobile tents. After our study, most health care providers within the two health centers appreciated the impact of IPT-G on adolescents attending the prevention of mother-to-child transmission (PMTCT) clinic, citing improved social functioning, better communication, and appealing personal hygiene/grooming by the adolescent mothers.
Training of CHWs on IPT-G

Screening questionnaires were administered to CHWs before training to assess their knowledge of mental health concepts such as stress and depression; all demonstrated a fair understanding of the difference between the two. The CHWs appreciated continuous supportive supervision and felt that all their concerns arising from the weekly sessions were addressed promptly and adequately. Furthermore, it was very encouraging to hear from the CHWs that, after the intervention, our participants became role models in the community by imparting skills for managing day-to-day issues of life (see Table 5).
The loss of one of the CHVs emotionally affected most of our participants, possibly because he was youthful and they could easily identify with him during subsequent sessions. The lead researcher, clinical supervisor, and research assistant organized a loss and grief therapy session for the group (adolescents and CHWs) using one of the IPT problem areas. We also visited the family and arranged for tree planting within the facility, to which family members and clinical staff were invited, as a means of bringing closure for all who knew the deceased CHV. We notified the ethics office of the incident, which was documented in our protocol.
Discussion
CHWs found the intervention a useful addition to their existing knowledge and skills and were able to deliver IPT-G successfully. Our research team found that, through task shifting, IPT-G can be disseminated to other adolescents in similar settings so long as training and continuous supervision of CHWs, as recommended by WHO, is upheld (WHO, 2007). We can now embrace findings from other studies showing that non-specialists (CHWs) can be trained in specific skills to deliver an intervention with effectiveness similar to delivery by mental health specialists (Kredo et al., 2014; Murray et al., 2017). In cognizance of Universal Health Care, we too lend our voice to the need to involve CHWs in mental health care delivery to cover for the shortage of trained mental health specialists (Ministry of Health, 2020). The CHWs were very satisfied with their achievements, and some shared their experiences of the milestones in IPT-G delivery:

"At the beginning of the sessions, there were challenges because all the people were still new, so it is later that people came to know each other and developed trust in the group and everyone could say all her issues." (SK, age 50, Kariobangi)

The intervention empowered some of the adolescent mothers to help others, which was one of our intended purposes towards disseminating benefits to the community level:

"Even the sessions were very good; because some used to come and tell you, I have this and that problem and I passed through this, but I went and did what you told me, and I have succeeded, you get it? One member in our group, I recall, and even at the moment, she normally rings me and tells me, 'that thing helped me, and I am able to help others'." (NG, age 43, Kangemi)

The IPT-G sessions formed part of the entry point to enlarging the scope of their social support, and even after the intervention they continued to share their issues:

"So, it was a very big problem at the beginning, but I am glad that as the sessions were proceeding, we became good friends; actually, we developed a rapport, and even until now, some still call 'when are we meeting again?' [Laughter] --." (EO, age 24, Kariobangi)

Others were of the view that follow-up is important because most of the participants open up about their issues in the middle of the sessions:

"What I can say; when I joined-let me talk about group two, we came well but after attending like three sessions is when you could see them now pouring out [Laughter] they tell you all their troubles in life--." (FC, age 30, Kangemi)

The CHWs were now able to connect easily with the adolescent mothers after bonding with them during the sessions:

"So, we are also still doing follow-ups. Some even call by themselves, sometimes I call them, some even when they come for clinics, they just come looking for us which is good so, so they trust us. So, there is that trust, there is that friendship, there are so many benefits that came out of this, so I was just gladly sharing." (EO, age 24, Kariobangi)

Our adolescent participants benefited from IPT-G, narrating how they could now communicate, interact with others, manage their anger, and even resume work for an income. It is worth noting that a study assessing interpersonal relationships between youth and their families or caregivers found that poor relationships were strongly related to depression symptoms (Miller et al., 2016; Spence et al., 2016).
"I am called JI, and it helped me now I can talk to people, I was hopeless but now…. yes, I am of importance. When I saw that I am this way, I felt that there is no life and that I am not important." (JI, age21, Kangemi) One of the adolescent mothers narrated how HIV-related stigma used to torment her before the before our intervention: " I know myself since I know how I used to feel; when I sit down, I ask myself, 'what kind of life is this?' I used to feel guilty. I don't want people's stories. I just feel that a story may arise and reach the point of the infection, so what will I say? You know at that point you will just be forced to remain silent; you will not talk-you will never have what to say because they are negative, and you are positive, and they want to talk about that infection --." (JA, age22, Kariobangi) It was clear that irritability was negatively affecting communication with their partners as one of the participants narrated how she used to live while having depressive symptoms: "In the past, whenever I could get angry, I could not cook…. I cannot eat, if it is talking to people, I cannot talk to him; everybody sits apart when it comes to washing, everybody washes their clothes." (SK, age23, Kangemi) Besides, adolescent participants could appreciate that IPT-G helped them alleviate depressive symptoms (social isolation, anger, hopelessness, and low mood). Most of them reported better sleep patterns, good appetite, increased social interactions, and decreased HIV-related stigma. Our study population was noted to have several challenges which seemed to support previous studies where they found out that adolescents living with HIV are more vulnerable to mental health disorders due to medical and psychosocial stressors associated with HIV/AIDS (Mellins & Malee, 2013;Nanni et al., 2015). There is no doubt that our intervention addressing depression will improve adherence to ART among our study participants, which is key to good quality of life (Sin & DiMatteo, 2014). "It has helped me when I am with people I have accepted myself the way I am and then anger issues nowadays are not there, at least I can make friends; in the past, I could not make friends, but now at least I can sit with somebody and share something with her that is helpful." (SA, age23, Kariobangi) Isolation was one of the maladaptive behaviors associated with depressive symptoms as indicated by on of the adolescent mother: "I used to lock myself in the house, but currently, I can get out and make stories with neighbors, when I am called for a job I can go." (RA, age 18, Kangemi) The male partner's role towards how relationships evolve through pregnancy and motherhood among adolescents is worth looking into to help adolescent girls in abusive marriages. One qualitative study suggested that postpartum adolescents living with their partners or caregivers will bene t from social support as postpartum adolescents transit early motherhood challenges (UNESCO, 2017). See vignette from our participants here highlighting this point further: "I am FW; it has helped me I am not the same as I used to be; I used to be angry; I could get out of the house at night due to anger. Now we talk well in the house… yes [Laughter]." (FW, age21, Kariobangi) We noted the devastating effects of depression as highlighted by one of the participants on how isolation, marital con ict, and persistent distress used to affect her daily living: "I think I was depressed; I used to sit in the house and had no friends. 
But when I started coming to this group, I found friends here, outside I have also made friends. I used to be stressed as to why my husband doesn't go to work and frequent disagreements. But since I started coming here for advice, I realized that I am not the only one who has problems, so for me, it has gotten out; I just feel that it has helped me a lot, stress is usual, but for now, I feel stress free." (FW, age 21, Kariobangi)

Our study acknowledges that adolescents living with HIV are vulnerable to negative community perceptions, which agrees with a similar study in Uganda (Ashaba et al., 2019). These perceptions could manifest as suicidal ideation.

"I can remember there is one who said after the sessions, 'nowadays I can get out and talk with women, I can go out of the gate because when I was alone, I felt that I am not okay because I have HIV and I have given birth at a younger age. But now I have gotten that courage after these sessions to go outside I can talk, I have that courage,' so that made her feel that they are many and she is not alone." (LN, age 34, CHA, Kangemi)

One of the CHVs died suddenly outside the study area, which caused a lot of grief among the participants. Besides, we realized most of our CHWs had experienced the loss of a loved one and were also struggling with healing, and, as they narrated to us, our sessions supported them too. The CHWs were affected by the loss of their colleague (CA), which, in addition to previous bereavements, worsened their state (Shear, 2012), and IPT-G helped them cope:

"CA also left me, he was my friend, and it drew me so much down, but now I am fine, isn't it? --During CA's time of demise, it was like everybody in Kangemi felt like was carrying something [Silence]." (LN, age 34, CHA, Kangemi)

One of the CHWs expressed how IPT-G helped her process the thoughts and emotions of loss and grief associated with past incidents:

"It has helped me a lot, the second thing let me say I was given a husband by Kangemi people, when we were doing training, and I lost the husband (refers to CA, deceased CHV), so IPT helped me to go through the loss and grief, anyway it was not a real husband [Laughter]. But he was a very good friend of mine.
Anyway, when I was going through the sessions, I lost most of the very important people, but IPT helped me. You know when we were talking to these girls, and they are also expressing, 'I lost my kid, I lost my daddy,' and I was also like it was like me losing the people whom I care about, so we were going through these together. It was a process for all of us, so it healed me, and it healed them; that was very good." (EO, age 24, CHV, Kariobangi)

The community health assistant was mourning the loss of her child during the sessions and said this in appreciation of the intervention:

"--and when I came here, I was so much stressed, and it could have resulted in depression because there is something, I lost my child on delivery, then I lost both parents at the same time. So, when we started talking, I felt that I had put the load down, you get it? so when we continued and reached the middle, and I realized that I am good." (NG, age 43, CHV, Kangemi)

Continuous supportive supervision was appreciated by the CHWs, who felt that all their concerns arising from the weekly sessions were addressed promptly and adequately. During our group sessions, supportive supervision emphasized joint problem solving, mentoring, and two-way communication:

"It was just like Kangemi; at first for the people with whom you are not familiar enough, they could not open up; you ask her a question, and she feels like where do you want to lock and take her? But now the second time they will be free, and the third time she will feel like 'I can remember what happened to me and how it is on somebody else,' because we had supervision like from CHA and MK and could correct us when wrong and it becomes normal." (SN, age 65, Kariobangi)

Furthermore, it was very encouraging to hear from the CHWs that, after the intervention, our participants became role models in the community by imparting skills for managing day-to-day life issues. For CHWs and adolescents to understand the concept of IPT-G and even use it to help others, the learning cycle was achieved both through actual experiences and by empowering them to practice through supportive supervision (Huber, 1991; Newell, 2005):

"I think even these girls have become role models in the community…we told them 'if you find somebody who is in a situation maybe you can assist and feel that it closer to yours or it is similar to yours, you are now empowered, and you can help this person at your level and if you feel that it is difficult is when you can refer.' Still, now they are doing on their own, which is good." (EO, age 24, CHV, Kariobangi)

One of the CHAs affirmed that time for IPT-G sessions was easily embedded into their routine activities since it consumed only 90 minutes once a week:
"Not a burden because it was usually once a week, and if you are meeting those clients later, it is the one in which you can negotiate the time that you will meet, so I don't think, maybe if others think otherwise." (LN, age 34, CHA, Kangemi) Nevertheless, several CHWs acknowledged the lack of designated space for mental health services as one of the limitations towards adopting IPT-G delivery to routine PMTCT services: "I think for us (in) Kariobangi we had a big challenge when it came to the venue of the meeting because sometimes you nd we are here sometimes we are displaced at the tent, maybe the other tent is very dirty, sometimes we are in this other tent we come here displaced, so it was a very big challenge [Cross talk] --." (EO, age 24, Kariobangi) The space to administer mental health services was missing within the health centers as shown by how we had to improvise a room for the sessions: "We were okay; we had been given the maternity, a place somewhere…. You see, if we close the middle of the room too and pull the curtains, it was so good; it didn't have an issue." (LN, age 34, Kangemi) Another CHW seemed helpless of the situation and consented to the state of the available space despite all the challenges associated with it: "But let's just say it was okay; even if it is bad, it is our place, so we cannot say that it is bad [Laughter], but it was ne." (JA, age 24, Kangemi) The lack of speci ed venue for mental health services posed uncertainty on effective service delivery for the CHWs: "The space was okay; it was somewhere where we don't have issues with noise, but we don't know next." (SN, age 65, Karionbangi) We were able to form a strong collaborative team by creating a WhatsApp group and took issues of the CHWs and PPA with a lot of importance by demonstrating practical empathy, thus improving commitment for all of us in the study process. The use of technology proved equally useful, just like a Nigerian study on psychological intervention for adolescents living with HIV where support groups were engaged using Facebook groups for a 5-weekly session. The ability for them to interact, learn more about HIV, share experiences, and their fears a rmed that social media could help them cope with their status (Dulli et al., 2018).
"When I lost my child on delivery, I also want to thank YO, I don't know what to tell him, but there is a time I looked and realized that he is a very different person. We hadn't known each other-we had known each other for only three days, but when I had a problem he came with MK to my house and consoled me, and I felt that it wasn't even about friendship, they are just part of like my family, and he helped me." (LN,age 34,CHA,Kangemi) One of the CHW underscored the potential impact IPT-G could have in the community as indicated in her explanation: "--it has changed us, as much as we were the teachers, but we feel that it helped us. So I just pray that more of that kind comes and the way I said that when you just change one person he/she will change ve others and those ve others will change others and at the end of the day you may nd that you have changed the whole country and even the world, so thank you". (JA,age 24,Kangemi) Generally, our ndings of challenges affecting PPA did not come as a surprise considering previous studies had also identi ed that perinatal adolescents, who are also parenting, face several di culties, such as social stigma, lack of emotional support, poor healthcare access, and stresses around new life adjustments (Kumar et al., 2018). Our IPT-G intervention helped adolescents by improving their interpersonal relationships, communication processes, and overall mental health.
Strengths
We were able to qualitatively establish the acceptability and feasibility of IPT-G delivered by CHWs to postpartum adolescents living with HIV in primary health care settings. The use of two study sites makes the findings more representative of low-resource urban settings.
Limitations
Despite all the exciting, positive impact on the participants' lives from this intervention, as seen from the qualitative findings, the study was not powered for statistical inference. Thus, the quantitative results may not offer a strong enough argument when discussing the intervention's impact on this vulnerable population.
Conclusion
The shortage of trained mental health workers has led to the inaccessibility of mental health services in both urban and rural settings. To enhance mental health services for the broader population, our study sought to assess the acceptability and feasibility of IPT-G delivered by CHWs to postpartum adolescents living with HIV within routine clinical settings at the primary health care level in the Nairobi area. Our findings affirm that this intervention holds promise for delivery by CHWs to this specific population, and they underscore the need for supportive supervision to enhance fidelity. We recommend a follow-up study with a larger, representative sample from diverse communities in urban and rural settings, with a view to scaling up IPT-G at the primary health care level in all counties.
Relevant Quotations
Improved communication and interaction

"I had an issue with my parents, but now we are okay, and we communicate well." (RJ, age 23, Kangemi)
"About my estranged partner, I can say that at the moment, even though we have not yet met, we are fine because we are communicating." (MM, age 24, Kangemi)
"This group has helped me in terms of communication." (JF, age 24, Kangemi)
"It helped me to interact with people." (RA, age 21, Kangemi)

Improved anger management

"I am called S, but that one is informal, but in reality, I am called NC; by the way, even me, it has helped me so much, I have seen that several things have changed. The anger I used to have is no longer there; I just feel that I am okay." (NC, age 24, Kariobangi)
"For me, I got assistance because even now I see that getting angered is not so much there, I just see that life is okay; I don't want-I mean I know how to control anger, and it cannot rise the way it used to happen to me." (JA, age 22, Kariobangi)

IPT-G delivery process

"They were nice on my side, caring, they used to concentrate on us, they were social; let me say they were just nice." (JF, age 24, Kangemi)
"They used to understand us." (JI, age 21, Kangemi)
"They were nice people; in case you got stuck on something they could help you [Inaudible]." (SK, age 23, Kangemi)
"They used simple terms….and for those that were difficult they used to elaborate." (MR, age 24, Kangemi)
"They were just using simple terms." (RA, age 18, Kangemi)

Overall perceptions towards IPT by postpartum adolescents

"I can tell him (lead researcher) thanks because he has helped us so much, we had stress, we were lonely but the way he organized this group it has helped us and we have found means of helping our colleagues out there and that he should just continue that way without giving up and God grant him strength and life." (FW, age 21, Kariobangi)
"I can tell him (lead researcher) that he assisted us so much because if some of us could still be where we used to be then, we wouldn't exist till now; but he did an important thing and assisted us so he should continue that way and may God bless him." (NC, age 24, Kariobangi)
"I can tell him (lead researcher) thanks for creating this organization it has helped all of us on how we can also educate other people to be happy, and he should just continue with that spirit." (LA, age 19, Kariobangi)
"I can tell him thanks so much, he helped us a great deal because we came here with stress, but we have been assisted, and we say thank you so much and it should just continue this way [Crosstalk]." (SA, age 23, Kariobangi)
"I want to tell him (lead researcher) thanks for he has made me able to believe in myself, and may God give him strength to proceed with this program for others who are behind and are like us to get assistance." (EA, age 24, Kariobangi)
"For me, it had helped me, I have lived positively and I am not bothered by what people say [Some silence]." (LA, age 19, Kariobangi)
"I am EAO and for me this content made me more aggressive, I mean I am not the way I used to be; I have many differences such that when I walk in other places I am not afraid I know who I am now, I have decided to live a positive life and I am used to it, and if I see someone else I teach her the way I have been taught." (EO, age 24, Kariobangi)
"It was worse until hey! I used to keep quiet so much because I used to feel whenever somebody talks to me just a little then I get angry and I just wish that we fight [Laughter] I mean I used to get extremely angry --." (JA, age 22, Kariobangi)
"I was depressed but I had several regrets as I was just questioning myself but nowadays it is over. The regrets were 'why have I become pregnant early?'" (LA, age 19, Kariobangi)
"I am another one; whenever someone could anger me just a little I could get moody, sitting in the bed with tears, and when I cook the food cannot even be eaten because of too much salt [Laughter] or raw ugali; but for now I have changed and I am good, even if you talk I act like I am not hearing or I go and sit outside and come back when he is quiet." (JN, age 21, Kariobangi)
"Before I joined this group, I was so depressed; I used to be an angry person, my anger was so much near, even if someone wrongs me where we live then I carry it into the house on the kids and even on the husband; but since I came here, I have changed." (LA, age 19, Kariobangi)
"--at the moment I am stress-free and I just know how I can handle it; I sleep and wake up when I have forgotten those things but I normally thank God because I have never had stress." (RJ, age 23, Kangemi)
"It has been a nice thing, every time I see you. I normally get impressed and just smile because at least there is progress in our lives, we are not the same as the way we used to be okay?" (EO, age 24, Kariobangi)
"I think that will help a lot because you see for the young girls you might find that when they do HIV test the husband is negative and the lady is positive so you see the girl gets stressed because she is harassed by the husband 'where did you get the virus?' And whatever, it will help a lot [Silence]." (JA, age 24, Kangemi)

Building capacity and acknowledging the team

"Follow-up of that client, you have done IPT for this client you have done termination session; you have referred that client from facility to the community, who will do that follow-up? The community health workers. They are the linkage between the facility and the community and community to the facility." (LN, age 34, Kangemi)

Third IPT session is when disclosure happens

"--yes, they exist; when we started, we had a challenge because one we had not been trained, we had a challenge but we reached where we reached. I can say that even those girls when we start with them there are usually problems but the good thing is that mid-way they pick and can listen to you and sessions go well." (JA, age 24, Kangemi)

IPT impact in the community

"In fact we always talk about that group wherever I go and maybe I meet a partner I always talk about that group, that's why I am saying I am still looking for a partner to support them, I always talk about it because we felt that we changed them and we wouldn't like them to get lost on the way, we would love them to come back and help the others." (LN, age 34, Kangemi)
"I think there is a time when we began IPT, those girls most of them were locking themselves in the house.…. but now after the sessions in Kariobangi, we realized that they went back to work. Most of them have gone back to work, some others have started businesses, MA got a job and also has a business, EL has a business. So, people have gone back to work, there is SA has gone back to work; people are going back to work, people have started businesses but for one or two we are still following." (EO, age 24, Kariobangi)

Group strengthens cohesion and empowers

"She (PPA) realizes that she is not the only one and that they are several, don't you even see that the first thing they were accepting so much whenever she hears that this one is also like me, even this one and the other one so she feels that 'we are many.'" (NG, age 43, Kangemi)
"It used to help them to unwind because maybe somebody has an issue and feels like 'maybe it is only me who has this issue' so when she finds somebody in the group who says almost like similar to hers then she also opens her heart and speaks." (EO, age 24, Kariobangi)

Figure 1: Collaborative structure for IPT implementation
Target-Side Augmentation for Document-Level Machine Translation
Document-level machine translation faces the challenge of data sparsity due to its long input length and a small amount of training data, increasing the risk of learning spurious patterns. To address this challenge, we propose a target-side augmentation method, introducing a data augmentation (DA) model to generate many potential translations for each source document. Learning on this wider range of translations, an MT model can learn a smoothed distribution, thereby reducing the risk of data sparsity. We demonstrate that the DA model, which estimates the posterior distribution, largely improves MT performance, outperforming the previous best system by 2.30 s-BLEU on News and achieving new state-of-the-art results on the News and Europarl benchmarks.
Introduction
Document-level machine translation (Gong et al., 2011; Hardmeier et al., 2013; Werlen et al., 2018; Maruf et al., 2019; Bao et al., 2021; Feng et al., 2022) has received increasing research attention. It addresses the limitations of sentence-level MT by considering cross-sentence co-references and discourse information, and therefore can be more useful in practical settings. Document-level MT presents several unique technical challenges, including significantly longer inputs (Bao et al., 2021) and relatively smaller training data compared to sentence-level MT (Junczys-Dowmunt, 2019; Liu et al., 2020; Sun et al., 2022). The combination of these challenges leads to increased data sparsity (Gao et al., 2014; Koehn and Knowles, 2017; Liu et al., 2020), which raises the risk of learning spurious patterns in the training data (Belkin et al., 2019; Savoldi et al., 2021) and hinders generalization (Li et al., 2021; Dankers et al., 2022).
To address these issues, we propose a target-side data augmentation method that aims to reduce sparsity by automatically smoothing the training distribution. The main idea is to train the document MT model with many plausible potential translations, rather than forcing it to fit a single human translation for each source document. This allows the model to learn more robust and generalizable patterns, rather than being overly reliant on features of particular training samples. Specifically, we introduce a data augmentation (DA) model to generate possible translations to guide MT model training. As shown in Figure 1, the DA model is trained to understand the relationship between the source and possible translations based on one observed translation (Step 1), and then used to sample a set of potentially plausible translations (Step 2). These translations are fed to the MT model for training, smoothing the distribution of target translations (Step 3).
We use standard document-level MT models, including Transformer (Vaswani et al., 2017) and G-Transformer (Bao et al., 2021), for both our DA and MT models. For the DA model, in order to effectively capture a posterior target distribution given a reference target, we concatenate each source sentence with a latent token sequence as the new input, where the latent tokens are sampled from the observed translation. A challenge to the DA model is that having the reference translation in the input can potentially decrease diversity. To address this issue, we introduce an intermediate latent variable on the encoder side by using rules to generate n-gram samples, so that posterior sampling (Wang and Park, 2020) can be leveraged to yield diverse translations.
Figure 1: Illustration of target-side data augmentation (DA) using a very simple example (Step 1: DA model training; Step 2: target-side data augmentation; Step 3: MT model training). A DA model is trained to estimate the distribution of possible translations y given a source x_i and an observed target y_i, and the MT model is trained on the sampled translations ŷ_j from the DA model for each source x_i. Effectively training the DA model with the target y_i, which is also a conditional input, can be challenging, but it is achievable after introducing an intermediate latent variable between the translation y and the condition y_i.
Experiments show that our method achieves state-of-the-art results on News and Europarl. Further analysis shows that high diversity among generated translations and their low deviation from the gold translation are the keys to improved performance. To our knowledge, we are the first to use target-side augmentation to enrich output variety for document-level machine translation.
Related Work
Data augmentation (DA) increases training data by synthesizing new data (Van Dyk and Meng, 2001; Shorten and Khoshgoftaar, 2019; Shorten et al., 2021; Li et al., 2022). In neural machine translation (NMT), the most commonly used data augmentation techniques are source-side augmentations, including easy data augmentation (EDA) (Wei and Zou, 2019), subword regularization (Kudo, 2018), and back-translation (Sennrich et al., 2016a), which generates pseudo sources for monolingual targets, enabling the usage of widely available monolingual data. These methods generate more source-target pairs with different silver source sentences for the same gold-target translation. On the contrary, target-side augmentation is more challenging, as approaches like EDA are not effective for the target side because they corrupt the target sequence, degrading the autoregressive modeling of the target language.
Previous approaches to target-side data augmentation in NMT fall into three categories. The first is based on self-training (Bogoychev and Sennrich, 2019; He et al., 2019; Zoph et al., 2020), which generates pseudo translations for monolingual source text using a trained model. The second category uses either a pre-trained language model (Fadaee et al., 2017; Wu et al., 2019) or a pre-trained generative model (Raffel et al., 2020; Khayrallah et al., 2020) to generate synonyms for words or paraphrases of the target text. The third category relies on reinforcement learning (Norouzi et al., 2016; Wang et al., 2018), introducing a reward function to evaluate the quality of translation candidates and to regularize the likelihood objective. In order to explore possible candidates, sampling from the model distribution or random noise is used. Unlike these approaches, our method is a target-side data augmentation technique that is trained using supervised learning and does not rely on external data or large-scale pretraining. More importantly, we generate document-level rather than word-, phrase-, or sentence-level alternatives.
Previous target-side input augmentation (Xie et al., 2022) appears to be similar to our target-side augmentation. However, beyond the literal similarity, they are quite different. Consider the token prediction P(y_i | x, y_{<i}). Target-side input augmentation augments the condition y_{<i} to increase the model's robustness to the conditions, which is more like source-side augmentation on condition x. In comparison, target-side augmentation augments the target y_i, providing the model with completely new training targets.
Paraphrase models. Our approach generates various translations for each source text, each of which can be viewed as a paraphrase of the target. Unlike previous methods that leverage paraphrase models for improving MT (Madnani et al., 2007; Hu et al., 2019; Khayrallah et al., 2020), our DA model exploits the parallel corpus and does not depend on external paraphrase data, similar to Thompson and Post (2020). Instead, it takes into account the source text when modeling the target distribution. More importantly, while most paraphrase models operate at the sentence level, our DA model can generate translations at the document level.
Conditional auto-encoder. The DA model can also be seen as a conditional denoising autoencoder (c-DAE), where the latent variable is a noised version of the ground-truth target, and the model is trained to reconstruct the ground-truth target from a noisy latent sequence. c-DAE is similar to the conditional variational autoencoder (c-VAE) (Zhang et al., 2016; Pagnoni et al., 2018), which learns a latent variable and generates diverse translations by sampling from it. However, there are two key differences between c-VAE and our DA model. First, c-VAE learns both the prior and posterior distributions of the latent variable, while the DA model directly uses predefined rules to generate the latent variable. Second, c-VAE models the prior distribution of the target, while the DA model estimates the posterior distribution.
Sequence-level knowledge distillation. Our DA-MT process is also remotely similar in form to sequence-level knowledge distillation (SKD) (Ba and Caruana, 2014; Hinton et al.; Gou et al., 2021; Kim and Rush, 2016; Gordon and Duh, 2019; Lin et al., 2020), which learns the data distribution using a large teacher and distills the knowledge into a small student by training the student on sequences generated by the teacher. However, our method differs from SKD in three aspects. First, SKD aims to compress knowledge from a large teacher into a small student, while we use a model of the same or smaller size as the DA model, where the knowledge source is the training data rather than a big teacher. Second, the teacher in SKD estimates the prior distribution of the target given the source, while our DA model estimates the posterior distribution of the target given the source and an observed target. Third, SKD generates one sequence for each source, while we generate multiple diverse translations with controlled latent variables.
Target-Side Augmentation
The overall framework is shown in Figure 1. Formally, denote a set of training data as D = {(x_i, y_i)}_{i=1}^{N}, where (x_i, y_i) is the i-th source-target pair and N is the number of pairs. We train a data augmentation (DA) model (Section 3.1) to generate samples with new target translations (Section 3.2), which are used to train an MT model (Section 3.3).
The Data Augmentation Model
We learn the posterior distribution P_da(y | x_i, y_i) from the parallel corpus by introducing latent variables

    P_da(y | x_i, y_i) = Σ_{z ∈ Z_i} P_φ(y | x_i, z) P_α(z | y_i),    (1)

where z is the latent variable to control the translation output, Z_i denotes the possible space of z, φ denotes the parameters of the DA model, and α denotes the hyper-parameters for determining the distribution of z given y_i.
The space Z_i of possible z is exponentially large compared to the number of tokens of the target, making it intractable to sum over Z_i in Eq. 1. We thus consider a Monte Carlo approximation, sample a group of instances from P_α(z | y_i), and calculate the sample mean

    P_da(y | x_i, y_i) ≈ (1 / |Ẑ_i|) Σ_{z ∈ Ẑ_i} P_φ(y | x_i, z),    (2)

where Ẑ_i denotes the sampled instances.
There are many possible choices for the latent variable, such as a continuous vector or a categorical discrete variable, which could also be either learned by the model or predefined by rules. Here, we simply represent the latent variable as a sequence of tokens and use predefined rules to generate the sequence, so that the latent variable can be easily incorporated into the input of a seq2seq model without the need for additional parameters.
Specifically, we set the value of the latent variable z to be a group of sampled n-grams from the observed translation y_i and concatenate x_i and z into a sequence of tokens. We assume that the generated translations y are consistent with the observed translation y_i on these n-grams. To this end, we define α as the ratio of tokens in y_i that is observable through z, called the observed ratio. For a target with |y_i| tokens, we uniformly sample n-grams from y_i to cover α × |y_i| tokens, where each n-gram has a random length in {1, 2, 3}.
For example, given α = 0.1 and a target y_i with 20 tokens, we can sample one 2-gram or two uni-grams from the target to reach 2 (= 0.1 × 20) tokens.
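To make the sampling rule concrete, the following is a minimal Python sketch of this latent-variable construction; the function name, the whitespace tokenization, and the retry cap are illustrative assumptions rather than the authors' implementation.

```python
import random

def sample_latent_ngrams(target_tokens, alpha, max_n=3, max_retries=100):
    """Sample non-overlapping n-grams (n in {1, ..., max_n}) from the observed
    target until roughly alpha * len(target_tokens) tokens are covered.
    The sampled n-grams form the latent value z."""
    budget = int(alpha * len(target_tokens))
    covered, ngrams, retries = set(), [], 0
    while budget > 0 and retries < max_retries:
        n = min(random.randint(1, max_n), budget)
        start = random.randrange(len(target_tokens) - n + 1)
        span = range(start, start + n)
        if any(i in covered for i in span):
            retries += 1  # overlap with an earlier n-gram; resample
            continue
        covered.update(span)
        ngrams.append(target_tokens[start:start + n])
        budget -= n
    return ngrams

# With alpha = 0.1, only a token or two of the target is observed through z,
# e.g. [['societies'], ['has', 'recently']].
y = ("most free societies accept such limits as reasonable , but the law "
     "has recently become more restrictive .").split()
print(sample_latent_ngrams(y, alpha=0.1))
```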
Training. Given a sample (x_i, y_i), the training loss is rewritten as

    L_da = −log P_da(y_i | x_i, y_i) ≈ −log (1 / |Ẑ_i|) Σ_{z ∈ Ẑ_i} P_φ(y_i | x_i, z) ≤ −(1 / |Ẑ_i|) Σ_{z ∈ Ẑ_i} log P_φ(y_i | x_i, z),    (3)

where the upper bound of the loss is provided by Jensen's inequality. The upper bound sums log probabilities, which can be seen as the sum of the standard negative log-likelihood (NLL) loss over each (x_i, z, y_i). As a result, when we optimize this upper bound as an alternative to optimizing L_da, the DA model is trained using the standard NLL loss but with |Ẑ_i| times more training instances.
Discussion. As shown in Figure 1, given a sample (x_i, y_i), we adopt a new estimation method using the posterior distribution P_da(y | x_i, y_i) for our DA model. The basic intuition is that by conditioning on both the source x_i and the observed translation y_i, the DA model can estimate the data distribution P_data(y | x_i) more accurately than an MT model. Logically, an MT model learns a prior distribution P_mt(y | x_i), which estimates the data distribution P_data(y | x_i) for modeling translation probabilities. This prior distribution works well when the corpus is large. However, when the corpus is sparse in comparison to the data space, the learned distribution overfits the sparsely distributed samples, resulting in poor generalization to unseen targets.
The Data Augmentation Process
The detailed data augmentation process is shown in Figure 2 and the corresponding algorithm is shown in Algorithm 1. Below we use one training example to illustrate.
DA model training. We represent the latent variable z as a sequence of tokens and concatenate z to the source, so a general seq2seq model can be used to model the posterior distribution. Compared to general MT models, the only difference is the structure of the input.
Specifically, as step B in the figure shows, for a given sample (x_i, y_i) from the parallel data, we sample a number of n-grams from y_i and extend the input to (x_i, z), where the number is determined according to the length of y_i. Take the target sentence "most free societies accept such limits as reasonable , but the law has recently become more restrictive ." as an example. We sample "societies" and "has recently" from the target and concatenate them to the end of the source sentence to form the first input sequence. We then sample "the law" and "as reasonable" to form the second input sequence. These new input sequences pair with the original target sequence to form new parallel data. By generating different input sequences, we augment the data multiple times.
Algorithm 1 Target-side data augmentation.
    ...                            ▷ Add the gold pair
    6: for j ← 1 to M do
    7:     α ∼ Beta(a, b)          ▷ Sample an observed ratio
    8:     z_j ∼ P_α(z | y_i)      ▷ Sample a latent value
    9:     ŷ_j ∼ P_φ(y | x_i, z_j) ▷ Sample a translation
    10: ...

Target-side data augmentation. Using the data "C. Extended Input" separated from the extended data in step B, we generate new translations by running a beam search with the trained DA model, where for each extended input sequence we obtain a new translation. Here, we reuse the sampled z from step B. However, we can also sample new z for inference, which does not show an obvious difference in MT performance. By pairing the new translations with the original source sequence, we obtain "E. Augmented Data". The details are described in Algorithm 1, which takes the original parallel data as input and outputs the augmented data.
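A compact Python rendering of this augmentation loop is sketched below; `da_model.translate` stands in for beam search with the trained DA model, and the `<sep>` joining of source and latent tokens is an assumed input convention (the sketch reuses `sample_latent_ngrams` from above).

```python
import random

def augment(parallel_data, da_model, M, a=2.0, b=3.0):
    """Target-side augmentation in the spirit of Algorithm 1: for each pair,
    keep the gold translation and add M sampled translations."""
    augmented = []
    for x, y in parallel_data:
        augmented.append((x, y))                        # add the gold pair
        for _ in range(M):
            alpha = random.betavariate(a, b)            # sample an observed ratio
            z = sample_latent_ngrams(y.split(), alpha)  # sample a latent value
            flat_z = " ".join(tok for gram in z for tok in gram)
            extended_input = x + " <sep> " + flat_z     # step C: extended input
            y_hat = da_model.translate(extended_input, beam_size=5)  # hypothetical API
            augmented.append((x, y_hat))                # step E: augmented data
    return augmented
```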
The MT Model
We use Transformer (Vaswani et al., 2017) and G-Transformer (Bao et al., 2021) as the baseline MT models. The Transformer baseline models sentence-level translation and translates a document sentence by sentence, while G-Transformer models whole-document translation and directly translates a source document into the corresponding target document. G-Transformer improves the naïve self-attention in Transformer with group attention (Appendix A) for long-document modeling and is a recent state-of-the-art document MT model.
Baseline Training. The baseline methods are trained on the original training dataset D with the standard NLL loss

    L_mt = −Σ_{i=1}^{N} log P_θ(y_i | x_i).    (4)

Augmentation Training. For our target-side augmentation method, we force the MT model to match the posterior distribution estimated by the DA model

    L = −Σ_{i=1}^{N} Σ_{y ∈ Y_i} P_da(y | x_i, y_i) log P_θ(y | x_i),    (5)

where Y_i is the set of possible translations of x_i.
We approximate the expectation over Y_i using a Monte Carlo method. Specifically, for each sample (x_i, y_i), we first sample z_j from P_α(z | y_i) and then run beam search with the DA model, taking x_i and z_j as its input and obtaining a feasible translation.
Repeating the process M times, we obtain a set of possible translations, as step D in Figure 2 and Algorithm 1 in Section 3.2 illustrate.
Subsequently, the loss function for the MT model is rewritten as follows, approximating the expectation by the average NLL loss of the sampled translations:

    L_mt = −Σ_{i=1}^{N} (1 / |Ŷ_i|) Σ_{ŷ_j ∈ Ŷ_i} log P_θ(ŷ_j | x_i),    (7)

where θ denotes the parameters of the MT model. The number |Ŷ_i| could be different for each sample, but for simplicity, we choose a fixed number M in our experiments.
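As a sketch, the augmented objective for one source amounts to averaging the per-translation NLL; `nll` below is an assumed callable returning the MT model's negative log-likelihood of a target given a source, not part of the authors' code.

```python
def augmented_mt_loss(nll, x_i, sampled_translations):
    """Monte Carlo form of Eq. 7 for a single source x_i: average the standard
    NLL over the M translations sampled from the DA model."""
    assert sampled_translations, "need at least one sampled translation"
    return sum(nll(x_i, y_hat) for y_hat in sampled_translations) / len(sampled_translations)
```

In practice the gold translation can be included in the sampled set, which corresponds to the gen+gold setting analyzed later.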
Experiments
Datasets. We experiment on three benchmark datasets: TED, News, and Europarl (Maruf et al., 2019), representing different domains and data scales for English-German (En-De) translation.
The detailed statistics are displayed in Table 1.

Baselines. We apply target-side augmentation to two baselines: the sentence-level Transformer (Vaswani et al., 2017) and the document-level G-Transformer (Bao et al., 2021). We further combine back-translation with target-side augmentation and apply it to the two baselines.
Training Settings. For both Transformer and G-Transformer, we generate M new translations (9 for TED and News, and 3 for Europarl) for each sentence, augmenting the data to M + 1 times its original size. For the back-translation baselines, where the training data have already been doubled, we further augment the data 4 times for TED and News and 1 time for Europarl, so that the total factors are still 10 for TED and News and 4 for Europarl.
We obtain the translations by sampling latent z with an observed ratio drawn from a Beta distribution Beta(2, 3) and running a beam search with a beam size of 5. We run each main experiment three times and report the median. More details are described in Appendix B.3.
Main Results
As shown in Table 2, target-side augmentation significantly improves all the baselines. In particular, it improves G-Transformer (fnt.) by 1.75 s-BLEU on average over the three benchmarks, where the improvement on News reaches 2.94 s-BLEU. With the augmented data generated by the DA model, the gap between G-Transformer (rnd.) and G-Transformer (fnt.) narrows from 1.26 s-BLEU on average to 0.18, suggesting that fine-tuning on a sentence MT model might not be necessary when augmented data is used. For the Transformer baseline, target-side augmentation enhances the performance by 1.33 s-BLEU on average. These results demonstrate that target-side augmentation can significantly improve the baseline models, especially on small datasets.
Compared with previous work, G-Transformer (fnt.) + target-side augmentation outperforms the best system SMDT, which references retrieved similar translations, by a margin of 1.40 s-BLEU on average. It outperforms the previous competitive RecurrentMem, which gives the best score on TED, by a margin of 1.58 s-BLEU on average. Compared with MultiResolution, which is also a data augmentation approach that increases the training data by splitting the documents into different resolutions (e.g., 1, 2, 4, or 8 sentences per training instance), target-side augmentation obtains higher performance by a margin of 1.72 s-BLEU on average. With target-side augmentation, G-Transformer (fnt.) achieves the best reported s-BLEU on all three datasets.
Compared to the pre-training setting, target-side augmentation with G-Transformer (fnt.) outperforms Flat-Transformer+BERT and G-Transformer+BERT, which are fine-tuned on pre-trained BERT, by margins of 1.46 and 0.70 s-BLEU, respectively, on average over the three benchmarks, where the margins on News reach 3.54 and 1.92, respectively. The score on the bigger dataset Europarl even exceeds the strong large-scale pre-training baseline G-Transformer+mBART, suggesting the effectiveness of target-side augmentation for both small and large datasets.
Back-translation does not enhance the performance on TED and Europarl by an adequate margin but enhances the performance on News significantly, compared to the Transformer and G-Transformer baselines. On top of the enhanced baselines, target-side augmentation further improves the performance on News to a new level, reaching the highest s/d-BLEU scores of 28.69 and 30.41, respectively. The results demonstrate that target-side augmentation complements the back-translation technique, and a combination may be the best choice in practice.
Posterior vs Prior Distribution
We first compare the MT performance of using a posterior distribution P(y | x_i, y_i) in the DA model (Eq. 5 in Section 3.3) against using the prior distribution P(y | x_i). As shown in Table 3, when using prior-based augmentation, the performance improves by 0.64 s-BLEU on average compared to using the original data. After switching the DA model to the posterior distribution, the performance improves by 1.75 s-BLEU on average, which is larger than the improvement obtained by the prior distribution. The results suggest that using a DA model (even with a simple prior distribution) to augment the target sequence is effective, and the posterior distribution gives a further significant boost.
Generated Translations. We evaluate the distribution of generated translations, as shown in Table 4 (gen+gold: trained on both generated and gold translations; gen only: trained on generated translations only). Using the prior distribution, we obtain translations with higher Diversity than with the posterior distribution.
However, higher Diversity does not necessarily lead to better performance if the generated translations are not consistent with the target distribution. As the Deviation column shows, the translations sampled from the posterior distribution have a much smaller Deviation than those from the prior distribution, which confirms that a DA model estimating the posterior distribution can generate translations more similar to the gold target.
Accuracy of Estimated Distribution. As more direct evidence supporting the DA model with a posterior distribution, we evaluate the perplexity (PPL) of the model on a multiple-reference dataset, where a better model is expected to give a lower PPL on the references (Appendix C.1). As the PPL column in Table 4 shows, we obtain an average PPL (per token) of 7.00 for the posterior and 8.68 for the prior distribution, with the former being 19.4% lower than the latter, confirming our hypothesis that the posterior distribution can estimate the data distribution P_data(y | x_i) more accurately.
Sampling of Latent z
Scale. The sampling scale |Ŷ| in Eq. 7 is an important factor influencing model performance. Theoretically, the larger the scale, the more accurate the approximation. Figure 3 evaluates the performance at different scales of generated translations. The overall trends confirm the theoretical expectation that performance improves as the scale increases. At the same time, the contribution of the gold translation drops as the scale increases, suggesting that with more generated translations, the gold translation provides less additional information. In addition, the performance at scales ×1 and ×9 differs by 0.75 s-BLEU, suggesting that the MT model requires sufficient samples from the DA model to match its distribution. In practice, we need to balance the performance gain against the training costs to decide on a suitable sampling scale.
Observed Ratio. Using the observed ratio (α in Eq. 1), we can control the amount of information provided by the latent variable z. This ratio influences the quality of the generated translations. As Figure 4a shows, a higher observed ratio produces translations with a lower Deviation from the gold reference, following a monotonically descending curve. In comparison, the Diversity of the generated translations shows a convex curve, with low values when the observed ratio is small or big and high values in the middle. The Diversity of the generated translations represents the degree of smoothness of the augmented dataset, which has a direct influence on model performance.
As Figure 4b shows, the MT model obtains the best performance around a ratio of 0.4, where it has a balanced quality of Deviation and Diversity. When the ratio increases further, the performance goes down. Comparing the MT models trained with and without the gold translation, we see that the performance gap between the two settings closes when the observed ratio is bigger than 0.6, where the generated translations have a low Deviation from the gold translations.
The Diversity can be further enhanced by mixing the generated translations from different observed ratios. Therefore, instead of using a fixed ratio, we sample the ratio from a predefined Beta distribution. As Figure 4c shows, we compare the performance for different Beta distributions. The performance on TED peaks at Beta(1, 1) but does not show a significant difference compared to the other two, while the performance on News peaks at Beta(2, 3), which is a unimodal distribution with an extremum between ratios 0.3 and 0.4 and has a shape similar to the Diversity curve in Figure 4a. Compared to Beta(2, 2), which is also a unimodal distribution but with an extremum at ratio 0.5, the performance with Beta(2, 3) is higher by 0.66 s-BLEU.

Granularity of N-grams. The granularity of the n-grams determines how much order information between tokens is observable through the latent z (in comparison, the observed ratio determines how many tokens are observed). We evaluate different ranges of n-grams, where we sample n-grams according to a number uniformly sampled from the range. As Figure 5 shows, the performance peaks at [1, 2] for TED and [1, 3] for News. However, the differences are relatively small, showing that the performance is not sensitive to the token order of the original reference. A possible reason may be that the DA model can reconstruct the order according to the semantic information provided by the source sentence.
Different Augmentation Methods
Source-side and Both-side Augmentation. We compare target-side augmentation with source-side and both-side augmentation by applying the DA model to the source and to both sides. As Table 5 shows, source-side augmentation improves the baseline by 1.12 s-BLEU on average over TED and News but is still significantly lower than target-side augmentation, which improves the baseline by 2.17 s-BLEU on average. Combining the generated data from both source-side and target-side augmentation, we obtain an improvement of 2.42 s-BLEU on average, where the source-side augmented data further enhances target-side augmentation by 0.25 s-BLEU on average. These results suggest that the DA model is effective for source-side augmentation but even more so for target-side augmentation.

Table 6: Target-side augmentation vs. paraphraser on sentence-level MT, evaluated on IWSLT14 German-English (De-En). ♢: nucleus sampling with p = 0.95.
Paraphrasing. Target-side augmentation augments the parallel data with new translations, which can be seen as paraphrases of the original gold translation. Such paraphrasing can also be achieved by external paraphrasers. We compare target-side augmentation with a pre-trained T5 paraphraser on a sentence-level MT task, using the settings described in Appendix C.3.
As shown in Table 6, the T5 paraphraser performs below the Transformer baseline on both the dev and test sets, while target-side augmentation outperforms the baseline by 1.57 and 1.55 on dev and test, respectively. The results demonstrate that a DA model is effective for sentence MT while a paraphraser may not be, possibly because of missing translation information.
In particular, the generated paraphrases from the T5 paraphraser have a Diversity of 40.24, which is close to the Diversity of 37.30 from the DA model. However, when we compare the translations by calculating the perplexity (PPL) on the baseline Transformer, we get a PPL of 3.40 for the T5 paraphraser but 1.89 for the DA model. The results suggest that, compared to an external paraphraser, the DA model generates translations more consistent with the distribution of the gold targets.
Further Analysis
Size of the DA Model. Conditioning on an observed translation simplifies the DA model's prediction of the target. As a result, the generated translations are less sensitive to the capacity of the DA model. Results with different sizes of DA models confirm this hypothesis and suggest that MT performance improves even with much smaller DA models. The details are in Appendix C.2.
Case Study. We list several word-, phrase-, and sentence-level cases of German-English translations, and two documents of English-German translations, demonstrating the diversity of the translations generated by the DA model. The details are shown in Appendix C.4.
Conclusion
We investigated a target-side data augmentation method, which introduces a DA model to generate many possible translations and trains an MT model on these smoothed targets. Experiments show that our target-side augmentation method reduces the effect of data sparsity issues, achieving strong improvements upon the baselines and new state-of-the-art results on News and Europarl. Analysis suggests that a balance between high Diversity and low Deviation is the key to the improvements. To our knowledge, we are the first to apply target-side augmentation in the context of document-level MT.
Limitations
Long documents intuitively have more possible translations than short documents, so a dynamic number of generated translations may be a better choice when augmenting the data, balancing the training cost and the performance gain. Another potential solution is to sample a few translations and force the MT model to match the dynamic distribution of the DA model using these translations as decoder input, similar to Khayrallah et al. (2020). Such dynamic sampling and matching could potentially be used to increase training efficiency. We do not investigate this solution in this paper and leave the exploration of this topic to future work.
Target-side augmentation can potentially be applied to other seq2seq tasks where data sparsity is a problem. Due to the limitation of space in a conference submission, we leave investigations on other tasks for future work.
A G-Transformer
G-Transformer (Bao et al., 2021) has an encoder-decoder architecture involving two types of multi-head attention: one for the global document, called global attention, and another for the local sentence, called group attention.
Global Attention. The global attention is simply normal multi-head attention, which attends to the whole document:

    Attn(Q, K, V) = softmax(QK^⊤ / √d) V,

where the matrix inputs Q, K, V are the query, key, and value for calculating the attention.

Group Attention. The group attention differentiates the sentences in a document by assigning a group tag (Bao and Zhang, 2021, 2023; Bao et al., 2023) to each sentence. The group tag is a number used to identify a specific sentence, allocated in the order of sentences: the group tag for the first sentence is 1, for the second sentence 2, and so on.
The group-tag sequences are used to calculate an attention mask that avoids cross-sentential attention:

GroupAttention(Q, K, V, G_Q, G_K) = softmax(QK^T / √d + M(G_Q, G_K))V,

where G_Q and G_K are the group-tag sequences for the query and key. The function M(G_Q, G_K) calculates the attention mask: for a group tag in G_Q and a group tag in G_K, it returns a big negative number if the two tags are different and 0 otherwise.
Combined Attention. The two multi-head attentions are combined using a gate-sum module:

g = σ(W[A_global; A_group] + b),    A_combined = g ⊙ A_global + (1 − g) ⊙ A_group,

where W and b are trainable parameters and ⊙ denotes element-wise multiplication. G-Transformer uses group attention on the low layers and combined attention on the top two layers.
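To make the two mechanisms concrete, here is a minimal PyTorch sketch; the tensor layout, the helper names, and the exact sigmoid gate form are illustrative assumptions rather than the authors' released code.

```python
import torch

NEG_INF = -1e9  # the "big negative number" that blocks cross-sentence attention

def group_attention_mask(gq: torch.Tensor, gk: torch.Tensor) -> torch.Tensor:
    """M(G_Q, G_K): 0 where query and key share a group tag, NEG_INF otherwise."""
    # gq: (batch, q_len), gk: (batch, k_len) integer group tags
    same_group = gq.unsqueeze(-1) == gk.unsqueeze(-2)  # (batch, q_len, k_len)
    return (~same_group).float() * NEG_INF

def attention(q, k, v, mask=None):
    """Scaled dot-product attention; the mask is added to the logits."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    if mask is not None:
        scores = scores + mask
    return torch.softmax(scores, dim=-1) @ v

def gate_combine(a_global, a_group, W, b):
    """Gate-sum module: an element-wise sigmoid gate mixes the two outputs."""
    g = torch.sigmoid(torch.cat([a_global, a_group], dim=-1) @ W + b)
    return g * a_global + (1.0 - g) * a_group
```

With group tags [1, 1, 2] for both query and key, for example, the mask is 0 within each sentence and a large negative value across sentences, so the softmax assigns near-zero weight to cross-sentence positions.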
B.1 Datasets
The three benchmark datasets are as follows. TED is a corpus from IWSLT2017, which contains the transcriptions of TED talks; each talk corresponds to a document. The sentences in source and target documents are aligned for translation. We use tst2016-2017 for testing and the rest for development.
News is a corpus mainly from News Commentary v11, where the sentences are also aligned between the source and target documents. We use newstest2016 for testing and newstest2015 for development. In addition, we use newstest2021 from WMT21 (Farhad et al., 2021), which has three references for each source, to evaluate the quality of the estimation of the data distribution.
Europarl is a corpus extracted from Europarl v7, where the train, development, and test sets are randomly split.
We pre-process the data by tokenizing and truecasing the sentences using the MOSES tools (Koehn et al., 2007), followed by BPE (Sennrich et al., 2016b) with 30,000 merge operations.
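For illustration, the BPE step can be reproduced with the subword-nmt package; the file paths below are placeholders, and the input is assumed to be already tokenized and truecased.

```python
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

# Learn 30,000 BPE merge operations from the training corpus (placeholder paths).
with open("train.tok.tc.en") as infile, open("bpe.codes", "w") as outfile:
    learn_bpe(infile, outfile, num_symbols=30000)

# Apply the learned merges to each tokenized, truecased line.
with open("bpe.codes") as codes_file:
    bpe = BPE(codes_file)
print(bpe.process_line("a tokenized and truecased sentence"))
```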
B.2 Metrics
The sentence-level BLEU score (s-BLEU) and document-level BLEU score (d-BLEU) are described as follows.
s-BLEU is calculated over aligned sentence pairs within the source and target documents, which is essentially the same as the BLEU score (Papineni et al., 2002) used for sentence-level NMT models.
d-BLEU is calculated over document pairs, taking each document as a whole word sequence and computing the BLEU score between the hypothesis and reference sequences.
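As a concrete illustration (not the authors' evaluation code), both scores, as well as the Deviation metric defined below, can be computed with the sacrebleu package on toy documents:

```python
import sacrebleu

# Toy documents: lists of aligned sentences (hypothesis vs. reference).
hyp_doc = ["it only lives because you make it .", "ears that move passively ."]
ref_doc = ["it only lives because you make it .",
           "ears that move passively when the head goes ."]

# s-BLEU: corpus BLEU over the aligned sentence pairs.
s_bleu = sacrebleu.corpus_bleu(hyp_doc, [ref_doc]).score

# d-BLEU: treat each document as one long word sequence.
d_bleu = sacrebleu.corpus_bleu([" ".join(hyp_doc)], [[" ".join(ref_doc)]]).score

# Deviation (defined below) is the distance from a perfect s-BLEU score.
deviation = 100.0 - s_bleu
print(f"s-BLEU={s_bleu:.2f}  d-BLEU={d_bleu:.2f}  Deviation={deviation:.2f}")
```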
For analysis, we measure the Deviation and Diversity of generated translations.
Deviation is simply defined as the distance from a perfect s-BLEU score:

Deviation(ŷ, y) = 100 − s-BLEU(ŷ, y),    (11)

Gold: accept such limits as reasonable
1) consider these restrictions useful 2) regard such restrictions as reasonable 3) take these constraints as certain

Source: passiv bewegte ohren sobald der kopf etwas tut .
Gold: ears that move passively when the head goes .
1) ears moving passively when the head does something . 2) passively moving ears once the head goes .

Gold: an object constructed out of wood and cloth with movement built into it to persuade you to believe that it has life
1) an object made out of wood and cloth , with movement built in to persuade you to believe that 's alive . 2) an object built out of wood and cloth with movement to perpetuate you to believe it 's alive . 3) a wooden and cloth object with movement built in to make you believe that it 's alive .

Source: sie lebt nur dann wenn man sie dazu bringt .
Gold: it only lives because you make it .
1) it only lives when you get it to do . 2) it lives only as you make it . 3) it only lives because you get them to do it .

Source: in jedem moment auf der bühne rackert sich die puppe ab .
Gold: so every moment it 's on the stage , it 's making the struggle .
1) at every moment on the stage , it 's making the struggle of puppet . 2) every moment on the stage it reckers down the puppet . 3) so every moment it 's on the stage , the puppet is racking off .

Source: er demonstriert anhand einer schockierenden geschichte von der toxinbelastung auf einem japanischen fischmarkt , wie gifte den weg vom anfang der ozeanischen nahrungskette bis in unseren körper finden .
Gold: he shows how toxins at the bottom of the ocean food chain find their way into our bodies , with a shocking story of toxic contamination from a japanese fish market .
1) he demos through a shocking story of toxic burden on a japanese fish market , how poisoning their way from the beginning of the ocean food chain into our bodies . 2) he demos through a shocking story of toxin impact on a japanese fish market , how poised the way from the ocean food chain to our bodies . 3) he demos through a shocking story of toxin contamination at a japanese fish market , with how toxins find the way from the beginning of the ocean food chain to our bodies .

Table 8: Translations generated by the DA model on IWSLT14 German-English.

We generate 6 translations for each source sentence without using the document context. It is worth noting that, different from the previous paraphrasing augmentation method (Khayrallah et al., 2020), where the MT model learns the paraphraser's distribution directly, we use sampled text output to train the MT models.
C.4 Case Study
Our case study demonstrates that the DA model generates diverse translations at the word, phrase, and sentence levels. Several cases for German-English translation are listed in Table 8.
We further list two document-level translations, which give a direct sense of how target-side augmentation improves MT performance, as Table 9 shows.
Figure 2: The detailed data augmentation process, where the parallel data is augmented multiple times.
Figure 3: Impact of the sampling scale for z, trained on G-Transformer (fnt.) and evaluated in s-BLEU on News. (gen+gold) - trained on both generated and gold translations. (gen only) - trained on generated translations.
Figure 4: Impact of the observed ratio for z, trained on G-Transformer (fnt.) and evaluated in s-BLEU. Beta(a,b) - the function curves are shown in Appendix B.3.
Figure 5: Impact of the granularity of n-grams, trained on G-Transformer (fnt.) and evaluated in s-BLEU.
Figure 6: The probability density function of Beta(a, b) distributions.
Table 1: most free societies accept such limits as reasonable , but the law has recently become more restrictive .

Datasets. The datasets follow Liu et al. (2020); detailed descriptions are in Appendix B.1. Metrics. We follow Liu et al. (2020) to use sentence-level BLEU score (s-BLEU) and document-level BLEU score (d-BLEU) as the major metrics for the performance. We further define two metrics, Deviation and Diversity, to measure the quality of generated translations from the DA model for analysis. The detailed descriptions and definitions are in Appendix B.2.

Table 2: Main results evaluated on English-German document-level translation, where "*" indicates a significant improvement upon the baseline with p < 0.01. (rnd.) - parameters are randomly initialized. (fnt.) - parameters are initialized using a trained sentence model. ♢ - we adjust the hyper-parameters for augmented datasets. ♡ - we augment the training data by back-translating each target to a new source instead of introducing additional monolingual targets.
Table 3: MT performance with prior/posterior-based DA models, evaluated in s-BLEU.
Table 4: Quality of generated translations and accuracy of the estimated distributions from the DA model, evaluated on News.
Common Attractors for Generalized F-Iterated Function Systems in G-Metric Spaces
In this paper, we study the generalized F-iterated function system in G-metric spaces. Several results on common attractors of generalized iterated function systems obtained by using generalized F-Hutchinson operators are also established. We prove that the triplet of F-Hutchinson operators defined for a finite number of general contractive mappings on a complete G-metric space is itself a generalized F-contraction mapping on a space of compact sets. We also present several examples in 2-D and 3-D for our results.
In his 1981 seminal work, Hutchinson [15] established mathematical foundations for iterated function systems (IFSs) and showed that the Hutchinson operator defined on R^k has as its fixed point a bounded and closed subset of R^k, called an attractor of the IFS [16,17]. Several researchers have obtained useful results for iterated function systems (see [18,19] and references therein). Nazir, Silvestrov, and Abbas [20] established fractals by employing F-Hutchinson maps in the setup of metric spaces. Recently, Navascués [21] presented the approximation of fixed points and fractal functions by means of different iterative algorithms. Navascués et al. [22] established some useful results of the collage type for Reich mutual contractions in b-metric and strong b-metric spaces. Thangaraj et al. [23] constructed an iterated function system called the Controlled Kannan Iterated Function System, based on Kannan contraction maps in a controlled metric space, and used it to develop a new kind of invariant set, known as a Controlled Kannan Attractor or Controlled Kannan Fractal. Recently, Nazir and Silvestrov [24] investigated a generalized iterated function system based on pairs of self-mappings, obtained the common attractors of these maps in complete dislocated metric spaces, established the well-posedness of the attractor problems of rational contraction maps in the framework of dislocated metric spaces, and obtained the generalized collage theorem in dislocated metric spaces.
In this paper, we consider triplets of generalized F-contractive operators and define generalized F-Hutchinson operators to obtain common attractors in complete G-metric spaces. The contractive conditions are different from those in [24], and dislocated metric spaces and G-metric spaces are independent of each other. We construct some new common attractor results based on a generalized F-iterated function system in G-metric spaces. We define F-Hutchinson operators with a finite number of general F-contractive operators in a complete G-metric space and show that these operators are themselves general F-contractions. It is worth mentioning that we obtain these results without using any type of commuting condition on the self-maps in non-symmetric G-metric spaces. At the end, we present several nontrivial examples of common attractors resulting from F-Hutchinson operators.
Mustafa and Sims [5] established the following notion of G-metric.
Definition 2 ([25]). Let {y_n} be a sequence in a G-metric space (Z, G). Then:
(a) {y_n} ⊂ Z is a G-convergent sequence if, for any ε > 0, there are a point y ∈ Z and a natural number N such that for all n, m ≥ N, G(y, y_n, y_m) < ε;
(b) {y_n} converges to y ∈ Z whenever G(y_m, y_n, y) → 0 as m, n → ∞;
(c) {y_n} is Cauchy whenever G(y_m, y_n, y_l) → 0 as m, n, l → ∞.
Remark 1 ([28]). In a G-metric space (Z, G), let H_G : CB(Z) × CB(Z) × CB(Z) → [0, +∞) be the Hausdorff G-metric. For P, Q, R, S, U, V ∈ C_G(Z), the standard properties of H_G are satisfied.

Wardowski [29] defined F-contraction maps for fixed point results as follows. Let F : R_+ → R be a continuous map satisfying the following conditions:
(F_1) F is strictly increasing;
(F_2) for each sequence {λ_n} of positive numbers, lim_{n→∞} λ_n = 0 if and only if lim_{n→∞} F(λ_n) = −∞;
(F_3) there exists θ ∈ (0, 1) such that lim_{λ→0+} λ^θ F(λ) = 0.
We denote by ℱ the collection of all functions F satisfying (F_1)–(F_3).

Definition 4. In a G-metric space (Z, G), a self-map h : Z → Z is called an F-contraction on Z if, for all u, v, w ∈ Z, there exist F ∈ ℱ and τ > 0 such that τ + F(G(hu, hv, hw)) ≤ F(G(u, v, w)) whenever G(hu, hv, hw) > 0.
We now discuss F-iterated function systems in G-metric spaces. First, we define generalized F-contractive operators as a preliminary notion.

Definition 5. In a G-metric space (Z, G), let f, g, h : Z → Z be three self-mappings. The triplet (f, g, h) is called a generalized F-contraction if, for all u, v, w ∈ Z, there exist F ∈ ℱ and τ > 0 such that τ + F(G(fu, gv, hw)) ≤ F(G(u, v, w)) whenever G(fu, gv, hw) > 0.
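As a concrete illustration (not taken from the paper), with F(λ) = ln(λ) the condition of Definition 5 reduces to G(fu, gv, hw) ≤ e^(−τ) G(u, v, w), a Banach-type contraction with ratio e^(−τ) < 1. The sketch below checks the inequality numerically for sample affine maps under the standard G-metric G(x, y, z) = |x − y| + |y − z| + |z − x| on the real line.

```python
import math
import random

def G(x, y, z):
    # A standard G-metric on the real line.
    return abs(x - y) + abs(y - z) + abs(z - x)

# Sample affine contractions (illustrative, not from the paper).
f = lambda u: u / 4
g = lambda v: v / 4
h = lambda w: w / 4

F = math.log        # Wardowski map F(lambda) = ln(lambda)
tau = math.log(2)   # candidate constant; e**(-tau) = 1/2

random.seed(0)
for _ in range(10_000):
    u, v, w = (random.uniform(-5.0, 5.0) for _ in range(3))
    lhs, rhs = G(f(u), g(v), h(w)), G(u, v, w)
    if lhs > 0:  # the condition is only required when G(fu, gv, hw) > 0
        assert tau + F(lhs) <= F(rhs) + 1e-12
print("(f, g, h) passes the generalized F-contraction check")
```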
Theorem 1. Consider a G-metric space (Z, G) and let f, g, h : Z → Z be continuous maps. If the triplet of mappings (f, g, h) is a generalized F-contraction, then (i) the elements of C_G(Z) are mapped to elements of C_G(Z) under f, g, and h; (ii) if, for an arbitrary U ∈ C_G(Z), the mappings f, g, h : C_G(Z) → C_G(Z) are defined by f(U) = {f(u) : u ∈ U}, g(U) = {g(u) : u ∈ U}, and h(U) = {h(u) : u ∈ U}, then the triplet (f, g, h) is a generalized F-contraction on (C_G(Z), H_G).
Proof. (i) Since f is continuous and the image of a compact subset under a continuous mapping is compact, f maps C_G(Z) into C_G(Z); the same holds for g and h. (ii) Let u, v, w ∈ Z be such that G(fu, gv, hw) > 0. Passing to suprema over the compact sets and using the properties of H_G, there exists τ* > 0 such that τ* + F(H_G(f(U), g(V), h(W))) ≤ F(H_G(U, V, W)). Thus, the triplet (f, g, h) is a generalized F-contraction on (C_G(Z), H_G).
Proposition 2. In a G-metric space (Z, G), suppose that the mappings f_k, g_k, h_k : Z → Z for k = 1, . . ., q are continuous and that each triplet (f_k, g_k, h_k) is a generalized F-contraction. Then the triplet (Υ, Ψ, Φ) of associated operators is also a generalized F-contraction on C_G(Z).

Proof. We give a proof by induction on q. If q = 1, then the result holds trivially, and the case q = 2 follows by a direct computation. Suppose that the result holds for q = n; then, using Lemma 1 (iii), the result holds for q = n + 1. Thus, the triplet (Υ, Ψ, Φ) is also a generalized F-contraction on C_G(Z).

Let f_k, g_k, h_k : Z → Z, k = 1, . . ., q, be continuous maps, where each triplet (f_k, g_k, h_k) for k = 1, . . ., q is a generalized F-contraction; then {Z; (f_k, g_k, h_k), k = 1, . . ., q} is called a generalized F-iterated function system. Consequently, a generalized F-iterated function system in a G-metric space is a finite collection of generalized F-contractions on Z.

Definition 8. Let (Z, G) be a complete G-metric space and U ⊆ Z a non-empty compact set. Then U is the common attractor of the mappings Υ, Ψ, Φ : C_G(Z) → C_G(Z) if it is their common fixed point and arises as the limit of the iterated sequence of compact sets, where the limit is taken relative to the G-Hausdorff metric.
Main Results
Now, we establish the results of common attractors of generalized F-Hutchinson contraction in G-metric spaces.
Theorem 2. In a complete G-metric space (Z, G), let {Z; (f_k, g_k, h_k), k = 1, . . ., q} be a generalized F-iterated function system. Define Υ, Ψ, Φ : C_G(Z) → C_G(Z) by Υ(U) = ⋃_{k=1}^{q} f_k(U), Ψ(U) = ⋃_{k=1}^{q} g_k(U), and Φ(U) = ⋃_{k=1}^{q} h_k(U). If the mappings (Υ, Ψ, Φ) are generalized F-Hutchinson contractive operators, then Υ, Ψ, and Φ have a unique common attractor U* ∈ C_G(Z), that is, Υ(U*) = Ψ(U*) = Φ(U*) = U*. Additionally, for any arbitrarily chosen initial set R_0 ∈ C_G(Z), the sequence of compact sets {R_0, Υ(R_0), ΨΥ(R_0), ΦΨΥ(R_0), ΥΦΨΥ(R_0), · · · } converges to the common attractor U*.
We proceed by showing that Υ, Ψ, and Φ have a unique common attractor.
is an attractor of Υ, and from the argument above, U* is a common attractor for Υ, Ψ, and Φ; the same is true for k = 3n + 1 or k = 3n + 2. We assume that R_k ≠ R_{k+1} for all k ∈ N. Then, using (G_3) and estimate (3), one obtains a chain of inequalities which, on taking the limit as k → ∞ and using the convergence of the associated series, shows that the sequence {R_k} is Cauchy in (C_G(Z), H_G) and converges to some U* ∈ C_G(Z). To prove that Υ(U*) = U*, assume the contrary; then (4) implies an inequality which, on taking the limit as k → +∞, yields a contradiction since τ > 0. Thus Υ(U*) = U*. Following the conclusion above, U* is the common attractor of Υ, Ψ, and Φ.
For uniqueness, consider V as another common attractor of Υ, Ψ, and Φ with H_G(U*, V, V) > 0. Then inequality (5) leads to a contradiction, from which we conclude that H_G(U*, V, V) = 0, and thus U* = V. Hence, U* is the unique common attractor of Υ, Ψ, and Φ.

Remark 2. In Theorem 2, take the collection S_G(Z) of all singleton subsets of Z; then S_G(Z) ⊆ C_G(Z). Furthermore, if we take the mappings (f_k, g_k, h_k) = (f, g, h) for each k, where f = f_1, g = g_1 and h = h_1, then the operators (Υ, Ψ, Φ) reduce to the maps f, g, h themselves. Thus, we obtain the following result on common fixed points.
Corollary 1. Let {Z; (f_k, g_k, h_k), k = 1, 2, . . ., q} be a generalized F-iterated function system in a complete G-metric space (Z, G) and define the maps f, g, h : Z → Z as in Remark 2. If there exists τ > 0 such that for v_1, v_2, v_3 ∈ Z with G(fv_1, gv_2, hv_3) > 0 the inequality τ + F(G(fv_1, gv_2, hv_3)) ≤ F(G(v_1, v_2, v_3)) holds, then f, g, and h have a unique common fixed point u ∈ Z. Additionally, for an arbitrary element u_0 ∈ Z, the sequence {u_0, fu_0, gfu_0, hgfu_0, fhgfu_0, · · · } converges to the common fixed point of f, g, and h.
Corollary 2. In a complete G-metric space (Z, G), let {Z; (f_k, g_k, h_k), k = 1, . . ., q} be a generalized F-iterated function system such that, for some m ∈ N, the triplet (Υ^m, Ψ^m, Φ^m) is a generalized F-Hutchinson contraction. Then there exists a unique U* ∈ C_G(Z) that satisfies Υ(U*) = Ψ(U*) = Φ(U*) = U*. Additionally, for any arbitrarily chosen initial set R_0 ∈ C_G(Z), the sequence of compact sets converges to the common attractor U*.
Proof. From Theorem 2, there exists a unique U* ∈ C_G(Z) satisfying Υ^m(U*) = Ψ^m(U*) = Φ^m(U*) = U*, and Υ(U*) is also an attractor of Υ^m. Following steps similar to those in the proof of Theorem 2, we obtain that U* is also the common attractor of Υ^m, Ψ^m, and Φ^m; by the uniqueness of the common attractor, Υ(U*) = Ψ(U*) = Φ(U*) = U*.

Consider now an example, with the G-metric on Z defined accordingly. The maps f_1, f_2, g_1, g_2, h_1, and h_2 are continuous and noncommutative. Taking F(λ) = ln(λ) for λ > 0 and τ = ln(20/19), one verifies that for all u, v, w ∈ Z the generalized F-contraction inequality holds for each of the mappings. Thus, all the conditions of Theorem 2 are satisfied, and moreover, for any initial set R_0 ∈ C_G(Z), the sequence {R_0, Υ(R_0), ΨΥ(R_0), ΦΨΥ(R_0), ΥΦΨΥ(R_0), · · · } of compact sets is convergent and has as its limit the common attractor of Υ, Ψ, and Φ.
Thus, all the conditions of Theorem 2 are likewise satisfied in the 3-D example, and moreover, for any initial set R_0 ∈ C_G(Z^3), the sequence {R_0, Υ(R_0), ΨΥ(R_0), ΦΨΥ(R_0), ΥΦΨΥ(R_0), · · · } of compact sets is convergent and has as its limit the common attractor of Υ, Ψ, and Φ (see Figure 4). Figure 4 shows the convergence process at steps n = 2, 4, 6, and 8 in (a), (b), (c), and (d), respectively; the green points show the data points of the convergence steps and the blue lines show the movements of the data points. Interchanging the order of the variables in the maps yields a new form of the common attractor of Υ, Ψ, and Φ (see Figure 5).
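The convergence of the iterated sequence can be simulated numerically. In the sketch below (not the paper's maps), all three triplets coincide and consist of the three classical Sierpinski contractions, so Υ, Ψ, and Φ trivially share a common attractor, the Sierpinski gasket; each operator maps a finite point cloud to the union of its images.

```python
import numpy as np

# Three planar contractions; each acts row-wise on an (n, 2) array of points.
maps = [lambda P: 0.5 * P,
        lambda P: 0.5 * P + np.array([0.5, 0.0]),
        lambda P: 0.5 * P + np.array([0.25, 0.5])]

def hutchinson(points):
    """Hutchinson-type operator: union of the images under the component maps."""
    return np.concatenate([m(points) for m in maps])

rng = np.random.default_rng(0)
R = rng.uniform(size=(64, 2))   # arbitrary initial compact set R_0
for _ in range(12):             # R_0, Y(R_0), Psi(Y(R_0)), Phi(Psi(Y(R_0))), ...
    R = hutchinson(R)
    if len(R) > 20000:          # subsample to keep the point cloud bounded
        R = R[rng.choice(len(R), 20000, replace=False)]
print(R.shape)                  # the cloud approximates the common attractor
```

Plotting R after a few iterations reproduces the kind of convergence pictures shown in Figures 1-5.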
{U : U is a non-empty subset of Z}. B(Z) = {U : U is a non-empty bounded subset of Z}. CL(Z) = {U : U is a non-empty closed subset of Z}. CB(Z) = {U : U is a non-empty closed and bounded subset of Z}.
Figure 1 shows the convergence process of the sequence at steps n = 2, 4, 6, and 8 in (a), (b), (c), and (d), respectively. The green points in the figures show the data points of the convergence steps and the blue lines show the movements of the data points.
Figure 2 shows the convergence process of the sequence at steps n = 2, 4, 6, and 8 in (a), (b), (c), and (d), respectively. The green points in the figures show the data points of the convergence steps and the blue lines show the movements of the data points.
Figure 2. Iteration steps of the convergence to the common attractor of Υ, Ψ, and Φ. Interchanging the order of variables in the maps yields a new form of the common attractor of Υ, Ψ, and Φ (see, for example, Figure 3). The green points in the figures show the data points of the convergence steps and the blue lines show the movements of the data points.
Figure 4. Iteration steps to the convergence of the common attractor of Υ, Ψ, and Φ.
Figure 5. Iteration steps to the convergence of the common attractor of Υ, Ψ, and Φ.
Gender inequality in the clinical outcomes of equally treated acute coronary syndrome patients in Saudi Arabia
BACKGROUND AND OBJECTIVES Gender associations with acute coronary syndrome (ACS) remain inconsistent. Gender-specific data in the Saudi Project for Assessment of Coronary Events registry, launched in December 2005 and currently with 17 participating hospitals, were explored. DESIGN AND SETTINGS Patients with ACS from a prospective multicenter study in secondary and tertiary care centers in Saudi Arabia were included in this analysis. PATIENTS AND METHODS Patients enrolled from December 2005 until December 2007 included those presenting to participating hospitals or transferred from non-registry hospitals. Summarized data were analyzed. RESULTS Of 5061 patients, 1142 (23%) were women. Women were more frequently diagnosed with non-ST-segment elevation myocardial infarction (NSTEMI [43%]) than unstable angina (UA [29%]) or ST-segment elevation myocardial infarction (STEMI [29%]). More men had STEMI (42%) than NSTEMI (37%) or UA (22%). Men were younger than women (57 vs 63 years), who had more diabetes, hypertension, and hyperlipidemia. More men had a history of coronary artery disease. More women received angiotensin receptor blockers (ARB) and fewer had percutaneous coronary intervention (PCI). Gender differences in the subset of STEMI patients were similar to those in the entire cohort; however, in this subset fewer women were given β-blockers, and the PCI difference between genders was not significant. Thrombolysis rates between genders were similar. Overall, in-hospital mortality was significantly worse for women and, by ACS type, was significantly greater in women for STEMI and NSTEMI. However, after age adjustment there was no difference in mortality between men and women in patients with NSTEMI. The multivariate-adjusted (age, risk factors, treatments, door-to-needle time) STEMI gender mortality difference was not significant (OR=2.0, CI: 0.7–5.5; P=.14). CONCLUSION These data are similar to other reported data. However, differences exist, and their explanation should be pursued to provide valuable insight into understanding ACS and improving its management.
departments is lower for women than men, 6 with rates dependent on clinical presentation at the time of admission. The percentage of women diagnosed with ACS can range from 33% to 45%. 7 Furthermore, a smaller percentage of women than men presented with ST-elevation myocardial infarction (STEMI) (secondary to occlusive thrombus), but more presented with unstable angina (reflecting subtotal occlusion). 1,8 Moreover, sex differences in symptoms of ACS exist, which might be explained by differences in anatomic, physiologic, biologic, and psychologic characteristics. 3,9 Previous studies demonstrating important differences in the outcomes of men and women with ACS have focused on the management and the performance of revascularization procedures. 10-12 A systematic review of the diagnosis and treatment of CAD found significant evidence that women admitted to hospital with ACS are less likely to receive aspirin, β-blockers, or thrombolysis; less likely to undergo exercise stress testing; and also less likely to undergo angiography or revascularization. 13 However, not all studies have found such gender differences, particularly after adjusting for important confounding factors such as age. 14,15 Several studies are available from Western countries on gender disparities in ACS treatment and outcomes; however, no data are available from Saudi Arabia. Accordingly, our objective was to explore whether gender-related differences exist in the treatment and outcomes of patients presenting with ACS in Saudi Arabia.
PATIENTS AND METHODS
The Saudi Project for Assessment of Coronary Events study is a prospective registry and a quality improvement initiative covering all consecutive ACS patients admitted to the participating hospitals. 16 Ethical approval was obtained in all participating centers. The diagnosis of the different types of ACS was based on the definitions of the Joint Committee of the European Society of Cardiology/American College of Cardiology (ACC). 16 Serum cardiac biomarkers used to assist in the diagnosis of myocardial injury were measured locally at each hospital's laboratory using its own assays and reference ranges.
Study design and population
ACS patients include those with STEMI, non-ST-segment elevation myocardial infarction (NSTEMI), and unstable angina (UA). We report here the results of the 2 phases of the study, which lasted from December 2005 until December 2007. There were 13 hospitals in phase-I and 17 in phase-II; one third of the hospitals were non-tertiary care hospitals with no cardiac catheterization and/or cardiac surgery facilities. The details of these phases were outlined previously. 17 In summary, phase-I extended over a 1-year period and included a baseline registry of process of care, outcomes, and health care services. Subsequently, the overall and individual-hospital results were sent to each hospital to improve on the knowledge-care gap and provide a comparison with national practices.
Phase-II extended for another year, and data were collected using the Internet (www.space-ksa.com).
Overall and individual-hospital results were also provided "real-time" during this on-line phase to all participating hospitals.
Study organization
A case report form (CRF) for each patient with suspected ACS was filled out on hospital admission by assigned physicians working in each hospital using standard definitions, and then was completed throughout the hospital stay. All CRFs were verified by a cardiologist and then sent to the principal coordinating center, where the forms were further checked for incomplete data and mistakes before submission for final analysis. To avoid double-counting patients, each patient's national identification number was used. An independent clinical research organization (Dubai Pharmaceutical, Dubai, UAE) was contracted to randomly audit all data collected from 20% of the hospitals in phase-I. Data accuracy was found to be more than 99%.
Case report form data variables

Data collected included the following variables: patients' demographics, medical history, provisional diagnosis on admission and final discharge diagnosis, electrocardiographic findings, laboratory investigations, medical therapy, use of cardiac procedures and interventions, in-hospital outcomes, and mortality.
Statistics
Differences in categorical variables between respective comparison groups were analyzed using the chi-square test or Fisher exact test. Continuous variables were analyzed using a t test or Mann-Whitney U test based on the satisfaction of the normality assumption. P values were reported as 2-sided test results with a 5% level of significance for each test. Multiple logistic regression analysis was used to identify whether gender was an independent predictor of in-hospital mortality. Variables considered for inclusion were baseline demographic characteristics, medical history (diabetes mellitus, hypertension, hyperlipidemia), percutaneous coronary intervention (PCI), coronary artery bypass graft (CABG), in-hospital therapies, and door-to-needle time. All analyses were performed using STATA version 9 (StataCorp LP, United States).
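As an illustrative sketch of the adjusted analysis (on a synthetic placeholder dataset, not the registry data), the age-adjusted odds ratio for female gender can be obtained from a logistic regression with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder extract: one row per STEMI patient (synthetic values).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "died":   rng.binomial(1, 0.05, 500),
    "female": rng.binomial(1, 0.25, 500),
    "age":    rng.normal(58, 11, 500),
})

# Age-adjusted odds ratio for female gender on in-hospital mortality.
X = sm.add_constant(df[["female", "age"]])
fit = sm.Logit(df["died"], X).fit(disp=False)
or_female = np.exp(fit.params["female"])
ci_low, ci_high = np.exp(fit.conf_int().loc["female"])
print(f"age-adjusted OR = {or_female:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```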
RESULTS
A total of 5061 patients with the diagnosis of ACS were enrolled from 30 hospitals during the period between December 2005 and December 2007. Table 1 depicts the baseline characteristics of the whole cohort. A total of 77.4% (3919) were men and 22.3% (1142) were women. The mean age of women was 63 years compared with 57 years for men (P<.001) (Table 1). Women had significantly higher baseline risks such as diabetes mellitus, hypertension, hyperlipidemia and higher body mass index, and more tachycardia, compared with men (P<.001 for all comparisons). However, the prevalence of CAD in men was higher than in women (15.2% vs 11.2%; P=.001), and there was no difference between men and women in the rates of prior PCI and CABG surgery. A significant difference in presenting diagnosis based on gender was observed: STEMI was more common in male patients (45.2% vs 28.6%, P<.001), whereas NSTEMI (34.6% vs 42.6%; P<.001) and UA (20.1% vs 28.7%; P<.001) were more common in female patients (Figure 1).
There was no significant difference between men and women in terms of symptoms at presentation to hospital (89.3% vs 81.9%; P=.4). Women were more likely than men to have more severe clinical abnormalities (i.e., lower systolic BP and higher pulse rate) but less likely than their male peers to have unusual chest pain. The incidence of cerebrovascular accident/transient ischemic attack and peripheral artery disease was not different between the 2 groups. Moreover, key diagnostic investigations like troponin and coronary angiogram were similar in both the genders.
In-hospital medications and clinical outcome comparisons
No significant differences were observed in the administration of aspirin, clopidogrel, angiotensin converting enzyme inhibitors/blockers, β-blockers, and lipid-lowering agents between female and male patients in the hospital. PCI was performed significantly more often in men than women (36.3% vs 31.6%; P=.001); however, there was no significant difference in the rate of CABG between the 2 groups. The rate of in-hospital death was significantly higher in women than men (5.2% vs 2.5%; P=.02), the rate of congestive heart failure (CHF) was significantly higher in women than men (16.2% vs 8.6%, P<.001), and the rate of recurrent ischemia was also significantly higher in women than men; other in-hospital complications were not significantly different between the 2 groups. Table 2 depicts the demographics, in-hospital treatment, and outcomes of patients with STEMI. The mean age of women was significantly higher than that of men (62.5 vs 56.7, P=.001). There was no difference in the use of evidence-based medication or the rates of thrombolytic therapy and primary PCI between the 2 groups. Women had a higher in-hospital mortality rate (11% vs 3.3%, P<.001), higher CHF (20.2% vs 9.8%, P<.001), and a higher cardiogenic shock rate (11% vs 6.9%, P=.01) than men. However, there was no significant difference in the rate of major bleeding, stroke or re-MI between the 2 groups. Table 3 depicts the crude and age-adjusted odds ratios (OR) associated with in-hospital mortality. The crude OR associated with in-hospital mortality in women was 3.5 (95% CI: 2.3-5.5) for STEMI and 2.0 (95% CI: 1.2-3.5) for NSTEMI. There was no significant difference in mortality between the 2 groups in patients with UA. The age-adjusted OR for in-hospital mortality in women was 2.5 (95% CI: 1.5-3.8; P<.001) for STEMI and was not significant for NSTEMI (OR 1.5, 95% CI: 0.97-2.9; P=.060).
DISCUSSION
This study provides information on the demographics, in-hospital treatment, and outcomes of women presenting with ACS compared to men in Saudi Arabia. The main findings of the present study were that Saudi women developed ACS at a higher age, had a higher prevalence of traditional risk factors, were equally treated with evidence-based therapies but with a significant delay in the administration of these therapies, and had worse in-hospital outcomes than men. Previous reports showed that women had their first cardiac event 6 to 10 years later than men and had higher attributable risk factors. 18 Furthermore, typically, more women with ACS present without chest pain or discomfort; however, the difference is not universal, which has prompted some authors 12,19,20 to emphasize that public health symptom messages should not be changed to include lesser chest pain in women. In the present study, neither the lesser frequency of ischemic chest pain nor the slightly greater frequency of atypical chest pain in women compared with men was significant.
Saudi women presented more often with UA and NSTEMI, whereas men more frequently had STEMI, which is in accordance with earlier studies such as GUSTO IIb (Global Use of Strategies to Open Occluded Coronary Arteries in Acute Coronary Syndromes), TIMI IIIB (Thrombolysis In Myocardial Infarction), and the Euro Heart Survey. 21-23 These gender-related differences may be accounted for by differences in anatomy, pathophysiology of CAD, and clinical characteristics in women versus men. 21 Concerning patient management, there is conflicting evidence for a gender-related bias. Several studies documented a clear gender bias in referral to diagnostic procedures and treatment of coronary artery disease. 24-26 American College of Cardiology/American Heart Association guidelines for NSTEMI ACS care at hospital discharge include aspirin, clopidogrel, β-blockers, ACE inhibitor, lipid-lowering agent, smoking cessation, dietary modification, counseling, and cardiac rehabilitation. In our study, in-hospital medications irrespective of gender followed the protocol treatment guidelines. However, more female patients were prescribed angiotensin receptor blockers compared to men, possibly for renal protection, attributable to higher baseline risk factors such as diabetes. 27,28 In addition, no significant differences were noted in the rate of CABG or thrombolytic therapies between the 2 groups; however, the rate of PCI was significantly lower in female patients than male patients.
Like other reports, 29 our study showed that CHF and recurrent ischemia were more often reported in the female group, whereas no significant gender difference was found in the occurrence of cardiogenic shock, stroke, major bleeding, and re-MI. For example, Maynard et al reported a higher incidence of CHF in women ACS patients during hospitalization, 30-32 suggestive of diastolic dysfunction as a large component of the presentation of heart failure in ACS women. 22 In one of the studies, a subset of women presenting with STEMI showed a higher rate of in-hospital mortality than men. 7 This difference was attributed to their older age, higher baseline risks, more frequent comorbidities, and less frequent use of revascularization or undertreatment, or was restricted to a subgroup of female patients (possibly related to smaller target vessel size, increased vessel tortuosity, and other biological differences). 33-36 Similar to our study, several reports from randomized clinical trials (GUSTO, ISIS 3) and larger databases (RESCATE, Washington, NARMI) indicate that female gender is an independent risk factor for CHF, cardiogenic shock, and in-hospital mortality after adjusting for age, comorbidities, and evidence-based therapies for STEMI. In addition, it is argued that under-referral of women may have been the cause of increased morbidity and mortality in women, particularly associated with the PCI procedure. 37,38 However, there was no difference in the rate of referral for PCI between the groups in our STEMI patients. Moreover, reports indicate that women with STEMI tend to delay seeking medical attention longer than men (GUSTO 1), and upon arrival to the hospital they typically experience a further delay in the administration of thrombolytic therapy. Jackson et al reported that women waited a mean of 23 minutes longer before receiving thrombolytic therapy than men (112.2 [84.1] vs 89.6 [68.7] minutes, P<.1; median 100 and 75 minutes, women and men, respectively). In our study there was a significant delay in administering thrombolytic therapy to women, which is not explained by differences in symptoms at presentation (median 52 vs 71 minutes, men and women, respectively; P=.035). Adjusting for door-to-needle (DTN) time removed the increased in-hospital mortality in women with STEMI.
Limitations
Our data are based on an observational registry. The main limitations of such a design are its nonrandomized nature and unmeasured confounders. However, well-designed registry data provide valid results. We did not systematically capture the time from the onset of symptoms to hospital presentation, which perhaps confounded the findings of this study.
In conclusion, women develop ACS at a higher age in Saudi Arabia and have higher attributable baseline risk factors. They predominantly present with NSTEMI and unstable angina. Female gender independently predicted poorer outcomes among Saudi patients with STEMI in terms of CHF, cardiogenic shock, and in-hospital mortality. In our study, this finding is related to the delay in the administration of thrombolytic therapy. Hence, physicians need to increase awareness of the prompt administration of effective therapy in women with STEMI.
Acknowledgments
This study was funded by Sanofi Aventis.
Changing housework, changing health? A longitudinal analysis of how changes in housework are associated with functional somatic symptoms
Aim The aim of this study was to analyse how changes in housework over the course of adulthood are related to somatic health in Swedish men and women. Methods Data were drawn from 2 waves of the Northern Swedish Cohort Study, response rate 94.3%, N=1,001. A subsample of cohabiting individuals was selected (n=328 women, 300 men). The outcome variable was functional somatic symptoms (FSS) at age 42. Associations were assessed in multivariate general linear models with adjustment for confounders and somatic health at age 30. Results Housework is primarily performed by women, and women's responsibility for and performance of housework increased from ages 30 to 42. These changes were associated with elevated levels of FSS at age 42 in women. Men reported considerably lower responsibility for and performed less housework compared with women; the load of housework for men does not change substantially from ages 30 to 42, and no associations with FSS were identified. Conclusions The gendered division of housework means that women are particularly exposed to a heavy workload. Women's responsibility for and performance of housework increase between ages 30 and 42, and this threatens to be embodied in the form of FSS. We conclude that housework should be considered an important source of stress in addition to that from waged work and that a deeper understanding of the links between housework and health requires a gender theoretical analysis.
Existing evidence shows links between housework and health among women and men (1-3). Generally, a higher load of housework is associated with lower self-rated health among women (4), whereas satisfaction with the division of housework is associated with reduced risks for sickness absence among men (2). Cohabiting women and men with unequal responsibility for the housework have higher risks for psychological distress (1). Overall, previous research on the health consequences of housework has primarily focused on the psychosocial aspects, with a lack of studies including both women and men that explore somatic health status (5,6). There is also a knowledge gap regarding how changes in housework over time are related to somatic health status. One of the limitations in previous studies adopting a life course perspective on housework is that they generally focus on specific events such as marriage and childbirth, rather than investigating how change in housework across time is related to health status (7). For instance, the birth of a child often leads to increased housework, and dominant norms upholding the unequal division of labour, whereby women take up a greater burden than men, become more common (8,9). One of the few longitudinal studies within the field suggests that the more responsibility women take for housework and childcare, the less fairness and satisfaction they feel (10). Explorations of possible health consequences of change in housework across time are missing.
Together with the other Nordic countries, Sweden represents a dual-earner welfare model with strong political support for gender equality in family and working life (11). This model has been found to promote better health than less egalitarian models (1,11,12). However, the normative and cultural expectations of gender practices imply symbolic constructions of childcare and housework as women's work and a way for women to show love for their family (13). For example, although the amount and type of both paid and unpaid work changes across life depending on, for example, cohabitation, marriage, childbirth and separation (9), heterosexual cohabiting Swedish women still have the main responsibility for the unpaid housework (14). The trend towards a more gender equal division of unpaid work at home in industrial countries is primarily due to a reduction of the hours women spend doing this work (9,14). These gendered patterns are often referred to as the gender division of labour, which means that women and men are exposed to partially different environments and responsibilities, which in turn can be associated with health either negatively or positively (15,16). The workload from combined waged work and housework seems to lead to an increased risk of health problems (17), and the double burden of paid and unpaid work becomes a health risk (18,19).
To understand the complex systems of how the social process of housework interacts with bodily expressions of health, we use an epidemiological framework of embodiment that emphasizes the integration of soma, psyche and society (20,21). According to Krieger, embodiment represents a biological incorporation of the material and social world, and how bodies change with environmental and behavioural factors, such as social gender relations and gendered practices of housework (20,21). In this study, functional somatic symptoms (FSS) represent the possible bodily response to exposure to housework. We use FSS to signify a spectrum of self-perceived bodily complaints that are experienced as a transformation from normal health status to often-unexplained somatic symptoms (22).
The aim of this study was to analyse how changes in housework over the course of adulthood were related to FSS among women and men.
Methods
Sample and data collection

Data were drawn from the Northern Swedish Cohort, which consists of all pupils (n = 1,083; 506 girls and 577 boys) who studied in their last year of compulsory school in a medium-sized Swedish industrial town in 1981. The questionnaire included questions concerning school, employment, socio-economic conditions and health. Participants have subsequently filled in a similar questionnaire in 1983, 1986, 1995 and 2007. The response rate (in relation to those still alive in the original cohort) was 94% in 2007 (23). This study is based on data from ages 30 years (1995) to 42 years (2007) and includes only those who, at both waves, lived with a partner and/or were married (n = 628, 52.2% women).
Measures
Outcome. FSS (at ages 30 and 42) were measured through 10 self-reported somatic symptoms: headache or migraine; other stomach ache (than heartburn, gastritis or gastric ulcer); nausea; backache, hip pain or sciatica; general tiredness; breathlessness; dizziness; overstrain; sleeping problems; and palpitations. Items were coded "No, never" (0); "On and off" (1); and "Often/all the time" (2). The scale was computed as the mean of the 10 item values (range 0-2) and shows acceptable psychometric properties, including, for example, factor structure, internal consistency and invariance in factor structure over time (24).
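As a small illustrative sketch (with made-up responses, not the cohort data), the scale construction is a row-wise mean over the 10 items coded 0-2:

```python
import pandas as pd

# Hypothetical item responses: 0 = never, 1 = on and off, 2 = often/all the time.
items = [f"fss{i}" for i in range(1, 11)]
df = pd.DataFrame([[0, 1, 2, 0, 1, 0, 0, 2, 1, 0],
                   [1, 1, 0, 0, 0, 0, 1, 0, 0, 1]], columns=items)

# The FSS score is the mean of the 10 item values (range 0-2).
df["fss"] = df[items].mean(axis=1)
print(df["fss"].tolist())  # [0.7, 0.4]
```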
Independent variables – main exposure
Responsibility for housework (ages 30 and 42) was measured with the question, "How much of the responsibility for the housework do you take?" The answer alternatives were "none," "less than half," "more than half" and "all." Change in responsibility for housework was created from the responsibility variables at ages 30 and 42. To reduce the number of categories and to increase the statistical power in the analysis, responsibility for housework was dichotomized into 2 groups at ages 30 and 42: low responsibility (none, less than half or half) and high responsibility (more than half or all). The combined variable, "change in responsibility for housework," was categorized into 4 groups: (a) low responsibility at ages 30 and 42; (b) high responsibility at age 30 and low responsibility at age 42; (c) low responsibility at age 30 and high responsibility at age 42; (d) high responsibility at ages 30 and 42.

Household work time (ages 30 and 42) was measured as the number of hours per week spent on housework duties such as cooking, washing and cleaning. The answer alternatives were "no time," "less than 1 hour," "1-3 hours," "4-7 hours," "8-14 hours," "15-21 hours," "22-35 hours" and "more than 35 hours." These items were used to construct 2 variables reflecting somewhat different aspects of change in the amount of time spent on housework: "change in amount of housework," which indicates stability or change between categories of low/medium amount and high amount, and "change in time spent on housework," which reflects change or not regardless of the initial number of hours per week spent on housework.
Change in amount of housework was created from the time-in-housework variables at ages 30 and 42. In order to reduce the number of categories and to increase the statistical power in the analysis, time in housework was dichotomized into 2 groups at ages 30 and 42: low amount (0-14 hours/week) and high amount (>14 hours/week).
The combined variable "change in housework time" was categorized into 4 groups: (a) low amount at ages 30 and 42; (b) high amount at age 30 and low amount at age 42; (c) low amount at age 30 and high amount at age 42; (d) high amount at ages 30 and 42. Change in time spent on housework is the variable that was computed by calculating the difference in hours of housework per week (see description above) between ages 30 and 42 and then recoding the numeric variable into "no change," "decrease" and "increase." Given the different approaches in how these 2 time-related measures account for time spent on housework at age 30, the variables complement each other.
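A sketch of how the four-category change variable can be derived, assuming hypothetical columns hw30 and hw42 holding weekly housework hours at ages 30 and 42:

```python
import pandas as pd

df = pd.DataFrame({"hw30": [5, 20, 10, 30], "hw42": [25, 8, 12, 35]})

# Dichotomize at 14 hours/week, then combine into the four change categories.
low30 = df["hw30"] <= 14
low42 = df["hw42"] <= 14
labels = {(True, True): "stable low", (False, True): "high to low",
          (True, False): "low to high", (False, False): "stable high"}
df["change_amount"] = [labels[(bool(a), bool(b))] for a, b in zip(low30, low42)]
print(df["change_amount"].tolist())
# ['low to high', 'high to low', 'stable low', 'stable high']
```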
Covariates
Living with children at age 30 was measured as whether the participants were living with children all or some of the time (0) or were not living with children (1). Living with children was assumed to be linked to elevated levels of housework and was, therefore, included as a possible confounder in the analyses.
Time in paid work at age 30 was measured as the number of hours in paid work (0-82 hours) per week. Logically, the more hours spent on paid work, the less time for housework. However, as shown in the introduction, this relationship is complex and highly gendered (1).
Occupational status (age 30) was measured with occupation level on the basis of the Swedish SEI classification (25): upper white-collar workers including self-employed (0), lower white-collar (1) and blue-collar workers (2).
Ethics statement
The Regional Ethical Review Board in Umeå, Sweden, has approved this study.
Statistical analysis
Between-group analyses were performed using independent sample t-tests and ANOVAs. Crude and multivariate general linear models (GLMs) were performed for each housework exposure in relation to FSS. Given the distinct gender pattern in both the outcome and the main exposures, the regression analyses were conducted separately for women and men. Adjustments were made for the following variables at age 30: FSS, living with children, average number of hours spent on paid work per week and occupational status. All statistical analyses were performed using PASW Statistics 22 with a significance level of 0.05.

Results

Table I displays the distribution of variables included in the study. The distribution of the housework variables was highly gendered, with women reporting more responsibility for and spending more time doing housework than men. For men, the pattern of change in amount of housework is very similar to that of change in responsibility: nearly 9 out of 10 men remained in the stable low category. Every second woman remained in a stable high responsibility category. Women were also more likely than men to have shifted from high to low responsibility and vice versa. With regard to change in amount of housework, both the shifts (from high to low amount and vice versa) and the stable high scenario were more common in women than men. When only looking at change in the number of hours spent on housework, one-quarter of all participants, regardless of gender, reported no change, whereas between 30 and 44% reported a decrease or increase.

Seventy percent of the sample lived with children at age 30, and men spent more time in paid work than did women. More than half of the sample were upper white-collar workers (including self-employed), 6% were lower white-collar workers and 4 out of 10 were blue-collar workers. As shown in Table I, women reported higher levels of FSS than men at both ages 30 and 42. Table II presents how levels of FSS were distributed between the categories of change in housework. With regard to change in responsibility for housework, the analysis of the entire sample showed a difference: FSS levels were lowest in the stable low group and highest in the group whose responsibility decreased from high to low. However, gender-separate analyses showed no difference in FSS depending on whether the participants remained in the altered or stable groups regarding responsibility. This variable was therefore not included in the regression analyses. Table II further shows that changes from fewer to more hours in housework were associated with higher levels of FSS at age 42, compared to all other categories including the stable high group. This finding was, however, only valid for women.
Results from the GLMs are found in Table III (women) and include only variables for which there were between-category differences in FSS. In women, the crude analysis (Model 1) confirmed that, in comparison with the reference category (stable low or no change), an increased amount of and time in housework between ages 30 and 42 was associated with elevated levels of FSS. This association remained after adjusting for FSS at age 30 (Model 2) as well as having children in the household, the number of hours spent on paid work and occupational status (Model 3). Detailed results from post hoc tests are available upon request. No statistically significant associations were identified among men (Table IV).
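An illustrative sketch of Model 3 (on synthetic placeholder data, not the cohort), using a Gaussian GLM with the stable low group as the reference category:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "fss42": rng.uniform(0, 2, n),
    "fss30": rng.uniform(0, 2, n),
    "change_amount": rng.choice(["stable low", "high to low",
                                 "low to high", "stable high"], n),
    "children30": rng.integers(0, 2, n),
    "paidwork30": rng.uniform(0, 60, n),
    "occupation30": rng.choice(["upper white-collar", "lower white-collar",
                                "blue-collar"], n),
})

# Model 3: FSS at 42 regressed on change in housework, adjusted for FSS at 30,
# children in the household, paid work hours, and occupational status.
model = smf.glm(
    "fss42 ~ C(change_amount, Treatment('stable low')) + fss30"
    " + children30 + paidwork30 + C(occupation30)",
    data=df,
).fit()
print(model.params.round(3))
```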
Discussion
Although previous research demonstrates associations between the level of and responsibility for housework and health among women and men (1-3), few studies have explored how longitudinal changes in housework are related to changes in bodily expressions of health in women and men. The main findings of this study show that not only is housework predominately performed by women, but women's responsibility for and performance of housework also increases from ages 30 to 42. These changes are associated with elevated levels of FSS regardless of previous FSS. Compared to men, women changed their responsibility for and the amount of housework between ages 30 and 42 to a much greater extent. There was no gender pattern in whether the time spent on housework had decreased, increased or remained stable. However, this measure does not account for the level at age 30; that is, no change can mean a high load at both ages 30 and 42, or vice versa. This is a time of life when many people start a family, and research shows that the transition into parenthood tends to increase the amount of housework for women, whereas men's housework is more stable across parenthood (9). The results indicate that the gendered organization of unpaid work at home represents a gender structure upheld by socially constructed positions and norms (13). It is likely that the expectations of women as mainly responsible for the domestic sphere are reflected in the unequal distribution and change of housework (26). Our study suggests that this situation constitutes a risk for women's physical health not only in the present but also over time. Housework, as a part of family relations, is also deeply imbued with embodied interactions and practices. For example, the fact that women more often take care of men rather than the reverse is intrinsically embodied (27). Women's higher risk of FSS may, therefore, be an embodied consequence of the gendered division of housework (20).
Our study indicates that men's physical health does not seem to be affected negatively by an unequal division of housework. However, it should be noted that we do not know whether men's health would be affected in the same way as women's in the same situation, because a high workload of hours in housework, as well as changes over time, is much rarer among men than among women. Nevertheless, time spent on housework in relation to psychological distress has previously been investigated within the same population (the Northern Swedish Cohort) without finding any significant associations among either women or men (1). These contradictory results indicate that the burden of performing housework can be difficult to capture through established mental health measures, although it seems to leave its marks on women's bodies (20). From a public health perspective, measuring the time spent on housework is therefore a highly relevant way of capturing bodily health expressions of gender practices in everyday life.
An unexpected result is that decreased responsibility for housework (from higher to lower) did not reduce the level of FSS (Table II). In contrast, those reporting decreased responsibility had higher levels of FSS in the total population. Although these results became non-significant in the gender-separate analyses, they seem mainly to represent women's situation, as women, to a greater extent, have changed their level of housework responsibility. One possible explanation might be that, for women, breaking expected gender norms of housework (in this case, reducing the amount) implies strain on the individual. A similar argument was put forward in a previous Swedish study; being the pioneer in breaking societal gendered norms can be stressful (28). However, even if norm-breaking practices in housework might impact health negatively in the short run, it is well known that gender equality has positive health consequences for both women and men in the long run (12). When new gender relations expand and gain general acceptance, the initial negative health consequences of changes in gender relations will become positive and lead to reduced inequalities in health (29).
Strengths and limitations
The strengths of the study include, for example, the prospective cohort material, low attrition, the sample being representative of the Swedish population (23) and a well-evaluated measure of FSS (24). The findings represent cohabiting individuals rather than couples, and the results should be interpreted as patterns at the population level. Also, 7 of 10 already had children at age 30, which indicates that the initial level of housework most likely was fairly high. As the focus in this study was on unpaid work in everyday life, we only included housework performed on a daily basis and not unpaid work such as gardening, car care and restoration work. There were problems of statistical power in the analyses of men because of few cases in the high-high categories. Hence, we cannot be sure that the health response to changes in housework in men is dissimilar to that of women. This needs to be scrutinized in future studies.
Conclusions
The gendered division of housework means that women are particularly exposed to a heavy workload. Women's responsibility for and performance of housework also increase across adulthood, and this situation threatens to be embodied in the form of elevated levels of FSS. In contrast, men have a considerably lower and unchanged load of housework that does not seem to be related to their FSS across time. We conclude that housework should be considered an important source of stress in addition to that from waged work and that a deeper understanding of the links between housework and health requires a gender theoretical analysis. This should be acknowledged in social policy and public health interventions.
Application and evolution of design in oral health: A systematic mapping study with an interactive evidence map
Abstract Objectives There is increasing recognition of the value and capabilities of design in healthcare. Beyond the development of medical devices, design is increasingly being applied to intangible, complex and systemic healthcare problems. However, there is limited evidence on the use of design specifically in the field of oral health. This systematic mapping study aims to collate and catalogue evidence of design in oral health. Methods A systematic search of academic databases and grey literature was performed. Duplicate results were removed, and publications relating to the same project were grouped. Reviewers from design and oral health independently screened a sample of the dataset. Projects of both relevance to oral health, and with input from a designer or clear implementation of a design methodology or approach, were included. Projects were coded and plotted on a novel interactive evidence map. Results 119 design and oral health projects were included between 1973 and 2022. Interventional (n = 94, 79%), empirical (n = 46, 39%), methodological (n = 35, 29%) and theoretical (n = 7, 6%) design contributions were identified across the projects. The projects were categorized by four orders of design: first—graphics (n = 6, 5%), second—products (n = 41, 34%), third—interactions (n = 70, 59%), and fourth—systems (n = 2, 2%). Design was found in a diverse range of contexts in oral health; most commonly being relevant to general patients (n = 61, 51%), and for use in general dental practice (n = 56, 47%). Further design outcome categories (digital material; printed material; object; room or space; apparel; process; smart device; tangible interface; graphical interface; virtual reality; service; policy; system) and oral health themes (oral health literacy; oral care training; dental clinic design; dental instruments and equipment; personal oral care; dental appliance; clinician health and productivity; clinical information systems; informed consent; oral health promotion and prevention; patient interactions and experience) were identified. Conclusions The novel interactive evidence map of design in oral health enables ongoing and open-ended multivariate documentation and analysis of the evidence, as well as identification of strategic opportunities. Future research and policy implications include: recognition and engagement with the full capabilities of design; integration of design experts; fostering inclusive engagement and collaboration; disentangling patient and public involvement; advancing human-centred systems approaches; adopting design-led approaches for policy-making.
| INTRODUCTION
There is increasing recognition of the strategic value and capabilities of design in healthcare. Innovation and design organizations such as NESTA 1 and The Design Council 2 have highlighted design's creative participatory approaches and divergent modes of thinking as beneficial for addressing healthcare challenges. There is increasing academic discourse around design for health, [3][4][5][6][7][8][9][10][11] as well as recognition from health-related organizations such as Wellcome, 12 Unicef, 13 and The Bill and Melinda Gates Foundation, 14 and consulting and technology companies including McKinsey & Company 15 and IBM. 16 Design's capacity in health spans beyond the implementation of advanced technology and development of medical devices. It is increasingly being recognized by, and integrated into, healthcare organizations as a central agent of innovation and strategy, 17,18 tackling complex problems, 19 and shaping healthcare systems and processes with a view to transition towards more desirable sustainable futures. 20 In the context of oral health, it has been argued that design could play a valuable role in: delivering person-centred and preventive models of care 21; responding to socio-demographic shifts 21; effective adoption of technology 21; and the translation of evidence into clinical dental practice. 22,23 However, despite the wider design and health discourse being well-established and growing, there is limited evidence on both the use and understanding of design in oral health.
This systematic mapping review aims, for the first time, to collate and catalogue evidence of design activity in oral health, building a robust foundation to understand the current landscape and identify future opportunities for the field. To allow rigour and thoroughness of evidence mapping, each design project has been assessed according to both its design characteristics and oral health context, and accordingly classified across eight categories. These include: year; design contribution; design order; design outcome; oral health theme; population; setting; and collaborators.
| Definition of design
Definitions, processes and outputs of 'design' vary substantially across different sectors and contexts. 24 As such, it is important to clarify the meaning of design adopted in this review. This is influenced by the positionality of the design researchers involved, whose understanding of design is human-centred, 25 eurocentric 26 and informed by their product/service design background. This review focuses on design as a professional practice in which designers gain considerable knowledge, skills and training. 27 We define design as a process of both problem framing and solving, 28 which employs a combination of designerly principles, mindsets, practices and techniques, 29 which generates outcomes across four orders of design (first-visuals, second-products, third-services and interactions, fourth-systems), 21,30 and which provides four types of contributions, that is, theoretical; methodological; empirical; as well as interventional (through design outcomes). 31,32
| Definition of oral health
We refer to oral health as defined by the FDI 33 : Oral health is multi-faceted and includes the ability to speak, smile, smell, taste, touch, chew, swallow and convey a range of emotions through facial expressions with confidence and without pain, discomfort and disease of the craniofacial complex (head, face, and oral cavity). Oral health means the health of the mouth. No matter what your age, oral health is vital to general health and well-being.
| METHODOLOGY AND METHODS
This review employs a systematic mapping methodology. 34 This type of literature review is appropriate where there is a diversity of literature and a large number of included publications is anticipated. A systematic search of a broad area of enquiry is carried out with the purpose of collating, describing and cataloguing evidence in order to identify knowledge clusters, gaps and/or define opportunities for future research, rather than to answer a focused research question. Formal quality appraisal is not required, and synthesis is presented in a clear visual format. 35
| Search strategy
Databases were selected based on health or design subject areas, or because of their multidisciplinary coverage. These were:
• Scopus
• Taylor and Francis Online
• Web of Science.
Titles, abstracts and keywords were searched to identify evidence of design activity in oral health up to and including December 2022. The term 'design' is common in published literature. As such, a strategic search string was required to discriminate professional design practice and research. Search terminology from Chamberlain et al.'s previous review of design in health 7 was adapted and added to by the research team and refined through pilot scoping searches. The final search terms are shown in Table 1. The search query used for each database is provided in Appendix A.
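To make the search-string construction concrete, the following sketch shows how two concept blocks can be combined so that the ambiguous term 'design' only matches alongside an oral health term. The term lists are invented stand-ins; the review's actual terms are in its Table 1 and Appendix A.

```python
# Hypothetical term lists: the review's actual terms are in its Table 1
# and Appendix A, so these are illustrative stand-ins only.
design_terms = ['"co-design"', '"design research"', '"service design"',
                '"human-centred design"', '"design thinking"']
oral_health_terms = ['"oral health"', 'dental*', 'dentist*', 'orthodontic*']

# AND-ing the two concept blocks stops generic uses of "design" from
# matching on their own; each database needs its own syntax adaptation.
query = f'({" OR ".join(design_terms)}) AND ({" OR ".join(oral_health_terms)})'
print(query)
```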
Grey literature searching and snowballing of references from the publications already included was carried out in order to support and understand gaps in the database review.A Google search 7 was carried out using a reduced version of the database search string (bold text in Table 1).
Search results from all sources were combined, and duplicates were removed.
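As a rough illustration of the combine-and-deduplicate step, the sketch below drops records that share a DOI or a normalized title. The record field names are assumptions, and in practice automated matching of this kind is usually followed by manual checking.

```python
import re

def title_key(title):
    # Normalise case, punctuation and whitespace so trivially different
    # renderings of the same title compare equal.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def dedupe(records):
    """Drop records that repeat an already-seen DOI or normalized title."""
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        tkey = title_key(rec["title"])
        if (rec.get("doi") and rec["doi"] in seen_dois) or tkey in seen_titles:
            continue  # duplicate of an earlier record
        if rec.get("doi"):
            seen_dois.add(rec["doi"])
        seen_titles.add(tkey)
        unique.append(rec)
    return unique

combined = [
    {"doi": "10.1000/x1", "title": "Design in Oral Health"},
    {"doi": None, "title": "Design in oral health."},  # same study, no DOI
]
print(len(dedupe(combined)))  # -> 1
```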
| Data screening
A three-stage screening approach (title, abstract and authors, full text) was adopted. This is shown in Figure 1.
'Projects' are the primary unit of analysis in this review; where the search identified multiple publications related to the same project, they were grouped during the screening process. 36 A single reviewer conducted the screening process. A multidisciplinary review team (two dental professionals and two design professionals) independently applied the screening criteria to a sample of search results (n = 400) to ensure consistency and clarity. There was 96% overall agreement, and the Krippendorff alpha 37 score was 0.94, indicating substantial agreement. Discussion of any discrepancies followed, and agreements were sought to strengthen consistency across the remaining search results.
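For readers unfamiliar with the statistic, Krippendorff's alpha for nominal codes can be computed from a coincidence matrix, as in the minimal sketch below. The include/exclude decisions shown are invented, not the study's screening data.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal codes.

    `units` is a list of per-unit code lists (one code per rater);
    units coded by fewer than two raters are skipped, so missing
    codings are tolerated.
    """
    o = Counter()  # coincidence matrix o[(c, k)] over pairable values
    for codes in units:
        m = len(codes)
        if m < 2:
            continue
        for c, k in permutations(codes, 2):
            o[(c, k)] += 1 / (m - 1)

    n_c = Counter()  # marginal totals per category
    for (c, _k), w in o.items():
        n_c[c] += w
    n = sum(n_c.values())

    d_o = sum(w for (c, k), w in o.items() if c != k)  # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 if d_e == 0 else 1 - d_o / d_e

# Invented example: two reviewers screening the same six items.
reviewer_a = ["include", "include", "exclude", "exclude", "include", "exclude"]
reviewer_b = ["include", "include", "exclude", "include", "include", "exclude"]
print(krippendorff_alpha_nominal(list(zip(reviewer_a, reviewer_b))))  # ~0.69
```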
Projects were included if both:
1. There was a contribution from the field of design.
2. The project was directly relevant to the field of oral health.
Projects were excluded if the full text was unavailable, or there was no English language version available. Literature reviews and opinion/commentary pieces were excluded. Detailed screening criteria, accompanied by example decisions, are provided in Appendix B.
| Data extraction and analysis
Coding categories (Appendix C) were initially defined a priori (deductive approach) and were adjusted where appropriate during the coding process (inductive approach). New categories were identified during data synthesis and were incorporated into the coding scheme. Generic fields were taken from James et al.'s systematic mapping methodology (title, year) 34 and topic-specific categories were adapted from Chamberlain et al.'s previous design in health review (collaborators, population, setting, design output). 7 Additional categories were added to classify design contribution, design order and oral health theme.
| Data synthesis
The coded projects were explored through an iterative visual mapping process (Appendix E). 38 This involved reading and re-reading the data and exploring patterns and relationships between the projects through visual maps.This served as a creative method of synthesis, facilitating exploration of the overall landscape of design in oral health and leading to the development of an interactive evidence map (Figure 3).
| RESULTS
119 projects relating to design in oral health were identified from 1973 to 2022.The full list of coded projects can be found in Appendix D. Figure 2 illustrates the results from all coding categories to build an overall picture of the evidence.the map as a coloured dot, plotted according to the eight design and oral health coding categories.The map can be accessed online, 39 where users can filter projects on and off according to the different categories.Other interactive features include hovering over projects to display their description and clicking on them to be taken to the webpage of the relevant publication.
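The paper does not say how the map was built, but the kind of interactivity described (coloured dots, category filtering, hover descriptions) can be sketched with an off-the-shelf plotting library. The records and column names below are invented for illustration only.

```python
import pandas as pd
import plotly.express as px

# Invented project records; the real map plots 119 coded projects
# across the eight design and oral health coding categories.
projects = pd.DataFrame([
    {"title": "Dental chair redesign", "year": 2010,
     "design_order": "2nd: products",
     "theme": "Dental instruments and equipment"},
    {"title": "Clinic information system UI", "year": 2003,
     "design_order": "3rd: interactions",
     "theme": "Clinical information systems"},
])

# One coloured dot per project; clicking legend entries toggles
# categories on and off, and hovering shows the project title.
fig = px.scatter(projects, x="year", y="design_order", color="theme",
                 hover_name="title")
fig.write_html("evidence_map.html")  # shareable interactive output
```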
| DISCUSSION
The interactive map enables open-ended multivariant analysis of the evidence of design in oral health (based on 119 coded projects between 1973 and 2022) across multiple temporal, contextual and thematic levels spanning eight categories. To fully engage with the findings and benefit from the interactive features of the map, we encourage readers to access the online version. 39
| Design's characteristics in oral health; design orders, outcomes and contributions
Figure 2A shows growing design activity in oral health, with a notable increase from the early 2000s. This growth is mainly associated with technology-driven third-order design activity. While third-order design encompasses the design of interactions both between people (services, processes) and between people and technology (interfaces), the latter is most dominant in oral health.
The first instances of third-order design involve the interface design of clinical information systems in 2003. 40,41 Since then, a wave of third-order projects has emerged, with the majority (69%) having a technology-related design outcome (tangible interfaces, graphical interfaces, smart devices and VR). As healthcare transitions towards a smart and connected system, technology will play a crucial role. 42,43 In oral health, with emerging concepts such as 'Dentistry 4.0', 44 design could play a key role in ensuring that new technologies are not only functional but also human-centred. 21 Equally valuable, yet currently less common, is the application of third-order design to services, where it can aid shifts towards person-centred care (e.g. Walji et al.'s 45 human-centred dental discharge summary).
The second most prevalent is second-order design. This is not surprising given that this is the traditional domain of design, and the delivery of oral healthcare requires many tangible objects and physical spaces. Objects make up 73% of second-order design, and most objects (67%) relate to dental instruments and equipment (the most common oral health theme). These include dental chair redesigns, [46][47][48][49][50][51] a redesigned dental drill, 52 and a surgical tool vending machine. 53 One notable finding is the scarcity of first-order design. Despite our searches only identifying six first-order design projects, graphic design is prevalent across oral healthcare in the form of logos, posters and packaging. Perhaps in many cases, the value or presence of first-order design is overlooked, and thus it may not be documented or uncovered by our search process. Moreover, it is worth noting that although there are not many isolated examples of first-order design, higher-order design often encompasses the orders below.
FIGURE 3 Interactive evidence map of design in oral health (available at: https://inclusionaries.com/portfolio/map-design-oral-health/).
For example, Nanjappa et al. 54 describe the co-design of Chatterbox, a toolkit to aid communication between Dental Health Support Workers and socioeconomically disadvantaged families. Although this project represents third-order service design, it also incorporates second- and first-order design elements, as evidenced by the design of a box and activity cards. The lack of activity in the fourth order of design is less surprising given that the application of design to systems and policy is an emerging practice. 55 We identified two projects that qualified as fourth-order design; Chen and Li 56 proposed a user-centred oral health system for China, while Lievesley and Wassall 57 discuss a visualization of UK health service provision used to bring a person-centred perspective to policy-making for community dental services. These projects occurred relatively recently, in 2020 and 2015 respectively. There is an increasing role for fourth-order design in oral health, as discourse around wicked and persistent problems in healthcare grows, [58][59][60] and it is argued that these can only be addressed through the integral transformation of whole health systems. 20,61,62 Areas of design research including 'transition design' 20 and 'design for policy' 63 are responding to this.
Examining the orders of design offers valuable insights into the nature of design solutions in oral health. However, it is helpful to recognize that design's contributions extend beyond solutions or interventions and also encompass valuable empirical, methodological and theoretical dimensions. The empirical contributions identified demonstrate that design can aid in problem framing, for example in a study analysing human factors issues and patient perspectives on the dental photography work cycle, 64 as well as in evaluation, such as in the validation of an interactive learning environment for children with dental anxiety. 65 Design also offers methodologies and methods which are being applied in oral health, such as the use of co-design to develop an oral health animation 66 and theories and frameworks such as the application of persuasive design principles to the design of an intervention for child dental anxiety. 65 While the systematic mapping identified examples across all contribution types, Figure 2H shows that interventional design contributions are the most common, occurring in 79% of the projects, while empirical (39%), methodological (29%) and theoretical (6%) contributions are much less common. The skew towards interventional contributions suggests a conventional focus on design as an agent of problem-solving, and the paucity of theoretical contributions in particular indicates insufficient theory development, an issue in design highlighted by several others. 67,68 As design activity and applications grow in oral health, particularly into the fourth order of design, there is a need for an increased and enhanced knowledge basis (theory, methodology and methods) to support this. 69 For example, design theories and methodologies for systems transitions 70 and policy-making 71 could be adapted and applied in oral health.

FIGURE 4 Interactive features of the evidence map.
| Oral health contexts of design; oral health themes, settings, populations and collaborators
The dental practice is the predominant setting (47%) for design in oral health (Figure 2C). Within the dental practice, most projects relate to the themes: dental instruments and equipment, dental clinic design, and clinical information systems. The second most common setting is 'home' (28%), where the most common themes are personal oral care and oral health promotion and prevention. Together, these two settings account for 75% of the projects. [75][76] The oral health promotion and prevention theme encompasses the broadest range of settings compared to all other themes. The WHO 77 and Health Education England 78 have increased focus on oral health beyond dental practice, driven by the need to equip the population in relation to self-care and appropriate use of dental care across the life course. Shifting towards community-based approaches has the potential to prevent oral diseases through personalized and accessible action at individual, community and societal levels. The Whole Mouth Health project demonstrates this, through co-designing with a range of people in different contexts and settings to develop tailored oral health resources. 79

The population code captures the specific populations or patient groups affected by design in oral health. This might not necessarily be the end-user of an intervention or the audience of a contribution, but the population to which it is relevant. For example, the intended user of Reynolds and Liu's 52 dental drill is dentists; however, the relevant population is children with dental anxiety. While 51% of the projects included relate to the general population, 13 populations outside of the mainstream were identified (Figure 2D). The most common populations (after general) are children, disabled people, and people with dental anxiety. Projects for these groups often focus on tailored oral health literacy and promotion (e.g. an educational app for children with dental anxiety) 80 and the design of accessible dental clinics, equipment and personal oral care (e.g. a dental chair for wheelchair users). 50 Designing with diverse and often excluded populations is a key principle of inclusive design, 81 which aims to bring them into the mainstream and create innovative solutions that benefit all. The potential value and significance of inclusive design approaches to current challenges and transitions in oral health has previously been argued. 21

Due to the variety of professional titles used across disciplines, six broad codes were chosen to synthesize the collaborator types (Figure 2D). Design in oral health is becoming increasingly collaborative, often including members from a combination of oral healthcare, design, engineering and scientific disciplines. However, it commonly occurs without a designer or creative professional (48%), and where a designer is involved, their level of contribution varies.
The application of 'design without designers' 82 can lead to misrepresentations of the discipline and tokenistic design practices which risk the loss of design's value, reach and impact. For example, in Tobias and Spaniers' 83 publication 'Developing a Mobile App (iGAM) to Promote Gingival Health by Professional Monitoring of Dental Selfies: User-Centered Design Approach', the term 'user-centered design' does not appear at any point in the full text, despite being stated in the title. Collaborators from the humanities and social sciences are least common, being identified in 5% of the projects. As design challenges in oral health become increasingly complex, involving a broad range of disciplines and stakeholders can foster co-creativity and develop new transdisciplinary approaches. In particular, the humanities and social sciences could offer valuable perspectives on the complex social, psychological, cultural and historical contexts for design in oral health. 84

Patients and/or the public were only involved in 29% of the projects (29% patients, 5% public), with the interactive evidence map showing increasing numbers of projects involving patients and/or the public after 2010. Public and patient collaboration is highly relevant to the calls for patient-centred care in dentistry, 85,86 and to public and patient involvement (PPI), which is increasingly lauded and is often a requisite for securing health research funding. 87 Design offers a variety of creative, critical and empathic participatory methods and approaches which are being applied to PPI in oral health, covering topics such as the design of dental discharge summaries 45 and the development of a prevention service model for low-income communities. 88

Adopting a design approach, we intentionally report the figures of patient and public collaborators separately, as there exists a critical distinction from a human-centred design perspective. Human-centred design places the needs and desires of people at the centre of the design process 25 through engaging two distinct groups, that is, end-users (who will interact directly with the design and have experiential knowledge) and stakeholders (who may be affected by or have an interest in the design), for different purposes and through different methods. Similar to end-users and stakeholders, patient and public collaborators have distinct knowledge, experiences, needs and desires, and their involvement in research serves different purposes and requires different approaches. Healthcare researchers have also criticized the catch-all term PPI, 89 pointing out the distinct 'meanings' and 'justifications' of the two approaches, 90 and arguing that lack of clarity leads to inappropriate or tokenistic involvement. 89 While different authors unpack the distinctions in different ways, key differences in impartiality, [89][90][91] experiential knowledge, 89,91 interests, 91 perspectives 91 and expectations 91 have been discussed, and there is agreement on the need to disentangle patient and public involvement.
The levels of participation of public and patient collaborators varied greatly across the projects; from evaluative user surveys, to rich involvement and collaboration throughout the design process.
The need to move participation in design in oral health beyond 'doing to/for' and towards 'doing with' has previously been discussed. 82 In order to ensure considered and meaningful involvement of patients and the public in design in oral health going forwards, we suggest making a clear distinction between the public and patients, and carefully considering the purpose and objectives of involvement. This will inform decisions about who to involve and how best to involve them.
| Limitations
While we have taken a systematic and rigorous approach to identifying evidence of design in oral health, it is likely that some evidence is not reported or has been missed. Documentation and reporting standards of design have been criticized in the literature. 92,93 Design processes are generative and responsive in nature, often making them incompatible with standardized scientific methods and documentation practices. Ill-defined and ubiquitous design terminology, together with limited documentation and dissemination of design practice, makes the identification and synthesis of design literature difficult.
Furthermore, while methods were employed to identify grey literature, conducting an exhaustive grey literature search has inherent limitations, 94 and data were primarily retrieved from academic literature.Academic publishing is prone to publication bias with a preference for novelty, meaning that the methodology may capture the state of the art in design, but lack representation of the status quo.
Furthermore, searches were carried out in the English language, meaning that the findings are likely eurocentric and not representative of the nature of design across all geographies and cultures.
| Implications for future design in oral health
The interactive evidence map enables ongoing and open-ended multivariant documentation and analysis of the evidence of design in oral health as a living map. We propose that it could be used for three distinct purposes:
1. A documentation tool: to capture and map the chronological evolution of the field.
2. An analytical tool: to facilitate in-depth exploration and multivariate analysis of evidence of design in health, enabling the identification of trends across different categories, levels and contexts.
3. A strategic and generative tool: to inform new interdisciplinary research streams and collaborations through the identification of strategic opportunities.
TABLE 2 Implications for future design in oral health.

RESEARCH Implications:
1. Strategic research directions and dissemination: Moving beyond individual disjointed projects, investigate critical gaps and key opportunities arising from the current landscape, and advance strategic collaborative research agendas and directions at the frontier of design in oral health. Develop effective dissemination strategies to ensure that design approaches, contributions and outcomes are shared and implemented widely and at scale.
2. Recognition of and engagement with the full capabilities of design:
[2.1] In both processes of problem framing and problem-solving.
[2.2] Across all four design contribution types: theories, methodologies, empirical studies, and interventions or solutions.
[2.3] In generating a range of outcomes across all four orders of design: first, graphics; second, objects; third, interactions; fourth, systems.
3. Integration of design experts: Ensure core involvement and leadership of design experts in any design-related activity from the outset and throughout, and avoid 'design without designers'.
4. Fostering inclusive engagement and collaboration: Engage and convene a broad range of disciplines and stakeholders and develop transdisciplinary approaches for design in oral health. Invite diverse, less hierarchical perspectives and actively identify and collaborate with typically marginalized and excluded voices.
5. Advancing human-centred systems approaches: Leverage 'individual-level' experiences with the wider 'system-level' interconnected factors to address complex problems in oral health. This involves disentangling patient and public collaboration, and considering community-based and tailored approaches to address the unique needs of different populations.

POLICY Implications:
6. Design for oral health policy-making: Explore and adopt 'design for policy'; design-led approaches for systematically developing effective human-centred policies based on leveraging creative participatory approaches, evidence-based criteria, and novel concepts.
7. Strategic research funding: Target key areas identified through the landscape of design in oral health with grant calls to stimulate research streams in priority areas.
8. Disentangle patient and public involvement: Develop a granular, human-centred design approach to public and patient involvement, which makes a distinction between patients and the public. Emphasize clarifying the purpose and objectives of involvement to inform decisions about who to involve and how best to involve them.
Identifying the Coping Strategies of Nonoffending Pedophilic and Hebephilic Individuals From Their Online Forum Posts
Individuals who identify as pedophilic or hebephilic, and who do not offend, are increasingly visible in online discourse and as a focus of research. Developing knowledge about this population will offer insights into their psychological needs and, potentially, into the mechanisms and strategies individuals use to live offense-free lives. This study examined coping strategies among members of an online forum supporting pedohebephilic individuals who do not wish to offend. Forum users’ posts were analyzed using thematic analysis. Eleven themes emerged, which were classifiable into three superordinate themes around (a) the acceptance of pedophilia, (b) strategies to stay safe, and (c) dealing with sexual arousal. These themes offer insight into the varying strategies used by these individuals to cope with stress and/or to remain offense-free. Understanding whether these strategies are adaptive or maladaptive may help develop better support services for those who have not offended and may inform prevention efforts.
The labels used to describe these individuals are often used interchangeably in public discourse, with pedohebephilic individuals being described as "child molesters" and vice versa (Theaker, 2015). However, these labels are not synonymous, as child sexual abuse can occur without the presence of pedohebephilic sexual interest (Feelgood & Hoyer, 2008; Seto, 2008). The term child molestation reflects the act of sexual abuse against a child, whereas pedohebephilia refers to a sexual interest in children and is independent of whether or not a person has acted upon this preference (Feelgood & Hoyer, 2008). Therefore, it is possible to have pedohebephilic interests and not sexually abuse children.
Until relatively recently, individuals with a sexual interest in children who do not act on these attractions were an underrepresented group in research (Cantor & McPhail, 2016). However, an emerging literature is beginning to offer a deeper understanding of the characteristics of this group (e.g., Freimond, 2013; Houtepen et al., 2016; Mitchell, 2014; Mitchell & Galupo, 2016; Raven et al., 2014). In the research literature, these individuals are referred to as nonoffending pedophiles or nonoffending hebephiles, and often self-identify as virtuous pedophiles or minor-attracted persons 1 (Walker & Panfil, 2017). The current study examined forum user posts on virped.org (Virtuous Pedophiles), which, despite its name, also includes individuals whose stated interests would be more accurately classed as hebephilic.
In the absence of a rich literature on individuals with pedohebephilic interests who do not offend, the field has relied on studies sampling pedohebephilic individuals who have also committed (and typically been apprehended for) contact or noncontact sexual offenses. As a result, primary prevention efforts are being informed by research on the criminogenic needs of individuals who have offended, but are less informed by the effective strategies of people who live offense-free lives. To date, several studies have offered evidence on the characteristics of purportedly nonoffending individuals with pedohebephilic interests that may inform prevention efforts (for a review, see Cantor & McPhail, 2016). Despite this, empirical research examining coping strategies among individuals with pedohebephilic attractions who do not commit sexual offenses has been extremely limited (Cantor & McPhail, 2016).
In a general sense, coping strategies refer to the ways in which an individual responds and regulates their behavior in response to an aversive or challenging event (Bonanno & Burton, 2013). Coping strategies can be adaptive or maladaptive, and the adaptiveness of a given strategy can vary across context and time (Folkman & Moskowitz, 2004). They may also be moderated by the controllability of the stressful event (Carver & Connor-Smith, 2010). Two qualitative studies, looking at how individuals who identify as pedophilic cope with their sexual interest, have found a number of themes relating to the maintenance of offense-free lives. In an unpublished study, Mitchell (2014) reported that a small number of individuals with a sexual interest in children-but who stated they had not acted on those interests-reported using masturbation to fantasies as a way of reducing the level of sexual desire and, therefore, the possibility of offending. Houtepen et al. (2016) conducted semi-structured interviews with a small sample of self-identified, community-based pedophilic males to better understand the onset of pedophilia and the coping strategies they employed, which they stated helped them to live offense-free. For this sample, in addition to masturbatory fantasies, engagement with adult relationships was also endorsed as an effective way to cope with and reduce sexual arousal. Although participants in the Houtepen et al.'s (2016) study were currently motivated not to offend, the majority had watched pornographic material containing children or had committed child sexual abuse. Therefore, their articulated coping strategies may not be fully effective. Houtepen et al. (2016) also identified avoidant coping strategies-such as drug abuse-among individuals experiencing shame and who were trying to suppress their sexual feelings.
Both Houtepen et al. (2016) and Mitchell (2014) identified social support as a mechanism facilitating coping with pedophilic attraction and reducing the possibility of offending. Increased social support may act as a buffer to lessen or eliminate the negative affect of stressful events (Cohen et al., 2000). In terms of the impact of social support on criminality, there is evidence that the protective function of social support may depend on the availability of prosocial sources of social support (Brezina & Azimi, 2018). Houtepen et al. (2016) found evidence of two sources of social support: support from other pedophilic individuals and support from nonpedophilic individuals. Having support from other pedophilic individuals was found to offer reassurance that participants were not alone and gave them the freedom to speak openly about their attractions. Support from nonpedophilic individuals gave participants a sense of acceptance. In addition, supportive individuals could potentially act as a safeguard when they were around children, helping them to regulate their own behavior and maintain legal boundaries. Conversely, social support in Mitchell's (2014) study was found to have only a minor influence on decisions in the nonoffending sample. This may be because many participants in their sample linked support with disclosure and, as a result, may not have felt it possible for their friends and family to be supportive where they had not disclosed their sexual interests.
Understanding how individuals with pedohebephilic interests remain offense-free is vital to support prevention efforts. Unfortunately, due to the stigma associated with sexual interest in children and fear of exposure (see Jahnke & Hoyer, 2013), acquiring an adequate sample can be challenging. However, projects offering support and treatment for nonoffending pedophiles have demonstrated that this group can be reached and can be motivated to seek help (Beier et al., 2015; Levenson & Grady, 2019; Van Horn et al., 2015). Many help-seeking individuals indicated they had already disclosed their sexual interest and sought treatment (Beier et al., 2009), although many also faced considerable barriers when doing so (Levenson & Grady, 2019). Despite the ability of researchers to reach these populations, studies of nonoffending pedohebephilic individuals carry considerable limitations.
A key potential limitation of anonymous studies of sexual interest and sexual behavior is their reliance on self-report. Participants may underreport problematic behavior due to lack of trust, social desirability, and fear of losing anonymity. One approach to potentially overcome some of the limitations of traditional self-report approaches is to carry out research on naturally occurring discourse. The current study observed discourse among an online community sample of purportedly nonoffending pedohebephilic individuals to identify the coping strategies they use and the advice they offered in support of their peers. Forum users were not directly engaged or asked questions to observe a more natural discourse, thus potentially reducing socially desirable responding and removing any demand characteristics.
Posts on the internet forum virped.org underwent thematic analysis. Virped.org is an online forum created to offer advice and support to those who have a sexual interest in children and do not wish to act on it. The forum, created by nonoffending pedophiles, also aims to raise public awareness in the hope it will reduce the stigma associated with pedophilia. Fear of stigma and concerns over confidentiality are key barriers to help-seeking among people with a sexual interest in children (Levenson & Grady, 2019). Virped.org therefore may provide the main or only resource for peer support and guidance for many pedohebephilic individuals. The aim of the current study was to identify the coping strategies of virped.org users both for managing stress related to their sexual interests and for managing their sexual interests, including minimizing the possibility of offending.
Sample
The data set in this study consisted of posts written by members of the virped.org community. Due to the anonymous nature of the forum, demographics such as gender and age could not be reliably obtained. With more than 4,700 members, virped.org offers support to those with a sexual interest in children and who contribute to the forum in the context of an ethos that "sexual activity between adults and children is wrong" (virped.org FAQs, 2020). To access the site, users must be committed to living offense-free lives and there are strict rules about what can be discussed/shared. Although the site is dedicated to supporting these individuals and keeping children safe, some users may have histories that include sexual offenses against children and the use of indecent images of children. It is also possible that some users who report to be nonoffending were not, in fact, living offense-free lives. Users are aware that the forum is very likely to be monitored by police. Users therefore may be selective about the information they post and may promote a more positive picture about how they manage and cope in their everyday lives, painting an idealized version of themselves rather than a true reflection.
Data and Data Collection
The study was approved by the University of Kent, School of Psychology Research Ethics Committee (Ref: 20153624) and by forum moderators on virped.org. Data were collected systematically by selecting threads from two of 22 discussion areas from the virped.org website. Threads are single topics of conversation started by one member to initiate discussion, while discussion areas sort threads into their relevant topics.
Threads were selected from a 12-month period between May 2015 and April 2016, from two discussion areas ("Requests for support" and "keeping kids and ourselves safe"). These discussion areas were selected as the most likely to contain relevant discussions. Discussion areas we did not examine included areas dedicated to forum rules, research studies, and member introductions and areas focused on topics like humor, sports, and music. Although we chose two discussions that we expected to include a large amount of relevant content, other areas of the forum may potentially include useful data (e.g., a dedicated area for women attracted to children and a discussion area about recent life experiences). These were not examined in the current study.
Threads within chosen discussion areas were selected based on their relevance and included any posts that discussed strategies to manage sexual attractions. As it was not possible to isolate posts by individuals who may have previously offended from those who had not, all posts within selected threads were included. In total, 30 threads were identified that contained discourse relevant to the research topic. This included any advice given by forum users to help manage the risk of sexual offending as well as ways of managing stress related to their sexual interest. The number of posts per thread ranged from four to 36, and in total, 326 posts were analyzed and coded to form the data set. Examination of the usernames of forum users suggested that posts by up to 87 unique individuals contributed to the data set. Although several users contributed more regularly than others, none of the final themes we identified were dominated by frequent contributors.
Data were organized using the software package NVivo, which is a qualitative data-managing program that allows large quantities of raw data to be sorted, analyzed, and explored more efficiently (Bazeley & Jackson, 2013). This software supports qualitative analysis as it enables researchers to link memos to the data, use coding strips to show the relationships between themes, and create visual hierarchy charts and mind maps.
Analytic Method
Data were analyzed by the first author using thematic analysis, a method used to identify themes and patterns across a data set, which is particularly suitable for investigating underresearched areas (Braun & Clarke, 2006, 2013). In approaching the sampled forum posts, the primary coder (first author) applied a critical realist approach to seek to make sense both of the forum users' perspectives on their experiences and of the wider social influences that affected those perspectives (Braun & Clarke, 2006). An inductive approach was used to develop codes and themes from the data without relying on prior theoretical knowledge and to gain a deeper understanding of the ways in which forum users manage their own sexual interests as well as the advice and support they offer to other users. Data were read multiple times to become familiar with the data set and then coded using a five-step process (Braun & Clarke, 2006).
Step 1 involved familiarization with the data, including reading, rereading, and making notes of any initial patterns that emerged. In Step 2, initial codes were generated. Codes are meaningful groups or interesting concepts within the data that have not yet been brought together into a broader theme. The first author identified relevant codes by focusing on features of the data related to (a) coping strategies used and/or (b) advice given to others, to minimize the possibility of offending or reduce stress caused by forum users' sexual interests. Once all the data had been coded, the third step involved looking for any patterns between codes-and how they may fit together-thus generating candidate themes. At the end of this step, all extracts that appeared to relate to a code had been identified, and no themes had been discarded. The fourth step involved pruning themes that did not appear sufficiently supported, collapsing candidate themes that seemed to reflect overlapping themes and examining the coherence of the coded extracts within the themes. The fifth step involved further refinement, looking at the relationship between themes, and produced three superordinate themes and 11 subthemes. In this phase, we defined and named the final themes. While all steps were initially completed by the first author, Steps 4 and 5 were revisited and refined by all authors during the process of preparing the manuscript for publication, leading to fine-tuning of themes and labels.
From the complete data set of 326 posts, 196 unique extracts contributed to the final themes. Several of these were multifaceted and contributed to multiple themes. As a result, the total number of extracts per theme shown in Table 1 exceeds 196. Interrater reliability was examined by selecting 10% of the data (20 randomly selected extracts) to ensure that a second rater (the second author) could reliably interpret each extract as representing the themes identified by the first author. The second rater was blind as to how many of the themes were represented within the selection of extracts, and not all to-be-rated extracts corresponded to a theme. Percentage agreement between the first and second raters was 85% (κ = .83, bias-corrected and accelerated 95% confidence interval [BCa 95% CI] = [.69, .94]), representing strong agreement (McHugh, 2012).

TABLE 1 Themes and subthemes.
Accepting and living with pedohebephilia: Accepting their sexual interests (14); Not beating themselves up (15); Being positive about pedohebephilia (8).
Staying safe: Having contact rules (12); Mentally preparing (11); Using distraction (9); Using avoidance (39); Removing temptation (14); Considering consequences (5).
"When I get that feeling": Using legal outlets (39); Masturbating to child fantasies (39).
Note. The number in parentheses denotes the number of extracts contributing to each subtheme.
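As a reproducibility aid, agreement statistics of the kind reported above can be computed as in the sketch below. The paired ratings are simulated stand-ins (the study's extract-level ratings are not public), and the BCa interval comes from SciPy's bootstrap routine.

```python
import numpy as np
from scipy.stats import bootstrap
from sklearn.metrics import cohen_kappa_score

# Simulated stand-in ratings (1 = theme present, 0 = absent); the
# study's actual extract ratings are not public.
rng = np.random.default_rng(0)
rater1 = rng.integers(0, 2, size=40)
rater2 = np.where(rng.random(40) < 0.85, rater1, 1 - rater1)

print("percentage agreement:", np.mean(rater1 == rater2))
print("kappa:", cohen_kappa_score(rater1, rater2))

# Bias-corrected and accelerated (BCa) bootstrap CI; paired=True
# resamples rating pairs together, preserving the rater pairing.
res = bootstrap((rater1, rater2), cohen_kappa_score, paired=True,
                vectorized=False, method="BCa", confidence_level=0.95,
                random_state=rng)
print("BCa 95% CI:", res.confidence_interval)
```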
Results
In total, 11 subthemes were identified and sorted into three overarching themes. In keeping with the forum rules and the British Psychological Society's (2013) Ethics Guidelines for Internet-Mediated Research, no identifying information, including direct quotes from posts, was used in the write-up of this analysis for publication. While direct quotes would have enriched the presentation of our results, this extra layer of anonymity was important in this study, as consent to access the posts was agreed with forum moderators but not with the individual posters. 2
Theme 1: Accepting and Living With Pedohebephilia
This theme reflected content related to coming to terms with pedophilia and recognizing that forum users are not responsible for their attractions.
Accepting their sexual interests. This subtheme was based on 14 extracts from 11 unique users. The extracts contributing to this theme suggested that, for forum users, coming to terms with their attractions was difficult. The majority of forum users who made relevant comments articulated that not coming to terms with their pedohebephilia had major consequences, which placed them at risk. Individuals who had accepted their sexual interests identified a level of pride, not necessarily in being pedophilic or hebephilic, but in the ways in which they had managed their attractions. One user articulated the view that although pedophilia may be a part of who they are, it did not define them.
Not beating themselves up. Guilt and self-loathing were identified by users as factors to avoid or overcome in order to come to terms with their pedohebephilia. Fifteen extracts from 12 unique users contributed to this subtheme. One user suggested that such difficulties were partly due to negative media and societal messages-that pedophiles will inevitably give in to their urges and offend. Users who appeared to present as more confident with their sexual interests maintained that there is nothing to be ashamed of, as long as they remained offense-free. The concepts of guilt and shame appeared in a number of extracts. It was not, however, clear that users were distinguishing between these constructs. Instead, forum users appeared to be using the terms to communicate to others that they should not engage in negative self-judgment due to having pedohebephilic interests or engaging in fantasy involving children.
Being positive about pedohebephilia. This subtheme reflects the way in which having a positive outlook on pedohebephilia can shape people in positive ways and was based on eight extracts from eight users. For several of these users, being positive about their sexual interest was something learnt through experience. Two users argued their pedohebephilia created energy and passion. Passion in this context appeared to reflect experiencing an active emotional life rather than feeling emotionally deadened, which could be an issue for those with exclusive pedophilia if they feel unable to express themselves emotionally or sexually. One user stated that they harnessed these feelings and used them in positive ways. The emotional connection that they have with children was something that some users said they would not part with. For these individuals, this appeared more important than the sexual component, with one member stating they would happily remove the sexual attraction if they could retain the emotional connection.
Theme 2: Staying Safe
Staying safe contained six subthemes and referred to the ways in which forum users dealt with their sexual interest to protect both themselves and children.
Having contact rules. Forum users set themselves guidelines, or contact rules, to safely interact with children. This subtheme was based on 12 extracts from 11 unique users. A large proportion of these users referred to rules such as the present parent test (when alone with a child, behave as you would if their parents were present), child-initiated only contact (child initiates sitting on lap/holding hands), as well as rules about contact on social media. These rules appear to act as a safety net to maintain control in difficult situations, which users stated caused them great anxiety and stress. One forum user discussed these rules as strategies to define boundaries and maintain contact with children without appearing suspicious or creepy. One user stated that children needed to ask before any interaction took place so that it would be clear to any onlookers exactly who initiated the contact. Another forum user stated that they discourage all physical contact with their girlfriend's children, as the children's behavior toward them feels sexual and makes the forum user uncomfortable.
Mentally preparing. This subtheme related to the way in which forum users prepared to manage potentially risky situations. Eleven extracts exemplified this subtheme, based on posts by nine users. Users advised imagining specific scenarios that they would find particularly difficult and thinking of ways in which they could respond to manage their behavior appropriately. Discussions suggested that users used this method to prevent being caught unaware as well as to develop confidence that they would be able to handle risky situations should they arise. Examples of rehearsed scenarios included being asked to babysit or a child suggesting sexual activity. Preparation techniques also included ways in which forum users organized their lives to reduce the possibility of encountering children. For example, an individual described how they plan shopping trips so that they can arrive early when there are fewer people, park close to the entrance, and have an exit strategy if they encounter too many children. This level of organization reflects the lengths that individual forum users went to in order to make their day-to-day experiences less stressful. One user stressed that it was important that this type of mental preparation should remain constructive and not develop into a fantasy.
Using distraction. Diverting energies into areas other than their sexual interests was a coping mechanism that forum users found helpful and recommended. Nine extracts from seven users provided the core evidence for this subtheme. Distraction was discussed in terms of hobbies and other interests such as physical (sports) activities. This method was suggested by users to help occupy the mind and distract from unwanted thoughts. Some forum users considered distraction merely as a tool to control temptation, while a few regarded their choice of method as having been a huge positive in their lives.
Using avoidance. To stay safe, users highlighted the importance of minimizing opportunities where risky behavior could occur. There was a large amount of content relevant to this subtheme in the data set, with 39 relevant extracts from 26 users. For many, avoidance was endorsed as a good strategy that they felt helped them reduce risk and stay safe. Avoidance was spoken about in situations where users expressed a lack of confidence that they would be able to control their thoughts and behaviors. Examples of avoidance included going out during school hours or avoiding specific locations where there are many children. Others used avoidance strategies to avoid becoming too attached, with one user stating he can become attached almost immediately after meeting a child, following which his emotions can become difficult to deal with. Users who felt less confident about interacting with children suggested avoiding them completely. However, a substantial number (12) of the extracts in this section related to the avoidance of situations where users were alone with children. Seven extracts related specifically to the avoidance of alcohol because of its disinhibiting effects. The subtheme of avoidance differed from other subthemes such as mentally preparing, as the extracts typically did not include much concrete detail on how to avoid these situations.
Removing temptation. The concept of temptation featured in 14 extracts from 12 users spanning a range of threads, with some individuals stating they have never struggled with it and others stating that they find self-control difficult. Discourse around temptation mainly occurred when users spoke about social media and using the internet in general. Users highlighted how easy it was to talk to children online or to access online images of children. As a result, forum users developed ways of managing their online behavior. For some, this meant complete removal of any internet access at home, including smartphone devices, forcing themselves to use public computers with no anonymity. For others, temptation was less frequent, with some forum users describing infrequent periods of vulnerability where they felt at risk of losing control. In these instances, users described using methods such as locking their devices away in a timed safe or handing them over to a partner until they felt more in control of their urges.
Considering consequences. Another method used by a small number of forum users (five individuals) to help them maintain legal boundaries was to think through their actions and the possible consequences. Individuals spoke about this in terms of taking the child's perspective, considering their feelings and the possibility that they may experience mental and physical harm. One user described how thinking ahead, about the actions they would like to take, and the feeling this gave them (that their actions would be wrong) prevented them from crossing any boundaries. The same user stated that they only find themselves in risky situations when they are not thinking about the morality of their actions.
Theme 3: "When I Get That Feeling"
This theme highlights the ways in which forum users manage their sexual desires to "stay legal," as well as focusing on the different views they have with regard to the effectiveness of these methods.
Using legal outlets. Being unable to satisfy their sexual needs was a topic discussed quite frequently among users, who described a variety of methods that helped them with sexual release. Thirty-nine extracts in the final data set were relevant to this subtheme, with 30 unique contributors. Users with nonexclusive pedohebephilia (also attracted to adults) described how having an adult relationship or viewing adult pornography helped them to manage their pedohebephilic desires. This method was referred to frequently among users as a way to manage their attractions. Many users exclusively attracted to children argued that finding methods for sexual release was more challenging. Some users stated that using pornography where adults look younger was not ideal but an acceptable outlet. The use of twink pornography-depicting men in their late teens or early 20s who are typically of slim build-was mentioned by some users, as well as pornography containing women with less pronounced development of secondary sexual characteristics and no pubic hair. However, for several users, the level of physical maturity of actors in legal pornography limited its appeal. Several of these individuals stated that they watched nonpornographic videos or images featuring children in their age of interest instead of pornography. Forum users discussed seeking out pornography from cultures whose performers typically look youthful, with one member stating that they find Asian men more appealing and another preferring pornography depicting Japanese women dressed as schoolgirls. Lolicon manga or anime material-a genre of Japanese cartoon depicting female children in an erotic or pornographic manner-was also mentioned as useful by a number of users (the equivalent depicting male children is called shotacon), as were pornographic stories that some had written themselves. However, many users with exclusive pedophilia indicated that they were unable to satisfy their sexual urges using these techniques, with some opting for more inventive ways of coping. One user described a doll they had made from children's clothing that gave them a sense of companionship and belonging. This example highlighted the need for individuals, in particular those with an exclusive sexual interest, to ease the loneliness they feel, and the difficulties they face in finding safe outlets.
Masturbating to child fantasies. Masturbation to fantasies of children was a hotly debated area among forum users. We coded 39 extracts relevant to this subtheme, including posts by 29 unique individuals. Some users suggested that masturbation to fantasy helped with sexual urges, whereas others argued it intensified their attractions. Forum users exclusively interested in children were likely to endorse masturbation to fantasies involving children as an effective strategy to manage their interests in the absence of other outlets. Some users advised using masturbation prior to interaction with children to relieve sexual tension. Others, however, stated they avoided it completely.
A number of users were concerned that masturbating to fantasies of children would reinforce that behavior, making their attractions more intense. This appeared to be a minority view (seven extracts), although others had mixed feelings or acknowledged the possibility that masturbation to child fantasies would reinforce their interests. One user described using this mechanism of reinforcement to try to develop greater sexual interest in adults. The majority of users appeared to be of the opinion that masturbation to child fantasy was either harmless or decreased tension or arousal that, unchecked, might lead to problematic situations. Approximately two thirds of extracts contributing to this theme reflected this view, although some differentiated between fantasies involving known versus unknown children.
Discussion
By observing the discourse between users of a forum dedicated to supporting individuals with a sexual interest in children who do not wish to act on their desires, we were able to gain further insight into the management of their sexual attractions and the sharing of advice with other forum users. Developing an acceptance of pedohebephilic interests and a broader self-acceptance appeared to be an important step for many forum users. Coping strategies such as contact rules and having set boundaries in place when interacting with children appeared popular, as did distraction techniques for unwanted thoughts. Being able to recognize and deal with triggers, knowing limits, and staying vigilant were strategies that users described as helping them stay focused and in control. Many adopted or recommended physically avoidant coping strategies, avoiding children altogether or avoiding specific situations. Some forum users discussed preparing for occasions where they had contact with children by rehearsing imagined scenarios. A few forum users mentioned using perspective taking.
Fantasy and masturbation created much debate within the forum, with some individuals endorsing them as a successful method and others arguing that they intensified their attractions. Outlets for sexual release, such as masturbation to fantasies of children, divided opinion, especially among those with exclusive pedophilia. The idea that children can be sexually persuasive was observed in the way in which forum users discussed some of their imagined scenarios and real-life events.
Acceptance and Avoidance
Coming to terms with their attractions appeared to be a common struggle for users, with many describing feelings of guilt and self-loathing. Not accepting their pedophilic interest was argued to have devastating consequences, leading to low self-esteem. Low self-esteem may increase the risk of sexual offending (for a discussion of equivocal findings on self-esteem, see Mann et al., 2010). The importance of self-acceptance has been highlighted by Goode (2010) and is a key principle of Acceptance and Commitment Therapy (ACT; Forman et al., 2007). ACT encourages acceptance of negative experiences and has been demonstrated to be an effective approach to treatment across a wide range of disorders (A-tjak et al., 2015). The acceptance of their sexual attraction held different meanings for different users in the current study. Those who struggled with it viewed acceptance as amounting to appearing proud of their pedohebephilia. However, for individuals who had come to terms with their sexual interest, acceptance meant feeling proud of their management of their attractions and embracing the positives that it had brought to their lives.
In research on coping, avoidance and distraction are often grouped together (Gonzales et al., 2001), representing ways of disengaging from negative life events. Within the current data, however, these strategies emerged as distinct, with avoidance relating to physical restrictions users imposed on themselves, and distraction representing the suppression of unwanted thoughts. Distraction and avoidance strategies were commonly referred to among users in the forum, complementing findings by Houtepen et al. (2016). Users described using distraction as a way to reduce sexual thoughts they found distressing.
Although not a long-term solution (Forman et al., 2007; Hayes et al., 2006), focused distraction has been suggested to alleviate the distress of intrusive thoughts (Najmi et al., 2009) and prevent rumination (Shiota, 2006). The most discussed method of avoidance in our data was the avoidance of being alone with children. This strategy was discussed both in terms of protecting children and as a way for users to protect themselves in case they behaved in a way that might be perceived as inappropriate.
Generally, avoidant coping alone is not considered an effective strategy for dealing with long-term negative affect and is consistently shown to be associated with poor mental health (Gonzales et al., 2001). However, evidence suggests that avoidant coping can produce better outcomes when stressors are deemed uncontrollable (Creasey et al., 1995; Valentiner et al., 1994), as well as being effective in managing short-term emotional expression (e.g., hiding signs of anxiety). Therefore, this type of avoidant coping could be considered a self-protective measure to prevent negative consequences (Kashdan et al., 2006). Despite this, reliance on avoidant coping can have serious consequences, such as impaired functioning, especially if it becomes inflexible and demands too much time and effort (Kashdan et al., 2006). For example, a small number of the forum users organized their daily routines around school hours to avoid coming into contact with children. This not only imposes physical restrictions but also prevents them from dealing with challenging situations should they unexpectedly occur. This type of avoidance has been shown to correlate with increased anxiety, depression, and cognitive-affective distress (Feldner et al., 2003; Holahan et al., 2005). It remains an open question whether these negative consequences of avoidant coping are also present for the specific types of avoidance strategies articulated by the forum users.
Mental Preparation
Although avoidance of certain situations was considered useful to some users, others acknowledged the inevitability of coming into contact with children and expressed concern regarding how they would cope. The value of imagining possible scenarios, such as being asked to babysit or a child instigating sexual contact, was discussed within the group. Those who used these scenarios said they did so to develop strategies should they ever be confronted with a similar situation. Mental preparation techniques such as this have been recognized as an effective strategy to improve performance ability in various domains of human activity (Driskell et al., 1994). For example, they are commonly used by athletes from a wide range of sporting backgrounds (Bertollo et al., 2009). These psychological strategies typically involve visualizing goals, mentally rehearsing strategies, and anticipating what could go wrong and how to respond (Bertollo et al., 2009).
Clearly, the use of strategies by forum users may not be directly comparable with mental preparations in sporting performance. However, for those with pedohebephilic attraction, mental preparation, in conjunction with other strategies, may potentially reduce anxiety and increase confidence in situations that would normally be deemed challenging. Poor problem-solving is empirically supported as a risk factor for recidivism in individuals convicted of sexual offenses (Hanson et al., 2007). It is suggested that poor problem-solving commonly relates to deficits in the ability to identify a problem, a lack of consequential thinking, and difficulties in constructing a range of viable options (Mann et al., 2010). Therefore, mental preparation could be a helpful tool to those struggling to cope with their attractions. In addition, it may reflect a distinction between individuals who offend and those who have been able to remain offense-free.
A small number of forum users discussed how they considered the perspective of the child (how the child would feel were they to act upon their feelings) and the potential effects of their actions, as a way to maintain legal boundaries. Perspective taking is an empathic response that involves both cognitive processes (understanding what someone else is feeling) and emotional processes (feeling what a person is feeling; Mann & Barnett, 2012) and is commonly targeted in treatment with individuals who have sexually offended (Yates, 2004). Research in other domains suggests that improvements in perspective taking positively affect intergroup attitudes and influence empathic arousal (Vescio et al., 2003). However, there has been considerable debate around the efficacy of targeting empathy deficits in the treatment of individuals who have sexually offended, with concerns that it is unnecessary and potentially even harmful (e.g., Mann & Barnett, 2012). This ambivalence in the research literature may be mirrored in our finding that perspective taking did not appear to be a frequently used strategy by forum users in this study. A greater number instead appeared to articulate ways of directly managing their own behaviors and thoughts rather than focusing on the feelings of others.
Dealing With Sexual Arousal
Coping strategies to alleviate sexual arousal produced many different opinions among forum users. For individuals with nonexclusive pedophilia, focusing their attractions toward adults was frequently suggested. Those with an exclusive interest discussed using legal pornography containing actors who appear younger due to less pronounced secondary sexual characteristics, or animated pornography depicting underage characters. Many found that focusing their fantasies toward adults reduced their attractions to children, supporting previous qualitative research (Houtepen et al., 2016). In a sense, forum users appeared to be advocating a strategy similar to conditioning. While conditioning is used by some practitioners with individuals with a sexual interest in children (for a case study of masturbatory reconditioning, see Marshall, 2015), there is limited evidence that arousal to nonpreferred stimuli can be conditioned (Hoffmann, 2007; Seto & Ahmed, 2014). However, many of the individuals identifying their use of legal adult pornography as risk-reducing appeared not to be exclusively pedohebephilic. For those individuals, adult stimuli may be sufficiently appetitive to temper competing sexual interests.
Potentially Maladaptive Strategies
Many of the strategies discussed on the virped.org forum are adaptive, or may be situationally adaptive, and are consistent with the skills that practitioners may seek to build when working with individuals who have committed sexual offenses. For example, strategies around minimizing contact with children, managing substance use, or that reflect increased problem-solving or impulse control map onto treatment targets within guidelines for effective practice with individuals who have sexually offended (Association for the Treatment of Sexual Abusers, 2014). Problem-solving and impulsivity, in particular, have been identified as meaningful risk factors, improvements in which are empirically linked to reductions of reoffending, and are strongly encouraged in biopsychosocial models of treatment (Carter & Mann, 2016;Mann et al., 2010). However, some of the themes highlighted potentially maladaptive strategies.
For individuals outlining their use of forms of pornography where actors/characters appear underage, or for individuals describing masturbation to fantasies involving children, it is unclear whether such coping strategies are likely to be effective at reducing the possibility of offending. Indeed, this uncertainty was reflected in forum discussions, especially around masturbation to fantasies about children. At a basic level, the legality of materials like lolicon/shotacon varies across jurisdictions, and as a result, some forum users may be placing themselves at risk of prosecution. Masturbating prior to possible interactions with children or other circumstances where users feel at risk of committing offenses may take advantage of the postorgasmic refractory period of the human sexual response cycle (Masters & Johnson, 1966), during which men, in particular, may experience a lack of sexual interest. However, this strategy may vary in effectiveness across individuals due to age-related variation in the duration of the refractory period (Meston, 1997).
A broader question is whether using pornographic material that appears to function as a proxy for indecent images of children and masturbating to pedohebephilic fantasies are strategies that influence the likelihood of offending. Meta-analysis suggests that approximately half of individuals identified as having used indecent images of children have also committed contact sexual offenses. For the purposes of diagnosing paraphilias, neither Blanchard (2010) nor Seto (2010) distinguished between pornographic materials depicting real and fictitious children. If this real/fictitious distinction is trivial in terms of risk of contact offending, using some of the legal outlets discussed by forum users may be a risky strategy. However, given the lack of empirical research on this question, the opposite may be true, whereby seeking out legal forms of pornographic material, even where they are not quite a perfect fit to the individual's sexual interests, may reflect protective factors that function to reduce the possibility of offending. Bartels and Gannon (2011) hypothesized two mechanisms through which fantasy may drive future offending. They drew on existing research to suggest that individuals may become "motivated to enact the imagery they have mentally simulated within their fantasies" (p. 551) and/or that fantasy may function as a disinhibiting factor that desensitizes the individual and as a result makes offending behavior more likely. As a result, few practitioners would recommend masturbating to fantasies involving children as an ongoing coping strategy. However, this does place individuals who are exclusively orientated to children but who choose not to use indecent images of children or commit contact offenses in a bind.
The use of fantasy and masturbation among users may not simply function as a strategy to cope with sexual interests and sexual tension. Research with individuals who have sexually offended suggests that those individuals use sexual activity as a way of coping with stressful or problematic situations (Maniglio, 2011). Studies with small numbers of offending individuals (McKibben et al., 1994; Proulx et al., 1996) suggest that negative mood states increase problematic fantasy as part of a chain that may lead to offending behavior. The paucity of literature on pedohebephilic individuals who do not offend makes us cautious about inferences drawn from research with offending individuals. However, the use of problematic sexual fantasy, and in particular an increased dependence on that sexual fantasy, may function as an indicator for nonoffending pedohebephilic individuals that they are relying on potentially problematic coping strategies to regulate their emotions.
Implicit Beliefs
A notable observation throughout, highlighted by their need to set rules and boundaries when interacting with children, was the lack of confidence many of the forum users had in their abilities to act appropriately, with many expressing a fear of losing control. This feeling of uncontrollability may mirror the belief, observed among individuals convicted of sexual offenses, that the world, including one's emotions and thoughts, is uncontrollable (Ward & Keenan, 1999). Where individuals convicted of sexual offenses make statements about the uncontrollability of their environment, practitioners may question whether they reflect genuine etiological/criminogenic beliefs or excuses/minimizations that may arise after an offense (see Maruna & Mann, 2006). However, for forum users in our study, feelings of uncontrollability appeared to be a motivating factor that encouraged them to develop methods to prevent loss of control.
Developing such methods may allow individuals to challenge their own cognitions and thus demonstrate that the world is, in fact, controllable and that their emotions and thoughts are manageable.
Rules that forum users set to maintain control were not only used to safeguard children but were also spoken about as a way of protecting themselves from the behaviors of children. On occasion, users referred to children's behavior as sexual. This was observed in their imagined scenarios as well as in real-life experiences. It was common for users to suggest that children may involve adults in their sexual behavior, although most posters appeared to believe that this happens only in exceptional circumstances. This finding is consistent with Houtepen et al. (2016), who reported that, although participants denied having engaged in sexual contact with children, some reported having experienced situations where a child had attempted to initiate sexual activity. This finding may reflect a belief system similar to Ward and Keenan's (1999) "children as sexual beings" implicit theory, hypothesized to be characteristic of some individuals who commit sexual offenses against children. Holding this implicit theory is related to the overperception of children's behavior as sexual. Despite acknowledging that children are vulnerable and that pedohebephilic sexual attraction is something that can harm a child if acted on, some forum users did appear to view children as capable of being sexually persuasive.
Our data prevent us from examining whether forum users differ from individuals who have committed sexual offenses against children in the extent to which they hold (or articulate) beliefs around the sexuality of children. However, Maruna and Mann (2006) argued that some forms of so-called cognitive distortions may buffer individuals from stigmatizing shame and thus offer protection from offending. Future research may examine whether the apparently distorted beliefs of individuals with pedohebephilic interests and who do not offend differ in quality and function from those of individuals who sexually offend (e.g., by being better able to separate the sexual motivations of children in their fantasies from children in reality).
Limitations
Overall, our study builds on the emerging literature on nonoffending pedohebephilic individuals by demonstrating the wide range of coping strategies used by individuals with a sexual interest in children, as well as the advice they offer one another to remain offense-free. Where other research in this area has found it difficult to recruit adequate sample sizes due to the stigma attached to pedophilia and fear of losing anonymity, the current study drew on the contributions of a subset of the more than 4,700 members of the virped.org website. Despite this, our findings should be considered with some limitations in mind.
First, although our findings reflect the contributions of many forum users, we were unable to examine how representative these contributions were of all forum users or of the nonoffending pedohebephilic community as a whole. Unlike previous research, this study did not explicitly ask participants for information. Data were instead obtained from conversations between forum users, free from researcher interference, where members have built up trusting relationships and are potentially being more honest and open with their thoughts and experiences. However, as with other studies of this nature, despite the reduction in experimenter demand effects due to the use of existing discourse, there remains a reliance on self-report. As a result, the information obtained may not be a true reflection of these individuals' experiences due to social desirability. The probability that the site is monitored by police may create its own demands, preventing users from portraying a true reflection of themselves. It is further possible to argue that posts on the forum are performative, presenting an idealized version of the self or of the forum community. These factors should be considered when interpreting our findings. However, the closed nature of the forum may partially mitigate concerns about social desirability. In addition, it is worth considering whether even performative posts may provide useful guidance to peers and help establish positive group norms about self-acceptance and coping with pedohebephilic interest. In addition to the above, the site rules prevent users from sharing offending history. Therefore, the researchers cannot be entirely confident that all users are nonoffending but are instead being selective about what they write. It is, however, entirely possible that these individuals are, regardless of any previous offending, committed to living offense-free lives and supporting others to do the same.
Second, it should be acknowledged that, while the data set represented contributions from 87 unique individuals, there were a number of regular posters who appeared influential in discussions. This could be due to the fact that these individuals felt they had more experience dealing with their pedohebephilic attractions and therefore had more to contribute in terms of offering advice. Alternatively, it could simply be that they had more time to access the site than others. Despite this, none of the themes that emerged were dominated by a single user.
Third, it is possible that forum users are drawn to certain threads or discussion areas on virped.org because they are struggling to cope. Therefore, advice offered may not be representative of the broader nonoffending pedohebephilic community and could potentially be maladaptive. The evidence for the value of peer-to-peer support for coping with psychological challenges is currently mixed (e.g., Ali et al., 2015). It has been suggested that peer support groups may carry potential risks, such as the unreliability of advice being offered by users with unknown credentials (Entwistle et al., 2011). Advice offered by peers could lead those receiving the advice to develop unrealistic expectations and set unachievable goals. This in turn could increase symptoms such as anxiety and be detrimental to recovery (Ziebland & Wyke, 2012), an issue that warrants further investigation.
Implications and Recommendations for Future Research
We expect-and hope-that users of the virped.org forum will read this article. One feature of the specific set of threads that we selected for analysis was the prevalence of advice to avoid certain situations (e.g., being alone with children, or drinking alcohol). However, we observed that, in these interactions, the person providing this advice often did not explain in more concrete terms how they recommend avoiding the situation. It may be useful for forum users who share advice to consider providing more detailed examples of how they avoided a certain situation. After all, the person they are advising might have the motivation to avoid a situation, but lack specific strategies to achieve that goal.
For professionals supporting nonoffending individuals with pedohebephilic interests or working with individuals who have offended but who are motivated to avoid future offending, our findings highlight the need for open discussion with clients around healthy use of pornography, fantasy, and masturbation. Overall, the conversations we sampled on the forum evidence a lack of consensus on whether engaging with child-related fantasy helps manage sexual urges or intensifies them. To an extent, practitioners may feel unsupported by the research literature when engaging with their clients on this issue. There is a lack of empirical research examining the role of fantasy in managing or failing to manage other sexual behaviors as well as the individual difference moderators of those relationships.
Professionals and policy makers may cite our research to argue that this particular online forum represents a resource in which users espoused prosocial behavior, constructively challenged risky behavior, and shared strategies that, in large part, map onto the types of strategies that professionals might recommend to clients. We recommend further research on the coping strategies of nonoffending individuals with pedohebephilic interests, to replicate or refine our findings. However, in advance of such research, practitioners may find it useful to discuss our emerging themes and strategies with their clients and explore the extent to which the experiences and practices discussed by forum users resonate with their own.
Our analysis arranged extracts into a meaningful thematic arrangement of superordinate themes and subthemes (Table 1). However, further research with different data from virped.org may help replicate or refine these themes and may help to identify further ways in which this population manage and cope with their sexual interests and their pedohebephilic identity. Should the data allow, it would be interesting to examine how forum users change and develop their articulated coping strategies over time.
Some themes reflected coping strategies that appeared to relate more strongly to certain modalities of sexual behavior (e.g., legal outlets for online behavior, and having contact rules for contact behavior). They may also capture distinct strategies for the long-term management of sexual interests (e.g., mentally preparing) and more short-term responses to particular situations (e.g., using legal outlets). Researchers should consider carrying out semi-structured interviews with forum users to examine whether apparent differences in our themes by modality of behavior reflect differences in the coping skills needed to deal with different challenges, or are different manifestations of the same underlying coping skills. In addition, semi-structured interviews could be used to examine how nonoffending individuals shift coping strategies between coping with existential long-term stressors and situations where they experience short-term vulnerability.
Conclusion
This study utilized the discourse of self-identified, purportedly nonoffending individuals with pedohebephilia to identify the coping strategies they use to reduce the possibility of offending. The findings of this study build on previous research by observing natural discourse without researcher interference. The anonymity and social support provided by peer forums may encourage honesty in posting and prompt discussions that individuals may not otherwise be inclined to discuss. With the social exclusion that many of these individuals experience, being able to share these experiences with like-minded others in an appropriately moderated forum may provide a unique support system for the individuals in the virped.org community. The record contained within the forum provides rich data for professionals to gain insight into the lived experiences of this population and thus be better equipped to support them and others who are less able to manage their sexual interests.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
ORCID iD
Caoilte Ó Ciardha https://orcid.org/0000-0001-5383-8403

Notes

1. Some authors also prefer this term.
2. Researchers interested in reading the forum post extracts used to generate themes should contact the corresponding author, using their institutional/organizational email address.
Do More Injured Lungs Need More Protection? Let’s Test It
1. Bellani G, Laffey JG, Pham T, Fan E, Brochard L, Esteban A, et al.; LUNG SAFE Investigators; ESICM Trials Group. Epidemiology, patterns of care, and mortality for patients with acute respiratory distress syndrome in intensive care units in 50 countries. JAMA 2016;315:788–800.
2. Fan E, Del Sorbo L, Goligher EC, Hodgson CL, Munshi L, Walkey AJ, et al.; American Thoracic Society, European Society of Intensive Care Medicine, and Society of Critical Care Medicine. An official American Thoracic Society/European Society of Intensive Care Medicine/Society of Critical Care Medicine clinical practice guideline: mechanical ventilation in adult patients with acute respiratory distress syndrome. Am J Respir Crit Care Med 2017;195:1253–1263.
3. Papazian L, Aubron C, Brochard L, Chiche JD, Combes A, Dreyfuss D, et al. Formal guidelines: management of acute respiratory distress syndrome. Ann Intensive Care 2019;9:69.
4. Guérin C, Reignier J, Richard JC, Beuret P, Gacouin A, Boulain T, et al.; PROSEVA Study Group. Prone positioning in severe acute respiratory distress syndrome. N Engl J Med 2013;368:2159–2168.
5. Sud S, Friedrich JO, Adhikari NK, Taccone P, Mancebo J, Polli F, et al. Effect of prone positioning during mechanical ventilation on mortality among patients with acute respiratory distress syndrome: a systematic review and meta-analysis. CMAJ 2014;186:E381–E390.
6. Munshi L, Del Sorbo L, Adhikari NKJ, Hodgson CL, Wunsch H, Meade MO, et al. Prone position for acute respiratory distress syndrome: a systematic review and meta-analysis. Ann Am Thorac Soc 2017;14:S280–S288.
7. Guérin C, Beuret P, Constantin JM, Bellani G, Garcia-Olivares P, Roca O, et al.; investigators of the APRONET Study Group, the REVA Network, the Réseau recherche de la Société Française d'Anesthésie-Réanimation (SFAR-recherche) and the ESICM Trials Group. A prospective international observational prevalence study on prone positioning of ARDS patients: the APRONET (ARDS Prone Position Network) study. Intensive Care Med 2018;44:22–37.
8. Sud S, Friedrich JO, Adhikari NKJ, Fan E, Ferguson ND, Guyatt G, et al. Comparative effectiveness of protective ventilation strategies for moderate and severe acute respiratory distress syndrome: a network meta-analysis. Am J Respir Crit Care Med 2021;203:1366–1377.
9. Brower RG, Matthay MA, Morris A, Schoenfeld D, Thompson BT, Wheeler A; Acute Respiratory Distress Syndrome Network. Ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome. N Engl J Med 2000;342:1301–1308.
10. Fan E, Beitler JR, Brochard L, Calfee CS, Ferguson ND, Slutsky AS, et al. COVID-19-associated acute respiratory distress syndrome: is a different approach to management warranted? Lancet Respir Med 2020;8:816–821.
11. Ferrando C, Suarez-Sipmann F, Mellado-Artigas R, Hernández M, Gea A, Arruti E, et al.; COVID-19 Spanish ICU Network. Clinical features, ventilatory management, and outcome of ARDS caused by COVID-19 are similar to other causes of ARDS. Intensive Care Med 2020;46:2200–2211. [Published erratum appears in Intensive Care Med 47:144–146.]
12. Gattinoni L, Camporota L, Marini JJ. COVID-19 phenotypes: leading or misleading? Eur Respir J 2020;56:2002195.
13. Gattinoni L, Chiumello D, Caironi P, Busana M, Romitti F, Brazzi L, et al. COVID-19 pneumonia: different respiratory treatments for different phenotypes? Intensive Care Med 2020;46:1099–1102.
14. Gattinoni L, Coppola S, Cressoni M, Busana M, Rossi S, Chiumello D. COVID-19 does not lead to a "typical" acute respiratory distress syndrome. Am J Respir Crit Care Med 2020;201:1299–1300.
15. Grieco DL, Bongiovanni F, Chen L, Menga LS, Cutuli SL, Pintaudi G, et al. Respiratory physiology of COVID-19-induced respiratory failure compared to ARDS of other etiologies. Crit Care 2020;24:529.
16. Panwar R, Madotto F, Laffey JG, van Haren FMP. Compliance phenotypes in early acute respiratory distress syndrome before the COVID-19 pandemic. Am J Respir Crit Care Med 2020;202:1244–1252.
17. COVID-ICU Group on behalf of the REVA Network and the COVID-ICU Investigators. Clinical characteristics and day-90 outcomes of 4,244 critically ill adults with COVID-19: a prospective cohort study. Intensive Care Med 2020;47:60–73.
18. Amato MB, Meade MO, Slutsky AS, Brochard L, Costa EL, Schoenfeld DA, et al. Driving pressure and survival in the acute respiratory distress syndrome. N Engl J Med 2015;372:747–755.
19. Goligher EC, Costa ELV, Yarnell CJ, Brochard LJ, Stewart TE, Tomlinson G, et al. Effect of lowering tidal volume on mortality in ARDS varies with respiratory system elastance. Am J Respir Crit Care Med [online ahead of print] 13 Jan 2021; DOI: 10.1164/rccm.202009-3536OC.
20. Beitler JR, Sarge T, Banner-Goodspeed VM, Gong MN, Cook D, Novack V, et al.; EPVent-2 Study Group. Effect of titrating positive end-expiratory pressure (PEEP) with an esophageal pressure-guided strategy vs an empirical high PEEP-FiO2 strategy on death and days free from mechanical ventilation among patients with acute respiratory distress syndrome: a randomized clinical trial. JAMA 2019;321:846–857.
Driving pressure, calculated as the difference between plateau pressure and positive end-expiratory pressure (PEEP) during mechanical ventilation in a relaxed subject, has an independent association with the risk of death in patients with acute respiratory distress syndrome (ARDS) (1, 2), suggesting that interventions in these patients, such as PEEP titration, are beneficial only if associated with a decrease in driving pressure. Lung computed tomography demonstrating heterogeneous aeration in ARDS typically reveals dependent nonaerated lung, which is central to both our current understanding of ventilation strategies (3) and the typical increase in respiratory system stiffness (static elastance), estimated as the driving pressure divided by the VT. Perhaps readers will be more familiar with compliance (the inverse of elastance); both static respiratory system elastance and compliance are largely influenced by the volume of aerated lung. As both the stress and strain resulting in ventilation-induced lung injury reflect VT and end-expiratory lung volume, targeting driving pressure makes sense, as driving pressure, in effect, scales VT to the magnitude of the reduced lung volume for a given patient with ARDS. In this issue of the Journal, Goligher and colleagues (pp. 1378-1385) now provide supporting data, with a secondary analysis of five randomized trials demonstrating a significant interaction between elastance and the effect of randomized VT on mortality (4). With the use of Bayesian multivariable logistic regression and long-term mortality (60-d mortality) as the primary outcome, patients with higher elastance (and hence higher driving pressures) are likely to accrue a greater mortality benefit from lower VT compared with patients with lower elastance (and hence lower driving pressures), who are likely to accrue less mortality benefit. A Subpopulation Treatment Effect Pattern Plot analysis confirmed heterogeneity of the VT treatment effect. Although their analysis lends credence to the idea that lung-protective ventilation strategies, and perhaps PEEP selection, should primarily target driving pressure, several considerations are required to reconcile this with earlier studies.
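To make the arithmetic of these definitions concrete, the following minimal sketch (with hypothetical numbers, not patient data or a clinical tool) computes driving pressure, static elastance, and compliance from ventilator measurements in a passive patient:

```python
# Illustrative arithmetic only: hypothetical example values, not patient data.

def driving_pressure(p_plat: float, peep: float) -> float:
    """Driving pressure (cm H2O) = plateau pressure - PEEP."""
    return p_plat - peep

def elastance(p_plat: float, peep: float, vt_liters: float) -> float:
    """Static respiratory system elastance (cm H2O/L) = driving pressure / VT."""
    return driving_pressure(p_plat, peep) / vt_liters

p_plat, peep, vt = 25.0, 10.0, 0.42   # hypothetical: 25 and 10 cm H2O, 420 ml
dp = driving_pressure(p_plat, peep)   # 15 cm H2O, at the suggested upper limit
e_rs = elastance(p_plat, peep, vt)    # about 35.7 cm H2O/L
c_rs = 1000.0 * vt / dp               # compliance, the inverse view: 28 ml/cm H2O
print(f"dP = {dp:.0f} cm H2O, Ers = {e_rs:.1f} cm H2O/L, Crs = {c_rs:.0f} ml/cm H2O")
```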
Lower VT appears beneficial in both healthy and injured lungs. In the pivotal ARDS Network study examining lower versus higher VT ventilation (5), the interaction between the randomized VTs and the quartile of static compliance at baseline was not significant. Further analysis of the ARDS Network data by Hager and colleagues confirmed the lack of interaction and concluded that the benefit of the lower VT ventilation strategy was not associated with plateau pressure (6), suggesting that lower VT was beneficial even when plateau pressure was low (and, by extension, that there was benefit when the elastance was also low). Analyzing a larger cohort than Hager and colleagues, Goligher and colleagues' analysis failed to find this relationship, perhaps because of the larger sample size and more sensitive analytic approaches. The elastance-dependent effect of VT reduction is consistent with the observation in models of ventilation-induced lung injury that damage is exacerbated by the degree of preexisting lung dysfunction (7). Taken together, this suggests that the maximum benefit of lowering VT is found in the most severely injured lungs.
Goligher and colleagues and Amato and colleagues used Day 1 postrandomization respiratory system elastance and driving pressure values, respectively; ideally prerandomization elastance would be used, but these values are not available for many patients in this database. This is important, as ventilation with either higher or lower VT could alter the post-treatment elastance. For example, randomization to higher VT overnight could result in either tidal recruitment (and reduced elastance) or early ventilator-induced lung injury (and increased elastance) and introduce bias. Moreover, respiratory system elastance in patients with ARDS is not static; it changes over time, perhaps suggesting that a prospective study examining this concept would also need to be dynamic, reflecting regular assessments of elastance.
Respiratory system elastance is composed of both lung and chest wall elastance, making it a potentially poor surrogate for transpulmonary pressure. Multiple factors, such as increased body weight, chest wall deformity, markedly positive cumulative fluid balance, and raised intraabdominal pressure, among others, can all affect the chest wall elastance and, consequently, the respiratory system elastance. The respiratory system elastance was adjusted either to the predicted or the actual body weight based on data availability in the five examined studies, but as discussed by Goligher and colleagues, the driving pressure may need to be reconsidered when chest wall elastance is abnormally elevated.
The association presented in the present article, and multiple sensitivity analyses to address some of these concerns, provide some compelling data, but design and implementation of high-quality prospective randomized clinical trials testing these findings to better inform management will be difficult. Theoretical analysis of mechanical power applied during ventilation (8), another newer approach to understanding ventilator-induced lung injury, groups driving pressure and VT together, suggesting that it will be hard to separate the two during a clinical study. Furthermore, higher inspiratory flow rates increase mechanical power transmission, adding additional complexity to clinical trials of optimal lung protective ventilation. A contemporary usual care arm will include lower VT ventilation and PEEP titration, noting that today's patients are more likely to also receive prone ventilation and restrictive fluid therapy. Early appropriate antibiotics, resuscitation and mobilization, reduced transfusion-related lung injury, and early corticosteroids (9) in appropriate patients with ARDS are among many other practice changes that reduce lung damage and improve outcomes, requiring even larger clinical trial enrollment to achieve adequate power.
The current analysis suggests that elastance (and thus driving pressure) predicts the effect of treatment with lower VT and provides additional support for targeting an upper limit for driving pressure of 15 cm H2O. Prospective testing of such a strategy could also stratify patients based on their respiratory system elastance; those with a high elastance could then be randomized to an ultralow VT strategy to decrease driving pressure further, possibly coupled with the use of extracorporeal carbon dioxide removal. Similarly, patients with low elastance could be randomized to higher VTs that may be better tolerated. Until such studies are complete, the simplicity of repeated titration based on driving pressure is an attractive personalized approach as we strive to further improve outcomes from ARDS.
Digital Transformation in Healthcare – South Africa Context
Digital transformation is growing at a slow rate in medical schemes and healthcare compared to other industries such as banking and insurance. The healthcare sector needs to embrace digital transformation and adopt and optimize the use of technology; otherwise, the sector will be left behind. Other sectors have taken advantage of technology: in the retail sector, for example, people nowadays shop, bank, and make travel bookings online. The logistics business has also embraced digital transformation in that most activities are now done through devices at the convenience of one's office or home. The recent HPCSA conference included topics such as telemedicine, and several digital transformation innovations in the health sector were also presented. What was evident in the discussions was that progress in accelerating digital transformation is hampered by the slow pace of regulation and other relevant guidelines. The topics discussed clearly revealed that the health sector is still far behind compared to other countries. For example, there is a gap in the adoption of digitally enabled tools for diagnosing, providing treatment, and better managing chronic and other conditions. Electronic medical records are still not a part of routine care on both the supply and funder sides, except among a handful of players.
On the funders' side, you do find several medical schemes that invest in technology; for example, there are schemes that are already implementing digital application forms for the smooth onboarding of new members, with the aim of going digital and reducing paper application forms. Similarly, the submission of claims, of which more than 98% are now submitted in electronic form, has transformed significantly. Strategies such as digital marketing are typically used to reach the target market and communicate more effectively with members. Several schemes have invested a lot in product development, such as mobile apps, and in developing communication channels through online and social media platforms. Social media platforms provide an opportunity for brand repositioning; they also provide an opportunity to reach a new target market and access a larger pool of potential clients. Social media platforms could also be used as a tool to improve service to clients, create convenience, and provide instant interaction with clients. However, very few medical schemes optimize the use of these platforms, particularly small to medium schemes. There is still a need to measure the value added by digital transformation for members, chiefly where the quality of care is concerned. A recent study conducted by Willie (2019), an unstructured survey on the use of a medical scheme mobile app by members, revealed that more than 75% of the respondents did not have the app installed. Some of the sentiments for not using the app were:
• Lack of awareness about the app
• The app is complex
• No reason to use the app
• Does not meet my needs
Digital disruption has great potential in healthcare; the main areas of investment are certainly Big Data analytics and AI (Artificial Intelligence). Some Big Data analytics tools are useful for improving efficiencies, and some of these tools can be automated; this could yield better utilization of human resources and potentially huge cost savings. In the main, Big Data and AI tools are used to profile clients and medical service providers and to examine healthcare utilization patterns and trends. Techniques such as predictive analytics are important in that they can be used not only to profile members but also to create a strategy to combat attrition. Insights from the data could be useful for a data-driven decision-making process that could potentially save huge downstream costs for medical schemes. There is also great potential in investing in digital marketing and the optimal use of mobile apps.
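As an illustration of the attrition (churn) use case described above, here is a minimal, hypothetical sketch of such a predictive model; the member features, coefficients, and data are all invented for illustration and do not come from any actual scheme:

```python
# Hypothetical sketch: scoring medical scheme members for attrition risk.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Invented member features: tenure, claims ratio, app usage, complaints.
X = np.column_stack([
    rng.uniform(0, 20, n),   # tenure_years
    rng.uniform(0, 2, n),    # claims_to_premium_ratio
    rng.integers(0, 2, n),   # uses_mobile_app
    rng.poisson(0.5, n),     # complaints_last_year
])
# Synthetic label: short tenure and complaints raise attrition odds.
logits = -1.0 - 0.1 * X[:, 0] - 0.5 * X[:, 2] + 0.8 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]   # attrition risk scores per member
print("AUC:", round(roc_auc_score(y_te, risk), 3))
```

Scores like these could feed a retention strategy (for example, contacting the highest-risk decile first), which is the kind of data-driven decision making the text describes.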
DIGITAL TRANSFORMATION INITIATIVES IN THE PUBLIC SECTOR - SOUTH AFRICA HEALTHCARE
There are several innovations that must take place in the public sector in South Africa as far as digital transformation is concerned; most of these are still at the beta phase, and their overall impact and outcomes are still to be realized. Furthermore, there are pockets of digital innovation in the public sector dating back to 2014; some are initiatives employed at the provincial level, while others are deployed at the national level. An integrated, holistic approach at the national level could help ascertain the value added and the impact in the sector. Box 1 below depicts the Department of Health's (DoH) digital and eHealth developments and implementation from 2014.
USE OF ARTIFICIAL INTELLIGENCE IN HEALTHCARE
Artificial Intelligence (AI), Machine Learning (ML), and Big Data analytics are some of the most talked-about technologies in recent years. According to Bali, Garg, and Bali (2019), AI aims to mimic human cognitive functions, such as the ability to reason, discover meaning, generalize, or learn from experience. Popular AI techniques include machine learning methods for structured data, such as the classical support vector machine and neural network, and modern deep learning, as well as natural language processing for unstructured data (Jiang, 2017). Machine learning is the foundation of modern AI and is essentially an algorithm that allows computers to learn independently without following any explicit programming (Uzialko, 2019).
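As a toy illustration of the "machine learning methods for structured data" named above, the following sketch trains a classical support vector machine on synthetic (non-clinical) data; everything in it is illustrative rather than drawn from any healthcare system:

```python
# Toy SVM on synthetic structured data: the model learns a decision rule
# from labeled examples rather than from explicitly programmed rules.
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=8, random_state=1)
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print("Mean CV accuracy:", round(float(scores.mean()), 3))
```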
The use of AI is already at an advanced stage in other industries; its adoption in healthcare is growing at a steady rate, but there is no doubt that AI is going to change the face of healthcare delivery. AI is being employed in numerous settings: for example, funders and administrators use it to adjudicate and process claims, and hospital facilities use it to assess bed occupancy. AI is also used to analyze unstructured data such as images, videos, and physician notes to enable clinical decision making and information sharing. Other commentators, such as Reddy (2018), argue that AI is most prevalent in the area of medical diagnosis. AI systems can analyze huge volumes of data far faster than humans, which improves efficiency in identifying medical diagnoses compared with doctors. It should be noted that AI cannot completely replace the medical profession but could be used as a tool to optimize current processes, reach medical conclusions, and support decision making, thus saving costs and improving quality of life.
APPLICATIONS OF ARTIFICIAL INTELLIGENCE
Artificial intelligence has the potential to change the healthcare industry in South Africa for the better, subject to optimal use on both the supply and demand sides of the healthcare ecosystem. AI is delivering high value in areas including the following:
Medical Diagnosis
AI systems can analyze far more data far faster than humans, which may make them more adept at identifying medical diagnoses than doctors.
Neurology
Neurological healthcare deals with nervous system disorders such as Parkinson's disease, Alzheimer's disease, epilepsy, stroke, and multiple sclerosis. AI can also predict strokes and monitor seizure frequency.
Pathology Images
Most diagnoses depend on a pathology result, so a pathology report's accuracy can make the difference between diagnosis and misdiagnosis.
Radiology Tools
Various forms of radiology, such as CT scans, MRIs, and X-rays, provide healthcare providers with an inside view of a patient's body. However, different radiology experts and doctors tend to interpret such images differently.
Smart Devices
Hospitals are big purchasers of smart devices. The devices, which take the form of tablets and hospital equipment, exist in intensive care units (ICUs), emergency rooms, surgery and regular hospital rooms.
Overutilization, Waste and Abuse of Medical Services
The South African private health sector is viewed as one of the most expensive models compared to similar countries: South Africa spends 9% of its GDP on healthcare, which is 4% higher than the WHO's recommended spending for a country of its socioeconomic status (Bidzha, Greyling & Mahabir, 2017). Furthermore, a South African private healthcare patient's stay in hospital costs more than in some developed countries, and some of these costs cannot be explained (HMI, 2018). The overutilization of healthcare services is also cited as one of the cost drivers in the health sector and ultimately impacts the premiums paid by members. Providing inappropriate levels of care to patients also results in wasteful expenditure on the funders' side; other examples of possible waste include medically unnecessary caesarean sections (C-sections) or imaging.
The C-section rate in South Africa, at about 26%, is higher than the WHO's recommendation (WHO, 2009). In the private sector, the C-section rate is about three times the national rate, at more than 77%, which is significantly higher than the recommended rate (CMS, 2019). The recommended rate of caesarean sections is around 10%-15% of all births. A study by Manyeh et al. (2018) argues that the increase in C-section rates in developing countries has not been clinically justified and that these increasing trends have become a major health issue due to potential maternal and perinatal risks, inequality of access, and the cost involved.
Waste and inefficiency occur at every level of the health care system. Waste also includes unnecessary procedures performed on patients; other examples include instances where repeat tests on the same patient are done by several providers but billed separately. This could be avoided if the various medical providers in the value chain could access the same patient records for clinical decision making. Thus, there is value in investing in a healthcare delivery model that is not fragmented and that encourages care coordination.
According to Albejaidi and Nair (2017), failures of care coordination typically occur when patients experience care that is fragmented. Other examples include poorly managed care coordination that may result in a patient being referred from one health care setting to another. Figure 2 below depicts various categories of waste as defined by Albejaidi and Nair (2017). One of the highlighted categories, frequently prevalent in an uncoordinated health system, arises where patients' records are not stored in a central, secure data repository. As a result, services are duplicated: tests or procedures are done more frequently than clinically necessary, as illustrated in the sketch below.
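The following hypothetical sketch illustrates one such waste check: flagging the same test billed for the same patient by a different provider within a short window. The column names and claims are invented for illustration.

```python
# Hypothetical duplicate-service check over claims data.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": ["P1", "P1", "P1", "P2"],
    "provider_id": ["DrA", "DrB", "DrA", "DrC"],
    "test_code": ["FBC", "FBC", "HBA1C", "FBC"],
    "service_date": pd.to_datetime(
        ["2019-03-01", "2019-03-04", "2019-03-01", "2019-03-02"]
    ),
}).sort_values("service_date")

def flag_repeats(group: pd.DataFrame, window_days: int = 30) -> pd.DataFrame:
    # A possible duplicate: the same test billed again within the window
    # by a *different* provider for the same patient.
    out = group.copy()
    prev_date = group["service_date"].shift()
    prev_provider = group["provider_id"].shift()
    out["possible_duplicate"] = (
        (group["service_date"] - prev_date).dt.days.le(window_days)
        & prev_provider.notna()
        & (group["provider_id"] != prev_provider)
    )
    return out

flagged = (
    claims.groupby(["patient_id", "test_code"], group_keys=False)
    .apply(flag_repeats)
)
print(flagged[flagged["possible_duplicate"]])  # DrB's repeat FBC for P1
```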
THE USE OF BLOCKCHAIN AS A RISK MITIGATION MECHANISM
The problem with healthcare insurance is that it has a lot of information asymmetry, and one needs to spend a lot on what are called "transaction costs" to make the trading environment transparent, with some assurance of holding market agents accountable if something goes wrong. If one is not convinced, one need only read the provisional findings of the Health Market Inquiry (HMI) on private healthcare. This type of decision environment is not just endemic to the demand side of medical insurance; monopolistic competition on the supply side also means information is not freely accessible.
This has meant that people are not sharing information or collaborating as effectively as they should. This is hardly optimal for innovative solutions that bring down the cost of healthcare or mitigate fraud, waste, and abuse. Sadly, regulators and governments will be held responsible for the inefficiencies and fraud arising from trading environments with minimal accountability. Once again, one need only read the provisional findings of the HMI. Blockchain technology provides free distribution of information across information networks, without the middleman (banks/brokers/administrators). Thus, with lower transaction costs, access to cost-effective, quality healthcare increases. The state of the policy conundrum in South Africa means that blockchain technology could be the solution for information black holes.
Systematic reviews on the impact of blockchain technology (information sharing of patient records, or decision support) find that it improves patient treatment outcomes and safety and reduces healthcare utilisation. All of these are policy issues that are tussled out in the National Health Insurance White Paper and the HMI provisional report. Although blockchain technology has been found effective within internal corporate systems, better systemic outcomes are achieved through the interoperability of systems. Naturally, this raises concerns about data protection. If blockchain was good enough to establish the Bitcoin money market, then hopefully consumers will trust it to secure their health data.
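To make the mechanism concrete, the following minimal sketch shows only the core hash-chaining idea the argument relies on; it is an illustration under stated assumptions, not a production health-record system, and the record contents are placeholders:

```python
# Minimal hash-chained ledger: any retroactive edit invalidates the chain.
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    index: int
    record: dict        # e.g., a reference to an encrypted patient record
    prev_hash: str

    def digest(self) -> str:
        payload = json.dumps(
            {"index": self.index, "record": self.record, "prev_hash": self.prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev = chain[-1].digest() if chain else "0" * 64  # genesis sentinel
    chain.append(Block(index=len(chain), record=record, prev_hash=prev))

def is_valid(chain: list) -> bool:
    # Each block must point at the current hash of its predecessor.
    return all(chain[i].prev_hash == chain[i - 1].digest() for i in range(1, len(chain)))

ledger: list = []
append_block(ledger, {"patient": "P1", "event": "pathology result ref 123"})
append_block(ledger, {"patient": "P1", "event": "prescription issued"})
print(is_valid(ledger))                    # True
ledger[0].record["event"] = "tampered"     # any retroactive edit breaks the chain
print(is_valid(ledger))                    # False
```

A real deployment would add consensus among parties and access control; the tamper-evidence shown here is the property that supports the accountability argument in the text.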
Bounding the $j$-invariant of integral points on modular curves
In this paper, we give some effective bounds for the $j$-invariant of integral points on arbitrary modular curves over arbitrary number fields assuming that the number of cusps is not less than 3.
Introduction
For a positive integer $N$, let $X(N)$ be the principal modular curve of level $N$. Let $G$ be a subgroup of $\mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z})$ containing $-1$, and let $X_G$ be the corresponding modular curve. We denote by $\det G$ the image of $G$ under the determinant map $\det : \mathrm{GL}_2(\mathbb{Z}/N\mathbb{Z}) \to (\mathbb{Z}/N\mathbb{Z})^*$. This curve is defined over $\mathbb{Q}(\zeta_N)^{\det G}$, where $\zeta_N = e^{2\pi i/N}$. So in particular it is defined over $\mathbb{Q}$ if $\det G = (\mathbb{Z}/N\mathbb{Z})^*$. We denote by $j$ the standard $j$-invariant function on $X_G$. We use the common notation $\nu_\infty(G)$ for the number of cusps of $X_G$.
Let $K_0$ be a number field containing $\mathbb{Q}(\zeta_N)^{\det G}$. Then $X_G$ is defined over $K_0$. Let $S_0$ be a finite set of absolute values of $K_0$, containing all the Archimedean valuations and normalized with respect to $\mathbb{Q}$. We call a $K_0$-rational point $P \in X_G(K_0)$ an $S_0$-integral point if $j(P) \in \mathcal{O}_{S_0}$, where $\mathcal{O}_{S_0}$ is the ring of $S_0$-integers in $K_0$.
By the classical Siegel finiteness theorem [24], $X_G$ has only finitely many $S_0$-integral points when $X_G$ has positive genus or $\nu_\infty(G) \ge 3$. But the existing proofs of Siegel's theorem are not effective; that is, they do not provide any effective bounds for the $j$-invariant of $S_0$-integral points.
Since 1995, Yuri Bilu and his collaborators have succeeded in obtaining effective versions of Siegel's theorem for various classes of modular curves. Bilu [7, Proposition 5.1] showed that the $j$-invariant of the $S_0$-integral points of $X_G$ can be effectively bounded provided that $\nu_\infty(G) \ge 3$, but there was no quantitative version therein. Afterwards, Bilu [9, Theorem 10] proved that the $j$-invariant of integral points of $X_0(N)$ could be effectively bounded if $N \notin \{1, 2, 3, 5, 7, 13\}$, and Bilu and Illengo [10] obtained similar results for "almost every" modular curve. But they still gave no quantitative results.
By using Runge's method, the first explicit bound for the $j$-invariant of the $S_0$-integral points of $X_G$ was given in [11, Theorem 1.2] when $X_G$ satisfies the "Runge condition", which roughly says that not all the cusps are conjugate. When $G$ is the normalizer of a split Cartan subgroup of $\mathrm{GL}_2(\mathbb{Z}/p\mathbb{Z})$, where $p$ is a prime number, this bound can be sharply reduced; see [11, Theorem 6.1] and [12, Theorem 1.1]. In particular, the authors in [11, 12, 13] showed various interesting applications of these bounds, such as to rational points of modular curves [11, 13], Serre's uniformity problem in Galois representations [12], and so on.
Most recently, without the Runge condition and by using Baker's method, Bajolet and Sha [5] gave an explicit bound for the $j$-invariant of integral points on $X^+_{\mathrm{ns}}(p)$, which is the modular curve of prime level $p$ corresponding to the normalizer of a non-split Cartan subgroup of $\mathrm{GL}_2(\mathbb{Z}/p\mathbb{Z})$, $p \ge 7$. Furthermore, a general method for computing integral points on $X^+_{\mathrm{ns}}(p)$ was developed in [4]. In this paper, we apply Baker's method, based on Matveev [22] and Yu [28], to obtain some effective bounds for the $j$-invariant of the integral points on $X_G$ without assuming the Runge condition but assuming that $\nu_\infty(G) \ge 3$.
We denote by $h(\cdot)$ the usual absolute logarithmic height. For $P \in X_G(\bar{\mathbb{Q}})$, we write $h(P) = h(j(P))$. Now we would like to state the main results. Theorem 1.1. Assume that $K_0 \subseteq \mathbb{Q}(\zeta_N)$, $N$ is not a power of any prime, $\nu_\infty(G) \ge 3$, and $S_0$ consists only of infinite places. Then for any $S_0$-integral point $P$ on $X_G$, we have where $C$ is an absolute effective constant and $\varphi(N)$ is Euler's totient function.
Actually, we obtain a more general Theorem 1.2 below, which applies to any number field and any ring of S 0 -integers in it.
Put d 0 = [K 0 : Q] and s 0 = |S 0 |. We define the following quantity (1.1) where D 0 is the absolute discriminant of K 0 , and the norm of a finite place is, by definition, the absolute norm of the corresponding prime ideal. We denote by p the maximal rational prime below S 0 , with the convention p = 1 if S 0 consists only of the infinite places. Theorem 1.2. Assume that N is not a power of any prime and ν ∞ (G) ≥ 3. Then for any S 0 -integral point P on X G , we have where C is an absolute effective constant.
The situation is different when N is a prime power; see Section 7. In this case we define an auxiliary level M; notice that X_G is also a modular curve of level M. Theorem 1.3. Assume that N is a power of some prime and ν_∞(G) ≥ 3. Then for any S_0-integral point P on X_G, we can get two upper bounds for h(P) by replacing N by M in Theorems 1.1 and 1.2.
Notations and conventions
Throughout this paper, log stands for two different objects, distinguished by context. One is the principal branch of the complex logarithm; in this case we will use, without special reference, a standard estimate for |log(1 − z)| when |z| ≤ r < 1; see [11, Formula (4)]. The other is the p-adic logarithm function; see, for example, [19, Chapter IV, Section 2]. Let H denote the Poincaré upper half-plane: H = {τ ∈ C : Im τ > 0}. For τ ∈ H, put q_τ = e^{2πiτ}. We also put H̄ = H ∪ Q ∪ {i∞}. If Γ is the pullback of G ∩ SL_2(Z/NZ) to SL_2(Z), then the set X_G(C) of complex points is analytically isomorphic to the quotient H̄/Γ, supplied with the properly defined topology and analytic structure. Moreover, the modular invariant j defines a non-constant rational function on X_G, whose poles are exactly the cusps. See any standard reference like [21,23] for the missing details. For a = (a_1, a_2) ∈ Q², we put ℓ_a = B_2(a_1 − ⌊a_1⌋)/2, where B_2(t) = t² − t + 1/6 is the second Bernoulli polynomial and ⌊a_1⌋ is the largest integer not greater than a_1. Obviously |ℓ_a| ≤ 1/12; this will be used without special reference. Let A_N be the subset of the abelian group (N⁻¹Z/Z)² consisting of the elements of exact order N; its cardinality is N² ∏(1 − p⁻²), where the product runs through all primes p dividing N. Moreover, we always choose a representative of a = (a_1, a_2) ∈ (N⁻¹Z/Z)² satisfying 0 ≤ a_1, a_2 < 1. So in the sequel, for every a ∈ (N⁻¹Z/Z)², we have ℓ_a = B_2(a_1)/2.
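The bound |ℓ_a| ≤ 1/12 asserted above is used repeatedly but never verified in the text; the following one-line check (our addition, in LaTeX) supplies the missing step.

```latex
% B_2(t) = t^2 - t + 1/6 on [0,1]: maximum 1/6 at t = 0, 1, minimum -1/12 at t = 1/2.
\max_{0 \le t \le 1} \lvert B_2(t) \rvert = \tfrac{1}{6}
\quad\Longrightarrow\quad
\lvert \ell_a \rvert = \tfrac{1}{2}\,\lvert B_2(a_1) \rvert \le \tfrac{1}{12}.
```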
Throughout this paper, we fix an algebraic closure Q̄ of Q, which is assumed to be a subfield of C. Every number field used in this paper is presumed to be a subfield of Q̄.
For a number field K, we denote by M_K the set of all valuations (or places) of K extending the standard infinite and p-adic valuations of Q, normalized so that |2|_v = 2 if v ∈ M_K is infinite, and |p|_v = p⁻¹ if v extends the p-adic valuation of Q. We denote by M_K^∞ and M_K^0 the subsets of M_K consisting of the infinite (Archimedean) and the finite (non-Archimedean) valuations, respectively. Given a number field K of degree d, for any v ∈ M_K, K_v is the completion of K with respect to v, and K̄_v is its algebraic closure; we still denote by v the unique extension of v to K̄_v. For a number field K of degree d, we use the absolute logarithmic height of an algebraic number. Throughout the paper the symbol ≪ implies an absolute effective constant. We also use the notation O_v(·).
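The display defining the height did not survive extraction. What follows is the standard absolute logarithmic (Weil) height under the normalization fixed above; it is a restatement of the usual definition, not a reconstruction of the authors' exact wording.

```latex
h(\alpha) \;=\; \frac{1}{d} \sum_{v \in M_K} d_v \log^{+} \lvert \alpha \rvert_v,
\qquad d_v = [K_v : \mathbb{Q}_v], \quad \log^{+} x = \log \max\{1, x\},
```

where α ∈ K and d = [K : Q]; the value does not depend on the choice of the number field K containing α.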
Preparations
In this section, we assume that N ≥ 2.
3.1. Siegel functions. Let a = (a_1, a_2) ∈ Q² be such that a ∉ Z², and let g_a : H → C be the corresponding Siegel function; see [20, Section 2.1]. We have an infinite product presentation for g_a; see [11, Formula (7)]. For the elementary properties of g_a, see [20]. In particular, the order of vanishing of g_a at i∞ (i.e., the only rational number ℓ such that the limit lim_{τ→i∞} q_τ^{−ℓ} g_a exists and is non-zero) is equal to ℓ_a. For a number field K and v ∈ M_K, we define g_a(q) as above, where q ∈ K̄_v satisfies |q|_v < 1. Notice that here we should fix q^{1/(12N²)} ∈ K̄_v; then everything is well defined.
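The infinite product presentation referred to above was lost in extraction. For orientation, here is the Kubert–Lang product expansion of the Siegel function in the normalization of [20]; the reader should check it against [11, Formula (7)], since conventions for the leading root of unity vary.

```latex
g_a(\tau) \;=\; -\,q_\tau^{\,B_2(a_1)/2}\, e^{\pi i a_2 (a_1 - 1)}\,(1 - q_z)
\prod_{n=1}^{\infty} \bigl(1 - q_\tau^{\,n} q_z\bigr)\bigl(1 - q_\tau^{\,n} q_z^{-1}\bigr),
\qquad q_z = e^{2\pi i (a_1 \tau + a_2)}.
```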
Given two positive integers k and ℓ, we denote by P_k the set of partitions of k into positive summands, and we let p_ℓ(k) be the number of partitions of k into exactly ℓ positive summands. By [3, Theorem 14.5], we easily get |P_k| < e^{k/2} for k ≥ 64. Then, according to tables of partitions or computer calculation, we obtain |P_k| < e^{k/2} for all k ≥ 1.
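The claim |P_k| < e^{k/2} for all k ≥ 1 is easy to confirm by machine; the following short Python check (our addition, not part of the source) computes p(k) by the standard dynamic-programming recurrence and tests the inequality up to k = 64, beyond which [3, Theorem 14.5] applies.

```python
# Verify |P_k| = p(k) < e^{k/2} for 1 <= k <= 64 (the range not covered by the
# asymptotic bound). p(k) is computed by the classic coin-change style DP.
import math

def partition_counts(kmax):
    p = [1] + [0] * kmax          # p[0] = 1 by convention
    for part in range(1, kmax + 1):
        for k in range(part, kmax + 1):
            p[k] += p[k - part]   # allow one more copy of summand `part`
    return p

p = partition_counts(64)
assert all(p[k] < math.exp(k / 2) for k in range(1, 65))
print(p[10])  # 42, comfortably below e^5 ≈ 148.4
```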
Suppose that the q-expansion of g_a has coefficients φ_a(k), so that φ_a(k) is the coefficient of q^{k/N}. Notice that φ_a(k) equals the coefficient of q^k in the expansion of a finite product whose factors are of the shape (1 − q^n e^{2πia_2}) and (1 − q^n e^{−2πia_2}).
If S 1 and S 2 are both empty, then the coefficient φ a (k) = 0.
We say ℓ ∈ S_ak^1 if and only if there exist ℓ positive integers in S_1 whose sum equals k, and we let m_ℓ count the number of different ways; the definitions of S_ak^2 and m'_ℓ are similar. We say ℓ ∈ S_ak^3 if and only if there exist ℓ_1 positive integers in S_1 and ℓ_2 positive integers in S_2 whose total sum equals k; in this case (ℓ_1, ℓ_2) ∈ T_ak^ℓ, and we let m_{ℓ_1 ℓ_2} count the number of different ways. Then the desired expression of φ_a(k) follows easily from the definitions. For each element x ∈ P_k, let m_x be the number of times x appears in the expansion of (3.1). Then we obtain the stated bound; if ⌊k/N⌋ ≤ 2, one can verify the inequality by explicit computation.
Modular units on X(N ).
Recall that by a modular unit on a modular curve we mean a rational function having poles and zeros only at the cusps.
For a ∈ (N⁻¹Z/Z)², we denote g_a^{12N} by u_a, which is a modular unit on X(N). Moreover, u_a = u_{a′} when a ≡ a′ mod Z²; hence u_a is well-defined for a a non-zero element of the abelian group (N⁻¹Z/Z)². Moreover, u_a is integral over Z[j]. For more details, see [11, Section 4.2].
Furthermore, the Galois action on the set {u_a} is compatible with the right linear action of GL_2(Z/NZ) on it. That is, for any σ ∈ Gal(Q(X(N))/Q(j)) = GL_2(Z/NZ)/±1 and any a ∈ (N⁻¹Z/Z)², we have u_a^σ = u_{aσ}. Here we borrow a result and its proof from [4] for subsequent applications and for the convenience of the reader.
Proof. We denote by u the left-hand side of the equality. Since the set A_N is stable with respect to GL_2(Z/NZ), u is stable with respect to the Galois action over the field Q(X(1)) = Q(j). So u ∈ Q(j). Moreover, since u is integral over Z[j], we have u ∈ Z[j]. Notice that X(1) has only one cusp and u has no zeros and poles outside the cusps, so u must be a constant, and u ∈ Z.
Furthermore, put G_1 = G ∩ SL_2(Z/NZ) and let X_{G_1} be the modular curve corresponding to G_1. In this subsection, we assume that X_{G_1} is defined over a number field K; then X_G is also defined over K. Since X_G and X_{G_1} have the same geometrically integral model, every K-rational point of X_G is also a K-rational point of X_{G_1}.
For each cusp c of X_{G_1}, let t_c be its local parameter constructed in [11, Section 3]. Put q_c = t_c^{e_c}, where e_c is the ramification index of the natural covering X_{G_1} → X(1) at c; notice that e_c | N. Furthermore, for any v ∈ M_K, let Ω_{c,v} be the set constructed in [11, Section 3] on which t_c and q_c are defined and analytic. Here we quote [11, Proposition 3.1] as follows, with equality for non-Archimedean v, where the union runs through all the cusps of X_{G_1}; moreover, for P ∈ Ω_{c,v} we have the corresponding estimate relating |j(P)|_v and |q_c(P)|_v. We will use the above proposition several times without special reference. Moreover, this proposition implies that for every P ∈ X_{G_1}(K_v)^+ there exists a cusp c such that P ∈ Ω_{c,v}; we call c a v-nearby cusp of P.
We get directly the following corollary from Proposition 3.1.
where ℓ is some prime factor of N .
3.4. Modular units on X_{G_1}. We use the notation of the previous subsection. We denote by M_N the set of elements of exact order N in (Z/NZ)². Consider the natural right group action of G_1 on M_N. Following the proof of [10, Lemma 2.3], we see that the number of orbits of M_N/G_1 is equal to ν_∞(G). Obviously, when we consider the natural right group action A_N/G_1, there are also ν_∞(G) orbits. For any subset T of A_N, we define u_T = ∏_{a∈T} u_a. Let O be an orbit of the right group action A_N/G_1. By [11, Proposition 4.2 (ii)], u_O is a rational function on the modular curve X_{G_1}; in fact, u_O is a modular unit on X_{G_1}. For any cusp c, we denote by Ord_c(u_O) the vanishing order of u_O at c. For v ∈ M_K, a quantity ρ_v is also defined, which enters the estimates below. Then u_O has the following properties.
(ii) For the cusp c_∞ at infinity, and more generally for any cusp c, the vanishing orders Ord_c(u_O) are given explicitly. (iv) Let c be a cusp of X_{G_1} and v ∈ M_K; for P ∈ Ω_{c,v}, we have the corresponding local estimate. Finally, [20] tells us that this rank is the maximal possible.
Siegel's theory of convenient units
We recall here Siegel's construction [25] of convenient units in a number field K of degree d, in a form adapted to the needs of the present paper. The results of this section are well known, but not always in the set-up we need.
Let S be a finite set of absolute values of K, containing all the Archimedean valuations and normalized with respect to Q. Fix a valuation v_0 ∈ S and put r = |S| − 1. Let ξ_1, …, ξ_r be a fundamental system of S-units. The S-regulator R(S) is the absolute value of the determinant of the r × r matrix formed by the logarithms of the normalized absolute values of the ξ_j at the places of S ∖ {v_0}. It is well-defined and is equal to the usual regulator R_K when S is the set of infinite places.
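The matrix itself was lost in extraction; for reference, the usual definition of the S-regulator reads as follows (a standard restatement, to be checked against [25] and [14]):

```latex
R(S) \;=\; \Bigl|\det\bigl(\log \lVert \xi_j \rVert_{v_i}\bigr)_{1 \le i, j \le r}\Bigr|,
\qquad \lVert x \rVert_v = \lvert x \rvert_v^{[K_v : \mathbb{Q}_v]},
```

where S ∖ {v_0} = {v_1, …, v_r}.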
Proposition 4.1. There exists a fundamental system of S-units η_1, …, η_r satisfying explicit height bounds; furthermore, the entries of the inverse of the matrix (4.1) are bounded in absolute value by r^{2r}ζ.
Proof. See [14, Lemma 1]. Notice that the left-hand inequality in the second bound is a well-known result of Dobrowolski [15].
For the unit η = η_1^{b_1} ⋯ η_r^{b_r}, where η_1, …, η_r are from Proposition 4.1 and b_1, …, b_r ∈ Z, put B* = max{|b_1|, …, |b_r|}; then we have the following comparison between B* and h(η). Proof. The first inequality follows from Proposition 4.1 and standard height estimates. Writing log|η|_v in terms of the log|η_j|_v, resolving this system for b_1, …, b_r, and using the final statement of Proposition 4.1, we obtain the second inequality. This proves the corollary.
Finally, we quote two estimates of the S-regulator in terms of the usual regulator R K , the class number h K , the degree d and the discriminant D of the field K.
For the first inequality, see [14, Lemma 3]; one may remark that the lower bound R(S) ≥ 0.1 follows from Friedman's famous lower bound R_K ≥ 0.2 for the usual regulator [16]. The second inequality follows from Siegel's estimate [25, Satz 1]; in fact, an explicit bound for h_K R_K is given there.
Baker's inequality
In this section we state Baker's inequality, the main technical tool of the proof. It is actually an adaptation of a result in [1]. For the convenience of the reader, we also quote its proof with slight changes.
Notice that Matveev assumes (in our notation) condition (5.1), with some choice of the complex value of the logarithm. However, if we pick the principal value of the logarithm, then |log θ_k| ≤ |log|θ_k|| + π ≤ d h(θ_k) + π ≤ (1 + π)Θ_k.
Hence we may disregard (5.1) at the cost of increasing the absolute constant C in the definition of Υ.
In the case of non-archimedean v we employ the result of Yu [28]. Precisely, we use the second consequence of his "Main Theorem" on page 190 (see the bottom of page 190 and the top of page 191), which asserts that, assuming (1.19) of [28], but without assuming (1.5) and (1.15), the first displayed equation on the top of page 191 of [28] holds.
Remark 5.2. We choose the form of Baker's inequality in Theorem 5.1 because of its convenience for our computations; it is effective but not explicit. If one wants an explicit bound for h(P), one can apply Matveev [22] and Yu [28] respectively, as in [18]; one can also apply [6, Theorem C] to handle the Archimedean and non-Archimedean cases uniformly.
The case of mixed level
In this section, we assume that N has at least two distinct prime factors, and we apply Baker's inequality to prove Theorems 1.1 and 1.2.
In the sequel, we assume that P is an S_0-integral point of X_G and ν_∞(G) ≥ 3; our goal is to obtain bounds for h(P).
From now on we let K = K 0 · Q(ζ N ) = K 0 (ζ N ). Let S be the set consisting of the extensions of the places from S 0 to K, i.e.
Then P is also an S-integral point of X G .
Put d = [K : Q], s = |S| and r = s − 1. Since j(P) ∈ O_S, we have h(P) ≤ Σ_{v∈S} log⁺|j(P)|_v; hence there exists some w ∈ S such that h(P) ≤ s log|j(P)|_w.
We fix this valuation w from now on; therefore, we only need to bound log|j(P)|_w. As discussed in Subsection 3.3, P is also an S-integral point of X_{G_1}, so for our purposes we only need to focus on the modular curve X_{G_1}.
We partition the set S into three pairwise disjoint subsets S_1, S_2, S_3. From now on, for v ∈ S_1, let c_v be a v-nearby cusp of P, and write q_v for q_{c_v} and e_v for e_{c_v}. Notice that every v ∈ S_3 is non-Archimedean with |j(P)|_v ≤ 1.
In the sequel we may assume that |j(P)|_w > 3500; otherwise we get a better bound than those given in Section 1. Then w ∈ S_1 and P ∈ Ω_{c_w,w}; therefore, by (3.2) we only need to bound log|q_w(P)^{−1}|_w.
From now on we assume that |q_w(P)|_w ≤ 10^{−N}. Indeed, by (3.2), the inequality |q_w(P)|_w > 10^{−N} yields h(P) < 3sN, which is a much better estimate for h(P) than those given in Section 1.
Notice that under our assumptions N ≥ 2. Moreover, in this section we assume that s ≥ 2; indeed, if s = 1, we can add another valuation to S so that s = 2, and the final results of this section still hold.
6.1. Preparation for Baker's inequality. We fix an orbit O of the group action A_N/G_1 and put U = u_O. If some orbit has non-zero order at c_w, we choose O such that Ord_{c_w} U < 0, which is possible in view of Proposition 3.2. Noticing that ν_∞(G) ≥ 3 and combining this with Proposition 3.6 (vi), we can choose another orbit O′ such that U and V = u_{O′} are multiplicatively independent modulo constants. We then define a function W as follows: W = U if Ord_{c_w} U = 0, and W = U^{Ord_{c_w} V} V^{−Ord_{c_w} U} otherwise. So we always have Ord_{c_w} W = 0 and W(P) ∈ O_S; in particular, W is integral over Z[j]. Moreover, W is not a constant by Proposition 3.6 (vi). By Proposition 3.6 (ii) and (iii), the associated constant γ_w satisfies h(γ_w) ≤ 24N⁷ log 2. By Proposition 3.2, we know that W(P) is a unit of O_S. So there exist integers b_1, …, b_r ∈ Z such that W(P) = ω η_1^{b_1} ⋯ η_r^{b_r}, where ω is a root of unity and η_1, …, η_r are from Proposition 4.1. Let η_0 = ωγ_w^{−1}. Then we set (6.2) Λ = γ_w^{−1} W(P) = η_0 η_1^{b_1} ⋯ η_r^{b_r}. Notice that η_0, …, η_r ∈ K. For subsequent deductions, we need to bound h(W(P)). Proposition 6.1. We have h(W(P)) ≤ 2sN⁸ log|q_w(P)^{−1}|_w + 94sN⁸ log N. Proof. First suppose that Ord_{c_w} U = 0; then W = U. For v ∈ S_3, j(P) is a v-adic integer, and hence so is W(P). Notice that for v ∈ S_1 we have |Ord_{c_v}(W)| ≤ N⁴. Applying Proposition 3.6 (iv) and (3.2), and then Proposition 3.6 (v), we obtain the bound in this case. Now suppose that Ord_{c_w} U ≠ 0. For any v ∈ S_1 we note that |Ord_{c_v}(W)| ≤ 2N⁸, and for any v ∈ M_K^∞ we have |log|W(P)|_v| ≤ 2N⁷ log(|j(P)|_v + 2400) + 2N⁴ ρ_v.
Applying the same argument as above, we obtain h(W(P)) ≤ 2sN⁸ log|q_w(P)^{−1}|_w + 2sN⁸ log 3 + 72N⁷ log N + 2N⁷ log 5900. From this it is easy to get the desired result.
6.2. Using Baker's inequality. If Λ = 1, we can get better bounds for h(P) than those given in Section 1; see Section 8. So in the rest of this section we assume that Λ ≠ 1.
By Theorem 5.1, there exists an absolute constant C, which can be determined explicitly, such that the following holds: choosing B ≥ B* and B ≥ max{3, Θ_1, …, Θ_r}, we obtain the corresponding Baker-type estimate. Recall that p was defined in Section 1.
To get a bound for h(P ), we only need to calculate the quantities in the above inequality.
6.3. Proof of Theorem 1.1. Under the assumptions of Theorem 1.1, we have K = Q(ζ_N) and S = M_K^∞. Since we have assumed that s ≥ 2, we have ϕ(N) ≥ 4. Then |D| ≤ N^{ϕ(N)} according to [27, Proposition 2.7]. Combining this with Proposition 4.3 and applying (6.6), we obtain the bound of Theorem 1.1. Turning to Theorem 1.2, we first use Proposition 4.3 to estimate R(S): since N_{K/Q}(v) ≤ p^{[K:Q]} = p^d, this implies the upper bound (6.7) log R(S) ≪ (1/2) log|D| + d log log|D| + s log(dp).
Let D_{K/K_0} be the relative discriminant of K/K_0, and denote by O_{K_0} and O_K the rings of integers of K_0 and K, respectively. By [17, III (2.20) (b)], and noting that the absolute value of the discriminant of the polynomial x^N − 1 is N^N, we obtain the corresponding discriminant bound. Now let v_0 be a non-Archimedean place of K_0, and let v_1, …, v_m be all its extensions to K, with residue degrees f_1, …, f_m over K_0. Then f_1 + ⋯ + f_m ≤ [K : K_0] ≤ ϕ(N), which implies f_1 ⋯ f_m ≤ 2^{ϕ(N)}, since every positive integer f satisfies f ≤ 2^{f−1}.
Finally, using (6.6) and noticing that d_0 ≤ 2s_0, we obtain the desired bound with the constant C suitably modified. Therefore, Theorem 1.2 is proved.
The case of prime power level
In this section, we assume that N is a prime power. As in Section 6, we can define a similar function W, but in this case W(P) is not a unit of O_S by Proposition 3.2, so we need to raise the level. Let M be the auxiliary level mentioned in Section 1. Notice that X_G is also a modular curve of level M with ν_∞(G) ≥ 3, since there is a natural sequence of morphisms relating X(M), X(N), and X_G. Since Gal(Q(X(M))/Q(j)) = GL_2(Z/MZ)/±1, the curve X_G corresponds to a subgroup of GL_2(Z/MZ) containing ±1 whose restriction to X(N) is G. Viewed at level M, the modular curve has the same integral geometric model as X_G; in particular, P is also an S_0-integral point. Therefore, from Theorems 1.1 and 1.2, we get two upper bounds for h(P) by replacing N by M, which proves Theorem 1.3.
The case Λ = 1
In this section, we suppose, without loss of generality, that N is not a prime power. Under the assumption Λ = 1, we can obtain better bounds for h(P) than those given in Section 1.
Let c be a cusp of X_{G_1} and v ∈ M_K; we also denote by v its unique extension to K̄_v. Recall the set Ω_{c,v} and the q-parameter q_c from Section 3.3. For the modular function U defined in Section 6.1, we have the following lemma.
Lemma 8.1. There exist an integer-valued function f(·) of q_c and elements λ_1^c, λ_2^c, λ_3^c, … ∈ Q(ζ_N) such that the stated identity holds in the v-adic sense. In particular, for every k ≥ 1 we have h(λ_k^c) ≤ log(24N³ + 24kN²) + log k. Proof. By definition, U admits a product expansion, where by convention f(q_c) is identically 0 if v is finite. Applying the Taylor expansion of the logarithm to the right-hand side of this expansion, we get the desired formula.
An immediate verification shows that the stated bound on the coefficients holds if v is infinite. The same estimates hold for the coefficients of the q-series for log(1 − q_c^{n+1−a_1} e^{−2πia_2}). For each a ∈ O, the number of coefficients in the q-series for log(1 − q_c^{n+a_1} e^{2πia_2}) which may contribute to λ_k^c (those with 0 ≤ n ≤ k/N) is at most k/N + 1, and the same is true for the q-series for log(1 − q_c^{n+1−a_1} e^{−2πia_2}). The bound for |λ_k^c|_v now follows by summation. Proof. Let n be the smallest k such that λ_k^c ≠ 0; then n ≤ N⁶. We may assume that |q_c(P)|_v ≤ 10^{−N}, otherwise there is nothing to prove. Since Ord_c U = 0 and U(P) = γ_{O,c}, it follows from Lemma 8.1 that the corresponding identity holds, with the term 2πf(q_c(P))i appearing when v is infinite. Then we get log|q_c(P)^{−1}|_v ≤ N log(48N²(N⁶ + N)).
Now assume that Ord_{c_w} U ≠ 0. Then W = U^{Ord_{c_w} V} V^{−Ord_{c_w} U} with Ord_{c_w} W = 0, and Proposition 3.6 (vi) guarantees that W is not a constant. Applying the same method as above, we can again get a better bound than in Theorems 1.1 and 1.2; we omit the details here.
In conclusion, if Λ = 1, we obtain polynomial bounds for h(P) in terms of s_0 and N, which are obviously better than those in Theorems 1.1–1.3.
Nanohybrid of Thymol and 2D Simonkolleite Enhances Inhibition of Bacterial Growth, Biofilm Formation, and Free Radicals
Due to the current concerns about opportunistic pathogens and the worldwide challenge of antimicrobial resistance, alternatives to control pathogen growth are required. In this sense, this work offers a new nanohybrid composed of a zinc layered hydroxide salt (Simonkolleite) and thymol for preventing bacterial growth. The materials were characterized by X-ray diffraction, FTIR and UV–Vis spectroscopy, SEM microscopy, and dynamic light scattering. It was confirmed that the Simonkolleite structure was obtained and that thymol was adsorbed on the hydroxide in a web-like manner, with a concentration of 0.863 mg thymol/mg of ZnLHS. Adsorption kinetics were described with non-linear models, and a pseudo-second-order equation was the best fit. The antibacterial test was conducted against Escherichia coli O157:H7 and Staphylococcus aureus strains, producing inhibition halos of 21 and 24 mm, respectively, with a 10 mg/mL solution of thymol–ZnLHS. Moreover, inhibition of Pseudomonas aeruginosa biofilm formation was tested, with over 90% inhibition. The nanohybrids exhibited antioxidant activity in ABTS and DPPH evaluations, confirming the presence of the biomolecule in the inorganic matrix. These results can be used to develop a thymol protection vehicle for applications in the food, pharmaceutical, odontology, or biomedical industries.
Introduction
Over the years, bacteria have developed resistance to several antibiotics, creating a severe problem for health sectors [1]. For this reason, the consumption of natural products that possess bioactive compounds has been recommended, such as plant extracts or their essential oils [2]. Plant extracts have been studied for their natural antioxidant, antimicrobial, anti-inflammatory, and antiseptic activities, as well as for being precursors in the synthesis of pharmaceutical products [3,4].
The antibacterial and antioxidant activities of plant extracts can be attributed to phenolic compounds. Among these, terpenoids present in plants of the Thymus genus, including carvacrol and thymol, interact with the metabolic processes of bacteria [5]. Thymol is classified as generally recognized as safe (GRAS) and has been used in different food and pharmaceutical formulations [6] due to its antioxidant, antifungal, and antibacterial activities [7]. It has been proved that thymol at concentrations below 100 mg/mL exhibits antibacterial activity against both Gram-negative and Gram-positive bacteria.
After intercalation, the typical signals remained with reduced intensity, suggesting that thymol was not intercalated inside the interlaminar space but could be on the surface of the laminar compound [22]. To conclude that the biomolecule had been intercalated inside the layered compound, the first peak would have to shift to smaller values of the 2θ angle, and the interlaminar space could then be calculated with Bragg's Law [16]. Nevertheless, thymol crystallinity (diffractogram not shown) was diminished in thymol-ZnLHS, suggesting a strong interaction with the surroundings of Simonkolleite. A similar study was conducted on thymol nanoencapsulation, in which the XRD pattern of thymol disappeared in the nano-encapsulated form, suggesting the prevention of crystallinity; this immobilization possibly promotes more reactive sites or confers beneficial properties for food products [23].

The concentration of thymol adsorbed on the ZnLHS was determined by measuring the absorbance of the intercalation solution. A standard curve was first prepared to correlate the absorbance of thymol with its concentration. The equation from Bouaziz et al. [24] was applied, and the adsorption of thymol was estimated (Figure 2).

The adsorption phase was constant, with linear behavior (R² = 0.989) until 1.5 h; afterward, the absorbance remained constant, with a concentration of 0.863 mg thymol/mg ZnLHS. Thymol has previously been adsorbed in an Mg-Al-CO₃ LDH matrix with a maximum concentration of 8 mg/g in 4 h [21], while in the hydroxyiron clays kaolinite and montmorillonite, adsorption reached values of 0.391 and 1.125 mg/mg, respectively, in 10 days [25]. Compared to the LDH matrix, the amount of thymol adsorbed in this study was higher, probably because the less complex structure of the LHS favors the interaction [16]. Moreover, it has been reported that adsorption is time-dependent [21]; considering the thymol concentrations in the clays and in the ZnLHS, the values are not too different for their corresponding adsorption times, so the ZnLHS proved able to retain almost the same amount of biomolecule in less time.

Furthermore, the kinetic adsorption models are depicted in Figure 2, and the parameters R², RMSE, ARE, χ², AIC, and BIC are shown in Table 1. The pseudo-second-order model proved to be the best fit, with R² = 0.989 and the lowest values of the predictors, followed by Elovich > pseudo-first-order > intra-particle diffusion. The pseudo-second-order model indicates that a chemisorption phenomenon was involved during the adsorption process, in which valence forces act through the exchange of electrons. Such processes have previously been described for polyphenol adsorption systems in the biomass of Chlorella vulgaris [26], in the adsorption of polyphenols in microporous starch [20], and in the adsorption of polyphenols in roasted hazelnut skin [27], demonstrating a strong correlation.
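To make the model-fitting step concrete, the sketch below fits the integrated pseudo-second-order form q_t = k q_e² t / (1 + k q_e t) with SciPy. The data points are hypothetical placeholders shaped like Figure 2, not the measured values.

```python
# Illustrative fit of the pseudo-second-order adsorption model (synthetic data,
# not the measurements behind Figure 2): q_t = k*qe^2*t / (1 + k*qe*t).
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    return k * qe**2 * t / (1.0 + k * qe * t)

t = np.array([0.25, 0.5, 0.75, 1.0, 1.5, 2.0])      # time, h (hypothetical)
q = np.array([0.30, 0.52, 0.68, 0.78, 0.85, 0.86])  # mg thymol / mg ZnLHS

(qe, k), _ = curve_fit(pso, t, q, p0=[0.9, 1.0])
resid = q - pso(t, qe, k)
r2 = 1 - np.sum(resid**2) / np.sum((q - q.mean())**2)
print(f"qe = {qe:.3f} mg/mg, k = {k:.3f}, R2 = {r2:.3f}")
```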
The liberation of thymol is depicted in Figure 3. In the first hour, the thymol liberation rate was higher than during the rest of the test; this can be attributed to rapidly reaching an equilibrium concentration of biomolecule adsorbed on the hydroxide surface. The thymol concentration kept increasing and decreasing over time, but always within a range of 0.800–1.02 mg/mL, suggesting that the zinc hydroxide kept the concentration at equilibrium. This phenomenon may be helpful in controlling bacterial growth on surfaces, since the bioactive compound would not be consumed only at the beginning but would constantly return to the inorganic matrix until the thymol concentration falls out of equilibrium. In the study conducted by Guarda et al. [28], the liberation of thymol from microencapsulates was evaluated; the thymol concentration was almost constant over a 28-day test, and the maximum biomolecule concentration was liberated on the first day.

FTIR spectra for ZnLHS, thymol, and thymol-ZnLHS are depicted in Figure 4. In the thymol spectrum (4a), characteristic signals can be found at 2964, 2868, 1285, and 1233 cm⁻¹, assigned to C=C stretching, -OH bending, and C-O stretching of phenolic compounds [29]. The ZnLHS FTIR spectrum shows O-H vibrations in the regions of 3500–3000 and 1630–1250 cm⁻¹, and the signal below 800 cm⁻¹ corresponds to Zn-O bonds [16].
These signals appear in the thymol-ZnLHS spectrum (4c) with smaller intensities, confirming the formation of the hybrid. Moreover, this phenomenon is found in other studies in which thymol was incorporated into a chitosan hydrogel [29,30]. Another important contribution can be identified around 3376 cm⁻¹, where the OH⁻ radical signal present in the laminar compound (Figure 4b) is reduced in the hybrid, possibly due to the interaction with thymol [31].
Furthermore, small signals at ca. 2868, 1285, and 1233 cm⁻¹ (wavenumbers with arrows), which are characteristic of the thymol spectrum, appear in the hybrid material, as reported in many studies [29,30]. According to Koosehgol et al. [32], even though the intensity of these signals is not high, such small contributions can be taken as an indicator of the molecule's presence, as found in the IR spectra of a chitosan-thymol hydrogel. Remarkably, a band located at 1640 cm⁻¹ (blue line) in ZnLHS (spectrum b) is assigned to the δ(OH) vibrational mode of surface-bound water on the layered material. Spectrum c loses this vibrational mode, probably through hydrogen-bonding or "ordered hydrogen bond" interactions with thymol. Namely, some studies showed that the hydrogen bond extends to the sides of hydrophobic solutes and can be ordered as a network that, as far as is known, can be cooperative and of electrostatic nature; thus, interactions could be reached with neighboring OH groups [33,34]. Moreover, the signal centered at 1495 cm⁻¹ (spectrum b), associated with the stretching of the C-C bonds of the aromatic rings, was slightly shifted to 1504 cm⁻¹ in spectrum c; this probably reflects the interaction of thymol with the ZnLHS matrix (yellow vertical line on spectrum c). To elucidate further interactions, a second derivative of the region between 1500 and 600 cm⁻¹ for ZnLHS and thymol-ZnLHS was obtained (Figure 4b); this spectrum clearly supports the explanations mentioned above, and in addition, an unresolved band centered at 1390 cm⁻¹ in Figure 4a (violet vertical line) can be appreciated and resolved by the second derivative (p < 0.05), indicating the reduction of the coordinated OH bond [35]. Using the second-derivative criterion proved significantly helpful in resolving weak and overlapping bands in the original spectra [36].

In the Raman spectra depicted in Figure 5a, the thymol contributions are shown; specifically, a band toward 740 cm⁻¹ was previously reported for the aromatic ring of thymol (SpectraBase™, Wiley & Sons, 2022). On the other hand, in spectrum 5b, the signals for ZnLHS confirm the structure of Simonkolleite found in the RRUFF mineral database at 780 nm (ID R130117). The signals positioned at 212 and 391 cm⁻¹ belong to Zn-O vibrations, while 255 cm⁻¹ is assigned to the Zn-Cl vibration; moreover, the signal around 1050 cm⁻¹ is attributed to intercalated anions [37]. In the thymol-ZnLHS spectrum (Figure 5c), blue arrows mark positions related to thymol, thus confirming the presence of the terpene in the hybrid structure. Even the signals around 1058 cm⁻¹ in spectrum a (blue line) and 1050 cm⁻¹ in spectrum b (dotted line) are slightly displaced, suggesting a joint vibration in thymol-ZnLHS. Interestingly, the signal toward 255 cm⁻¹ for thymol-ZnLHS diminishes in intensity compared to ZnLHS, pointing to the vibration of the chloride anion present in the interlamellar structure. Furthermore, the 1053 cm⁻¹ region belonging to the thymol aromatic ring in spectrum c shows a slight displacement, a behavior previously reported for hydrophobic interactions [37]. These facts support the findings from the IR spectra.

Thermograms of ZnLHS and thymol-ZnLHS are depicted in Figure 6. For the layered hydroxide, similar behaviors have been reported previously [16,19].
On the other hand, thymol-ZnLHS exhibits a first event around 150 °C, where about 10% of the mass is lost; this could be related to the thymol surrounding the hydroxide, since this terpene has been reported to lose mass drastically between 150 and 200 °C [38]. The next event comes near 400 °C, where the interlaminar Cl⁻ and OH⁻ anions degrade, and finally there is total oxidation to ZnO at temperatures above 500 °C [16].

Therefore, the XRD, FTIR, Raman, and TGA techniques suggest that the thymol molecule was adsorbed. Interestingly, however, signals located at 1153, 832, and 709 cm⁻¹ (green vertical lines in Figure 4a,b) point to a Cl⁻ ion vibration, the counterion present in the typical Simonkolleite interlamellar space. The 709 cm⁻¹ signal is diminished in the nanohybrid spectrum, which could suggest possible partial intercalation [39]. In this type of reaction, chlorine reacts with phenol or with compounds containing phenolic groups such as thymol, which would also support the displacement of the C-C aromatic ring band [40].

Figure 4. Fourier-transform infrared spectroscopy: (a) IR spectra of thymol and zinc layered hydroxide with and without thymol and (b) second-derivative spectra of the analyzed samples in the 1500–600 cm⁻¹ region; significant peaks were considered at p < 0.05 and are represented by *.

The insertion or removal of water molecules causes changes in the electronic structure (something not so familiar in 2D-type structures), opening possibilities in different areas of knowledge through its photoelectronic properties [41]. A recent publication by Baig et al. [15] found a reduced bandgap (1.8 eV) attributed to the antibacterial action of pristine LHS through the generation of ROS species. To observe this phenomenon, we calculated the bandgap of the Simonkolleite studied here by Tauc's relation, using a UV–Vis spectrophotometer (Optizen Pop, K LAB). The bandgap analysis (Figure 7a) revealed a small value (2.27 eV), slightly higher than that reported by those authors. Reduced bandgap values are related to the presence of chloride ions in the zinc matrix, producing a heterojunction between the valence band and the conduction band, which confers potential photocatalytic properties in visible wavelength ranges [15].
The zeta potential (ζ) influences the stability of the particles through electrostatic repulsion. For stable dispersions, values must be greater than ±30 mV; dispersions above this value are less sensitive to agglomeration or destabilization caused by van der Waals forces or Brownian motion [42]. The ζ-potential analysis showed that the laminar compounds ZnLHS and thymol-ZnLHS exhibited a negative potential of −4.93 ± 0.14 mV and a positive potential of +29.20 ± 0.90 mV, respectively (Figure 7b). It is known that negative zeta potential values are associated with the accumulation of positive charges surrounding the nanomaterial, giving it a negative nature [43]. In other studies, this increase in ζ-potential has been demonstrated to be an effect of thymol in silica carrier agents [44]. A possible explanation could be associated with the hydrophobicity of both materials maintaining electrostatic repulsion in the medium. Moreover, this phenomenon was correlated in a study conducted by Mattos et al. [45], in which uncharged thymol added to a silica matrix showed positive and negative electrostatic potential, shifting values toward zero in a pH-dependent manner. The mean particle size was reduced after intercalation from 589.80 ± 18 nm to 141.05 ± 13.85 nm, and the polydispersity index was also diminished, from 0.58 ± 0.09 to 0.33 ± 0.05. This decrease improves the size and morphology distribution of the hybrids. Several methods of synthesizing nanoclays lack homogeneity in morphology, which is a desirable property for applications in the food and pharmaceutical industries [46,47].
Micrographs depicted in Figure 7c exhibit the typical hexagonal morphology (blue arrows) of layered compounds, with at least one dimension in the nano-range [48]. In the hybrid (Figure 7d), the structure remained but the size decreased, and the organic part surrounding the particle (yellow circles) in some cases formed networks of ZnLHS and biomolecule. Similar results were obtained from the interaction of glucans with a zinc hydroxychloride [16]. Furthermore, a comparable decrease in particle size was reported by Gutiérrez-Gutiérrez et al. [22] when curcumin was loaded into layered compounds to avoid its agglomeration, and the ζ-potential values supported this fact.
Antioxidant Activity
The DPPH and ABTS tests were carried out to determine the antioxidant activity of the thymol-ZnLHS hybrid (Figure 8a,b). The results demonstrated that thymol-ZnLHS exerts slightly more ABTS activity than ZnLHS alone, thus confirming the presence of the biomolecule in the hybrid. In the DPPH assay, thymol-ZnLHS exhibited higher activity than thymol and ZnLHS at low concentrations, but the effect was inverted at high concentrations. A study performed by Rúa et al. [2] found that high concentrations of other compounds, such as carvacrol, in extract solutions inhibit the antioxidant activity of thymol. Since the ZnLHS exhibited antioxidant activity by itself, this capacity to generate ROS species, related to the reduced bandgap [15], could have reduced the activity of thymol compared to the control. According to Deng et al. [49], the solubilization of thymol could be increased with different solvents, but the antioxidant effect could be lost; moreover, if incorporated into a food matrix, the flavor could be altered.
Antibacterial Activity
Inhibition halos were measured after incubation and are shown in Table 2. The diameters were higher for S. aureus than for E. coli O157:H7. Thymol exhibits antibacterial activity against both Gram-negative and Gram-positive bacteria. Possibly, the higher surface charge (zeta potential) increases the cell–nanoparticle interaction, leading to a better transfer of thymol; this allowed the hybrid to produce larger inhibition halos than the biomolecule alone [50,51]. Xu et al. [52] determined the inhibitory concentration of thymol, with values around 200 mg/mL for E. coli; in the present study, inhibition was observed at lower concentrations. Palygorskite functionalized with thymol achieved better antimicrobial properties against S. aureus than thymol alone, since the improved hydrophilic character of the composite promotes the transport of the monoterpene in clays [53]. Likewise, in another study, the antimicrobial action was verified in a system composed of clinoptilolite-zeolite clays loaded with thymol or carvacrol, which showed greater antimicrobial properties against E. coli and S. aureus; inhibition was attributed not only to the release of monoterpenes but also to the new properties, such as hydrophilicity, introduced in the hybrids [54]. Moreover, the stability of nanomaterials as colloids is related to high (negatively or positively charged) ζ-potential values, which stabilize the dispersion and thus improve bactericidal efficacy [55]. In addition, the reduced bandgap value may confer on these materials the capacity to kill bacteria through ROS release, acting synergistically with the ζ-potential; all of these findings align with the work published by Baig et al. [15].
Inhibition of Biofilm Formation
According to several authors, pathogenic biofilms are relevant to health since they can lead to severe illness or even death [51]. The percentage of inhibition of biofilm formation is shown in Figure 9.
Inhibition for ZnLHS at 5 and 10 mg/mL was 73 and 89%, respectively, while for thymol the results at the same concentrations were 62 and 78%. Interestingly, the thymol-ZnLHS hybrid increased the inhibition of biofilm formation to 86 and 92% at the respective concentrations, demonstrating a synergistic behavior of the layered hydroxide and the terpene molecule. The percentage of inhibition was statistically different for each material and concentration (p < 0.05).
Common disinfectants oxidize the cell membrane before a biofilm forms [56]; this suggests that the hydroxide salt, due to its anionic nature, can inhibit this polysaccharide synthesis. Moreover, it has been shown that thymol suppresses biofilm-associated genes, so the combination of both compounds may increase the inhibition of biofilm formation [57].
The increase in the percentage of inhibition presented by thymol-ZnLHS compared to thymol alone may occur because the latter has a relatively hydrophilic character that, when stabilized in colloidal dispersion, can favor its diffusion through the polysaccharide matrix with polar character. On the contrary, the hydrophobic character of ZnLHS could interact specifically with the bacterial membrane (a behavior also observed for thymol). Therefore, the nanohybrid increases inhibition due to a synergistic effect [58]. This synergistic effect has been reported previously with other compounds, such as nalidixic acid/zinc hydroxide nitrate [59].
Synthesis of ZnLHS and Thymol-ZnLHS
The synthesis of materials and hybrids was conducted by following the methodology described by Velazquez-Carriles et al. [16]. Briefly, 200 mL of a solution containing 0.04 g/mL ZnCl 2 was prepared and allowed to stabilize at room temperature, with constant stirring, for 20 min. Then the pH (Hanna, HI98115) was gradually elevated by adding NaOH 0.1 M dropwise until it reached a final value of 8 and a white precipitate formed. The solution was covered and allowed to stabilize for 24 h at room temperature, with constant stirring. Solids were recovered by centrifugation (10,000 rpm for 10 min at 25 • C) (LaboGene, LZ-1580R) with three consecutive washes with distilled water. The recovered powder (ZnLHS) was then dried in an oven (L-C Oven, Mechanically Convected) at 60 • C for 24 h and reserved until use.
For the hybrid synthesis, a 5 mg/mL solution of thymol was prepared, and 500 mg of ZnLHS was added, with constant stirring at room temperature. Aliquots of supernatant were evaluated by UV-Vis (Nanodrop 2000, ThermoScientific) at a wavelength of 274 nm every 10 min until constant absorbance was achieved (ca. two hours). The amount of thymol interacting with the ZnLHS was estimated with the formula proposed by Bouaziz et al. [24]. For recovery of the hybrid thymol-ZnLHS, the centrifugation and drying conditions mentioned above were used.
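The exact form of the Bouaziz et al. [24] equation is not reproduced in this excerpt; the sketch below shows the usual mass-balance estimate of the adsorbed amount, q_t = (C_0 − C_t)·V/m, with a Beer–Lambert calibration. The calibration slope, absorbance reading, and solution volume are hypothetical; only the 5 mg/mL initial concentration and the 500 mg of ZnLHS come from the text.

```python
# Hypothetical mass-balance sketch of the adsorbed-thymol estimate:
# q_t = (C0 - Ct) * V / m, with Ct recovered from a calibration curve.
def thymol_adsorbed(abs_t, cal_slope, c0_mg_ml, volume_ml, mass_mg):
    """mg of thymol adsorbed per mg of ZnLHS at one sampling time."""
    ct = abs_t / cal_slope                 # mg/mL remaining in solution
    return (c0_mg_ml - ct) * volume_ml / mass_mg

# 5 mg/mL thymol and 500 mg ZnLHS are from the text; the rest is illustrative.
print(thymol_adsorbed(abs_t=0.42, cal_slope=1.5, c0_mg_ml=5.0,
                      volume_ml=100.0, mass_mg=500.0))
```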
The adsorption kinetics results were fitted with four models: pseudo-first-order, pseudo-second-order, Elovich, and intra-particle diffusion. The best model was selected as the one with the highest correlation coefficient R² and the lowest values of five statistical parameters: root mean squared error (RMSE), average relative error (ARE), chi-square (χ²), Akaike information criterion (AIC), and Bayesian information criterion (BIC). In these parameters, N is the number of experimental data points, q_t,predicted is the value (mg/mg) calculated with each kinetic model, q_t,exp is the experimental value (mg/mg), LL is the log-likelihood, and k is the number of parameters in the model. For thymol liberation, 60 mg of thymol-ZnLHS was suspended in 50 mL of PBS at 25 °C (approximately 1.02 mg thymol/mL) with constant stirring, and aliquots of supernatant were taken to measure the thymol concentration in a UV-Vis spectrophotometer at 274 nm until a constant value was achieved.
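The displayed equations for the five selection criteria were lost in extraction. The sketch below computes them in their customary least-squares forms, with AIC = 2k − 2LL and BIC = k ln N − 2LL as stated in the text, using the Gaussian log-likelihood; these are standard formulas, not a reconstruction of the authors' exact expressions.

```python
# Customary forms of the model-selection metrics named in the text.
import numpy as np

def selection_metrics(q_exp, q_pred, k):
    q_exp, q_pred = np.asarray(q_exp, float), np.asarray(q_pred, float)
    n = len(q_exp)
    resid = q_exp - q_pred
    sse = np.sum(resid**2)
    ll = -0.5 * n * (np.log(2 * np.pi * sse / n) + 1)   # Gaussian log-likelihood
    return {
        "RMSE": np.sqrt(sse / n),
        "ARE": 100.0 / n * np.sum(np.abs(resid / q_exp)),
        "chi2": np.sum(resid**2 / q_pred),
        "AIC": 2 * k - 2 * ll,
        "BIC": k * np.log(n) - 2 * ll,
    }
```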
Characterization
X-ray diffractograms (XRD) were collected on an Empyrean X-ray diffractometer (Panalytical, Malvern, UK) using CuKα radiation at angles 2θ between 5 and 70°, with a 0.02° step and 30 s of collection time. Fourier-transform infrared (FTIR) spectra were recorded on a spectrophotometer (Cary 630, Agilent Technologies, Santa Clara, CA, USA) from 4000 to 500 cm⁻¹ in absorbance mode, with 32 scans and 4 cm⁻¹ resolution. To determine the morphology, scanning electron microscopy (SEM) was applied in an FE-SEM (TESCAN, model MIRA 3 LMU, Brno, Czech Republic) with a voltage of 15 kV. Particle size, polydispersity index, and ζ-potential were determined by dynamic light scattering (DLS) in a Zetasizer Nano ZS90 (Malvern Instruments, Malvern, UK) at 25 °C and pH 7, adding 1 mL of a 10-fold-diluted solution of 1 mg/mL ZnLHS or thymol-ZnLHS. High-resolution Raman spectroscopy was conducted on a SmartRaman spectrometer (DXR2, Thermo Fisher Scientific, Waltham, MA, USA) with 780 nm laser excitation, 50 mW, and a 50 µm slit; the acquisition time was 150 s at 3 cm⁻¹ resolution, and the spectra were recorded between 1100 and 150 cm⁻¹. Thermogravimetric analyses were carried out on a Discovery thermobalance (TGA5000, TA Instruments, New Castle, DE, USA); TG curves were registered by heating samples from 50 to 600 °C with a ramp of 10 °C min⁻¹ under a nitrogen atmosphere.
Antioxidant Activity
The antioxidant activity of ZnLHS and thymol-ZnLHS was determined with the 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and 2,2-diphenyl-1-picrylhydrazyl (DPPH) tests, comparing them with common antioxidant molecules as well as a thymol standard (Sigma Aldrich, St. Louis, MO, USA). For ABTS, the methodology of Li et al. [60] was followed, preparing sample solutions at different concentrations (50–300 µg/mL), with ascorbic acid as the positive control and methanol as the negative control. Inhibition of the DPPH radical was conducted as described by Brand-Williams et al. [61], with sample solutions at the same concentrations; the positive control was BHT and the negative control was methanol. The results are expressed as a percentage of inhibition for both tests. Plates were read at 754 nm for ABTS and 520 nm for DPPH in a 96-well microplate reader (BIO-RAD, iMark, Hercules, CA, USA). In both techniques, the following formula was employed: % inhibition = (Abs_control − Abs_sample)/Abs_control × 100, (6) where Abs_control is the absorbance of the control that contains all the reagents except the sample.
Antimicrobial Evaluation
The inhibition halo test was conducted to assess the antibacterial activity of the hybrids. Briefly, Müller-Hinton agar (MHA) plates were prepared, and 100 µL of a cell suspension of E. coli O157:H7 or S. aureus ATCC 25923 at 1 × 10⁸ cells/mL was spread on the surface. In each plate, holes were bored, and 100 µL of thymol, ZnLHS, or thymol-ZnLHS at 1 to 10 mg/mL in PBS was added. Plates were incubated for 24 h at 36 ± 1 °C, and the inhibition zone was measured in mm.
Antibiofilm Activity
Some bacteria produce a biofilm to protect themselves against toxic compounds. A test following the methodology of O'Toole (2011) was applied to determine the inhibition of biofilm formation. Briefly, a culture of Pseudomonas aeruginosa ATCC 27853 was grown overnight at 36 ± 1 °C in Luria-Bertani broth and then diluted 1:100 in the same fresh medium containing arginine as a carbon source and magnesium sulfate. In 96-well plates, 100 µL of the P. aeruginosa culture was added, followed by 20 µL of thymol, ZnLHS, or thymol-ZnLHS solutions at 5 and 10 mg/mL (concentrations established in the antimicrobial evaluation); PBS was used as the negative control. The plate was incubated for 24 h at 36 ± 1 °C; then the culture was discarded, and the plate was washed twice in a deionized water bath and allowed to dry at room temperature. Subsequently, 125 µL of a crystal violet solution (0.1% in water) was added to all wells and incubated at room temperature for 15 min. Plates were washed and dried as before. After the plate had dried, 125 µL of acetic acid (30% in water) was added, and the plate was incubated for 15 min at room temperature. Finally, the volumes were transferred to a new plate, and the absorbance was measured at 550 nm in a 96-well plate reader (BioRad, iMark, Hercules, CA, USA), using acetic acid as the blank. The inhibition of biofilm formation was calculated with the following equation: % biofilm inhibition = [1 − (Abs_sample − Abs_blank)/(Abs_control − Abs_blank)] × 100, (7) where Abs_control is the absorbance of the control that contains all the reagents except the materials or thymol.
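Equation (7) in executable form; the plate absorbances below are placeholder values, not measurements from this study.

```python
# Equation (7) applied to crystal-violet plate readings (placeholder values).
def biofilm_inhibition(abs_sample, abs_control, abs_blank):
    return (1 - (abs_sample - abs_blank) / (abs_control - abs_blank)) * 100

print(biofilm_inhibition(abs_sample=0.18, abs_control=1.25, abs_blank=0.08))
# -> about 91.5 % inhibition for these hypothetical readings
```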
Statistical Analysis
All the experiments were performed in triplicate (±SD). Significant differences were considered at p < 0.05, using ANOVA, followed by Fisher's LSD test (Statgraphics Centurion XIX, Princeton, NJ, USA).
The second derivative of the FTIR spectra in the region between 600 and 1500 cm⁻¹ was computed with a 9-point Savitzky-Golay filter. Then, to compare the absorbance intensities between ZnLHS and thymol-ZnLHS, normality tests were applied (Anderson-Darling, D'Agostino-Pearson omnibus, and Shapiro-Wilk); since normality was rejected, a Mann-Whitney U test was used to determine significantly different wavenumbers (p < 0.05). Origin 2022 (OriginLab Inc., Northampton, MA, USA) and GraphPad Prism v8.0.1 (Dotmatics, San Diego, CA, USA) were employed.
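A minimal sketch of the second-derivative step with SciPy: the 9-point window follows the text, while the polynomial order (3 here) and the synthetic band are assumptions, since neither is stated in the source.

```python
# Savitzky-Golay second derivative of an FTIR trace (9-point window per the
# text; polyorder=3 is an assumed, typical choice; the band is synthetic).
import numpy as np
from scipy.signal import savgol_filter

wavenumbers = np.linspace(600, 1500, 901)               # cm^-1 grid
absorbance = np.exp(-((wavenumbers - 1050) / 20) ** 2)  # synthetic band at 1050

d2 = savgol_filter(absorbance, window_length=9, polyorder=3, deriv=2)
print(wavenumbers[np.argmin(d2)])  # minima of d2 mark underlying band centers
```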
Conclusions
A hybrid of a zinc layered hydroxide salt and thymol with biological activity was successfully synthesized. Characterization with X-ray diffraction confirmed a Simonkolleite structure for the layered compound, while IR and Raman spectra and SEM micrographs showed that the organic compound mainly surrounded the inorganic material. Thermogravimetric analysis determined that the material can withstand temperatures around 200 °C before losing thymol to degradation. Adsorption kinetics were described with non-linear models (R² = 0.989), with a concentration determined by UV-Vis absorbance of 0.863 mg thymol/mg ZnLHS, and the liberation assay in PBS demonstrated that thymol can be released almost totally in about 4 h. The biological activity was tested with antioxidant and antimicrobial assays: ABTS and DPPH showed that the hybrid exhibits antioxidant activity synergistically, and in the antimicrobial tests, Gram-positive bacteria such as Staphylococcus aureus were more sensitive to exposure to the thymol-zinc layered hydroxide salt hybrid at low concentrations. Biofilm formation by Pseudomonas aeruginosa was almost completely inhibited by the hybrid compared with the materials alone. The findings of this work demonstrate a promising role for this nanohybrid as a decolonizing and preventive agent, making it attractive in various areas, such as food science, pharmaceuticals, dentistry, and clinics, helping to keep society healthy. Finally, we can infer that this nanohybrid could belong to a new generation of compounds with advanced biological properties.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Video Process Mining and Model Matching for Intelligent Development: Conformance Checking
Traditional business process-extraction models mainly rely on structured data such as logs, which are difficult to apply to unstructured data such as images and videos, making it impossible to perform process extraction in many data scenarios. Moreover, the generated process model lacks consistency analysis, resulting in a single, unverified understanding of the process model. To solve these two problems, a method of extracting process models from videos and analyzing the consistency of process models is proposed. Video data are widely used to capture the actual performance of business operations and are a key source of business data. Video data preprocessing, action localization and recognition, predefined models, and conformance verification are all included in this method for extracting a process model from videos and analyzing the consistency between the extracted process model and the predefined model. Finally, the similarity was calculated using graph edit distances and node adjacency relationships (GED_NAR). The experimental results showed that the process model mined from the video better matched how the business was actually carried out than the process model derived from the noisy process logs.
Introduction
Business processes store information about the execution of information systems in event logs, and process mining extracts useful information from these event logs [1]. The purpose of process mining is to compare the event logs, or the process model resulting from the discovery task, with an existing reference model of the same process, and to detect issues in the executed process for improvement [2]. The six-degrees-of-freedom parallel platform is a widely used parallel mechanism that can achieve six-DOF motion of a moving platform in space [3]. This study aims to enhance the security of people's health, improve the medical level further, and increase the confidentiality of people's private information [4]. Traditional process mining techniques are based on process logs. However, logs cannot be obtained effectively in some scenarios, so it is impossible to mine process models that conform to the actual scenes. In addition, some process logs may contain erroneous or incomplete behaviors [5], which cause a certain deviation between the model obtained by the process mining algorithm and the ideal model. A method of extracting process models from videos and analyzing the consistency of the process models is therefore proposed.
Due to the various problems in process logs and the large amount of video data emerging [6], it is very necessary to apply process mining to video data. In the era of big data, the Internet is enmeshed in people's lives and brings convenience to their production and daily life [7]. Most enterprises have complete video data, which can completely record the actual execution of business processes and contain rich business information. The conformance checking of procedural video takes video data as the source data; the mined process model and the process model obtained from the noisy process log are both checked for consistency against the predefined models, so that enterprises can improve their business processes accordingly.
Related Work
In recent years, process mining has gained extensive attention and accumulated more and more research results. A process mining anonymization algorithm based on genetic algorithms was proposed in [14], which reduces the privacy-utility tradeoff problem of process mining to the optimization problem of the activity inhibition set. Synthesizing photo-realistic images from text descriptions is a challenging problem in computer vision and has many practical applications [15]. In [16], a technique to repair the missing events in a log was given: using the knowledge gathered from the process model, the missing events in a trace are repaired, which enables incomplete log analysis; with a stochastic Petri net model and alignment, the event logs are repaired and converted into a Bayesian analysis. An efficient algorithm for mining the nearest-neighbor relationships of relational event logs was proposed in [17]. The algorithm uses two-level pointers to scan the log data, has a time complexity linear in the log size, and its memory demand is independent of the log size. This algorithm improved the process mining efficiency of relational event logs.
A model mining method for cross-organizational emergency response processes (CERP) was proposed in [18]. This method first extended classical Petri nets with resource and message attributes (RMPNs), and mined the intra-organizational emergency response process models (IERP), represented as RMPNs, from emergency exercise process logs. Secondly, it defined the cooperation mode between emergency organizations. Finally, the CERP model was obtained by combining the IERP model with the collaboration model. Text mining techniques were employed to mine the sentiments in different interactions, and then epistemic network analysis (ENA) was used to uncover sentiment changes in the five learning stages of blended learning [19]. In [20], the problem of directly-follows graph (DFG) simplification was formalized as an optimization, specifically the problem of finding a sound spanning subgraph of a DFG with a minimal number of edges and a maximal sum of edge frequencies, and an evaluation of the efficiency and optimality of the proposed heuristics on 13 real-life event logs was reported. To sum up, these traditional process discovery methods use process logs as input. However, process logs cannot be obtained effectively in some scenes, which means that process extraction faces huge challenges. Relatively speaking, unstructured video data are more flexible, record the real situation of the business process, and objectively reflect its actual execution.
Framework
Based on the above analysis, a method for extracting process models from videos and performing conformance checking is proposed, which addresses the problems that process logs cannot always be obtained effectively and that process logs are noisy. The framework of our work is shown in Figure 1 and includes the video action record extraction module and the video conformance checking module. The video action record extraction module first preprocesses the video data, removing image noise and extracting moving targets; then inputs the preprocessed images into the action localization network to generate action suggestion intervals of different lengths; and finally identifies the action category label of each suggestion interval through the action recognition network and saves the labels in a log, building a video process log. Some noise, such as wrong or incomplete behaviors, is added on this basis to build a noisy video process log; the purpose of adding noise is to simulate the noisy process logs found in real information systems. The video conformance checking module includes generating predefined models and conformance checking. The predefined module defines the standard Petri net process models, and the conformance checking module uses the graph edit distance and node adjacency relationship (GED_NAR) method to judge how well the noisy process model and the extracted process model match the predefined model.
Video Action Record Extraction Module
The action record extraction of the video includes two parts: video data preprocessing, and action localization and recognition. The goal is to extract process information from the process video. Video data preprocessing can maximize data simplification and reduce computational interference. Motion localization and recognition is the process of filtering and identifying preprocessed images from the video data to complete the process extraction of procedural video.
Video Data Preprocessing
Four techniques are used for the video preprocessing: image grayscale processing, KNN-based moving target extraction, image noise processing, and the opening operation.
(1) Image grayscale processing: the grayscale conversion in Formula (1) combines the three color channels into a single channel, most commonly as the weighted sum Gray = 0.299R + 0.587G + 0.114B. Converting to grayscale reduces memory space, improves operation speed, increases visual contrast, and highlights the target area.
(2) Moving target extraction: the purpose of this step is to extract the motion information contained in the actions in the video. The grayscale image still contains useless background information, so the moving target is extracted with the KNN background subtraction algorithm [21]. The KNN algorithm can accurately extract moving targets, but the result is still disturbed by noise. (3) Image noise processing: median filtering is used to perform nonlinear filtering on the grayscale images so that each target pixel is closer to its real value, thereby eliminating isolated noise points. The calculation is shown in Formula (2), where g(x, y) is the processed image, f(x, y) is the original image, W is an N*N two-dimensional template with N usually a positive odd number, and Med denotes sorting the gray values in the window and taking the middle value:

g(x, y) = Med{ f(x − k, y − l) | (k, l) ∈ W } (2)

(4) Opening operation: assuming that Z is the target image and W is a structuring element, the opening of the target image Z by the structuring element W is

Z ∘ W = (Z Θ W) ⊕ W (3)

where (W)_xy denotes translating the origin of the structuring element W to the image pixel (x, y), Θ denotes the erosion operation, and ⊕ denotes the dilation operation. The opening operation smooths the contours of the moving targets, breaks up narrow connection areas, and removes small protrusions from the targets. The video data preprocessing extracts the interest points of the actual actions and highlights the potential task sequences in the video. The processing results are shown in Figure 2.
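A minimal OpenCV sketch of the four preprocessing steps above (grayscale, KNN background subtraction, median filtering, opening); parameter values such as the kernel sizes and the input file name are illustrative assumptions.

```python
import cv2
import numpy as np

knn = cv2.createBackgroundSubtractorKNN()          # moving-target extraction
kernel = np.ones((5, 5), np.uint8)                 # structuring element W

cap = cv2.VideoCapture("process_video.mp4")        # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)          # (1) grayscale
    mask = knn.apply(gray)                                   # (2) KNN foreground mask
    mask = cv2.medianBlur(mask, 5)                           # (3) median filter, Eq. (2)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # (4) opening, Eq. (3)
cap.release()
```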
Action Location and Recognition
After the video data preprocessing, the next step is to locate and recognize the actions in the preprocessed image sequence, as shown in Figure 3. In videos, both the target action and the duration of the action vary, and the coherence between actions makes it difficult to locate the start and end points of an action. A binary classification method based on a convolutional neural network is used to distinguish action from background. To generate the temporal region proposals, our basic idea is to group consecutive snippets with high actionness scores. In a given video, we first sample a sequence of snippets, then use this network to produce actionness scores for them, and finally group them into temporal regions of various granularities. A fault-tolerant processing scheme with high robustness is designed, which can generate action suggestion intervals of different lengths, allowing accurate identification of the duration of an action in the video. With a set of proposed temporal regions, the next stage is to classify them into action classes. In short, we hope that the action suggestion intervals can cover all action durations. A binary classification network is designed to distinguish action from background, filtering out which clips contain actions and which clips are background. Firstly, the preprocessed images are input into the binary classification network, and each image gets a binary classification result of 0 or 1, where 0 means the image is background and 1 means the image contains an action. Then, a fault-tolerant mechanism is established to segment action suggestion intervals of different lengths, and the confidence of these images is taken as the score of the action suggestion interval. Finally, the intersection over union of the action suggestion interval and the ground truth is calculated, and the non-maximum suppression algorithm is applied. The localization of the chopping board action instance is shown in Figure 4: the green box represents the ground truth, the blue box a good prediction, and the red box a poor prediction.
A robust fault-tolerant mechanism is designed, controlled by two parameters: the score threshold τ and the tolerance threshold γ. If the confidence of an image is greater than τ, the image is marked '1', indicating that the image contains an action; if the confidence is less than τ, the image is marked '0', indicating background. For the tolerance threshold γ, we choose an image as a starting point and recursively expand the interval by absorbing subsequent images, terminating the expansion when the proportion of label '1' in the action suggestion interval falls below γ. This multi-threshold design enables us to remove background clips, generate action suggestion intervals of different lengths, and improve the accuracy of video action localization. As shown in Figure 5, the purple arrow indicates the extension direction of the action interval range.
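A sketch in Python of the two-threshold fault-tolerant grouping just described: frames are marked action/background by the score threshold tau, and an interval keeps expanding while the fraction of '1' labels stays at least gamma. The threshold values and the minimum-length cutoff are illustrative assumptions, not the paper's settings.

```python
def group_proposals(scores, tau=0.5, gamma=0.8, min_len=5):
    labels = [1 if s > tau else 0 for s in scores]
    proposals = []
    i = 0
    while i < len(labels):
        if labels[i] == 1:                        # start a new interval
            j = i + 1
            while j < len(labels):
                frac = sum(labels[i:j + 1]) / (j + 1 - i)
                if frac < gamma:                  # tolerance exceeded: stop
                    break
                j += 1
            if j - i >= min_len:                  # min_len is an assumption
                proposals.append((i, j))          # [start, end) frame range
            i = j
        else:
            i += 1
    return proposals

# e.g. group_proposals(confidences_from_binary_classifier)
```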
Compared with images, a video needs to consider not only the spatial location information of the video frames but also the temporal relationships between them. Models pretrained on ImageNet [22] have achieved great success on images; extending the 2D convolutions of the image domain to 3D convolutions for video and adding the two-stream idea to the 3D convolutional network, the authors of [23] proposed the two-stream inflated 3D convolutional neural network (Two-Stream I3D), where the spatial stream network extracts object features from static RGB video frames and the temporal stream network extracts the motion information; the outputs of the two streams are finally fused to obtain the action recognition result. A new dynamic sign language recognition network integrating dual-stream 3D convolutional neural networks and attention mechanisms was proposed in [24].

The convolutional block attention module (CBAM) [25] is introduced into the I3D network to enable the network to learn the salient information in the image and make the important image features more prominent without affecting the performance of the original network. However, considering that spatial location information and temporal relationships are two different concepts, the two streams should learn in different ways. Using the same network structure for the spatial and temporal streams leads to more redundant information in the two streams at the final fusion stage, which wastes a large number of neuron parameters of the dual-stream convolutional network.

Based on the above analysis, a network model integrating two-stream heterogeneous 3D convolution and an attention mechanism is proposed, which introduces CBAM. The inputs are the original RGB images and the preprocessed optical flow maps. The entire network model is shown in Figure 6. The spatial channel adds CBAM after the Concatenation layer of the Inception module of I3D; the network connection is shown in Figure 6a. The temporal channel uses only the Inception module of the I3D network and also adds CBAM after the Concatenation layer; the network connection is shown in Figure 6b. In addition to adding the CBAM attention mechanism, the spatial channel improves the I3D network structure by (1) removing the first max pooling layer to prevent the loss of low-level image features, and (2) removing the final average pooling operation so that the global information in the image is preserved. The dual-stream heterogeneous network not only comprehensively considers the rich spatial features and motion features in the video frames but also reduces the number of neural network parameters and improves the computational efficiency of the model.
The loss function used for network training is the cross-entropy function

Loss = −∑_{i=1}^{n} L_i log(S_i)

where n represents the total number of action label categories, L represents the one-hot encoding of the true label, and S represents the probability of the predicted action label output by the Softmax layer. During training, the backpropagation algorithm is used to continuously update the network parameters to reduce the loss; finally, the argmax function is used to obtain the category label of the action according to the maximum probability value.
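A numpy sketch of this loss as reconstructed above, where L is the one-hot true label and S the softmax output over n action classes; the example vectors are placeholders.

```python
import numpy as np

def cross_entropy(L: np.ndarray, S: np.ndarray) -> float:
    return float(-np.sum(L * np.log(S + 1e-12)))  # eps guards log(0)

L = np.array([0, 1, 0, 0])                 # true class = 1
S = np.array([0.1, 0.7, 0.1, 0.1])         # predicted probabilities
print(cross_entropy(L, S))                 # -ln(0.7) ~ 0.357
print(np.argmax(S))                        # predicted action label: 1
```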
Video Consistency Analysis Module
Video conformance checking includes the predefined model and the conformance check itself. The predefined module defines standard Petri net process models; conformance is computed as the fitness, with respect to the predefined model, of the process models extracted from the video and obtained from the noisy process logs.
Predefined Model
After analysis of the dataset, each process executor performs a series of actions in a natural state, and a Petri net is then used to represent a predefined model of the process, as shown in Figure 7. The specific descriptions of the events in Figure 7 are given in Table 1. The predefined model is described as follows: the process executor enters the scene and then has two options. (1) Take the cutting board and rag, then put the cutting board and rag in order, then take the plate and cup, then put the plate and cup in order; then there are two options: ➀ take a fork, knife, and spoon, and put the fork, knife, and spoon in that order; ➁ take a fork, a spoon, and a knife, and then put the fork, spoon, and knife in that order. (2) Take the cutting board, then put the cutting board, then take the rag, then put the rag, then take the plate, then put the plate, then take the spoon, then put the spoon, then open the cupboard, take the cup, and put the cup, or just take the cup and put the cup.
Table 1. Descriptions of the events in Figure 7. t1: Enter the scene; t2: Take the cutting board; t3: Put the cutting board; t4: Take the rag; t5: Put the rag; t6: Take the plate; t7: Put the plate; t8: Take the spoon; t9: Put the spoon; t10: Take the cup; t11: Put the cup; t12: Leave the scene; t13: Take the cutting board and the rag; t14: Put the cutting board and the rag; t15: Take the plate and the cup; t16: Put the plate and the cup; t17: Take the fork, knife, and spoon; t18: Put the fork, knife, and spoon; t19: Open the cupboard; t20: Take the fork, spoon, and knife; t21: Put the fork, spoon, and knife.

No matter which utensils the executor sets first, the process executor finally leaves the scene.
Conformance Checking
In order to verify that the process model extracted from the video is more consistent with the actual execution of the business than the process model obtained from the noisy process log, we conducted a comparative experiment to calculate the fitness of the two process models against the predefined model. Firstly, the process model extracted from the video, the process model obtained from the noisy process log, and the predefined model are converted into directed graphs; then the GED_NAR algorithm is used to calculate the fitness of the directed graphs. Finally, the conformance results of the two process models against the predefined model are obtained.
With action recognition, we can get the labels of each action category in the video, and after setting a unique ID for each video, we can build a CSV file containing two columns of attributes: "video ID" and "action label". The "video ID" can represent the "case ID" in the process log, and the "action label" can represent the "activity" content in the process log. Finally, the video data process log file is obtained, and ProM is used to generate the Petri net model, as shown in Figure 8.
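A pandas sketch of building this two-column video process log; the video IDs and action labels below are fabricated examples, and the file name is an assumption.

```python
import pandas as pd

records = [
    {"video ID": "v001", "action label": "Take the cutting board"},
    {"video ID": "v001", "action label": "Put the cutting board"},
    {"video ID": "v001", "action label": "Take the rag"},
    {"video ID": "v002", "action label": "Take the cutting board and the rag"},
]
log = pd.DataFrame.from_records(records)
log.to_csv("video_process_log.csv", index=False)  # importable into ProM
```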
The relevant definitions of the GED_NAR method are given below.

Definition 1. The fitness of two directed graphs G1 = (N1, E1) and G2 = (N2, E2) based on the graph edit distance is calculated as shown in Formula (7):

F_GED(G1, G2) = 1 − (wskipn · fskipn + wskipe · fskipe)/(wskipn + wskipe) (7)

where wskipn represents the cost coefficient of inserting and deleting action nodes, and wskipe represents the cost coefficient of inserting and deleting action relation edges; their value range is [0, 1], which can be set according to actual needs, with the default value set to 1. The cost function of inserting and deleting action nodes is

fskipn = |skipn|/(|N1| + |N2|) (8)

where skipn represents the set of inserted and deleted action nodes, and N1 and N2 represent the action node sets of the two models. The cost function of inserting and deleting action relation edges is

fskipe = |skipe|/(|E1| + |E2|) (9)

where skipe represents the set of inserted and deleted edges, and E1 and E2 represent the action relation edge sets of the two models.

Definition 2. If G = (N, E) is a directed graph and two nodes p, q ∈ N satisfy e = <p, q> ∈ E, then <p, q> is called a node adjacency relationship, abbreviated as NAR. For a given directed graph, all node adjacencies constitute an adjacency set, denoted NARs = {<p, q> | e = <p, q> ∈ E}. The degree of fitness based on the adjacency relationship is calculated as shown in Formula (10):

F_NAR(G1, G2) = |NARs1 ∩ NARs2|/|NARs1 ∪ NARs2| (10)

Definition 3. If M1 is a predefined model, M2 is an extracted process model, and G1 and G2 are the directed graphs of M1 and M2, then the degree of fitness between the process model and the predefined model is calculated as shown in Formula (11):

Fitness_GED_NAR(G1, G2) = α · F_GED(G1, G2) + σ · F_NAR(G1, G2) (11)

where α and σ are two coefficients with value range [0, 1], which can be set according to actual needs, with default values of 0.5 and α + σ = 1.
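A sketch of the GED_NAR fitness computation under the formula reconstructions given above; since the exact forms of (7) and (10) were reconstructed from context, treat this as illustrative rather than the authors' exact implementation. Graphs are plain node and edge sets.

```python
def ged_nar_fitness(n1, e1, n2, e2, wn=1.0, we=1.0, alpha=0.5, sigma=0.5):
    skipn = n1 ^ n2                       # nodes inserted or deleted
    skipe = e1 ^ e2                       # edges inserted or deleted
    fskipn = len(skipn) / (len(n1) + len(n2))            # Formula (8)
    fskipe = len(skipe) / (len(e1) + len(e2))            # Formula (9)
    f_ged = 1 - (wn * fskipn + we * fskipe) / (wn + we)  # Formula (7)
    f_nar = len(e1 & e2) / len(e1 | e2)   # Formula (10); NARs = edge set
    return alpha * f_ged + sigma * f_nar                 # Formula (11)

# Example on two tiny directed graphs (edges are (p, q) pairs):
g1_nodes, g1_edges = {"a", "b", "c"}, {("a", "b"), ("b", "c")}
g2_nodes, g2_edges = {"a", "b", "c"}, {("a", "b"), ("a", "c")}
print(ged_nar_fitness(g1_nodes, g1_edges, g2_nodes, g2_edges))  # ~0.54
```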
Experiments
Experiments analyzed the dataset, action localization, action recognition, and conformance checking.
Dataset
At present, most datasets only involve action localization and action recognition, and very few datasets contain process information, so we use the public dataset TUM [26] for the experiments, as shown in Figure 9. In the video, after the process executor enters the monitoring screen, he starts to take the cutting board and put it on the table; after the tableware is placed in an orderly manner, he finally leaves the monitoring scene. The duration of each video in the dataset lies in the interval of 1-2 min, and the action types in the video can be divided into taking a plate, placing a plate, etc. The "background" class contains no action. Finally, the video frame sequences are used to train and test the action localization and recognition networks.
Experiment of Action Localization
The action localization method designs a binary classification network to filter the video frames containing actions. The indicators used are IoU, Recall, Precision, and Average Precision. The final experimental results are shown in Table 2. We find that the action localization scheme designed in this paper locates the starting and ending positions of actions very well, mainly because we not only set an action threshold to decide which points count as actions but also set a tolerance threshold to prevent interference from noise; that is, the appearance of several background video frames among consecutive action frames still adds them to the clip. Therefore, the action localization model designed in this paper achieves better results.
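A small sketch of the temporal IoU used to score a proposal against the ground truth; intervals are (start_frame, end_frame) pairs and the example values are made up.

```python
def temporal_iou(a, b):
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

print(temporal_iou((10, 50), (20, 60)))   # 30 / 50 = 0.6
```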
Experiment of Action Recognition
We compare our model with other methods in Table 3, including LRCN [27], 3D-ConvNet [28], Two-Stream I3D [23], Two-Stream-CBAM-I3D [24], and AFSD [29]. The metrics used for the experimental evaluation of action recognition are accuracy, precision, recall, and a weighted composite evaluation index of the three. To avoid interference from other factors, the hyperparameter settings, such as the learning rate and the number of training rounds, are kept uniform across the models. The results are shown in Table 3 and Figure 10. The indicators of the proposed network model fusing dual-stream heterogeneous 3D convolution and the attention mechanism are significantly improved compared to the other models. Firstly, compared with the original Two-Stream I3D model, the accuracy increased by 5%, the precision by 13.4%, the recall by 14.1%, and the average by 10.8%. Secondly, the two-stream heterogeneous structure reduces the complexity of the model and shortens the training time; compared with the dual-stream CBAM-I3D model, the indicators are also slightly improved. Finally, the optical flow maps generated by the video data preprocessing improve the network inputs to varying degrees. The analysis shows that the proposed network model not only retains the original input video information but also improves the expressive ability of the network: it automatically learns the importance of the spatial location information and temporal relationships of video frames and, according to that importance, enhances key features and suppresses useless features, so as to highlight the salient information in the video frames. The introduction of the CBAM attention mechanism enables the network to learn the more important spatial and temporal features in video frames without affecting the original network. The two-stream heterogeneous mode avoids repeated extraction of information, reduces the number of model parameters and the model size, and makes network convergence faster.
Conformance Checking Experimental Analysis
Conformance checking analyzes how well the process model matches the predefined model. The process models are converted into directed graphs with reference to the conversion rules proposed in [30] (as shown in Figure 11), and the GED_NAR method is then used to calculate the fitness of the models.
According to Definition 3 of the GED_NAR method, the fitness between the process model obtained from the noisy process log and the predefined model is Fitness_GED_NAR(G, G2) = 0.5 × F_GED(G, G2) + 0.5 × F_NAR(G, G2) = 0.5 × 0.85 + 0.5 × 0.67 = 0.76.
From the experimental results, it can be seen that the fitness of the process model and the predefined model extracted from the video is greater than the fitness of the process model and the predefined model obtained from the process log containing noise. The experiment results show that the process model extracted from the video is more consistent with the actual execution process of the business.
Conclusions
Aiming at the problems that process logs cannot always be obtained effectively and that the process logs generated by information systems contain noise, we propose a method for extracting process models from videos and analyzing their conformance. Firstly, video data preprocessing removes the background information irrelevant to the moving target in the video frames and retains only the spatiotemporal interest point areas. Secondly, a binary classification network is used to generate action suggestion intervals of different lengths, which are then input into the network fusing dual-stream heterogeneous 3D convolution and the attention mechanism to classify the actions. Finally, conformance checking against the predefined model is carried out both for the process model obtained from the noisy process logs and for the process model extracted from the video.
The method proposed in this paper is only for single-person video process modeling, and cannot handle multi-person collaborative business processes well. The next step is to study how to model a multi-person collaborative video process and conduct conformance checking.
A Novel Dual Quaternion Based Dynamic Motion Primitives for Acrobatic Flight
The realization of motion description is a challenging task for fixed-wing Unmanned Aerial Vehicle (UAV) acrobatic flight, due to the inherent coupling between translational and rotational motion. This paper aims to develop a novel maneuver description method through the idea of imitation learning, and there are two main contributions of our work: 1) A dual quaternion based dynamic motion primitives (DQ-DMP) formulation is proposed, in which the state equations of position and attitude can be combined without loss of accuracy. 2) An online hardware-in-the-loop (HITL) training system is established. Based on the DQ-DMP method, the geometric features of the demonstrated maneuver can be obtained in real time, and the stability of the DQ-DMP is theoretically proved. The simulation results illustrate the superiority of the proposed method compared to the traditional position/attitude decoupling method.
INTRODUCTION
Acrobatic flight of a UAV is usually accompanied by fast changes in position and attitude and is a relatively complex movement process [1]. It is difficult for traditional methods to describe this kind of motion. Imitation learning provides a way of thinking: typically, dynamic motion primitives (DMP) can represent complex motion and guarantee the stability and continuity of the trajectory, which is a scheme worth exploring [2].
Motion primitive theory uses the sequencing ability of biological systems and their ability to adapt motion units to explain the execution of complex motions. DMPs originated from the motion control of biological systems and can be regarded as a rigorous mathematical formulation of stable nonlinear dynamic systems of motion primitives [3]. DMPs are a trajectory planning method proposed by Stefan Schaal in [4] and updated by Auke Ijspeert in [5]. Pastor et al. [6] first extended DMP to rotational motion and proposed the quaternion DMP, Ude et al. [7] extended DMP to rotation matrices and quaternions, and Saveriano et al. [8] used the Lyapunov method to prove the stability of their system. These works extend DMP from Euclidean space to the space of rotations. In application, DMPs are widely used in robotic arms, humanoid robots, medical enhancement robots, teleoperation robots, as well as UAVs and other mobile robots. In the field of drones, Perk and Slotine [9] used DMP to define the flight path and obstacle avoidance of a UAV, where the trajectory is generated by the movement of the joystick that controls the UAV. Later, Fang et al. [10] extended the method to encode drone data demonstrated by the user, extracting and encoding the linear parts of the flight trajectory and combining them into flight control actions. In addition, Tomić et al. [11] formulated the motion of the drone as an optimal control problem; the output of the optimal control solver is encoded using DMP so that modifications can be applied to the UAV's flight trajectory in real time. Lee et al. [12] also incorporated DMP into the control scheme to modify the flight trajectory and avoid obstacles in flight. DMP has played an important role in real-time obstacle avoidance.
These works apply DMP to the field of UAVs and have made much progress, but they have not considered the coupling of position and attitude. Traditional methods usually decouple position and attitude; this undermines the integrity of the problem and shows disadvantages and limitations that cannot be balanced. The dual quaternion uses only eight real numbers to describe the motion of a general rigid body and can concurrently express rotation and translation as well as the coupling relationship between them, realizing the integration of pose in the true sense [13]. Based on the quaternion DMP, this paper proposes the DQ-DMP, expanding DMP to SE(3) space. Finally, we apply it to describe the complex motion of a fixed-wing UAV and train the DQ-DMP with expert teaching data; the simulation results show the feasibility of this method.
II. FORMULATION OF DMPS
In this section, we introduce several theoretical models of DMP. First, we introduce the classical DMP and the quaternion DMP, and then briefly introduce dual quaternions to provide a basis for the dual quaternion DMP in the next section.
A. Classical DMP
For a typical second-order point attractor system, such as a mass-spring-damper system, the dynamics can be expressed as

τż = α_z(β_z(g − y) − z), τẏ = z

where g is the end point of the system, y is the state of the system, τ is a time scale factor, and α_z, β_z are gains; the control goal is to make the system reach the specified end point. In order to make the system move along the trajectory we expect, a phase variable and a forcing term are introduced [5]:

τż = α_z(β_z(g − y) − z) + f(x), τẏ = z

where the phase variable x is an exponentially decaying clock signal from 1 to 0, obtained by integrating the so-called canonical system

τẋ = −α_x x

and the time scale factor τ can change the duration of the motion.

f(x) is defined as the weighted average of N Gaussian kernel functions; its role is to change the acceleration of the system at different moments and drive the system along an arbitrary smooth trajectory from the initial position y_0 to the end point g. The forcing term is

f(x) = (∑_{i=1}^{N} ψ_i(x) w_i / ∑_{i=1}^{N} ψ_i(x)) x (g − y_0), ψ_i(x) = exp(−h_i (x − c_i)²)

where c_i are the centers of the Gaussian functions distributed along the motion phase and h_i are their widths. For a given N, τ is set equal to the total duration of the desired movement. We use the f(x) formed by the basis functions to approximate the target driving term; since the driving term depends linearly on the weights, the target forcing values can be stacked into a linear system f_target = A w, and the parameters w_i can be calculated by the weighted least squares method.
It should be noted that the above system defaults to the inertial coordinate system.
For controlling a robot system with multiple degrees of freedom, we use a separate system of equations (1) for each degree of freedom, but a common phase to synchronize them. It can be seen that the dynamic motion primitive is driven by the attractor dynamics differential equation and is represented by a combination of the nonlinear forcing term and the attractor term. The nonlinear forcing term can represent complex motion, and the attractor term represents the target state. The nonlinear forcing term weakens with time, and finally the attractor term dominates, so the dynamic motion primitive smoothly converges to the target state. DMPs ensure the smoothness and continuity of the trajectory and can express nonlinear motion without losing stability.
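A minimal one-DOF discrete DMP sketch following the standard formulation in [5] as reconstructed above; the gains, kernel parameters, and trajectory are illustrative values, not the paper's settings.

```python
import numpy as np

def run_dmp(w, c, h, y0, g, tau=1.0, az=25.0, bz=25.0 / 4, ax=1.0,
            dt=0.001, T=1.0):
    y, z, x = y0, 0.0, 1.0
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-h * (x - c) ** 2)                     # Gaussian kernels
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)  # forcing term
        z += dt / tau * (az * (bz * (g - y) - z) + f)       # transformation system
        y += dt / tau * z
        x += dt / tau * (-ax * x)                           # canonical system
        traj.append(y)
    return np.array(traj)

N = 20
c = np.exp(-1.0 * np.linspace(0, 1, N))    # centers placed along the phase
h = np.full(N, 100.0)                      # kernel widths
y = run_dmp(np.zeros(N), c, h, y0=0.0, g=1.0)  # zero weights: plain attractor to g
```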
B. Quaternion DMP
In Cartesian space, attitude can be expressed as a unit quaternion q = η + ε ∈ S³, where S³ denotes the unit sphere of quaternions, η ∈ ℝ is the scalar part, and ε ∈ ℝ³ is the vector part, with η² + ‖ε‖² = 1. Compared with the rotation matrix, the unit quaternion has fewer parameters, and in contrast to Euler angles it has no singularity, so it is frequently used to describe rotation operations in engineering.
The quaternion DMP in the inertial coordinate system is expressed as follows [7]:

τη̇ = α_z(β_z 2 log(g_o ∗ q̄) − η) + f_o(x)

where g_o is the goal orientation, q̄ is the quaternion conjugate, and η is the scaled angular velocity. The quaternion is integrated as

q(t + δt) = exp((δt/2) η(t)/τ) ∗ q(t)

where δt is the sampling time, so q can be calculated by the above formula. Here the exponential map of the quaternion maps a rotation vector r ∈ ℝ³ to a unit quaternion: exp(r) = (cos‖r‖, sin‖r‖ · r/‖r‖) for r ≠ 0, and exp(0) = (1, 0).
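A sketch of the quaternion exponential and logarithmic maps used above, with a quaternion stored as a (scalar, vector) pair; this is a generic implementation, not the paper's code.

```python
import numpy as np

def quat_exp(r):
    """Map a rotation vector r in R^3 to a unit quaternion (eta, eps)."""
    n = np.linalg.norm(r)
    if n < 1e-12:
        return 1.0, np.zeros(3)
    return np.cos(n), np.sin(n) * r / n

def quat_log(eta, eps):
    """Inverse map: unit quaternion (eta, eps) to a rotation vector."""
    n = np.linalg.norm(eps)
    if n < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(eta, -1.0, 1.0)) * eps / n

eta, eps = quat_exp(np.array([0.1, 0.2, 0.3]))
assert np.allclose(quat_log(eta, eps), [0.1, 0.2, 0.3])  # round trip
```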
C. Introduction to dual quaternions
The dual number was invented by Clifford in 1873 and further expanded by Study in 1891. A dual number is defined as â = a + εb, where a and b are real numbers, called the real part and the dual part respectively, and ε is the nilpotent unit satisfying ε² = 0, ε ≠ 0.
A dual quaternion is written q̂ = q + εp, where the real part q is a quaternion representing rotation, the dual part p is a quaternion related to translation, and ε is the nilpotent unit. This representation unifies translation and rotation in one space: the general rigid body motion described by the rotation q followed by the translation p^b can be written as q̂ = q + (ε/2) q ∗ p^b, where p^b represents a position in the body coordinate system [13]. The error of a dual quaternion can be expressed as the product of the conjugate of the desired dual quaternion with the current one. The corresponding dual twists are defined in the inertial coordinate system and in the body coordinate system, where p^s and p^b are the projections of the translational motion contained in the dual quaternion.
A. Dual quaternion DMP

The dual quaternion DMP in the body coordinate system follows the same structure as the quaternion DMP, with the state, goal, and forcing term expressed as dual quaternions. In the calculation process, it is necessary to define the orientation error between the dual quaternions q̂ and q̂_d, for which this paper adopts a logarithmic mapping. Using the kinematic equations, the dual differential quaternion is defined accordingly. Considering that the rotation is a fast term and is not affected by the translation, the Lyapunov function V(x) is split into rotational and translational parts. By formulas (19) and (28), V̇(x) can be inferred to be negative definite, so p asymptotically converges to p_d with v → 0. To sum up, the system globally asymptotically converges to q̂_d with ω → 0.

IV. SIMULATION DESIGN

The simulation system is built with the open source flight controller PX4 and the flight simulation software X-Plane. The joystick sends instructions to the flight controller; after processing, the control signal is sent to X-Plane to control the aircraft in X-Plane. This is a typical hardware-in-the-loop simulation system. For different maneuvers, we collect flight data through expert teaching, with the sampling frequency set to 100 Hz. These data are used to train the DMP; the entire training process is shown in the following figure.

In this experiment, the somersault motion is taken for a simulation test, which lasts 18.9 s.
V. RESULTS AND DISCUSSION
We show the results of the training in Figures 3 and 4. It should be noted that the position of the traditional pose DMP is only applicable in the inertial coordinate system, whereas the reference frame of the angular velocity is the body coordinate system, because this is generally selected by default in the data collection of the flight controller.
For the quaternion DMP in Sec. II, formulas (6) and (7) need to be revised for the body coordinate system. For dual quaternion operations, a unified coordinate system is required; here, the body coordinate system is selected. In addition, the position and velocity in the inertial coordinate system can be obtained through coordinate transformation. In the training process and the display of results, different forms of scale transformation are performed on the position. It can be seen that the traditional pose DMPs can achieve good results in their respective learning tasks. However, such a system ignores the influence of attitude on position. During the somersault motion, the pitch angle and pitch rate change fast, and the pose coupling is serious. In our dual quaternion DMP, when the attitude learning has a certain deviation, the position will also have a corresponding deviation, which reflects the UAV's motion constraints. Therefore, the traditional pose-decoupled DMP is more likely to train an unreachable state.
The UAV system obviously has to satisfy certain differential motion constraints. In this case, the dual quaternion DMP is more appropriate for describing the problem.
VI. CONCLUSION
In this paper, through the idea of imitation learning, the motion description of the acrobatic flight of a UAV is realized. Considering the coupling between position and attitude in acrobatic flight, the dual quaternion is introduced to describe the translational and rotational motions in a unified manner. On the basis of the dual quaternion DMP, the coupling problem of position and attitude is solved. Our method was successfully applied to acrobatic flight in a hardware-in-the-loop simulation.
In the future we will strive to express more complex maneuvers through DMP.
APPENDIX
The basic operations of quaternions and dual quaternions are listed below.
The product of two quaternions q1 = (η1, ε1) and q2 = (η2, ε2) is q1 ∗ q2 = (η1η2 − ε1 · ε2, η1ε2 + η2ε1 + ε1 × ε2).
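A sketch of the quaternion and dual quaternion products listed in this appendix, with quaternions stored as (scalar, 3-vector) pairs; a generic implementation, not the authors' code.

```python
import numpy as np

def qmul(q1, q2):
    """Quaternion product:
    (eta1*eta2 - eps1.eps2, eta1*eps2 + eta2*eps1 + eps1 x eps2)."""
    (n1, e1), (n2, e2) = q1, q2
    return (n1 * n2 - e1 @ e2, n1 * e2 + n2 * e1 + np.cross(e1, e2))

def dqmul(dq1, dq2):
    """Dual quaternion product: (q1 + eps*p1)(q2 + eps*p2)
    = q1*q2 + eps*(q1*p2 + p1*q2), using eps^2 = 0."""
    (q1, p1), (q2, p2) = dq1, dq2
    real = qmul(q1, q2)
    da, db = qmul(q1, p2), qmul(p1, q2)
    return (real, (da[0] + db[0], da[1] + db[1]))
```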
Active Vibration Suppression of a 3-DOF Flexible Parallel Manipulator Using Efficient Modal Control
This paper addresses the dynamic modeling and efficient modal control of a planar parallel manipulator (PPM) with three flexible linkages actuated by linear ultrasonic motors (LUSM). To achieve active vibration control, multiple lead zirconate titanate (PZT) transducers are mounted on the flexible links as vibration sensors and actuators. Based on Lagrange's equations, the dynamic model of the flexible links is derived with the dynamics of PZT actuators incorporated. Using the assumed mode method (AMM), the elastic motions of the flexible links are discretized under the assumptions of pinned-free boundary conditions, and the assumed mode shapes are validated through an experimental modal test. Efficient modal control (EMC), in which the feedback forces in different modes are determined according to the vibration amplitude or energy of their own, is employed to control the PZT actuators to realize active vibration suppression. Modal filters are developed to extract the modal displacements and velocities from the vibration sensors. Numerical simulation and vibration control experiments are conducted to verify the proposed dynamic model and controller. The results show that the EMC method has the capability of suppressing multimode vibration simultaneously, and both the structural and residual vibrations of the flexible links are effectively suppressed using the EMC approach.
Introduction
With increasing demands for high-speed and lightweight manipulators, robots with flexible links are designed and applied in many industrial fields, such as semiconductor manufacturing, astronautics, and automatic microassembly. Compared with traditional rigid manipulators, such robots have the advantages of high speed and acceleration, lower energy consumption, and a greater payload-to-arm weight ratio [1]. However, vibrations are introduced in the system due to the flexibility, leading to longer settling times and lower motion tracking accuracy of the end-effector. Hence, vibration control technologies play a critical role in the field of flexible robots. Compared with conventional passive vibration control approaches, active vibration control can suppress multiple vibration modes simultaneously, has the ability to adapt to system and environmental variations, and is usually more effective than passive methods [2]. During the active vibration control process, the vibration actuators bonded on the flexible parts generate forces according to the feedback signals of the displacement or velocity measured by the vibration sensors. If the output voltages of the vibration sensors are amplified properly through the active vibration controller, the actuator forces will increase the stiffness and damping of the flexible parts, thus attenuating the amplitude of the unwanted vibrations. With the development of PZT materials, more and more flexible structures are mounted with multiple PZT transducers to achieve self-sensing and active vibration control [3,4].

The dynamic modeling and control of these smart manipulators have been investigated by many researchers. Early investigations focused primarily on the modeling of serial flexible space arms or single flexible beams [5-7], and a detailed review was provided by Dwivedy and Eberhard [8]. In contrast, little research literature on vibration suppression of parallel robots with smart flexible links has been published. Wang and Mills [9] presented a finite element model (FEM) of a PPM with elastic linkages for active vibration analysis using a substructure methodology. Piras et al. [10] analyzed the dynamics of a planar fully parallel robot with flexible links using FEM. Zhang et al. [11] formulated a dynamic model of a PPM based on pinned-pinned boundary conditions using the assumed mode method. Yu et al. [12] conducted both theoretical and experimental studies on the dynamic analysis of a 3-RRR flexible parallel robot. However, the dynamic modeling of a flexible parallel manipulator is challenging work due to the complex coupling between the rigid motion and the elastic deformation.
Based on the dynamic model and vibration transducers, a variety of active vibration control approaches have been studied since the 1970s. In [13], two different vibration control laws, namely velocity feedback control and positive position feedback (PPF) control, are studied based on a single flexible beam mounted with a single pair of PZT transducers. Zhang et al. [14] conducted strain rate feedback control to suppress unwanted oscillation of a 3-PRR flexible parallel manipulator with smart flexible links. Combining an input shaper with a multimode PPF controller, Chu et al. [15] suppressed the residual vibration of a high-speed flexible manipulator. Considering the exact boundary conditions, Zhang et al. [16] achieved active vibration control of a flexible parallel robot using an input shaping control approach. To achieve high-accuracy trajectory tracking while attenuating the structural vibration simultaneously, Zhang et al. [17] implemented variable structure control (VSC) and direct output feedback control (strain and strain rate feedback) on a flexible parallel manipulator driven by LUSM. However, usually only a small number of modes are selected to be controlled in practice, and the uncontrolled modes may lead to spillover, a phenomenon in which control energy flows to the uncontrolled modes of the system [18]. To prevent the recoupling of modal equations through feedback and spillover, Meirovitch and Baruh [19] proposed the independent modal space control (IMSC) approach. In this method, the control problem of a continuous structure can be understood as controlling several single-degree-of-freedom systems simultaneously, and each mode of the continuous structure is controlled by one independent controller related to its own modal displacement and velocity. However, the feedback gains of IMSC are calculated off-line, and the control voltage for higher modes is relatively high. When some vibration modes are excited to a greater extent by disturbances during operation, neither priority nor adjustment of feedback gains is given to these excited modes. Based on IMSC, Baz and Poh [20] developed modified independent modal space control (MIMSC) to overcome these disadvantages by controlling the different vibration modes depending on their own energy. The vibration energy of each mode is identified at every interval, and the mode with the highest vibration energy is controlled first. Although the different modes are prioritized and the applied voltages are decreased when using MIMSC, it places a computational burden on the digital controller, since MIMSC needs to weigh and compare the vibration energies of all modes at every interval. Singh et al. [21] proposed an efficient modal control (EMC) method to suppress multiple vibration modes of a cantilever beam. Compared with IMSC and MIMSC, the EMC method has simpler feedback gains and a lower amplitude of control voltages, and it can be readily implemented on a controller.
Hence the EMC method is adopted to suppress the vibration of the flexible link in this study.
This paper addresses the dynamic modeling and active vibration control of a PPM with three smart flexible links. Firstly, the dynamic model of the flexible links mounted with multiple PZT transducers is formulated using Lagrange's equations and the AMM, and experimental modal tests of the flexible links are implemented to verify the assumed mode shapes. Then, based on the presented dynamic model, efficient modal control strategies are adopted to realize the active vibration control of the flexible links through multiple PZT transducers. Finally, MATLAB simulations and vibration control experiments are provided to validate the effectiveness of the proposed control approach.
Dynamic Model of the Flexible Links Incorporated with PZT Transducers

Figure 1 defines the layout angles of the three linear guides, the angles between the axis of the static frame and the links, and the length of each linkage. The prismatic motion of each LUSM is defined as a joint variable, the coordinates of the moving platform are represented as a pose vector (two translations and one rotation) in the static coordinate system, and the elastic deformation is defined at each point along the length of each linkage. The detailed parameter definitions are given in our preliminary study [17].
Discretization of the Elastic Motion.
The elastic motions in Figure 1 need to be discretized first for further dynamic analysis. Since the length of the flexible link is much longer than its thickness, the Euler-Bernoulli beam theory is adopted to model the flexible link, and hence only the transverse vibration of the link is considered. According to the AMM, the elastic motion of the ith flexible link can be presented as

w_i(x, t) = Σ_{j=1}^{n} φ_{ij}(x) q_{ij}(t),

where q_{ij}(t) represents the unknown generalized coordinates of the jth mode of the ith link and φ_{ij}(x) are the spatial shape functions.
The literature [11,16] shows that the moving platform vibrates fiercely due to the deformation of the flexible links in such a rigid-flexible PPM, especially under high-speed, high-acceleration operation. Hence we may consider the beginning of the flexible link to be pinned at its base joint while the end of the link is free but modeled with constraint forces applied by the platform joint. The normalized shape functions matching pinned-free boundary conditions are therefore adopted to model the flexible links as follows [22]:

φ_{ij}(x) = sin(β_j x / l) + (sin β_j / sinh β_j) sinh(β_j x / l),

where β_j = (j + 0.25)π, j = 1, 2, . . . , n, and 0 ≤ x ≤ l.
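A small numerical sketch of these shape functions is given below (our own illustration; the closed form follows the standard pinned-free solution rather than the exact normalization of [22], and the link length is an arbitrary example value):

    import numpy as np

    def beta(j):
        # Approximate roots of the pinned-free frequency equation
        # tan(beta) = tanh(beta): beta_j ~ (j + 0.25) * pi.
        return (j + 0.25) * np.pi

    def phi_pinned_free(x, j, L):
        # Pinned-free mode shape: a sine term plus a hyperbolic correction
        # enforcing zero moment and shear at the free end.
        b = beta(j)
        return np.sin(b * x / L) + (np.sin(b) / np.sinh(b)) * np.sinh(b * x / L)

    # Sample the first two mode shapes along a 0.3 m link.
    xs = np.linspace(0.0, 0.3, 7)
    for j in (1, 2):
        print(j, np.round(phi_pinned_free(xs, j, 0.3), 3))

The shape vanishes at the pinned end and is nonzero at the free end, consistent with the measured shapes in Figure 3.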
Modal Testing Experiment.
To validate the assumption of pinned-free mode shapes used in the dynamic model, experimental modal tests are carried out. As shown in Figure 2, the vibration properties of the flexible linkage, that is, mode shapes and natural frequencies, are identified by using an impact hammer (PCB 086B02), an accelerometer (PCB 333B32), and a dynamic response analyzer (Agilent 35670A). Based on the dynamic response analyzer, the first two mode shapes are validated, as illustrated in Figure 3. The first natural frequency is 92.5 Hz with a damping ratio of 0.041, and the second frequency is 241.3 Hz with a damping ratio of 0.013. Figure 3 shows that the estimated mode shapes match well with the pinned-free mode shapes, but some difference remains at the end of the linkage. The reason is that the elastic motions of the three flexible links are coupled together through the moving platform. When one flexible link suffers an impact and starts to vibrate, the other two flexible links are also forced to vibrate due to the coupled dynamic characteristic and the closed-loop nature of the parallel structure. This also explains why the moving platform vibrates during operation. The detailed analysis of the modal test experiment is given in [17,23].
Dynamic Equations of the Flexible Links

Since we focus on the vibration control of the flexible links in this study, only the dynamic equations of the elastic motion of the flexible links are formulated. For the rigid body motions, the Kineto-Elasto Dynamics (KED) assumptions are employed to provide a prescribed rigid motion. The total kinetic energy of the PPM is presented in (3), where the first and second terms represent the kinetic energies of the flexible links and the sliders, respectively, and the last two terms are the kinetic energy of the moving platform. Equation (4) defines the position vector of a point on the ith flexible link. The remaining parameters are the mass of the sliders, the mass of the moving platform, the mass moment of inertia of the platform, the mass per unit length of the links, and the coordinates of a material point in the static frame. Since the manipulator moves in a horizontal plane, the potential energy change due to gravity is ignored, and hence only the potential energy caused by the elastic motion of the flexible links is considered. The total potential energy of the PPM is presented in (5), in terms of the Young modulus of elasticity and the second moment of area of each flexible link.
To implement efficient modal control for vibration suppression of the PPM, multiple PZT transducers are mounted on the flexible links as vibration sensors and actuators; the control strategy is shown in Figure 4. According to [24,25], the generalized modal control forces applied by the PZT actuators corresponding to the modal coordinates are given by an expression involving the constant of the PZT actuator, the left and right end positions of each PZT actuator, and the control voltage imposed on the kth PZT actuator of the ith flexible link.
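For a surface-bonded patch on an Euler-Bernoulli beam, this generalized modal force reduces to the difference of mode-shape slopes at the two patch ends, scaled by the actuator constant and the applied voltage. A minimal sketch under that standard assumption (K_a and dphi are illustrative placeholders, not symbols from the paper):

    def pzt_modal_force(K_a, dphi, x1, x2, V):
        # Generalized force on mode j from one PZT patch:
        # f_j(t) = K_a * (phi_j'(x2) - phi_j'(x1)) * V(t).
        # The actuator couples into a mode only through the change of the
        # mode-shape slope across the patch, so patch placement matters.
        return K_a * (dphi(x2) - dphi(x1)) * V

Patch placement thus determines how strongly each mode can be actuated, which bears on where the three pairs of transducers are mounted along each link.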
Then, substituting (3)-(6) into (7) and writing the results in matrix form for i = 1, 2, 3 yields equation (8), where the forcing term is the excitation modal force arising from the rigid body motion and the coupling effect between the elastic and rigid motion, and the coefficient matrices are the positive-definite mass matrix and stiffness matrix of the three flexible links, respectively. The detailed expressions of these matrices are given in the appendix.
Efficient Modal Control
3.1. Independent Modal Space Controller. Since the EMC is based on the IMSC, the IMSC is analyzed first. Independent modal space control has the ability to control each mode independently instead of controlling the continuous structure as a whole, with no coupling among the targeted modes. According to the dynamic equation (8), the independent modal equation for the jth mode of the ith flexible link is given in (9), in terms of the modal frequency of that mode and a positive constant. With IMSC, the modal control force (10) is designed to depend only on the modal displacement and modal velocity. Substituting (10) into (9) yields the closed-loop equation (11) for the jth mode of the ith flexible link. Equation (11) clearly indicates that active damping forces and positive stiffness forces are introduced into the modal equation through the vibration actuator, thus increasing the damping and stiffness of the flexible links. The initial values of the displacement and velocity feedback gains can be determined using either pole assignment or an optimal control method. Following the optimal control approach [26], the cost function (12) is selected as a weighted combination of the potential energy ω²q², the kinetic energy q̇², and the required control input u(t)², where the weighting factor represents a compromise between the required control input and the vibration control efficiency.
Based on [27], the solution of (12) can be expressed in closed form.

3.2. Efficient Modal Control. Since the control gains for higher vibration modes are much larger than those for the lower modes in the IMSC method, the applied control voltages will reach high values if the feedback gains are used directly without modification. The goal of the proposed controller is to attenuate multimode vibration simultaneously with a relatively small control force. In the EMC method [21], the feedback gains for each mode are modified according to its own modal displacement or energy, so the higher modes, which have lower vibration amplitudes, can be suppressed with low damping applied. According to EMC, only the feedback gains for the first mode are obtained through the optimal control method; the others are scaled by the energy weighting method of (14), in which the modal vibration energy of the jth mode of the ith link is E_{ij} = ω²_{ij} q²_{ij} + q̇²_{ij} and n is the number of modes to be controlled.
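The energy-weighting idea can be sketched as follows (our reading of the method; the exact scaling law of (14) is not reproduced here, so the ratio E_j/E_1 should be taken as an assumption):

    import numpy as np

    def emc_gains(g1, q, qdot, omega):
        # g1: (displacement, velocity) feedback gains optimized for mode 1.
        # q, qdot, omega: modal displacements, velocities, and frequencies.
        # Each mode reuses the first-mode gains scaled by the ratio of its
        # modal energy E_j = omega_j^2 q_j^2 + qdot_j^2 to that of mode 1,
        # so weakly excited higher modes receive correspondingly low gains.
        E = omega**2 * q**2 + qdot**2
        ratios = E / E[0]
        return [(g1[0] * r, g1[1] * r) for r in ratios]

    # The modal control force for mode j would then be
    # u_j = -g_p_j * q_j - g_v_j * qdot_j.

Because the ratios follow from the measured modal response, strongly excited modes are prioritized without the per-interval energy comparison that MIMSC requires.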
Modal Filter and Synthesizer.
Real-time monitoring of modal coordinates plays a significant role in the design of a modal feedback controller. As in any modal feedback controller, the modal displacements and velocities must be extracted from the vibration sensors in the proposed method. The most common methods used to measure the modal coordinates include state observers, temporal filters, and modal filters. Compared with state observers and temporal filters, modal filters extract modal coordinates from vibration sensors independently of the control work, and hence they can be directly applied to any modal feedback controller. Furthermore, modal filters also have the advantage of preventing observation spillover from the residual modes. Therefore, modal filters with discrete PZT sensors are adopted to provide modal coordinate separation.
According to [17,25], the resultant voltage generated in a PZT sensor is expressed with respect to the charge in terms of the Young modulus of the PZT sensor, the piezoelectric constant d31 of the PZT material, the width of the PZT sensor, the equivalent piezoelectric capacitance, the distance between the neutral axis of the beam/sensor and the midplane of the combined beam and sensor, and the left and right end positions of the kth PZT sensor.
Therefore, the modal filter expression for the ith smart link with PZT sensors is given as a linear transformation (16), which maps the output voltage vector of the PZT sensors attached to the ith smart link into the modal coordinates through a transformation matrix built from the PZT sensor constant and the matrix of mode shape functions of the ith smart link given in (18). During the efficient modal control process, the output voltages of the PZT sensors are first transferred from physical coordinates to modal coordinates through the modal filter; the modal coordinates are then provided to the EMC controller to calculate the active modal forces. Finally, the computed modal forces are converted into the control voltages applied to the PZT actuators in physical space by the modal synthesizer. The overall control process is illustrated in Figure 4. The modal synthesizer for the ith link maps the modal control force vector calculated by the EMC method into the control voltage vector applied to the PZT actuators, through a transformation matrix that depends on the constant of the PZT actuator.
A problem that must be mentioned concerns the transformation matrices of the modal filter and synthesizer. Taking the modal filter matrix as an example: if the number of vibration sensors used to extract the modal coordinates equals the number of vibration modes, so that the matrix in (18) is square, the extracted modal coordinates equal the ideal modal coordinates. However, the vibration modes of a continuous structure are infinite while the number of sensors is limited. Hence, the modal coordinates extracted by the modal filter are only an approximation of the ideal modal coordinates, and the accuracy of the approximation is related to the number of vibration sensors. In practice, increasing the number of vibration sensors adds computational burden and makes the system more complicated. The same problem also exists for the matrix of the modal synthesizer. Therefore, a compromise between real-time computing capability and the number of vibration transducers must be made when designing the overall system, as the sketch below makes concrete.
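A least-squares modal filter makes this compromise concrete (a generic sketch; the matrix entries here are random stand-ins for the shape-function terms of (17)-(18)):

    import numpy as np

    # Rows: PZT sensors; columns: controlled modes. Real entries would come
    # from the mode-shape expressions evaluated over each sensor patch.
    rng = np.random.default_rng(0)
    S = rng.normal(size=(3, 3))          # 3 sensors, 3 controlled modes

    Phi = np.linalg.pinv(S)              # modal filter; equals inv(S) when square
    v = S @ np.array([1.0, 0.2, 0.05])   # voltages from true modal coordinates
    q_hat = Phi @ v                      # extracted modal coordinates
    print(np.round(q_hat, 3))            # -> [1.   0.2  0.05]

With as many independent sensors as participating modes the extraction is exact, as above; with fewer sensors the pseudo-inverse yields only a least-squares estimate, which is precisely the residual-mode leakage observed in the experiments.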
Simulation Results
Numerical simulations in MATLAB are performed first to verify the proposed control method. During the simulation, a circular trajectory of the moving platform is selected as the rigid motion input, as illustrated in Figure 1; the circular motion is prescribed analytically, and the distances between the reference points of the trajectory are 600 mm and 120 mm, respectively. The effective stroke of each LUSM is 250 mm. Three pairs of PZT transducers are mounted at locations 50 mm, 100 mm, and 150 mm from the base joint of the flexible link as vibration sensors and actuators. The other parameters of the system are detailed in Table 1. The first three modes of the flexible links are selected to be controlled in the simulation. Using the energy weighting method, the modified ratios for the second and third modes of the three flexible links are derived from (14) and given in (22). Since the modified ratios are based on the uncontrolled system response, it can be observed in (22) that the feedback ratios for the three flexible links are not identical. The vibration responses of the three flexible links are shown in Figure 5, which reveals that the oscillations of the three flexible links are suppressed rapidly with the proposed EMC strategy. Figure 6 shows that the first three modes of vibration of the first flexible link are all attenuated effectively, which further validates that the EMC method can suppress multimode vibration simultaneously.
Experimental Results
The trajectory of the moving platform in the experiment is the same as that used in the numerical simulation. To guarantee the accuracy of the desired motion, kinematic calibration of the flexible PPM is carried out based on visual feedback and the particle swarm optimization (PSO) algorithm. A motion control card (DMC-1842, Galil) is used to control the three LUSMs through three LUSM drivers (made by NUAA). The feedback signal is provided by a linear grating sensor (LIA20, NUMERIK JENA). A DSP control board (Seed DEC2812) is adopted to realize the active vibration control. PZT power amplifiers (XE-501, XMT) are adopted to amplify the control voltages of the PZT actuators. In the experiment, only the first flexible link is targeted for control due to hardware limitations. Three pairs of PZT sensors and actuators (PZT5, CSSC) are mounted on the first link, and the experimental results verify that the proposed EMC method has the capability of suppressing multimode vibration simultaneously. From Figures 11-12, we also find that there still exist other frequencies besides the dominant natural frequencies. In fact, as mentioned in Section 3.3, if the vibration sensors were as numerous as the vibration modes, the modal coordinates could be extracted completely, and the other vibration frequencies would not be observed in Figures 11-12. However, the vibration modes are infinite and the hardware is usually limited in practice (in our study only three PZT sensors are adopted). Hence the number of vibration modes that can be sensed is limited, and residual modes may participate in the extracted vibration modes; for example, the first natural frequency of 92 Hz is observed in Figure 12 and the second natural frequency of 244 Hz is observed in Figure 11. Besides, many other forced vibration components are also shown in Figures 11-12, such as 33 Hz, 64 Hz, and 144 Hz. These forced vibration frequencies stem mainly from the movement of the sliders and flexible links, the dynamics of the LUSM, and the coupling effect between the rigid body motion and the elastic motion. Since these forced vibration frequencies are usually far from the natural frequencies, the active damping force has nearly no effect on them; hence the amplitudes of these forced vibrations are almost unchanged during the control experiment. For further suppression of these forced vibrations, joint motion control methods, such as the singular perturbation approach or nonlinear control methods, may be adopted to optimize the driving forces of the three LUSMs.
Conclusions
This paper addresses the dynamic modeling of a PPM with flexible linkages driven by LUSMs. The Lagrange equations
Figure 1: Prototype and schematic model of the PPM.
Figure 2: Photograph of the modal test experiment setup.
Figure 3: First two mode shapes of the flexible link identified.
Figure 4: Efficient modal control strategy, in which the feedback gains are weighed according to the modal displacement or energy in each mode, with modal filter and modal synthesizer.
Figure 5: The vibration responses at the three quarters of the three flexible links. Due to the different driving forces applied by the LUSMs, the vibration responses of the three flexible links are different.
Figure 6: The first three mode vibrations at the three quarters of the first flexible link.
Figure 9: Vibration response at the midpoint of the first link.
Table 1: Parameters of the PPM and PZT transducers.
Reasoning with a Domain Model
A domain model is a knowledge base containing both domain-specific and world knowledge. You may take the domain model to be both a universe of interest and a universe of problems. As a universe of interest, the model contains all the information relevant and necessary for the intended use of the model as a store of information, a knowledge base. As a universe of problems, the model represents a problem space and the relevant and necessary inferential tools needed by the model for its intended use as a problem-solving mechanism. Problem solving, in this case, means finding answers to queries about domain-specific knowledge. In this paper we shall discuss some fundamental problems related to the construction and use of a domain model called FRAME_WORLD.
Introduction
The domain model presented in this paper is thought of as a module in a knowledge system using a natural language interface to retrieve information from a database. As a module of the overall system, the domain model serves the purpose of evaluating user queries with respect to domain-specific knowledge and that of generating appropriate arguments for subsequent SQL commands. The domain-specific knowledge of the model comprises facts about domain-specific entities, their properties, and the possible relations between these entities, whereas world knowledge comprises information not represented in the domain but necessary for the model as a problem solver, e.g., heuristics, general rules about causal or spatial relations, and the like. The relevant rules and facts are used by an inference machine, not only to state information already explicitly at hand, but also to support the system in making implicit domain-specific knowledge explicit. The knowledge representation schemes used in the domain model presented here are a semantic network, frames, so-called model predicates, and heuristics. In the following sections we shall present and discuss the implementation and intended use of FRAME_WORLD, first of all the problems of reasoning with inferential structures given by virtue of a specific representation scheme.
The domain model
The knowledge in the domain model and the structure of the model depend entirely on the purpose it serves. As already mentioned, the model is thought of as a kind of filter, a means of controlling and checking the knowledge represented in the queries posed to the system by the user, and of either rejecting the query as a senseless one or computing and generating one or more arguments to be used in an SQL command to retrieve the required information. The basis for building and constructing the domain model, therefore, is the set of possible and allowed queries to the system, like, for instance: Who is the colleague of X? How many people are employed in the sales department? How much is the salary of X? To answer questions of this sort you have to have access to both domain-specific and world knowledge. To know whether X and Y are colleagues, you have to have some rule telling you what it means to be colleagues and some means of checking whether X and Y in our domain actually do fit this definition. If this is not the case, we do not want the system to react by simply answering 'No', but with an output like: 'X is a customer, and Y is an employee'. To this purpose the domain model needs information about entities and relations in the domain and in the world outside the domain, as well as some kind of machinery that uses this knowledge for information retrieval and query answering. Entities and relations between entities inside and outside the domain are represented as a network of nodes and links. The nodes in the net are conceptual entities, the knowledge primitives of the model. The links in the net either relate concepts as conceptual entities to each other, or relate concepts as arguments of a semantic predicate to each other. The former kind of links are called conceptual links; the latter, the relational links, are called role relations. The description of a node comprises both the set of incoming and outgoing links, as structural information about the concept, and the set of conceptual features characterising the specific concept in question. This description is implemented as a frame. The role relations, too, are mapped into frames, such that for each concept and for each role relation in the net there will be a frame with the same name as a description of that particular knowledge unit.
The network
Using a network for knowledge representation in a domain model seems obvious. Knowledge pictured as a network makes it possible to represent a conceptual hierarchy as a clean structure of nodes and links representing all available information, immediately ready for use. All you need is the right algorithm for extracting the information or transferring information from more to less general concept nodes. It seems to reduce knowledge retrieval to simply finding the right node or nodes and the right path connecting two or more nodes with each other. It is, however, not as simple as that. Reasoning with a network presupposes a well-defined syntax and semantics of the net, as discussed and emphasized in several papers (e.g., Woods 1987 & 1990, Thomason/Touretzky 1991). The idea of using networks as a representation scheme is that of making information attached to some node X accessible to other nodes connected to X. This property of a network is the fundamental principle of inheritance and path-based reasoning, and probably the most important reason for the popularity of this way of organizing knowledge and using a network as an inferential tool. Inheritance means that information kept in a node X is inherited by a node Y if Y is connected to X. Path-based reasoning means inferring conclusions by way of finding a correct path through the net, in most cases simply by computing the transitive closure of a set of links in the net (Thomason/Touretzky 1991:239).

Let us illustrate these principles using a fragment of the domain net. In this fragment (fig. 1) we have two different kinds of conceptual links labelled ako and apo, a kind of and a part of, and a relational link labelled works_in, stating that an employee works in a department. Both the ako and the apo relations are transitive, and without any further restrictions one might infer that a subordinate is a kind of legal person. This conclusion is derived by simply computing the transitive closure of the links involved, but it is not a valid one, because it is based on two different and incompatible concepts: the concept firm as a subconcept of the superconcept legal person, a generic concept defined by a set of conceptual features, and the concept firm defined by the set of parts constituting it as a whole, one of which is a department. To avoid conclusions like the one just presented, we have to define both the syntax and the semantics of the net.

The net in FRAME_WORLD consists of the following components: (1) a set of nodes F = {Cf1,...,Cfn}, generic concepts defined by a set of conceptual features; (2) a set of nodes P = {Cp1,...,Cpn}, part-whole concepts defined by a set of parts; (3) a link type Lako, labelled 'ako'; (4) a link type Lapo, labelled 'apo'; and (5) a link type Lrole, labelled with the name of the role. A well-formed link in the net is a triple of one of the following types:

(6) <Lako,Cfi,Cfj>
(7) <Lapo,Cpk,Cpl>
(8) <Lapo,Cfm,Cpn>
A well-formed path in the net is a structure of well-formed links. The interpretation of a well-formed link goes as follows:

(9) Lako(X) = Y: X < Y, X is a subconcept of the superconcept Y,
(10) Lapo(X) = Y: X ⊂ Y, X is a part of Y.

Using these definitions we can reject the conclusion that a subordinate is a kind of legal person, because the final link of the path, *<Lako,Cp,Cf>, the firm being a kind of legal person, is not a well-formed link. It is easy to see now how the definition of a well-formed link and of a well-formed path at the same time defines the inferential structure of the net as a sequence of well-formed links. The syntax of a well-formed link also defines the syntax of a well-formed query, and the interpretation of a well-formed query is the same as that of a well-formed link. The link type Lrole is not part of the inferential structure of the net. This link type is part of the definition of concepts and a means of associating concepts with thematic roles, as in

(11) Ldeal_with(X) = Y: deal_with(X:actor,Y:locus)
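Conditions (6)-(10) are mechanical enough to check programmatically. A minimal sketch in Python (the toy concept table is ours, mirroring the fragment in fig. 1):

    # Concept kinds: 'F' = generic concept defined by features,
    #                'P' = part-whole concept defined by its parts.
    KIND = {"subordinate": "F", "employee": "F", "person": "F",
            "legal_person": "F", "department": "P", "firm": "P"}

    def well_formed(link, x, y):
        # (6) <Lako,Cf,Cf>; (7) <Lapo,Cp,Cp>; (8) <Lapo,Cf,Cp>.
        if link == "ako":
            return KIND[x] == "F" and KIND[y] == "F"
        if link == "apo":
            return KIND[x] in ("F", "P") and KIND[y] == "P"
        return False

    def well_formed_path(links):
        # A well-formed path is a sequence of well-formed links.
        return all(well_formed(l, x, y) for (l, x, y) in links)

    # The final step of the rejected inference -- firm ako legal_person --
    # fails condition (6), because firm is a part-whole concept:
    print(well_formed("ako", "firm", "legal_person"))   # False

Blocking that single ill-formed link is what stops the transitive closure from concluding that a subordinate is a kind of legal person.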
Frames
The network, as demonstrated in the previous section, is a knowledge base mapping a conceptual hierarchy into nodes representing conceptual entities and links representing conceptual relations. These nodes and links are the knowledge primitives in the domain model. In addition, the network also keeps information about role relations associating concepts, as arguments of a semantic predicate, with thematic roles. The descriptions of the nodes and the role relations as objects of information are placed in the frames of the model. A structural description of a node comprises all incoming and outgoing labelled links in the traditional slot-filler structure, using the label of a link as slot and the value of a link as filler. The description of a generic concept further comprises the conceptual features defining the concept, in a slot labelled attributes.
For each concept and for each role relation there will be a frame describing the entity in question. Role relations as knowledge objects are treated in the same way as conceptual entities, i.e., as structured objects of a taxonomic hierarchy, following the syntax adopted by the project. The role relation deal_with, for example, is described as a kind of process with a role structure to be computed by the procedure calculate. The possible values of X and Y are computed using the so-called model predicates.
Model predicates
Model predicates were introduced by Henriksen/Haagensen (1991) as a means of checking the validity of types of arguments. Thus the interpretation of the model predicate deal_with(FIRM,CUSTOMER) defines the valid arguments of the semantic predicate deal_with to be of the types FIRM and CUSTOMER. In FRAME_WORLD we have extended the function of model predicates to also associate types of arguments with thematic roles. In our domain model we have the following three instances of handle_med (Eng. deal_with):

handle_med(actor:firma,locus:kunde)
handle_med(actor:firma,theme:vare)
handle_med(actor:kunde,locus:firma)

Instead of having a frame for each reading of the predicate, the procedure calculate will compute the relevant role structure. The actual use and function of the model predicates will be demonstrated in section 4.4.
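In Python, the same check can be phrased as a lookup over the stored readings (a stand-in for the Prolog procedure calculate, using the three handle_med instances above):

    # The three readings of handle_med (deal_with) in the domain model.
    MODEL_PREDICATES = {
        "handle_med": [
            {"actor": "firma", "locus": "kunde"},
            {"actor": "firma", "theme": "vare"},
            {"actor": "kunde", "locus": "firma"},
        ],
    }

    def calculate(pred, **typed_args):
        # Return every role structure of `pred` compatible with the
        # role:type constraints supplied by the caller.
        return [r for r in MODEL_PREDICATES.get(pred, [])
                if all(r.get(role) == ty for role, ty in typed_args.items())]

    print(calculate("handle_med", actor="firma"))
    # -> [{'actor': 'firma', 'locus': 'kunde'},
    #     {'actor': 'firma', 'theme': 'vare'}]

An empty result signals an invalid combination of argument types, which is exactly the filtering role the model predicates play in query evaluation.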
Rules
The core of the domain model as a reasoning system comprises the network and the frames. In addition, the model may use both the model predicates and a set of domain-independent rules as part of an inference procedure. The rules, representing general world knowledge, play a very important role in making implicit domain-specific knowledge explicit defining where to look and what to look for in the knowledge base. For the present only three rules have been implemented defining the concepts superior and colleague and the role relation an_employee_of. These rules, however, illustrate the need for and use of world knowledge implemented as rules.
The inference machinery
The inference machinery of the model is a set of Prolog procedures. The strategy implemented is based on the principles of inheritance and path-based reasoning, using built-in facilities of Prolog. The basic operation of the machinery is that of applying the interpretation of a link, as a function, to a node, yielding another node as value. This is not the place, however, to go into details of the inference machinery. Let us, instead, take a look at how the domain model actually may be used and how it functions as a knowledge filter and generator in a question-answering system.
4. Reasoning with the domain model
In this section we shall focus on the intended use of the domain model. For the present, we can only show how to use the network, the frames, the model predicates, and the rules as parts of a reasoning system. This may, however, give you an idea of the intended performance of the model as a whole.
4.1 The frames
The frame structure is utilized in two ways: (a) to instantiate variables used by the inference machinery with values found in a frame, or (b) to find one or more frames matching a description:

?- frame(leder,Slots).
4.2 The network
As you have probably already noticed, the structure of a frame as a description of a node is an encoded fragment of the network. The inference machinery uses this property of a frame in path-based reasoning. Actually, there is no network explicitly at hand in the domain model, but using the structure of the frames the inference machinery may generate one or more sub-nets by computing the transitive closure of a link in the net:

?- get_frame(Name,ako-Ako).
Name = person            Ako = entity
Name = physical person   Ako = person
Name = employee          Ako = physical person

?- get_frame(Name,apo-Apo).
Name = department   Apo = firm
Name = employee     Apo = department
Name = manager      Apo = department

Generating hierarchies of this kind may at a later time be used as an instrument to check whether some inferred type value, say, secretary, is subsumed by some other type value, employee, and is, consequently, a valid argument of the semantic predicate work, as in: work(secretary, department).
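The subsumption test is a reachability question over the ako links; a minimal Python rendering (the secretary entry is our assumption, added to match the example):

    AKO = {"secretary": "employee", "employee": "physical_person",
           "physical_person": "person", "person": "entity"}

    def subsumed_by(concept, ancestor):
        # Walk the ako chain upward: the transitive closure of one link type.
        while concept in AKO:
            concept = AKO[concept]
            if concept == ancestor:
                return True
        return False

    # secretary is subsumed by employee, so it is a valid first argument
    # of work(secretary, department):
    print(subsumed_by("secretary", "employee"))   # True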
4.3 Inheritance
Inheritance normally means inheriting properties. This is also true of the domain model, although inheritance in this case rather means structure copying (Winston 1974:263). The concept physical person in the net is defined by the features Navn, Adresse, and CPR. These features are represented in the corresponding frame in a slot labelled attributes and may be inherited by all subsumed concepts, like:

?- get_frame(sekretaer,attr-Attr).
Attr = [navn:NAVN,adresse:ADRESSE,cpr:CPR]

This is also true of role relations as features defining a generic concept:

?- get_frame(sekretaer,[role-Role,roles-Roles]).
Role = arbejde
Roles = arbejde(actor:ansat,locus:firma)

In this case the role relation and the role structure are inherited from the superconcept employee.
4.4 Model predicates
The model predicates are potential inferential tools, tools to support the inference machinery as a means of controlling types and values instantiated by the inference machinery. These predicates may be used in three different ways: (1) the procedure calculate, called in a frame, computes all possible role structures of a specific predicate:

?- get_frame(handle_med,roles-RoleStr).
handle_med(actor:firma,locus:kunde)
handle_med(actor:firma,theme:vare)
handle_med(actor:kunde,locus:firma)

The rules in the domain model are implemented as Prolog rules. The definition of two colleagues, X and Y, presupposes that they are both in the same department and that they are both at the same level of employment, that is, either subordinates or managers of some kind. The latter condition means that the persons in question, as nodes in the net, have to be either sister nodes or subsumed by the same superconcept; the former condition is implemented using a shared variable, AFD, in the call to the knowledge base. A simplified version of the actual rule, then, is:

kollega(X,Y):-
    get_frame(STX,ako-Ako),
    get_frame(STY,ako-Ako),
    table(X,STX,AFD),
    table(Y,STY,AFD),
    X \== Y.

Using rules like this one is one way of incorporating domain-independent knowledge in the domain model. The user of the system is not supposed to have any knowledge about the data structures in the knowledge base. If you do not want to tune the knowledge base to some specific application or have it usable for only a limited number of users, you will have to supply the domain model with several rules like the one just presented, changing general knowledge about the domain into domain-specific knowledge and making implicit knowledge explicit.
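Read procedurally, the rule requires a shared superconcept, a shared department, and distinct individuals. A Python paraphrase (toy data; FRAMES and TABLE stand in for the Prolog calls get_frame and table, and the Danish sample entries are ours):

    FRAMES = {"sekretaer": {"ako": "ansat"}, "bogholder": {"ako": "ansat"},
              "leder": {"ako": "person"}}
    TABLE = [("anna", "sekretaer", "salg"), ("bo", "bogholder", "salg"),
             ("carl", "leder", "salg")]

    def kollega(x, y):
        # Colleagues: same level of employment (shared ako superconcept),
        # same department (shared AFD value), and not the same person.
        for px, tx, dx in TABLE:
            for py, ty, dy in TABLE:
                if (px, py) == (x, y) and px != py and dx == dy \
                        and FRAMES[tx]["ako"] == FRAMES[ty]["ako"]:
                    return True
        return False

    print(kollega("anna", "bo"))    # True: both subsumed by ansat, same dept
    print(kollega("anna", "carl"))  # False: different superconcepts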
Summary
In this paper we have presented some principles and methods used to map domain-specific and world knowledge into a domain model called FRAME_WORLD. We also showed that, having access to knowledge about conceptual entities and relationships in the domain in question, this model may be used as part of a reasoning mechanism to both check and generate types and values as valid arguments of semantic predicates. The aim of using such a domain model is to facilitate the dialogue between the end-user and a knowledge database. FRAME_WORLD is still just a toy model, but it is nevertheless a useful tool for investigating and testing the principles and methods underlying the construction and use of a domain model.
Topical Imiquimod for the Treatment of High-Grade Squamous Intraepithelial Lesions of the Cervix
Weekly topical treatment with imiquimod is effective in promoting regression of cervical high-grade squamous intraepithelial lesions.
RESULTS: Ninety women were enrolled: 49 in the experimental group and 41 in the control group. In the PP population, histologic regression was observed in 23 of 38 participants (61%) in the experimental group compared with 9 of 40 (23%) in the control group (P=.001). Surgical margins were negative for HSIL in 36 of 38 participants (95%) in the experimental group and 28 of 40 (70%) in the control group (P=.004). In the ITT population, rates of histologic regression also were significantly higher in the experimental group. Rates of adverse events in the experimental group were 74% (28/38) in the PP population and 78% (35/45) in the ITT population. Adverse events were mild, with abdominal pain being the most common. Three patients in the experimental group had grade 2 adverse events, including vaginal ulcer, vaginal pruritus with local edema, and moderate pelvic pain.

Histologically confirmed high-grade squamous intraepithelial lesion (HSIL) of the cervix is induced by human papillomavirus (HPV) 1 and is a precursor of cervical cancer. Surgical excision by either cold knife conization, laser conization, or loop electrosurgical excision procedure (LEEP) is the gold standard treatment. 2 However, surgical excision is associated with obstetric complications such as preterm delivery, premature rupture of amniotic membranes, chorioamnionitis, low birth weight, admission of the newborn to the intensive care unit, and an increase in perinatal morbidity. 3-6 Imiquimod is an imidazoquinoline amine that binds to Toll-like receptors 7 and 8 of macrophages, producing cytokines and interferons (IFNs), specifically IFN-alpha and IFN-beta, that limit viral replication and stimulate natural killer cells. 7 Additionally, these cytokines and IFNs stimulate dendritic cells, thereby generating proliferation of CD4 T lymphocytes, IFN-gamma, cytokines, and activation of CD8 T lymphocytes, all of which are toxic to HPV. 8,9 The aim of this study was to evaluate the histologic response of cervical HSIL after topical application of 5% imiquimod cream, with histologic response defined as histologic regression to cervical intraepithelial neoplasia (CIN) 1 or less in the LEEP specimen. Secondary objectives included the effect of imiquimod treatment on LEEP margin status, adverse events, and tolerance of treatment.
METHODS
Patients aged 25-50 years with a confirmed diagnosis of CIN 2-3 were prospectively enrolled at Barretos Cancer Hospital in Barretos, Brazil, from August 2017 through April 2019. All patients had participated in our screening program for cervical cancer prevention and had abnormal cervical cytology. Colposcopy was performed by a gynecologist using acetic acid 5%, followed by Lugol's solution 1%, at magnification increments from 6× to 40×, with directed cervical biopsy. Findings were classified according to the 2011 Colposcopic Terminology of the International Federation for Cervical Pathology and Colposcopy. 10 In cases of histologically confirmed CIN 2 or CIN 3, women were invited to participate in this study. Exclusion criteria included: 1) suspected or confirmed invasive squamous carcinoma or in situ or invasive adenocarcinoma by colposcopy, biopsy, or cytology; 2) current pregnancy or lactation; 3) immunosuppression due to HIV or organ transplantation; and 4) previous treatment for HSIL. Eligible patients who agreed to participate in the study provided informed consent. The CONSORT (Consolidated Standards of Reporting Trials) flow diagram 11 is shown in Figure 1. (Figure 1 notes: *Patients redrawn from intention-to-treat analysis. †Patients redrawn from per protocol analysis. ‡Excluded from per protocol and intention-to-treat analysis because they became pregnant during treatment and had not yet undergone loop electrosurgical excision procedure (LEEP). HSIL, high-grade squamous intraepithelial lesion.)
The study was a randomized phase II trial, without blinding and with parallel groups (ClinicalTrials.gov Identifier: NCT03233412). The study was approved by the Barretos Cancer Hospital Research Ethics Committee (No. 2,133,654). Patients were randomly assigned to two parallel groups: 1) imiquimod followed by LEEP and 2) LEEP without preceding treatment. After randomization, all pathology samples were reanalyzed by two pathologists with specialized training in gynecologic cancers to confirm HSIL.
The randomization sequence was generated in blocks of eight using R software 3.4.3 (function sample). This list was loaded onto the REDCap 12 platform, where simple random allocation was performed.
Patients in both groups underwent molecular testing for high-risk HPV (COBAS 4800 test). The Brazilian cervical cancer screening program is based only on cervical cytology. These patients underwent the COBAS HPV test only on the date of entry into the study, that is, 3 months after the collection of cytology in the screening program and 1 month after the initial colposcopy and confirmatory HSIL biopsy.
Patients in the experimental group underwent application of imiquimod directly to the cervix (Video 1) once a week for 12 weeks followed by LEEP. The women underwent a weekly gynecologic examination during which a speculum was used to visualize the cervix, and 250 mg of 5% imiquimod cream was applied using a disposable brush (Viba-Brush) (Fig. 2). The entire transformation zone of the cervix was covered during the application (Fig. 3). Sexual abstinence was advised for at least 72 hours after application. Before the seventh week of treatment, the women underwent a repeat colposcopic examination to assess clinical response by a doctor different from the one responsible for the application of imiquimod.
In both the experimental and control groups, patients underwent LEEP with local anesthesia (with mepivacaine) performed by the same surgeon (B.O.F.), per local guidelines. Our patient follow-up schedule was identical for the control group and the experimental group: cytology, high-risk HPV testing, and colposcopy every 6 months for at least 2 years.
All pathology slides from biopsy and LEEP were evaluated by two pathologists with specialized training in gynecologic cancers. Consensus was achieved for discordant cases. Histologic diagnosis categories included cervicitis-benign, CIN 1, CIN 2, CIN 3, and invasive cancer. If HSIL could not be precisely graded as CIN 2 or 3, it was defined as high-grade CIN. In cases of uncertain diagnosis, complementary immunohistochemical examination was performed. In cases positive for p16, the final diagnosis was high-grade CIN. When p16 staining was inconclusive, Ki-67 staining was performed. The size of the LEEP specimens and the status of the surgical margins (endocervical, ectocervical, or both) were recorded. Response was categorized as: regression, defined as CIN 1 or cervicitis-benign; persistence, defined as presence of HSIL; or progression, defined as presence of invasive cervical carcinoma.
Adverse events were documented weekly according to patient reports and findings on the gynecologic examination. They were graded according to the Common Terminology Criteria for Adverse Events guidelines v.4.03 13 from grade 0 (no symptoms) to grade 5 (death). In patients with grade 1 adverse events, treatment was continued as long as the patient was willing; treatment for symptoms was prescribed if necessary. In patients with grade 2 adverse events, treatment was suspended for 7 days and then reassessment was performed to determine if treatment could be restarted. In patients with grade 3 or 4 adverse events, treatment would be suspended and a LEEP scheduled as soon as possible.
Based on the HSIL regression rates previously reported by Grimm et al, 14 assuming a response difference of 34% (39% and 73% in the placebo and imiquimod groups, respectively), with a significance level of 5% and a power of 85%, a sample size of 41 was estimated for each group (G-Power software 3.1.9.6). Assuming a rate of loss to follow-up of 20% for the experimental group, and in an effort to ensure equal numbers of patients in the two groups at the end of the study, eight additional patients were included in the experimental group, for a total sample size of 90 patients.
Analyses were undertaken in two populations: the per protocol (PP) population, defined as patients who completed the entire study protocol, excluding patients with protocol violations, and the intention-to-treat (ITT) population, defined as patients who fully or partially completed the study protocol. A mandatory interim analysis for imiquimod efficacy was done in the middle of study recruitment and showed that 60% (ITT population) to 80% (PP population) of patients treated with imiquimod had histologic regression of HSIL by analysis of the LEEP specimens.
Patient characteristics were summarized with mean and SD for quantitative variables and relative and absolute frequencies for qualitative variables. Normality of the data was verified using the Shapiro-Wilk and Kolmogorov-Smirnov tests. The average number of patients who would need to receive the intervention for the outcome to occur, the number needed to treat (NNT), was calculated as the inverse of the absolute risk reduction. Absolute risk reduction was defined as the percentage of patients with histologic regression in the control group subtracted from the percentage of patients with histologic regression in the experimental group.
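For concreteness, with the per-protocol regression rates reported below, the computation is a single subtraction and division (a worked check, not part of the original analysis):

    # NNT = 1 / ARR, with ARR the difference in regression proportions.
    control, experimental = 0.225, 0.605   # PP-population regression rates
    arr = experimental - control           # absolute risk reduction = 0.38
    nnt = 1.0 / arr                        # ~2.63, matching the reported NNT
    print(round(nnt, 2))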
RESULTS
The study included 41 participants in the control group and 49 in the experimental group. As a result of findings on reevaluation of the pathology samples after randomization, one patient was excluded from the control group because HSIL was not confirmed, and three patients were excluded from the experimental group, two because HSIL was not confirmed and one because invasive squamous cell carcinoma was diagnosed. Therefore, we included for treatment allocation 40 patients in the control group and 46 patients in the experimental group.
In the experimental group, after treatment allocation, one patient was excluded from per protocol and ITT analysis because she became pregnant during treatment and she had not yet undergone LEEP. This patient had her treatment interrupted in the fifth week of pregnancy when she had already undergone eight applications of imiquimod. She underwent regular prenatal care, and no teratogenic effects were observed during pregnancy. At the end of the study, the infant was 5 months old and the patient was being followed up at the colposcopy clinic according to Barretos Cancer Hospital's standard care protocol. Seven more women in the experimental group were excluded from per protocol analysis because of systemic side effects (four patients) and transportation problems making it difficult to come to the hospital (three patients). Therefore, in the control group, we included 40 patients in the per protocol and ITT analyses, and in the experimental group, we included 45 patients in the ITT analysis and 38 patients in the per protocol analysis (Fig. 1). All patients removed from the study analysis were treated at the Barretos Cancer Hospital in accordance with the standard institutional protocol.
Sociodemographic and clinical characteristics were balanced between the two groups in the ITT population (Table 1). Characteristics were also balanced between groups in the PP population. In 14 patients, HSIL could not be graded as CIN 2 or CIN 3, and these lesions were classified as high-grade CIN.
The rate of histologic regression was higher in the experimental group than in the control group in both populations analyzed (Table 2). In the PP population, histologic regression occurred in 22.5% of the LEEP specimens in the control group and in 60.5% of the specimens in the experimental group (P=.001), resulting in a NNT of 2.63 (95% CI 1.7-5.6). High-grade squamous intraepithelial lesions persisted in 75% of the specimens in the control group and in 39.5% of those in the experimental group (P=.002). In the ITT population, histologic regression occurred in 22.5% of the LEEP specimens in the control group and 53.3% of the specimens in the experimental group (P=.004), resulting in a NNT of 3.25 (95% CI 2.0-9.1). High-grade squamous intraepithelial lesions persisted in 75% of the specimens in the control group and in 44.5% of those in the experimental group (P=.008). The analysis of histologic regression only in CIN 3 is shown in Appendix 1, available online at http://links.lww.com/AOG/C298, and the comparison of the response in CIN 2 and CIN 3 is shown in Appendix 2, also available online at http://links.lww.com/AOG/C298. One patient in the control group had progression of the lesion in the LEEP specimen. The diagnosis was superficially invasive squamous cell carcinoma, stage IA1. She underwent laparoscopic hysterectomy and bilateral salpingectomy. The pathology report showed HSIL (CIN 3) without residual invasive neoplasia, with negative vaginal margins. In the ITT population, one patient in the experimental group had progression of the lesion. This patient underwent nine applications of imiquimod, after which she discontinued treatment. She subsequently underwent LEEP. The pathology report showed stage IA1 invasive squamous cell carcinoma. Treatment was complemented with laparoscopic hysterectomy and bilateral salpingectomy. The surgical pathology report did not show any residual invasive lesion and showed margins negative for precursor lesion.
Rates of histologic regression stratified by sociodemographic and clinical characteristics are shown in Table 3. Because, in both the PP and ITT populations, the histologic regression rate and the positivity of the high-risk HPV test varied significantly by treatment group (experimental or control: P<.20 for both), a multiple logistic regression model was designed (Table 4). As one patient had an invalid HPV test, 77 women were included in the PP population, and 84 were included in the ITT population. In the PP population, the OR was 31.7 (95% CI 3.9-257.5) (P<.001) for HSIL regression in patients in the experimental group compared with patients in the control group. In the ITT population, the OR was 24.0 (95% CI 3.0-191.1) (P=.003) for HSIL regression in patients in the experimental group compared with those in the control group.
The status of the surgical margins in the LEEP specimen is summarized in Table 5. In the control group, the surgical margins were negative for intraepithelial lesion in 28 of 40 patients (70.0%). In the experimental group, the surgical margins were negative in 36 of 38 patients (94.7%) in the PP population (P=.004) and 40 of 45 patients (88.9%) in the ITT population (P=.055). The depth of the surgical specimens was equivalent in the two groups.
The mean interval between the diagnosis of HSIL and the LEEP procedure was 16.0±6.1 weeks in the control group and 21.0±2.6 weeks in the experimental group (P<.001). We analyzed whether this delay could interfere with lesion regression, persistence, or progression. The mean interval between HSIL diagnosis and LEEP was 17.6±5.8 weeks in patients with histologic regression and 19.7±4.4 weeks in patients with persistent disease or progression (P=.09). The rate of histologic regression was higher in the experimental group than in the control group regardless of high-risk HPV type or histologic grade (Table 6).
The side effects are summarized in Table 7. Twenty-eight of 38 women (73.7%) in the PP population and 35 of 45 (77.8%) in the ITT population reported adverse events. Two patients (4%) in the ITT population had grade 2 symptoms. One of them reported intermittent vaginal pruritus with local edema on the day of the previous application of imiquimod, with spontaneous resolution within 24 hours. The other patient reported, when she came to the hospital for the fourth application of imiquimod, that she had experienced moderate pelvic pain that limited her daily activities for less than 24 hours after the two previous applications. (Table notes: LEEP, loop electrosurgical excision procedure; ITT, intention to treat; PP, per protocol. Data are n (%), n/N (%), or average±SD unless otherwise specified. P-value was calculated to compare the control and experimental groups. *Pearson's χ² test. †Mann-Whitney U test.) Among the six patients with grade 1 findings, three had increased vaginal discharge, two had mild vaginal bleeding on the speculum examination, and one had focal and superficial erosion of the cervix. The patient with a grade 2 abnormality had a vaginal ulcer in the vaginal introitus, already undergoing epithelialization, that was diagnosed before the fourth application of imiquimod. After improvement of her condition, she completed the 12 weeks of treatment without recurrence of the ulcerated lesion.
DISCUSSION
Weekly topical treatment with imiquimod for 12 weeks is effective in promoting regression of cervical HSIL. One clinical application of these findings is the potential to use imiquimod in larger lesions to achieve a higher rate of free surgical margins. We observed histologic regression (to CIN 1 or less) in more than half of patients, which suggests this might be an alternative treatment strategy to a cervical excision procedure. Imiquimod is approved by the U.S. Food and Drug Administration for use in the treatment of external genital and perianal warts, small superficial basal cell carcinomas, and clinically typical actinic keratoses. 15 Its off-label use in vulvar intraepithelial neoplasia and vaginal intraepithelial neoplasia is common and is supported by a solid base of evidence in the literature 16-26; however, few studies have focused on the effect of topical imiquimod treatment in patients with CIN. 14,27-29 Most prior studies that evaluated the efficacy of imiquimod in cervical intraepithelial lesions included patients with low-grade lesions (CIN 1) (Jung PS, Kim JH, Kim D. Application of topical imiquimod for treatment of cervical intraepithelial neoplasia in young women: a preliminary result of a pilot study [abstract]. Gynecol Oncol 2016;141:103-4). 28,29 It is difficult to compare results of those studies with our results because our study only included patients with HSIL. Topical imiquimod for exclusive treatment of HSIL was examined in two randomized clinical trials. 14,30 Koeneman et al 30 interrupted their study because of poor accrual after 12 patients had been recruited. Grimm et al 14 demonstrated histologic regression in 73% of patients in the group treated with imiquimod, resulting in an NNT of 2.9. However, histologic regression was evaluated only with colposcopy-directed biopsy, not removal of the entire transformation zone. 14 In our clinical trial, all patients underwent LEEP, which allowed a thorough assessment of histologic regression. In our study, the rate of positive surgical margins in the LEEP specimens was 5.3% in the experimental group, lower than rates reported in the literature after LEEP without imiquimod, which range from 27% to 46.5%. 31-34 There is evidence that a higher degree of lesion, the depth of the conization specimen, and higher parity are risk factors for margin involvement. 31-34 The low frequency of positive margins in our study after imiquimod and LEEP might mean that even if there is no histologic regression, the topical treatment has reduced the lesion length.
Adverse events were frequent among the patients in our current study, with abdominal pain being the most common. In a recent case series, three patients had immunomodulatory treatment discontinued because of severe adverse events such as hyponatremia, severe headache, and corneal erosion, which required hospitalization of two of the patients. 35 Temporary hair loss also was reported in two patients treated with imiquimod as a vaginal suppository. 36 Grimm et al 14 observed adverse events in 97% of patients treated with imiquimod. In our population, no adverse event was higher than grade 2. This lower intensity of adverse effects might be related to the direct application of imiquimod to the cervix, minimizing absorption outside the target organ. In addition, we believe that the once-a-week frequency of application of imiquimod might have reduced local adverse effects.
The limitations of our study include the distance between the patients' cities of residence and Barretos Cancer Hospital, which resulted in missed follow-up visits and imiquimod applications. The delay between CIN 2 and CIN 3 diagnosis and the LEEP procedure in the control group could be a limitation, but we analyzed whether this delay could interfere with lesion regression, persistence, or progression. As we showed, this period did not affect the lesion evolution. [Table 7 legend: PP, per protocol; ITT, intention to treat; AE, adverse event; CTCAE, Common Terminology Criteria for Adverse Events. Data are n (%) or n/N (%). *There are patients with more than one symptom, so the sum of the details of the symptoms is greater than 28 (PP) or 35 (ITT). †According to Common Terminology Criteria for Adverse Events version 4.0.]
|
2021-05-08T06:17:02.064Z
|
2021-05-04T00:00:00.000
|
{
"year": 2021,
"sha1": "afe6851b6ac25214efe2ea4955c98d4206177d7e",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/greenjournal/Fulltext/2021/06000/Topical_Imiquimod_for_the_Treatment_of_High_Grade.11.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "68e2c3b8868dd0c6a567893949a67e333acdfad2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
268642878
|
pes2o/s2orc
|
v3-fos-license
|
Specific Emitter Identification through Multi-Domain Mixed Kernel Canonical Correlation Analysis
Introduction
Specific emitter identification (SEI) by a radar involves identifying individual radars by analyzing the distinctive features or fingerprints embedded in their signals [1]. This intricate procedure comprises key stages such as data pre-processing, feature extraction, classification, and identification. The correlation of radar signals with specific radars and their associated platforms is pivotal for target intent analysis, decision support, and situational awareness [2]. Consequently, radar SEI has attracted considerable attention in the fields of electronic reconnaissance and electronic countermeasures [3].
The key component for individual radar emitter identification is the fingerprint feature of the radar transmitter, characterized by stability and uniqueness [4,5]. Fingerprint features persist in radar signals, resisting complete erasure due to their accidental modulation features arising from minute changes in radar technology [3,6]. Each radar must be assigned unique labels, considering that even radars of the same model may be affected by minor faults in electronics, operating time, and environmental conditions [7]. Extracting fingerprint features encoded in received radar signals poses a substantial challenge, particularly when these features are immersed in strong noise signals. To date, research on radar fingerprint feature extraction has predominantly focused on the time domain [8], frequency domain [9,10], time-frequency domain [11-13], ambiguity-function domain [14], and neural network domains [15-17] of the signals. While experiments have demonstrated satisfactory recognition results using the aforementioned methods, the subtle distinctions between fingerprint characteristics may be overlooked by the discussed extraction strategies.
Relying solely on a single feature in radar SEI leads to a decline in recognition accuracy when confronted with diverse radars and varying signal backgrounds. In recent years, multimodal fusion technology has garnered considerable attention, demonstrating remarkable achievements across multiple domains. Researchers have developed the multimodal approach to radar SEI, employing the feature-level fusion method within diverse feature domains. Compared with the previously employed single-feature method, the feature fusion method leverages the differences between multiple features to preserve fingerprint information [18-20]. In one study [18], an innovative parallel feature fusion technique was developed based on the investigation of a simple series-parallel relationship between different features. However, drawbacks such as the absence of a nonlinear description of uncorrelated data and high feature dimensionality were noted. Another study [19] employed a local kernel approach based on the widely utilized dimensionality reduction method principal component analysis (PCA), enhancing dimensionality reduction efficiency. Nevertheless, PCA-based algorithms fail to harness non-linear correlations among several features since they do not consider the inherent links between features. With the rapid evolution of neural networks, deep learning has been extensively used in SEI. The multi-channel approach, capitalizing on the substantial scalability of neural networks for data fusion, is well-conceived and performs admirably in SEI with multi-feature fusion. In a pertinent study [20], a multi-channel deep learning model was employed to autonomously learn to fuse multiple features and thoroughly extract fingerprint feature information. However, the training of deep learning models often demands a substantial number of training samples. In reality, acquiring a significant quantity of actual detected radar signals proves challenging, posing an incongruity with the application of data-driven methodologies. Thus, multi-feature fusion must satisfy three essential requirements: (1) the ability to explore relationships among different features, (2) the extraction of effective feature information, and (3) the reduction of feature dimensionality.
Canonical correlation analysis (CCA) serves as a linear multivariate statistical method for examining the correlation between two sets of features, identifying canonical eigenvectors with higher correlation indicative of original features. CCA encounters challenges when extracting a valid representation from data that does not adhere to a linear distribution. Therefore, to adapt to the non-linear data characteristics, the kernel method based on CCA can be employed. The kernel function facilitates the projection of fingerprint features into a feature space, adept at handling non-linear data and extracting weak fingerprint information with greater ease. The choice of kernel function in kernel methods leads to distinct mapping spaces and diverse ways of describing the data. Jia [21] classified kernel functions into two main types: local kernels and global kernels. However, each type exhibits only one kind of interpolation or extrapolation capability. Presently, the kernel function in radar SEI predominantly relies on a single kernel. To overcome the limitations of a single kernel, researchers explore the linear combination of kernel functions, such as multiple kernel learning (MKL) [22] and mixed kernel [23,24] approaches. In the realm of multi-kernel mapping, the high-dimensional space amalgamates multiple feature spaces. Each basic kernel optimally leverages its ability to map various features within the combinatorial space. Authors in a study [22] employed MKL to select the most appropriate base kernel at each data point, determining the optimal kernel through linear weighting of the base kernels. However, the experimental choice of the base kernel and the combination method lacks theoretical grounding and remains uncertain. In other studies [23,24], addressing the fact that no single kernel offers both interpolation and extrapolation ability, authors proposed a mixed kernel by weighting the radial basis function (RBF) kernel, with interpolation ability, and the polynomial (poly) kernel, with extrapolation ability. Essentially, mixed kernel models fall into the category of single kernel models but adopt the form of MKL. This differentiation arises from the assignment of independent weights to each kernel function in the mixed kernel, circumventing the need to solve the complex optimization problems present in MKL. Mixed kernels exhibit both a robust theoretical foundation and a more convenient combination method while circumventing the necessity of learning a large number of base kernel weights.
Building on the literature, this paper proposes a multi-domain mixed kernel canonical correlation analysis (MMKCCA) for radar SEI. The primary focus involves initially extracting fingerprint features in four feature domains from the radar signal with noise removed. The mixed kernel, better suited to the characteristics of multi-feature data, is then applied as the kernel function for fingerprint feature fusion using the kernel canonical correlation analysis (KCCA) technique. Additionally, the fused features are fed into the random forest classifier for classification and recognition. Experimental results indicate that the proposed method yields superior recognition outcomes, with accuracy reaching 95% under lower feature dimensions. The main contributions of this paper are threefold: (1) introducing a multi-domain feature fusion method for radar SEI based on KCCA, affirming the complementarity of different feature domains; (2) addressing the limitations of local and global kernels by proposing a mixed kernel that combines the two in a weighted composition, better adapting to the characteristics of data with multi-domain features; and (3) efficiently lowering feature dimensions while maintaining SEI recognition performance with a modest number of samples.
The remainder of this paper is organized as follows: Section 2 outlines the fundamentals of CCA; Section 3 discusses kernel function principles and selection; Section 4 describes experiments using the dataset and the aforementioned theoretical study; and Section 5 summarizes the findings of the experiments and proposes directions for future research.
Analysis of Canonical Correlation Analysis (CCA)
The numerous features obtained for the same radar signal exhibit some degree of interrelation, presenting an opportunity to fully exploit the complementing effect between these features for effective feature fusion. This work employs CCA, a technique with multivariate statistical analysis capabilities adept at uncovering subtle variations and intercorrelations among distinct features. The principles and methods of CCA will be briefly introduced below.
Hotelling introduced CCA in 1936, essentially involving finding the feature vector with the highest correlation rather than the original features. The method utilizes the correlation between features as the discriminant criterion, achieving both the reduction of original features to eliminate information redundancy and the purpose of feature fusion [25]. CCA can be seen as the problem of finding basis vectors for two sets of variables such that the correlations between the projections of the variables onto these basis vectors are mutually maximized. The simplified flow of CCA is illustrated in Figure 1.
The symbols used by CCA and KCCA are shown in Table 1. The two types of fingerprint feature vectors can be represented as x ∈ R^p and y ∈ R^q. Canonical correlation analysis seeks a pair of linear transformations ω_x and ω_y, one for each of the sets of feature vectors x and y, such that when the sets of vectors are transformed, the corresponding vectors Z_x and Z_y are maximally correlated. ρ represents the correlation function.
The first stage of canonical correlation is to choose ω_x and ω_y to maximize the correlation ρ between the two feature vectors x and y [26]. The correlation function can be expressed as follows:

ρ = (ω_x^T C_xy ω_y) / sqrt((ω_x^T C_xx ω_x)(ω_y^T C_yy ω_y)),    (1)

where C_xx and C_yy denote the covariance matrices of the two types of fingerprint features, respectively, and C_xy represents the mutual covariance matrix between them.
To ensure a unique solution, the following constraints are applied:

ω_x^T C_xx ω_x = 1,  ω_y^T C_yy ω_y = 1.    (2)

In order to obtain the linear transformations ω_x and ω_y, the Lagrange criterion function is constructed by combining the correlation function ρ and the condition function, Equation (2) [27]:

L(λ_x, λ_y, ω_x, ω_y) = ω_x^T C_xy ω_y − (λ_x/2)(ω_x^T C_xx ω_x − 1) − (λ_y/2)(ω_y^T C_yy ω_y − 1).    (3)

Taking derivatives with respect to ω_x and ω_y and setting them to zero yields:

C_xy ω_y − λ_x C_xx ω_x = 0,    (4)
C_yx ω_x − λ_y C_yy ω_y = 0.    (5)

By multiplying Equation (4) by ω_x^T and subtracting ω_y^T times Equation (5),

λ_x ω_x^T C_xx ω_x − λ_y ω_y^T C_yy ω_y = 0.    (6)

Based on the constraints in Equation (2), λ_y − λ_x = 0, and λ = λ_y = λ_x. Assuming that the covariance matrix C_yy is nonsingular,

ω_y = C_yy^{−1} C_yx ω_x / λ.    (7)

Substituting ω_y from Equation (7) into Equation (4),

C_xy C_yy^{−1} C_yx ω_x = λ² C_xx ω_x.    (8)

The projection vector ω_x can be found using the eigenvalue Equation (8). Substituting ω_x into Equation (7) yields the projection vector ω_y. Thus, the typical correlation features after projection can be obtained as Z_x = ω_x^T x and Z_y = ω_y^T y, and their combination yields the final feature fusion Z = [Z_x, Z_y].
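A minimal numerical sketch of this derivation, assuming NumPy/SciPy; the small ridge term added to the covariance matrices is a numerical-stability assumption, not part of the derivation above:

```python
import numpy as np
from scipy.linalg import eigh

def cca(X, Y, reg=1e-6):
    """Linear CCA via Equations (7)-(8); rows of X, Y are paired samples."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Equation (8): C_xy C_yy^{-1} C_yx w_x = lambda^2 C_xx w_x
    A = Cxy @ np.linalg.solve(Cyy, Cxy.T)
    lam2, Wx = eigh(A, Cxx)                    # generalized symmetric eigenproblem
    order = np.argsort(lam2)[::-1]             # sort by decreasing correlation
    lam = np.sqrt(np.clip(lam2[order], 0.0, 1.0))
    Wx = Wx[:, order]
    # Equation (7): w_y = C_yy^{-1} C_yx w_x / lambda
    Wy = np.linalg.solve(Cyy, Cxy.T @ Wx) / np.where(lam > 0, lam, 1.0)
    Zx, Zy = X @ Wx, Y @ Wy                    # canonical features
    return np.hstack([Zx, Zy]), lam            # fused feature Z = [Z_x, Z_y]
```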
In situations where the original feature data deviates from a Gaussian or linear distribution, effective information extraction from linearly operated CCA becomes challenging. Therefore, CCA is extended to nonlinear CCA to better handle situations where the relationship between different features is nonlinearly distributed, yielding effective features. In an attempt to increase the flexibility of the feature selection, kernelization of CCA (KCCA) has been applied to map the hypotheses to a higher-dimensional feature space. The subsequent section provides a detailed introduction to nonlinear CCA with the kernel function.
Kernel CCA
Given that CCA operates linearly, it encounters limitations in effectively extracting nonlinear data features. KCCA introduces an innovative approach that employs a kernel function to nonlinearly extend the original fingerprint characteristics, projecting them into a high-dimensional feature space [27-29]. This method not only accommodates nonlinear data, transforming a nonlinear problem into a linear one, but also facilitates enhanced access to fine-grained fingerprint information. The underlying principle of KCCA is concisely described below, with a visual representation provided in Figure 2.
For the original fingerprint feature vectors x ∈ R^p and y ∈ R^q, high-dimensional feature vectors are obtained through the following kernel nonlinear transformation:

x → ϕ(x),  y → ϕ(y),

where ϕ signifies the mapping of the original feature vector x to the high-dimensional feature space, and ϕ(y) follows a similar process. Kernels are methods of implicitly mapping data into a higher-dimensional feature space. The kernel function K(x_i, x_j) operation can be expressed as:

K(x_i, x_j) = ⟨ϕ(x_i), ϕ(x_j)⟩.

Using the definition of the covariance matrix in Equation (1), we can rewrite the covariance matrices C_xx and C_yy as C_xx = x′x and C_yy = y′y,
where we use x′ to denote the transpose of a vector x.
The linear transformations ω_x and ω_y can be rewritten as the projection of the feature vectors onto the transformations ω̂_x and ω̂_y:

ω_x = x′ ω̂_x,  ω_y = y′ ω̂_y.

Substituting into Equation (1), the correlation function can be expressed as follows:

ρ = (ω̂_x^T xx′yy′ ω̂_y) / sqrt((ω̂_x^T xx′xx′ ω̂_x)(ω̂_y^T yy′yy′ ω̂_y)).

Let K_x = xx′ and K_y = yy′. The maximization criterion function is reformulated as:

ρ = (ω̂_x^T K_x K_y ω̂_y) / sqrt((ω̂_x^T K_x² ω̂_x)(ω̂_y^T K_y² ω̂_y)).

Once again, this criterion function must adhere to the following constraint:

ω̂_x^T K_x² ω̂_x = 1,  ω̂_y^T K_y² ω̂_y = 1.

The subsequent computation aligns with the standard CCA procedures. Deriving the projection vectors ω̂_x and ω̂_y yields the typical correlation features Z_x = K_x ω̂_x and Z_y = K_y ω̂_y. The resolution of nonlinear relationships between features is facilitated by projecting features into a higher-dimensional space using kernel functions. However, the various projection forms and feature descriptions offered by different kernel functions must be explored. The next subsection discusses how a suitable kernel function can be selected for the variety of radiation source features.
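The dual computation above can be sketched as follows. The regularization constant kappa is an added assumption needed to keep the kernelized problem well-posed (the unregularized K_x², K_y² are rank-deficient), and only the x-side features are returned; the y-side follows symmetrically.

```python
import numpy as np
from scipy.linalg import eigh

def kcca(Kx, Ky, kappa=0.1, dim=10):
    """KCCA in the dual form: canonical directions through kernel matrices."""
    n = Kx.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kx, Ky = H @ Kx @ H, H @ Ky @ H
    Rx = Kx @ Kx + kappa * np.eye(n)      # regularized K_x^2
    Ry = Ky @ Ky + kappa * np.eye(n)      # regularized K_y^2
    # Eliminating the y-side gives: Kx Ky Ry^{-1} Ky Kx a = rho^2 Rx a
    A = Kx @ Ky @ np.linalg.solve(Ry, Ky @ Kx)
    rho2, alpha = eigh(A, Rx)
    order = np.argsort(rho2)[::-1]
    alpha = alpha[:, order][:, :dim]
    Zx = Kx @ alpha                       # Z_x = K_x * alpha_hat
    rho = np.sqrt(np.clip(rho2[order][:dim], 0.0, 1.0))
    return Zx, rho
```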
Mixed Kernel
The effectiveness of the kernel function's nonlinear fit in the feature space relies not only on its capacity for learning from neighboring data (i.e., interpolation) but also on its ability to extend beyond its observed data range (i.e., extrapolation). Kernel functions can be categorized into local and global types. The local kernel, exemplified by the RBF kernel function, excels in interpolation but lacks extrapolation capabilities. Conversely, the global kernel, exemplified by the poly kernel function, exhibits superior extrapolation but weaker interpolation capabilities.
The formula for the RBF kernel function:

K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)),

where σ denotes the kernel width.
The formula for the poly kernel function:

K(x_i, x_j) = (x_i^T x_j + 1)^d,

where d is the kernel parameter that denotes the degree of the poly. The performance of these two kernel functions is illustrated in Figure 3. The RBF kernel reaches its maximum value when the test point's distance is zero, gradually approaching zero as the distance increases. This indicates a limited learning ability beyond a specific range. Conversely, the poly kernel exhibits increasing kernel values across all ranges as the poly degree rises, but its interpolation ability weakens. To harness both nonlinear learning capabilities simultaneously, a combined approach is considered.
Unlike single kernel models, mixed kernel models possess both interpolation and extrapolation capabilities. Offering a more expansive assumption space than their single kernel counterparts, they are better suited for approximating real-world problem objective functions [23,24].
Approach to combining mixed kernels:

K_mix(x_i, x_j) = ω K_RBF(x_i, x_j) + (1 − ω) K_poly(x_i, x_j),    (15)

where ω ∈ [0, 1] represents the mixture weight. Figure 4 demonstrates the kernel values resulting from the fusion of the RBF and poly kernels, with the assumption of parameters σ = 1 for the RBF kernel and d = 1 for the poly kernel. Adjusting the ω parameter in Equation (15) reveals that the mixed kernel function exhibits a consistent nonlinear fitting effect under varying weights. However, the choice of parameters significantly influences algorithm performance, a topic explored in the subsequent section.
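A sketch of the three kernels involved follows. Note that assigning ω to the RBF term (rather than the poly term) is an assumption, since the text only states that the two kernels are combined with weight ω ∈ [0, 1]:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """Local kernel: Gaussian RBF with kernel width sigma."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def poly_kernel(X, Y, d=1):
    """Global kernel: polynomial of degree d."""
    return (X @ Y.T + 1.0) ** d

def mixed_kernel(X, Y, sigma=1.0, d=1, omega=0.06):
    """Convex combination of the local and global kernels, Equation (15)."""
    return omega * rbf_kernel(X, Y, sigma) + (1.0 - omega) * poly_kernel(X, Y, d)
```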
Parameters Optimization
The selection of parameters in the aforementioned mixed kernel method directly impacts the algorithm's performance. Thus, employing parameter optimization becomes essential to identify the optimal parameter combination and enhance algorithm performance [21]. A genetic algorithm is used for parameter optimization, leveraging the inheritance of superior parameters from the previous generation to expedite the optimization process. While there is a possibility of falling into local optima and missing the global optimum solution, an examination of the kernel function reveals a small parameter range, minimizing the risk of overlooking the global optimum solution. Recognition accuracy under different parameter settings serves as the fitness function, with the optimal parameter combination identified when either the highest recognition accuracy condition is met or the number of genetic iterations reaches 200.
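A hedged sketch of this genetic search over (σ, d, ω) is given below. The fitness function `evaluate` (which would train MKCCA plus the random forest and return recognition accuracy) is a placeholder, and the population size, mutation rate, and parameter ranges are assumptions; only the 200-generation cap comes from the text.

```python
import random

def genetic_search(evaluate, pop_size=20, generations=200):
    """Maximize evaluate((sigma, d, omega)) with a simple genetic algorithm."""
    def random_individual():
        return (random.uniform(0.5, 4.0),     # sigma: RBF kernel width
                random.randint(1, 5),         # d: poly degree
                random.uniform(0.05, 0.10))   # omega: mixture weight
    pop = [random_individual() for _ in range(pop_size)]
    best = max(pop, key=evaluate)
    for _ in range(generations):
        parents = sorted(pop, key=evaluate, reverse=True)[:pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = tuple(random.choice(pair) for pair in zip(a, b))  # crossover
            if random.random() < 0.2:                                 # mutation
                child = random_individual()
            children.append(child)
        pop = parents + children
        best = max([best] + pop, key=evaluate)
    return best
```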
Experimental Analysis
Datasets
To assess the recognition effectiveness of the proposed algorithm, experimental validation is conducted using a radar dataset. The dataset comprises pulse signals emitted by eight analogue radars of three models, collected within a laboratory environment. Each simulation radar captures 200 radar pulses, with each pulse signal consisting of 1200 sample points. In evaluating the feature fusion algorithm's performance, a random forest classifier is chosen for classification and recognition. Training utilizes 80% of the collected samples, while the remaining 20% are reserved for testing.
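A sketch of this classification stage with scikit-learn follows; the synthetic feature matrix stands in for the fused features, and the forest hyperparameters are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
Z = rng.normal(size=(1600, 10))      # stand-in for the fused feature matrix
y = np.repeat(np.arange(8), 200)     # 8 radars x 200 pulses each

# 80/20 train/test split as described above
X_train, X_test, y_train, y_test = train_test_split(
    Z, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```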
Kernel Function Analysis
Examining the influence of kernel function parameters, this subsection investigates the parameter selection for the proposed mixed kernel. Key parameters include the RBF kernel width σ, the degree parameter d in the poly kernel function, and the weight ω in the mixed kernel function. The MKCCA approach employed in this study involves critical parameter selection, as each parameter significantly affects the algorithm's performance. Given the difficulty in estimating these parameters, they often require prior information and are manually determined within an appropriate range. In practice, optimal parameters in experiments typically necessitate only a brief search within a narrow range. The parameters of the kernel function in KCCA are discussed below. First, feature vectors are input to the KCCA, which varies the parameters of the kernel function and outputs dimensionally variable feature fusion vectors; these are used as inputs to the random forest classifier, which outputs individual recognition accuracy. Experiments are conducted over the range of parameter values one by one to analyze the change in individual recognition accuracy for parameter values with different feature dimensions. Considering the information redundancy associated with excessively high feature dimensions, the maximum feature dimension is capped at 60.
The RBF kernel function, a focal point of recent research, represents the local kernel function. Research indicates that the parameter σ of the RBF kernel exhibits strong interpolation ability when taking smaller values. Conversely, larger values of σ weaken the kernel's interpolation ability while enhancing extrapolation ability. Hence, the range for σ is concentrated between (0, 5). To explore the RBF kernel's performance under different parameters, the range of σ in this subsection is {0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4}, as illustrated in the left of Figure 5.
In this experiment, σ is selected from the specified range, with individual identification accuracy serving as the criterion as the feature dimension increases. The figure reveals that the RBF kernel function performs optimally when σ is set to 1. However, with σ set to 3.5, the recognition accuracy remains consistently low with increasing feature dimensions. For all other σ values within the specified range, recognition accuracy falls within the mid-range. When σ is set to 4, recognition rates rapidly increase with small feature dimensions but decrease as dimensions increase. Thus, determining the value of σ is particularly crucial for optimizing the RBF kernel function's performance.
The global kernel, represented by the poly kernel function, has demonstrated commendable extrapolation capability in research. Larger values of parameter d contribute to superior interpolation capability but diminish extrapolation capability. Conversely, smaller values of d enhance extrapolation capability but weaken interpolation capability. The parameter range for the poly kernel is set to {1, 2, 3, 4, 5}, as illustrated in the right of Figure 5, showcasing the recognition accuracy comparison for varying values of parameter d. At d = 1, as feature dimensions range between (10, 30), recognition accuracy exhibits slower growth compared to other parameter values. At d = 3, recognition accuracy remains stable when the feature dimension reaches 12.
Simultaneously, the mixed kernel possesses the ability to interpolate and extrapolate, ensuring better adaptation to the characteristics of multi-featured data. According to Equation (15), the mixed kernel function requires determining the weight ω between the two. The value of the weight ω is set in the range {0.05, 0.06, 0.07, 0.08, 0.09, 0.10}. Considering the potential correlation between the three parameters, a genetic algorithm is employed for parameter optimization. The algorithm performs optimally when the three parameters are set to σ = 1, d = 1 and ω = 0.06, respectively.
To explore the influence of the weight ω on the algorithm's recognition effectiveness, comparative experiments with different parameter values are conducted. The left of Figure 6 presents the recognition rate comparison for the mixed kernel parameter ω, considering values in the set range. As feature dimensions increase up to 10, the proposed algorithm exhibits a rapid increase in recognition accuracy under all parameters. Subsequently, recognition accuracy gradually declines as dimensions continue to grow, indicating information redundancy and a subsequent decrease in individual recognition accuracy. This validates the algorithm's ability to extract effective feature information with a low feature dimension. When ω is set to 0.06, recognition accuracy is highest within the parameter set, with recognition accuracy under other parameters distributed between (85, 90).
Comparing the detection accuracy of the RBF and poly kernel functions mentioned earlier reveals that the mixed kernel model proposed in this paper outperforms them in radar SEI, as illustrated in the right of Figure 6. The figure displays the parameter settings of the three kernel functions with the highest recognition accuracy within their respective parameter sets. Specifically, the parameter σ of the RBF is set to 1, the parameter d of the poly kernel function is set to 1, and the weight ω of the mixed kernel is set to 0.06. The accuracy comparison reveals that the RBF kernel function attains very high recognition accuracy with smaller feature dimensions below 10. However, its accuracy diminishes as dimensions increase. In contrast, the poly kernel function exhibits a gradual increase in recognition rate within the feature dimensions of 10 to 20, and after reaching 20, the accuracy rate elevates to a higher level.
By comparing the four graphs in Figures 5 and 6, it can be observed that the RBF kernel function performs well in low dimensions (around 10) but exhibits poorer performance in high dimensions. In contrast, the polynomial kernel function demonstrates highly stable performance in high dimensions. The mixed kernel function addresses the limitations of the former, showing relatively stable performance in dimensions below 40. In summary, the RBF kernel function demonstrates superior nonlinear expansion capability for a smaller number of data points, implying robust interpolation ability but weaker extrapolation capability with more data points. On the other hand, the poly kernel function excels in extrapolation ability for a greater number of data points, showing a slower increase in recognition rate initially. Combining the strengths of both, the mixed kernel function achieves higher recognition accuracy than the other two kernel functions when the feature dimension is less than 40. However, its accuracy significantly decreases beyond 40. For the method proposed in this paper, it attains superior recognition performance when the feature dimension is below 40, showcasing commendable interpolation and extrapolation abilities.
The fitting speed of different kernel functions varies, providing an additional dimension for assessing their performance. Table 2 presents the time comparison of the three kernel functions, showcasing their fitting speeds. This data reflects the average fitting time across various parameters in preceding trials, offering an accurate depiction of the kernel functions' performance. Among them, the poly kernel function exhibits the fastest fitting speed under the same dataset. The hybrid kernel model proposed in this paper demonstrates a slightly shorter time and faster fitting speed compared to the commonly used RBF kernel function.
Multi-Domain Feature Fusion Analysis
Building upon the literature on fingerprint feature extraction, this study derives representative fingerprint features from various domains for fusion. Specifically, four fingerprint features are extracted: the envelope rising edge feature (E), the spectrum feature (F), the short-time Fourier transform (S), and the near-zero slice of the ambiguity function (A). This notation allows for clear representation: E denotes time domain features, F denotes frequency domain features, S denotes time-frequency domain features, and A denotes the ambiguity function (AF). The definitions of the four fingerprint features are given below and are shown in Figure 7.
The signal envelope A(t) is defined as:

A(t) = sqrt(s_I(t)² + s_Q(t)²),

where s_I(t) and s_Q(t) are orthogonal signals, and the rising edge of the envelope is the front part of the envelope. The signal spectrum U(f) is defined as:

U(f) = ∫ u(t) e^{−j2πft} dt,

where u(t) is the radar signal. The short-time Fourier transform is defined as:

S(t, f) = ∫ u(τ) a*(τ − t) e^{−j2πfτ} dτ,

where a is the window function and * is the complex conjugation. The near-zero slice of the ambiguity function A_u(τ, ξ) is defined as [30,31]:

A_u(τ, ξ) = ∫ U(f) U*(f − ξ) e^{j2πfτ} df,

where ξ denotes the frequency shift, usually taken as 1 for values near zero, U(f) denotes the signal spectrum, and U*(f ± ξ) denotes the conjugate frequency shift of the signal spectrum. The fusion process of MKCCA is depicted in Figure 7, illustrating the dimensions of the four fingerprint features extracted. MKCCA takes two feature vectors as input, producing the first 50-dimensional vectors with the highest correlation. These vectors are then fused column-wise, resulting in a 100-dimensional fusion vector. This procedure is repeated for the remaining two features, and the two MKCCA outputs are fused. The optimal feature dimension is determined through a systematic increase in feature dimensions. It is then passed through a random forest classifier.
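A hedged sketch of the four features follows. Forming the analytic signal with a Hilbert transform, the STFT window length, and the rising-edge length are implementation assumptions not fixed by the text:

```python
import numpy as np
from scipy.signal import hilbert, stft

def fingerprint_features(u, fs=1.0, rise_len=100):
    """Extract the E, F, S, A fingerprint features from one radar pulse u."""
    s = hilbert(u)                                   # analytic signal s_I + j*s_Q
    envelope = np.abs(s)                             # A(t) = sqrt(s_I^2 + s_Q^2)
    E = envelope[:rise_len]                          # rising edge of the envelope
    U = np.fft.fft(u)                                # spectrum U(f)
    F = np.abs(U)
    _, _, Zxx = stft(u, fs=fs, nperseg=64)           # short-time Fourier transform
    S = np.abs(Zxx).ravel()
    xi = 1                                           # near-zero frequency shift
    A = np.abs(np.fft.ifft(U * np.conj(np.roll(U, xi))))  # AF slice A_u(tau, xi)
    return E, F, S, A
```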
In the multi-feature fusion experiment, the number of fused features is incrementally increased: two, three, and four features are fused to assess the impact on recognition accuracy. The recognition accuracy, averaged across preserving feature dimensions 1 to 100, is displayed using a histogram.
The average recognition accuracy of a single feature across all dimensions of the feature set is presented in the left of Figure 8. The recognition rate ranges from 63 to 80, indicating insufficient effectiveness for recognizing individual radar radiation sources. Recognizing the complementarity between features from different domains, the fusion of various features is explored in the subsequent analyses. The right of Figure 8 illustrates that recognition accuracies of dual feature fusion fall within the interval (80, 89). The fusion of time-frequency features with frequency features performs best, followed by the fusion of ambiguity functions with frequency features. Although the recognition effect improves compared to a single feature, the distribution interval remains large, and the accuracy falls short of expectations.
As illustrated in the left of Figure 9, the recognition accuracy of three-feature fusion lies within the interval (87, 90), displaying reduced distribution and improved stability compared to single- and dual-feature fusion methods. The recognition result of four-feature fusion reaches 94%. Complementary use of different feature domains significantly enhances individual recognition of radar radiation sources, providing higher stability to accommodate sample diversity.
In the KCCA fusion algorithm, features with the largest correlation coefficient must be combined as fusion features. The right of Figure 9 illustrates that the improved recognition impact after fusion is proportional to the feature dimension. A turning point is observed at a feature dimension of 10, where the recognition impact is significantly enhanced for dimensions less than 10, while the recognition rate remains constant for dimensions larger than 10.
Performance Analysis
This section delves into the performance of various feature fusion algorithms, comparing the recognition accuracy of the proposed method with existing fusion algorithms. Furthermore, it explores the influence of different feature dimensions on accuracy. Figure 10 presents a performance comparison of the fusion algorithms.
This section delves into the performance of various feature fusion paring the recognition accuracy of the proposed method with existing fu Furthermore, it explores the influence of different feature dimensions on a presents a performance comparison of the fusion algorithm.In this experiment, the dimension range for all fusion algorithms is Notably, the recognition accuracy of each fusion algorithm gradually in growing dimension.The study underscores that the method proposed i performs other fusion algorithms, achieving the highest recognition while utilizing a smaller feature dimension of 10.The standard CCA a In this experiment, the dimension range for all fusion algorithms is set from 0 to 60. Notably, the recognition accuracy of each fusion algorithm gradually increases with the growing dimension.The study underscores that the method proposed in this article outperforms other fusion algorithms, achieving the highest recognition accuracy of 95% while utilizing a smaller feature dimension of 10.The standard CCA algorithm exhibits inferior recognition accuracy compared to several other fusion algorithms at lower feature dimensions.Recognition outcomes become comparable to other algorithms only when the feature dimension exceeds 40, achieving a rate of approximately 87%.KCCA with the RBF kernel attains the highest recognition accuracy of 89.37% at a feature dimension of 10, but the accuracy drops as the feature dimension increases to 50.KCCA with a poly kernel achieves a peak recognition accuracy of 87.34% when increasing feature dimensionality to 20, demonstrating stable performance with dimension increase.In contrast, the KPCA algorithm shows a gradual ascent in recognition accuracy with a feature dimension increased to 10, albeit at a sluggish rate.Therefore, the algorithm introduced in this study exhibits a noticeable enhancement in recognition accuracy while significantly reducing feature redundancy compared to other fusion algorithms.
Table 3 presents the time spent by all fusion algorithms in the experiment, from loading the original dataset to obtaining the recognized results. The average time spent in the experiment is calculated for the final result. Notably, the poly CCA fusion algorithm, utilizing a poly kernel function, demonstrates the shortest algorithm time, and the method proposed in this paper ranks as the second shortest. In terms of algorithm time, both the standard CCA method and the KCCA methods extended with a kernel function outperform KPCA.
Summary
This paper introduces a novel approach for SEI of radar radiation sources based on MMKCCA. The acquired radar signals undergo analysis to extract distinctive features from the time domain, frequency domain, time-frequency domain, and ambiguity-function domain. Subsequently, these features are amalgamated utilizing the MKCCA data fusion method, yielding a comprehensive feature set for identification purposes. The experimentation affirms the complementary nature of various feature domains, underscoring the efficacy of this fusion method in harnessing the complementarity of multiple features. The kernel function plays a pivotal role in transforming features into a high-dimensional space, facilitating the identification of nonlinear relationships among features within that space. Employing a mixed kernel, comprising a weighted combination of a local kernel and a global kernel, enhances the extraction of effective information from diverse features. This approach not only fosters an improved understanding of the nonlinear correlations among features but also significantly reduces the feature dimension. The experimental results demonstrate that the proposed feature fusion method yields a commendable recognition effect even with lower feature dimensions. It was not possible to experiment with all fingerprint features in this paper; thus, future research will focus on the performance of other fingerprint features.
Figure 5. Performance of RBF kernel parameter σ (left) and poly kernel parameter d (right).
Figure 6. Performance of the mixed kernel parameter ω (left) and comparison of kernel functions (right).
Figure 7. Multi features fusion flow diagram. (The diagram of four features is shown on the top of the figure, the main steps are shown in the dashed box on the left of the figure, and the detailed process of feature fusion is shown on the right of the figure.)
Figure 8. Recognition effect of single feature (left) and double features (right).
Figure 9. Recognition effect of multiple features (left) and influence of feature dimensions on recognition performance (right).
Figure 10. Performance of the fusion algorithm.
Table 3. Time spent on the above algorithm.
Love as a linguacultural space (on the basis of paroemias in the English, Welsh, Gaelic and Scots languages)
The present work explores the feeling of love in the linguistic world pictures of different ethnic groups that comprise a single nation. The authors attempted to describe this feeling as a linguacultural space and to define its aspects, both universal and nationally specific. The methodology of the research encompassed a comparative method, a continuous sampling method, a method of structural and semantic analysis and a method of contextual analysis. The paper offers an analysis of paroemiological units in the English, Welsh, Scottish Gaelic and Scots languages. Qualitative and quantitative characteristics of the representation of the concept of Love were defined, and the semantic structure of this word was revealed. Nationally specific features of the representation of love in the "man's" picture of the world, as structured within each linguistic culture, were described. As a result of the study, a generalized scheme of the linguacultural space of Love is revealed at the level of a nation's linguistic picture of the world, and an idioethnic one at the level of an individual ethnic group. Certain standards and stereotypes contained in this linguacultural space are distinguished as well.
Introduction
In the age of globalization, different ethnic groups converge rapidly. Certain elements of various cultures gain global popularity. Such internationalization makes us reconsider and systemize the customary ideas, stereotypes and images that we have about each other and the world, particularly such "familiar" and "conceivable" phenomena as love. Love can be attributed to universal feelings, since, at the biological level, it springs up in a certain area of the human brain. It is accompanied by a number of psychological processes and has its own mechanism of action, which is culturally universal. Only when verbalized through a language does it acquire certain idioethnic features, which can be traced in the linguistic world picture of a single nation formed by several ethnic groups. Such a comparison allows "removing" all idioethnic layers and revealing the core that is common and typical in the views and ideas "transmitted" by the different linguistic cultures that make up a single nation.
An example to consider is the linguistic picture of the world (LPW) of British people. The history of the British Isles is quite complicated, and unfortunately, today it is often considered only from the English perspective, without taking into account the ethnic groups of Wales, Scotland and Ireland, although people currently living in these territories still do not feel that they are a part of the Anglo-Saxon world. Besides English, which is the official language of Great Britain, other national languages are still used in its territory: Scots, Welsh, Gaelic (Irish dialect, Scottish dialect) and Cornish. Therefore, the British LPW consists of the languages belonging to the historical population of Great Britain: English, Welsh, Gaelic (Irish and Scottish dialects) and Scots. The ethnic cultures of native speakers of these languages have not merged into one culture despite their common history; each one has its own nationally specific ideas and stereotypes. (The Cornish language is not considered in the present paper, since very few native speakers remain by now; Irish Gaelic and Scottish Gaelic are considered as a single dialect continuum.) The objective of the present study is to distinguish similarities and differences in the ideas that make up the linguistic pictures of the world of the languages listed above. In order to achieve this goal, paroemiological units were analyzed as carriers of nationally specific stereotypes. The paroemiological scope of these languages served as the material for the research, since, being complete, it reflects the ethnic component of the consciousness of the people who composed it. Of course, modern British linguistic culture is still experiencing certain dynamics in the structure of its paroemiological basis, but in order to avoid the recency effect, we limit the scope of dictionaries of proverbs to those edited at the end of the 19th and the beginning of the 20th century, so that the research is performed and the patterns are defined on the basis of already completed processes.
Methods
Methods of investigating the empirical material are determined by the goal, tasks, object and subject of this work. The comparative method was used for the systemic comparison of complete lexical units belonging to different languages in order to detect structural and semantic differences. The method of continuous sampling from lexicographic publications and scientific periodicals was used to select empirical material and to find differences in the ways a concept is used in different languages. The method of structural and semantic analysis of lexical units was used to identify general and distinctive properties and characteristics of lexical structures in order to work out their objective typology. The method of contextual analysis helped identify semantic differences between semantically or lexically similar lexical units.
Results
Love in a man's picture of the world (MPW) of British people is represented by a diverse set of paroemiological units. Some touch upon the relationship between a man and a woman who are not married: (English) He who loseth a whore, is a great gainer [2]; (Welsh) Cerid chwaer diriad can ni charer [14] A man who is worthless and hated by all is loved by his sister; (Gaelic) Cha chòir do dhuine a ghràdh 'us' aithne chur a dh-aon taobh [11] Love the one, be friends with the other. In others, a man is given advice on how to behave in order to win a woman's heart: (English) He that woos a maid, must seldom come in her sight, but he that woos a widow must woo her day and night [2]; (Welsh) Gwell gwraig o'i chanmawl [14] Praise the woman and you will rise in her estimation; (Gaelic) An uair a chì thu bean oileanach, beir oirre, beir oirre, mur beir thus 'oirre, beiridh fear eile oirre [11] When you see a well-mannered woman, grab her, otherwise another will do it for you; (Scots) Tak a lass wi 'the tear i' her ee [4] A rejected girl is easier to get, even if she once rejected you. 'Love' in marriage is also mentioned: (English) Who marries for love without money, has good nights and sorry days [2]; (Welsh) Ni cherir yn llwyr oni ddelo'r ŵyr [13] True love comes with the first grandson; (Gaelic) Socraichidh am pòsadh an gaol [11] Marriage mourns love; (Scots) Naething to be done but draw in your chair and sit down [4] Nothing to be done [with such a bride] but draw in your chair and sit down.
The model of the relationship between a man and a woman is not the only one used to represent love. The material under study contains a clear description of what happens inside a man in love: (Welsh) Tra gweno meingan, cipia gusan [14] As long as your beloved beauty smiles, you will love and kiss her; (Gaelic) Fuath giullain, a chiad leannan [11] A man hates his first love; (Scots) A woman's love will traise further than horses [4] A woman's love will draw further than horses. Among other things, negative acts and doings of a man in love are listed: (English) If you can kiss the mistress, never kiss the maid [2]; (Gaelic) Teinne chaoran is gaol ghiullan, Cha do mhair iad fada riamh [10] The fire in a peatery and a boy's love never lasted long; (Scots) He got the knights bone off her [9] He seduced her before the wedding.
The examples given above demonstrate that in the paroemias of the languages under study, love is verbalized as a response to an impetus that provokes certain actions (a reaction turned outside), as well as psychological and spiritual experiences (an internal reaction). Thus, it can be concluded that in the British LPW this feeling is represented as a space: internal (IN, the emotional state) and external (EX 1, behavior caused by the feeling; EX 2, the behavioral reaction to the object of love). The dialectical unity of these spaces makes up a single generalized linguacultural space of 'love' (hereinafter this feeling is put in single quotation marks to designate the linguacultural space, which is not equal to the Russian word), which can be graphically depicted as in Figure 1. While not being a basic emotion (i.e. other animals do not experience it), 'love' bears the imprint of a particular linguistic culture. Therefore, for each language under study the generalized space is constructed from a different set of finite points (paroemias), so the diagram has a different numerical extent and is thus filled with different "meanings". In the English language, the paroemiological field (the selected corpus of proverbs and sayings) is represented by 52 units, in Welsh by 12, in Gaelic by 22, and in Scots by 17, and their distribution across the "meanings" of the internal and external spaces also varies (see Table 1). To perform a more accurate comparison of the data and identify hidden patterns, the specific gravity of each "meaning", and then of the internal and external spaces separately, is calculated for each language; with these values, Figure 1 takes the forms shown in Figures 2, 3, 4 and 5.
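A plausible form of this calculation, assuming that "specific gravity" denotes the relative share of a "meaning" within a language's paroemiological field, is

$$ w_i = \frac{n_i}{N} \times 100\%, $$

where $n_i$ is the number of paroemias expressing the $i$-th "meaning" and $N$ is the total number of paroemias in that language's field (e.g. $N = 52$ for English).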
Discussion
The comparison of the data presented in Figure 2 allows singling out one main similarity found in all four languages: the numerical prevalence of the external space (EX 1: behavior) over the internal one. Partially "typical" characteristics are also observed in the Gaelic and English languages: in both linguistic cultures, the emotional state ranks second, while the description of behavioral responses has the least specific weight. Meanwhile, in the Scots language behavioral responses are more significant than the emotional state, and in the Welsh language these "meanings" are not represented at all.
Along with "typical" characteristics, linguistic cultures under study have a significant number of idioethnic features, which are manifested not only numerically, but also through a combination of "meanings", which makes an asymmetric representation: the behavior of a man in love in paroemiological units of the English language is aimed at conquering women. However, paroemias that describe love both in marriage and without getting married are less common. In Welsh and Gaelic, by contrast, they speak of an unmarried man's love more often than of a married. In the Scots language, only advice is given on how to court a girl properly. The behavioral reactions of a man in love are not represented in the Welsh language at all, in the Gaelic and Scots languages the meaning "Impermanence" prevail, while in English -"Lust". The emotional state of a man is also represented in different ways. For example, in the Gaelic Scots languages, it is represented by the one meaning "Love is blind", in English there are two more meanings: "Love changes one" and "Love in spite", in Welsh emotional experiences are described through the meaning "Love in spite" and "Love is blind".
When performing the present study, it was noted that "meanings", while forming a linguacultural space, simultaneously reflect certain standards and stereotypes. (The present work uses the classification of love styles proposed by John Alan Lee: 1) eros (spontaneous, enthusiastic, love in the form of honoring), 2) filia (treated as friendship/liking), 3) storge (tender love), 4) agape (sacrificial, ultimate, love for God), 5) ludus (game, sexual interest), 6) mania (obsession, passion), 7) pragma (rational, self-interested) [8].) For example, the paroemiological field of the English language (see Fig. 2a) contains such standards (forms) of love as eros: A lover's soul lives in the body of his mistress [2]; and pragma: Marry first, and love will follow [1]. In Welsh, love is represented in the form of ludus: Tra gweno meingan, cipia gusan [14] As long as your beloved beauty smiles, you will kiss her; mania: Mal llyfu mel oddiar ddrain [14] Loving a woman full of contempt is the same as licking honey from thorns; and filia: Cerid chwaer diriad can ni charer [11] A man who is worthless and hated by everyone is loved by his sister. The Gaelic language has such standards of love as ludus: Cho teth ri gaol seòladair [11] Hot as the love of a sailor; and mania: Teine chaoran 'us gaol ghiul'an [3] A boy's love is like fire in the peatary. Love in the Scots language is represented by the following standards: eros: A woman's love will traise further than horses [4]; and pragma: Naething to be done but draw in your chair and sit down [3]. It is interesting that the standards mentioned above are observed only in the internal space; almost none of them can be found in the external space.
In the external space, stereotypical ideas of behavior are captured. For example, in the English language, love is of special significance: He that does not love a woman, sucked a sow [6]; the lover is haunted by obsessive thoughts: A lover's soul lives in the body of his mistress [2]; there is a fever of feelings: He that cannot hate cannot love [5]; a surge of energy: He that has love in his breast, has spurs in his sides [1]; a lover is emotionally dependent: A man has choice to begin love, but not to end it [6]; barriers and misfortunes only heighten his passion: Nineteen naysays o' a maiden are a ha'f grant [3]; there are also some paroemias about sexual relationships: He that woos a maid, must feign, lie and flatter, But he that woos a widow, must down with his breeches, and at her [3]; and the fleetingness of love is also referred to: Lad's love's a busk of broom, hot awhile and soon done [3]. In the Welsh language, stereotypes of a man's behavior are not as numerous as in English: only typical representations of praising the subject of passion are found there: Gwell gwraig o'i chanmawl [14] By praising a woman, you will increase her value; the fever of feelings is also represented: A garer neu gaseir a welir o bell [13] The lover is seen from afar; and mood swings: Nid siomedigaeth ond gwraig [14] It's not a woman, it's disappointment. The Gaelic language also has the stereotype that a man in love experiences a fever of feelings: Fuath giullain, a chiad leannan [4] A boy hates his first love afterwards; a surge of energy: Far nach ionmhuinnduine, is ann a's fhasa 'éigneachadh [1] You cannot prevail against a lover; emotional dependence: Mairg léigeas a rún le mnaoi [12] Oh madman, woe to you who have trusted a woman; and the fleetingness of feelings is also indicated: Teinne chaoran is gaol ghiullan, Cha do mhair iad fada riamh [10] Peataries and a boy's love never live long. Stereotypes existing in the Scots language touch upon the sexual relationship between a man and a woman: He got the knights bene off her [9] He has already "laid her"; the fleetingness of love: There are mair maidens than maukins [4] He lost one, he will find another very soon; and it is also indicated that misfortunes intensify passion: Nineteen naysays o' a maiden is half a grant [2] Nineteen "no" of a girl is a half of "yes".
Highlighted idioethnic standards and stereotypes help find common ground in the representation of 'love' in the British MPW: "typical" standards of honoring love (eros), love of sexual attraction (ludus), passionate love and "typical" stereotypes of a man in love (obstacles and misfortunes intensify passion, which entails the fever of feelings and the surge of energy that fade away very fast, sexual relationship without getting married/before marriage is also described).
Conclusion
All of the above allows concluding that all the languages under study represent love in the traditional biological and psychological sense, i.e. in the form of a feeling and attitude arising as a response to an incentive or without one. Therefore, 'love' may be described as a two-dimensional linguacultural space represented by the "internal–external" binarity. The internal space is built of the emotional state, while the external one is divided into behavior conditioned by the feeling and a behavioral response to the object of love. The boundary between the internal and external spaces is fuzzy and lies in the human mind. Moreover, the study of the paroemiological field of the languages under consideration showed that 'love' is also a feeling of a social nature: when depicted in paroemias, the instincts of homo sapiens are "civilized" and ritualized, and take the form of stereotypes and standards. In the linguacultural space of 'love', the standards are verbalized within the internal space, while the stereotypes are verbalized within the external space. The similarities and interconnections of the ideas of love in the languages under study can be explained by the common historical line of their speakers, as well as by the universal biological grounds of psychological processes. The main prerequisite for the existence of nationally specific features in the representation of love in the linguistic cultures under study is the desire to confirm their national identity and protect their group values.
Further research prospects consist in the consideration of the female picture of the world and the study of the corresponding paroemiological units. In addition, it seems reasonable to consider other linguistic cultures that make up a single nation, just like the cultures of the British Isles. In conclusion, it is worth noting that studies of the idioethnic features of the linguistic picture of the world of a nation that consists of several ethnic groups preserving their identity help clarify and prevent difficulties arising in the process of intercultural communication between representatives of those ethnic groups, as well as with native speakers of other languages.
The effect of ultrasound treatment on the structural, physical and emulsifying properties of animal and vegetable proteins
The ultrasonic effect on the physicochemical and emulsifying properties of three animal proteins, bovine gelatin (BG), fish gelatin (FG) and egg white protein (EWP), and three vegetable proteins, pea protein isolate (PPI), soy protein isolate (SPI) and rice protein isolate (RPI), was investigated. Protein solutions (0.1–10 wt.%) were sonicated at an acoustic intensity of ~34 W cm⁻² for 2 min. The structural and physical properties of the proteins were probed in terms of changes in size, hydrodynamic volume and molecular structure using DLS and SLS, intrinsic viscosity and SDS-PAGE, respectively. The emulsifying performance of ultrasound treated animal and vegetable proteins was compared to their untreated counterparts and Brij 97. Ultrasound treatment reduced the size of all proteins, with the exception of RPI, and no reduction in the primary structure molecular weight profile of the proteins was observed in any case. Emulsions prepared with all untreated proteins yielded submicron droplets at concentrations ≤1 wt.%, whilst at concentrations >5 wt.% emulsions prepared with EWP, SPI and RPI yielded micron sized droplets (>10 μm) due to pressure denaturation of protein from homogenisation. Emulsions produced with sonicated FG, SPI and RPI had similar droplet sizes to untreated proteins at the same concentrations, whilst sonicated BG, EWP and PPI emulsions at concentrations ≤1 wt.% had a smaller droplet size compared to emulsions prepared with their untreated counterparts. This effect was consistent with the observed reduction in interfacial tension for these ultrasound treated proteins compared with their untreated counterparts.
Introduction
Proteins perform a vast array of functions in both the food and pharmaceutical industries, such as emulsification, foaming, encapsulation, viscosity enhancement and gelation. This functionality arises from the complex chemical make-up of these molecules (O'Connell & Flynn, 2007; Walstra & van Vliet, 2003). Proteins are of particular interest in food systems as emulsifiers, due to their ability to adsorb to oil–water interfaces and form interfacial films (Foegeding & Davis, 2011; Lam & Nickerson, 2013). The surface activity of proteins owes to the amphiphilic nature these molecules possess, because of the presence of both hydrophobic and hydrophilic regions in their peptide chains (Beverung, Radke, & Blanch, 1999; O'Connell & Flynn, 2007). Because the larger molecular weight of proteins lends them a bulkier structure by comparison to low molecular weight emulsifiers (e.g. Brij 97), proteins diffuse more slowly to the oil–water interface through the continuous phase (Dickinson, 1999; McClements, 2005). Once at the interface, proteins undergo surface denaturation and rearrange themselves in order to position their hydrophobic and hydrophilic amino groups in the oil and aqueous phase respectively, reducing the interfacial tension and overall free energy of the system (Caetano da Silva Lannes & Natali Miquelim, 2013; McClements, 2004). Proteins provide several advantages for emulsion droplet stabilisation, such as protein–protein interactions at interfaces, and electrostatic and steric stabilisation due to the charged and bulky nature of these biopolymers (Lam & Nickerson, 2013; McClements, 2004; O'Connell & Flynn, 2007).
Ultrasound is an acoustic wave with a frequency greater than 20 kHz, the threshold for human auditory detection (Knorr, Zenker, Heinz, & Lee, 2004). Ultrasound can be classified into two distinct categories based on the frequency range: high frequency (100 kHz to 1 MHz), low power (<1 W cm⁻²) ultrasound, utilised most commonly for the analytical evaluation of the physicochemical properties of food (Chemat, Zill-e-Huma, & Khan, 2011), and low frequency (20–100 kHz), high power (10–1000 W cm⁻²) ultrasound, recently employed for the alteration of foods, either physically or chemically (McClements, 1995). The effects of high power ultrasound on food structures are attributed to ultrasonic cavitations, the rapid formation and collapse of gas bubbles, which is generated by localised pressure differentials occurring over short periods of time (a few microseconds). These ultrasonic cavitations cause hydrodynamic shear forces, and a rise in temperature at the site of bubble collapse (up to 5000 °C) contributes to the observed effects of high power ultrasound (Güzey, Gülseren, Bruce, & Weiss, 2006; O'Brien, 2007; O'Donnell, Tiwari, Bourke, & Cullen, 2010).
Ultrasound treatment of food proteins has been reported to affect the physicochemical properties of a number of protein sources, including soy protein isolate/concentrate (including soy flakes; Arzeni, Martínez, et al., 2012; Hu et al., 2013; Jambrak, Lelas, Mason, Krešić, & Badanjak, 2009; Karki et al., 2009, 2010) and egg white protein (Arzeni, Martínez, et al., 2012; Arzeni, Pérez, & Pilosof, 2012; Krise, 2011). Arzeni, Martínez, et al. (2012) and Arzeni, Pérez, et al. (2012) studied the effect of ultrasound upon the structural and emulsifying properties of egg white protein (EWP) and observed an increase in the hydrophobicity and emulsion stability of ultrasound treated EWP by comparison to untreated EWP. In addition, Krise (2011) reported no significant reduction in the primary protein structure molecular weight profile of EWP after sonication at 55 kHz for 12 min. Similarly, Karki et al. (2010) and Hu et al. (2013) observed no significant changes in the primary protein structure molecular weight profile of ultrasound treated soy protein. Furthermore, Arzeni, Martínez, et al. (2012) described a significant reduction in protein aggregate size for soy protein isolate (SPI). However, the effect of ultrasound treatment upon gelatin, either mammalian or piscine derived, pea protein isolate or rice protein isolate has yet to be investigated.
Gelatin is a highly versatile biopolymer widely used in a myriad of industries, from the food industry for gelation and viscosity enhancement, to the pharmaceutical industry for the manufacture of soft and hard capsules (Duconseille, Astruc, Quintana, Meersman, & Sante-Lhoutellier, 2014; Haug, Draget, & Smidsrød, 2004; Schrieber & Gareis, 2007). Gelatin is prepared from the irreversible hydrolysis of collagen (a water insoluble structural protein of connective tissues in animals) under either acidic or alkaline conditions in the presence of heat, yielding a variety of peptide-chain species (Schrieber & Gareis, 2007; Veis, 1964). Gelatin is a composite mixture of three main protein fractions: free α-chains, β-chains (the covalent linkage between two α-chains), and γ-chains (the covalent linkage between three α-chains) (Haug & Draget, 2009). Gelatin is unique among proteins owing to the lack of appreciable internal structuring, so that in aqueous solutions at sufficiently high temperatures the peptide chains take up random configurations, analogous to the behaviour of synthetic linear-chain polymers (Veis, 1964).
Egg white protein (EWP) is a functional ingredient widely used in the food industry, due to its emulsifying, foaming and gelation capabilities, and is utilised within a wide range of food applications, including noodles, mayonnaise, cakes and confectionary (McClements, 2009; Mine, 2002). EWP is globular in nature, with highly defined tertiary and quaternary structures. The main protein fractions of egg white protein include ovalbumin (~55%), ovotransferrin (~12%) and ovomucin (~11%), as well as over 30 other protein fractions (Anton, Nau, & Lechevalier, 2009).
Soy protein isolate (SPI) is of particular interest to the food industry, as it is the largest commercially available vegetable protein source owing to its high nutritional value and current low cost, and a highly functional ingredient due to its emulsifying and gelling capabilities; however, this functionality is dependent upon the extraction method utilised for the preparation of the isolate (Achouri, Zamani, & Boye, 2012; Molina, Defaye, & Ledward, 2002; Sorgentini, Wagner, & Añón, 1995). SPI is extracted from Glycine max, an oilseed legume grown primarily in the United States, Brazil, Paraguay and Uruguay (Gonzalez-Perez & Arellano, 2009). Similar to pulse legumes, like PPI, the major protein fractions in oilseed legumes are albumins (2S; <80 kDa) and globulins; the dominant fractions in SPI are glycinin (11S; 300–360 kDa) and β-conglycinin (7S; 150–190 kDa), a trimeric glycoprotein (Gonzalez-Perez & Arellano, 2009; Shewry, Napier, & Tatham, 1995).
In this work, three animal proteins, bovine gelatin (BG), fish gelatin (FG) and egg white protein (EWP), and three vegetable proteins, pea protein isolate (PPI), soy protein isolate (SPI) and rice protein isolate (RPI), all of which are composite mixtures of a number of protein fractions, were investigated in order to assess the significance of high power ultrasound treatment on industrially relevant food proteins. The objectives of this research were to discern the effects of ultrasound treatment upon animal and vegetable proteins, in particular changes in physicochemical properties, measured in terms of size, molecular structure and intrinsic viscosity. Furthermore, differences in the performance of the proteins as emulsifiers after ultrasound treatment were assessed in terms of emulsion droplet size, emulsion stability and interfacial tension. Oil-in-water emulsions were prepared with either untreated or ultrasound treated BG, FG, EWP, PPI, SPI and RPI at different concentrations and compared between them and to a low molecular weight emulsifier, Brij 97.
Materials
Bovine gelatin (BG; 175 Bloom), cold water fish gelatin (FG; 200 Bloom), egg white protein from chickens (EWP), Brij® 97 and sodium azide were purchased from Sigma Aldrich (UK). Pea protein isolate (PPI), soy protein isolate (SPI) and rice protein isolate (RPI) were all kindly provided by Kerry Ingredients (Listowel, Ireland). The composition of the animal and vegetable proteins used in this study is presented in Table 1, acquired from the material specification forms of the suppliers. The oil used was commercially available rapeseed oil. The water used in all experiments was passed through a double distillation unit (A4000D, Aquatron, UK).
Preparation of untreated protein solutions
Bovine gelatin (BG), fish gelatin (FG) and rice protein isolate (RPI) solutions were prepared by dispersion in water and adjusting the pH of the solution to 7.08 ± 0.04 with 1 M NaOH, as the initial pH of the solution is close to the isoelectric point (5.32, 5.02 and 4.85 for BG, FG and RPI, respectively). BG, FG, EWP, PPI, SPI and RPI were dispersed in water to obtain solutions within a protein concentration range of 0.1–10 wt.%, where all the animal proteins were soluble over the range of concentrations, whilst the vegetable proteins possessed an insoluble component regardless of hydration time. Sodium azide (0.02 wt.%) was added to the solutions to mitigate against microbial activity.
Ultrasound treatment of protein solutions
An ultrasonic processor (Viber Cell 750, Sonics, USA) with a 12 mm diameter stainless steel probe was used to ultrasound treat 50 ml aliquots of BG, FG, EWP, PPI, SPI and RPI solutions in 100 ml plastic beakers, which were placed in an ice bath to reduce heat gain. The protein solutions were sonicated at a frequency of 20 kHz and an amplitude of 95% (wave amplitude of 108 μm at 100% amplitude) for up to 2 min. This yielded an ultrasonic power intensity of ~34 W cm⁻², which was determined calorimetrically by measuring the temperature rise of the sample as a function of treatment time, under adiabatic conditions. The acoustic power intensity, I_a (W cm⁻²), was calculated as follows (Margulis & Margulis, 2003):

$$ I_a = \frac{P_a}{S_A}, \qquad P_a = m \, c_p \left( \frac{dT}{dt} \right)_{t=0} $$

where P_a (W) is the acoustic power, S_A is the surface area of the ultrasound emitting surface (1.13 cm²), m is the mass of ultrasound treated solution (g), c_p is the specific heat of the medium (4.18 kJ/gK) and dT/dt is the rate of temperature change with respect to time, starting at t = 0 (°C/s).
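As a minimal numerical sketch of this calorimetric determination (the temperature ramp below is invented for illustration; only the sample mass, specific heat and emitter area follow the values stated above):

```python
import numpy as np

def acoustic_intensity(times_s, temps_c, mass_g, c_p=4.18, area_cm2=1.13):
    # P_a = m * c_p * (dT/dt at t = 0); with c_p in J/(g K), P_a is in watts.
    # dT/dt is estimated as the slope of a linear fit to the initial ramp.
    dT_dt = np.polyfit(times_s, temps_c, 1)[0]
    p_a = mass_g * c_p * dT_dt
    return p_a / area_cm2  # W cm^-2

# Invented temperature readings for a 50 g sample over the first 10 s.
t = np.arange(0, 11, 1.0)
T = 8.0 + 0.19 * t
print(round(acoustic_intensity(t, T, mass_g=50.0), 1))  # ~35 W cm^-2
```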
The temperature of the protein solutions was measured before and after sonication by means of a digital thermometer (TGST3, Sensor-Tech Ltd., Ireland), with an accuracy of ±0.1 °C. Prior to ultrasound treatment, the temperature of the protein solutions was within the range of 5–10 °C, whilst the temperature of the BG and FG solutions was within a range of 45–50 °C, above the helix coil transition temperature. After ultrasonic irradiation, the temperature of all protein solutions rose to approximately 45 °C.
Characterisation of untreated and ultrasound treated proteins
2.2.3.1. pH measurements. The pH of animal and vegetable protein solutions was measured before and after sonication at a temperature of 20 °C. pH measurements were made using a SevenEasy pH meter (Mettler Toledo, UK). This instrument was calibrated with buffer standard solutions of known pH. The pH values are reported as the average and standard deviation of three repeat measurements.
2.2.3.2. Microstructure characterisation. The size of untreated and ultrasound treated animal proteins was measured by dynamic light scattering (DLS) using a Zetasizer Nano Series (Malvern Instruments, UK), and the size of untreated and ultrasound treated vegetable proteins was measured by static light scattering (SLS) using a Mastersizer 2000 (Malvern Instruments, UK). Protein size values are reported as the Z-average (D_z). The width of the protein size distribution was expressed in terms of span, Span = (D_v0.9 − D_v0.1)/D_v0.5, where D_v0.9, D_v0.1 and D_v0.5 are the equivalent volume diameters at 90, 10 and 50% cumulative volume, respectively. Low span values indicate a narrow size distribution. The protein size and span values are reported as the average and standard deviation of three repeat measurements.
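For illustration, the span calculation reduces to a one-line function (the percentile diameters below are invented):

```python
def span(d_v10, d_v50, d_v90):
    # Width of the volume-weighted size distribution; lower = narrower.
    return (d_v90 - d_v10) / d_v50

print(span(0.8, 1.6, 3.0))  # hypothetical percentile diameters in um
```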
2.2.3.3. Microstructure visualisation. Cryogenic scanning electron microscopy (Cryo-SEM; Philips XL30 FEG ESSEM) was used to visualise the microstructure of untreated and ultrasound treated proteins. One drop of protein solution was frozen to approximately −180 °C in liquid nitrogen slush. Samples were then fractured and etched for 3 min at a temperature of −90 °C inside a preparation chamber. Afterwards, samples were sputter coated with gold and scanned, during which the temperature was kept below −160 °C by addition of liquid nitrogen to the system.

2.2.3.4. Molecular structure characterisation. The molecular structure of untreated and ultrasound treated animal and vegetable proteins was determined by sodium dodecyl sulphate–polyacrylamide gel electrophoresis (SDS-PAGE), using a Mini-Protean 3 Electrophoresis System (Bio-Rad, UK), where proteins were tested using the reducing method. 100 μL of protein solution at a concentration of 1 wt.% was added to 900 μL of Laemmli buffer (Bio-Rad, UK; 65.8 mM Tris–HCl, 2.1% SDS, 26.3% (w/v) glycerol, 0.01% bromophenol blue) and 100 μL of β-mercaptoethanol (Bio-Rad, UK) in 2 mL micro tubes and sealed. These 2 mL micro tubes were placed in a float in a water bath at a temperature of 90 °C for 30 min, to allow the reduction reaction to take place. A 10 μL aliquot was taken from each sample and loaded onto a Tris-acrylamide gel (Bio-Rad, UK; 4–20% Mini Protean TGX Gel, 10 wells). A molecular weight standard (Bio-Rad, UK; Precision Plus Protein™ All Blue Standards) was used to determine the primary protein structure molecular weight profile of the samples. Gel electrophoresis was carried out initially at 55 V (I > 20 mA) for 10 min, then at 155 V (I > 55 mA) for 45 min in a running buffer (10× Tris/Glycine/SDS Buffer, Bio-Rad, UK; 4% Tris, 15% glycine, 0.5% SDS). The gels were removed from the gel cassette and stained with Coomassie Bio-safe stain (Bio-Rad, UK; 4% phosphoric acid, 0.5% methanol, 0.05% ethanol) for 1 h and de-stained with distilled water overnight.
2.2.3.5. Intrinsic viscosity measurements. The intrinsic viscosity of untreated and ultrasound treated animal and vegetable proteins was determined by a double extrapolation to zero concentration using the Huggins (1) and Kraemer (2) equations:

$$ \frac{\eta_{sp}}{c} = [\eta] + k_H [\eta]^2 c \qquad (1) $$

$$ \frac{\ln \eta_{rel}}{c} = [\eta] + k_K [\eta]^2 c \qquad (2) $$

where η_sp is the specific viscosity, (η − η₀)/η₀, c the protein concentration (w/v%), [η] the intrinsic viscosity (dL/g) and k_H the Huggins constant; η_rel is the relative viscosity (viscosity of the solution, η, over the viscosity of the solvent, η₀) and k_K is the Kraemer constant.
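A minimal sketch of this double extrapolation, assuming a simple least-squares fit of both linearised forms with an averaged intercept (the dilution series is fabricated; the actual fits used the measured viscosities):

```python
import numpy as np

def intrinsic_viscosity(conc, eta_rel):
    # Huggins:  eta_sp / c      = [eta] + k_H * [eta]^2 * c
    # Kraemer:  ln(eta_rel) / c = [eta] + k_K * [eta]^2 * c
    eta_sp = eta_rel - 1.0
    slope_h, icept_h = np.polyfit(conc, eta_sp / conc, 1)
    slope_k, icept_k = np.polyfit(conc, np.log(eta_rel) / conc, 1)
    eta_int = 0.5 * (icept_h + icept_k)  # common intercept = [eta]
    return eta_int, slope_h / eta_int**2, slope_k / eta_int**2

# Fabricated dilution series (c in w/v %), kept within 1.2 < eta_rel < 2.
c = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40])
eta_rel = 1.0 + 2.0 * c + 1.2 * c**2
print(intrinsic_viscosity(c, eta_rel))  # [eta] near 2 dL/g, plus k_H, k_K
```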
The concentration ranges used for the determination of the intrinsic viscosity of BG, FG, EWP, PPI, SPI and RPI were 0.1–0.5 wt.%, 0.25–1.5 wt.%, 1.5–3 wt.%, 0.5–0.8 wt.%, 1.5–3 wt.% and 0.5–2 wt.%, respectively. The validity of the regression procedure is confined within a discrete range of η_rel, 1.2 < η_rel < 2. The upper limit is due to hydrodynamic interaction between associates of protein molecules, and the lower limit is due to inaccuracy in the determination of very low viscosity fluids. A value of η_rel approaching 1 indicates the lower limit (Morris et al., 1981).
The viscosity of the protein solutions was measured at 20 °C using a Kinexus rheometer (Malvern Instruments, UK) equipped with a double gap geometry (25 mm diameter, 40 mm height). For the determination of intrinsic viscosity by extrapolation to infinite dilution, there must be linearity between shear stress and shear rate, which indicates a Newtonian behaviour region over the range of shear rates used in the measurements. The Newtonian plateau region of the BG, FG, EWP, PPI, SPI and RPI solutions at the range of concentrations used was found within a shear rate range of 25–1000 s⁻¹ (data not shown). Thus, the viscosity values of the protein solutions and of the solvent (distilled water) were selected from the flow curve data at a constant shear rate of 250 s⁻¹ (within the Newtonian region), and were subsequently used to determine the specific viscosity, η_sp, the relative viscosity, η_rel, and the intrinsic viscosity, [η]. At least three replicates of each measurement were made.
Preparation of oil-in-water emulsions
10 wt.% dispersed phase (rapeseed oil) was added to the continuous aqueous phase containing either untreated or sonicated animal or vegetable proteins or Brij 97 at different concentrations, ranging from 0.1 to 10 wt.%. An oil-in-water pre-emulsion was prepared by emulsifying this mixture at 8000 rpm for 2 min using a high shear mixer (SL2T, Silverson, UK). Submicron oil-in-water emulsions were then prepared by further emulsifying the pre-emulsion using a high-pressure valve homogeniser (Panda NS 1001L-2K, GEA Niro Soavi, UK) at 125 MPa for 2 passes. The initial temperature of EWP, PPI, SPI and RPI emulsions was 5 °C to prevent thermal denaturation of proteins from high pressure homogenisation, whilst denaturation may still occur due to the high shear during high pressure processing. The initial temperature of BG and FG emulsions was 50 °C to prevent gelation of gelatin (bovine or fish) during the homogenisation process. High pressure processing increases the temperature of the processed material, and consequently, the final temperatures of emulsions prepared with EWP, PPI, SPI and RPI, and gelatin (BG and FG), after homogenisation were ~45 °C and ~90 °C, respectively.

2.2.5. Characterisation of oil-in-water emulsions. 2.2.5.1. Droplet size measurements. The droplet size of the emulsions was measured by SLS using a Mastersizer 2000 (Malvern Instruments, UK) immediately after emulsification. Emulsion droplet size values are reported as the volume-surface mean diameter (Sauter diameter; d3,2). The stability of the emulsions was assessed by droplet size measurements over 28 days, where emulsions were stored under refrigeration conditions (4 °C) throughout the duration of the stability study. The droplet sizes and error bars are reported as the mean and standard deviation, respectively, of measured emulsions prepared in triplicate.
2.2.5.2. Interfacial tension measurements. The interfacial tension between the aqueous phase (pure water, animal or vegetable protein solutions, or surfactant solution) and the oil phase (rapeseed oil) was measured using a K100 tensiometer (Krüss, Germany) with the Wilhelmy plate method. The Wilhelmy plate has a length, width and thickness of 19.9 mm, 10 mm and 0.2 mm, respectively, and is made of platinum. The Wilhelmy plate was immersed in 20 g of aqueous phase to a depth of 3 mm. Subsequently, an interface between the aqueous phase and the oil phase was created by carefully pipetting 50 g of the oil phase over the aqueous phase. The test was conducted over 3600 s and the temperature was maintained at 20 °C throughout the duration of the test. The interfacial tension values and error bars are reported as the mean and standard deviation, respectively, of three repeat measurements.
2.2.5.3. Emulsion visualisation. Cryogenic scanning electron microscopy (Cryo-SEM; Philips XL30 FEG ESSEM) was used to visualise the microstructure of pre-emulsions prepared using untreated and sonicated proteins. One drop of pre-emulsion was frozen to approximately −180 °C in liquid nitrogen slush. Samples were then fractured and etched for 3 min at a temperature of −90 °C inside a preparation chamber. Afterwards, samples were sputter coated with gold and scanned, during which the temperature was kept below −160 °C by addition of liquid nitrogen to the system.
Statistical analysis
A Student's t-test with a 95% confidence interval was used to assess the significance of the results obtained. Results with P < 0.05 were considered statistically significant.
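For reference, a minimal version of this significance test using SciPy (the triplicate values are invented placeholders):

```python
from scipy.stats import ttest_ind

# Hypothetical triplicate size measurements (um), for illustration only.
untreated = [1.62, 1.58, 1.71]
sonicated = [0.24, 0.27, 0.22]
t_stat, p_value = ttest_ind(untreated, sonicated)
print(p_value < 0.05)  # True -> significant at the 95% confidence level
```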
Effect of ultrasound treatment on the structural and physical properties of BG, FG, EWP, PPI, SPI and RPI
The effect of the duration of ultrasonic irradiation on the size and pH of BG, FG, EWP, PPI, SPI and RPI was initially investigated. 0.1 wt.% solutions of BG, FG, EWP, PPI, SPI and RPI were sonicated for 15, 30, 60 and 120 s, at an ultrasonic frequency of 20 kHz and an amplitude of 95%. Protein size and pH measurements for untreated and ultrasound treated BG, FG, EWP, PPI, SPI and RPI as a function of time are shown in Fig. 1 and Table 2. The vegetable protein isolates presented in Fig. 1 prior to sonication (i.e. t = 0) are in a highly aggregated state due to protein denaturation from the processing used to obtain these isolates. Fig. 1 shows that there is a significant reduction (P < 0.05) in protein size with an increase in sonication time, and the results also highlight that after a sonication of 1 min there is minimal further reduction in the protein size of BG, FG, EWP, PPI and SPI. This decrease in protein size is attributed to disruption of the hydrophobic and electrostatic interactions which maintain untreated protein aggregates, by the high hydrodynamic shear forces associated with ultrasonic cavitations. However, there is no significant reduction (P > 0.05) in the size of RPI agglomerates, irrespective of treatment time, due to the highly aggregated structure of the insoluble component of RPI, ascribed to both the presence of carbohydrate within the aggregate structure and the denaturation of protein during the preparation of the protein isolate, restricting size reduction by way of ultrasound treatment (Guraya & James, 2002; Marshall & Wadsworth, 1994; Mujoo, Chandrashekar, & Zakiuddin Ali, 1998). The pH of all animal and vegetable protein solutions, with the exception of RPI, decreased significantly (P < 0.05) with increasing sonication time. Equivalent to the protein size measurements, after a treatment time of 1 min the pH of the protein solutions decreased no further. The decrease in pH of the animal and vegetable protein solutions is thought to be associated with transitional changes resulting in deprotonation of acidic amino acid residues (Sakurai, Konuma, Yagi, & Goto, 2009). These results are consistent with those of O'Sullivan et al. (2014), who showed that increased sonication led to a significant reduction of protein size and pH for dairy proteins up to a sonication time of 1 min, as with the animal and vegetable proteins here, with an ultrasound treatment of 20 kHz and an amplitude of 95%.
The stability of sonicated animal and vegetable protein solutions as a function of time was investigated via the protein size and protein size distribution (span) of sonicated BG, FG, EWP, PPI, SPI and RPI. Animal and vegetable protein solutions with a concentration of 0.1 wt.% were ultrasound treated at 20 kHz and ~34 W cm⁻² for a sonication time of 2 min, as no further decrease in protein size after a sonication time of 1 min was observed (cf., Table 2). The protein size and span values of sonicated animal and vegetable proteins were measured immediately after treatment and after 1 and 7 days, in order to assess the stability of the protein size and protein size distribution. Protein size measurements and span values obtained from DLS and SLS for untreated and ultrasound treated BG, FG, EWP, PPI, SPI and RPI are shown in Table 3.
As can be seen from Table 3, ultrasound treatment produced a significant reduction (P < 0.05) in the size and span of BG, FG and EWP. However, 7 days after sonication an increase in the size and a broadening of the distribution were observed for BG, FG and EWP. The effective size reduction of the ultrasound treatment for BG, FG and EWP on day 7 was 85.6%, 80% and 74.25%, respectively. In the case of PPI and SPI, the results in Table 3 show that ultrasound treatment significantly (P < 0.05) reduced the aggregate size, with a broadening of the protein size distribution. The size distribution of PPI and SPI after ultrasound treatment is bimodal, one population having a similar size to the parent untreated protein, and the other population being nano-sized (~120 nm). The span of the distribution and the protein size on day 7 for PPI and SPI were quite similar to those immediately after sonication, representing an effective protein size reduction of 95.7% and 82.3% for PPI and SPI, respectively. This significant reduction in the aggregate size of both PPI and SPI from ultrasound treatment allows for improved solubilisation, in agreement with Jambrak et al. (2009), who observed a significant reduction in the size of SPI aggregates. Arzeni, Martínez, et al. (2012) also observed a decrease in protein size for sonicated SPI but an increase in size for EWP treated by ultrasound, whereby this increase in the size of EWP aggregates is associated with thermal aggregation during the ultrasound treatment. The reason for the observed decrease in the protein size of BG, FG, EWP, PPI and SPI is the disruption of non-covalent associative forces, such as hydrophobic and electrostatic interactions and hydrogen bonding, which maintain protein aggregates in solution, induced by the high levels of hydrodynamic shear and turbulence due to ultrasonic cavitations. The observed increase in size for BG, FG and EWP after 7 days is thought to be due to reorganisation of the proteins into sub-aggregates through non-covalent interactions (electrostatic and hydrophobic). In the case of PPI and SPI, the static size observed is due to the more defined structure of the PPI and SPI aggregates in comparison to the fully hydrated animal proteins, which allows for greater molecular interactions and mobility (Veis, 1964). In order to validate these hypotheses, cryo-SEM micrographs were captured of BG, EWP, SPI and PPI solutions at 1 wt.%, both untreated and 7 days after sonication (Fig. 2). Untreated BG in solution (cf., Fig. 2a) appears to be distributed into discrete fibres, which is consistent with the literature describing gelatin as a fibrous protein (Schrieber & Gareis, 2007; Veis, 1964), whilst BG treated by ultrasound (cf., Fig. 2b) appears to be in the form of fibrils of the parent untreated BG fibre, where the width of the fibres and the fibrils is equivalent, yet the length of the fibrils is shorter than the untreated BG fibres. In the case of untreated SPI (cf., Fig. 2c) large aggregates of protein can be seen, composed of discrete entities, whereas sonicated SPI (cf., Fig. 2d) has a notably reduced protein size, with a monodisperse size distribution. Similar results were observed for FG, EWP and PPI (data not shown). These results are in agreement with the previously discussed observations (cf., Table 3), and add evidence to the hypothesis that ultrasound treatment causes disruption of protein aggregates, which subsequently reorganise themselves into smaller sub-associates.
The molecular structure of untreated and ultrasound treated animal and vegetable proteins was investigated next. Protein solutions at a concentration of 1 wt.% were ultrasound treated for 2 min at 20 kHz, with a power intensity of ~34 W cm⁻². Electrophoretic profiles obtained by SDS-PAGE for untreated and ultrasound treated BG, FG, EWP, SPI, PPI and RPI, and the molecular weight standard, are shown in Fig. 3. No difference in the protein fractions was observed between untreated and sonicated BG, FG, EWP, SPI, PPI and RPI (cf., Fig. 3). These results are in concurrence with those reported by Krise (2011), who showed no difference in the primary structure molecular weight profile between untreated and ultrasound treated egg white, with a treatment conducted at 55 kHz and 45.33 W cm⁻² for 12 min. Moreover, the obtained protein fractions are in agreement with the literature for gelatin (Gouinlock, Flory, & Scheraga, 1955; Veis, 1964), EWP (Anton et al., 2009), SPI (Gonzalez-Perez & Arellano, 2009), PPI (Sun & Arntfield, 2012) and RPI (Hamaker, 1994; Juliano, 1985). The intrinsic viscosity, [η], was obtained by fitting the experimental viscosity data to the Huggins and Kraemer equations, for untreated and ultrasound irradiated animal and vegetable protein solutions, as shown in Fig. 4 for EWP and PPI. The other proteins investigated as part of this study (BG, FG, SPI and RPI) display similar behaviour to EWP (i.e. negative k_H and k_K values). The values of [η] and the Huggins, k_H, and Kraemer, k_K, constants for each of the proteins investigated in this study are listed in Table 4.
Intrinsic viscosity, [η], demonstrates the degree of hydration of proteins and provides information about the associate hydrodynamic volume, which is related to the molecular conformation of proteins in solution (Behrouzian, Razavi, & Karazhiyan, 2014; Harding, 1997; Sousa, Mitchell, Hill, & Harding, 1995). A comparison of [η] between untreated and ultrasound treated animal and vegetable proteins (cf., Table 4) demonstrates that ultrasound treatment induced a significant reduction (P < 0.05) in the intrinsic viscosity of BG, FG, EWP, PPI and SPI in solution, and consequently a significant reduction in the hydrodynamic volume occupied by the proteins and the solvents entrained within them. These results are in agreement with the reduction in associate size (cf., Table 3) and the cryo-SEM micrographs (cf., Fig. 2); however, for the case of RPI, there is no reduction in intrinsic viscosity, which is consistent with the previous size measurements (cf., Table 3). Gouinlock et al. (1955), Lefebvre (1982) and Prakash (1994) reported intrinsic viscosity values of 6.9 dL/g for gelatin, 0.326 dL/g for ovalbumin and 0.46 dL/g for glycinin (11S; soy globulin), respectively. These values differ from those obtained in this work for untreated BG, EWP and SPI (cf., Table 4). These differences may be a consequence of the complexity of the EWP and SPI solutions, which are composed of a mixture of protein fractions rather than single component ovalbumin and glycinin (Lefebvre, 1982; Prakash, 1994); in the case of gelatin, differences may arise due to variability in the preparation of the gelatin from collagen, which determines the molecular weight profile of the resulting gelatin (Veis, 1964). Extrinsic variations in solvent quality greatly affect the determination of intrinsic viscosity and further account for the differences between the single fraction proteins and the multi-component proteins investigated in this study. Extrinsic factors affecting intrinsic viscosity include temperature, pH, initial mineral content and composition, co-solvents, and additional salts and their concentration (Harding, 1997). Furthermore, the large [η] of both BG and FG by comparison to the other proteins investigated in this study is due to the random coil conformation of these molecules in solution, which consequently entrain more water, giving a larger overall hydrodynamic volume.
The intrinsic viscosity of a protein solution can be used to indicate the degree of hydrophobicity of the protein (Tanner & Rha, 1980). The intrinsic viscosity of protein associates in solution is dependent on their conformation and degree of hydration, which dictate the amount of hydrophobic residues that are within the interior of the protein associates. A decrease in intrinsic viscosity also leads to dehydration of amphiphilic biopolymers, increasing the hydrophobicity of the biopolymer and thus reducing the energy required for adsorption of amphiphilic biopolymers to the oil–water interface (Khan, Bibi, Pervaiz, Mahmood, & Siddiq, 2012). Thus, the significant reduction (P < 0.05) of intrinsic viscosity induced by ultrasound treatment (cf., Table 4) expresses an increase in the degree of hydrophobicity of BG, FG, EWP, PPI and SPI.
The Huggins and Kraemer coefficients are adequate for the assessment of solvent quality. Positive values of the Huggins coefficient, k_H, within a range of 0.25–0.5 indicate good solvation, whilst k_H values within a range of 0.5–1.0 are related to poor solvents (Delpech & Oliveira, 2005; Pamies, Hernández Cifre, del Carmen López Martínez, & García de la Torre, 2008). Conversely, negative values of the Kraemer coefficient, k_K, indicate a good solvent, yet positive values express poor solvation (Delpech & Oliveira, 2005; Harding, 1997; Pamies et al., 2008). The values for k_H and k_K (cf., Table 4) are both negative, with the exception of untreated PPI, which exhibits a positive k_H value, indicating good solvation when considering k_K, yet unusual behaviour in the case of k_H.
Nonetheless, negative values of k_H have been reported in the literature for biopolymers with amphiphilic properties, such as bovine serum albumin (Curvale, Masuelli, & Padilla, 2008), sodium caseinate, whey protein isolate and milk protein isolate (O'Sullivan, Arellano, et al., 2014; O'Sullivan, Pichot, et al., 2014), all dispersed within serum. Positive k_H values are associated with uniform surface charges of polymers (Sousa et al., 1995), indicating that untreated PPI aggregates have a uniform surface charge, and that after ultrasound treatment conformational changes occur, yielding an amphipathic character on the surface of the ultrasound treated PPI, observed through the negative k_H value. It is also important to observe that the relation k_H + k_K = 0.5, generally accepted to indicate adequacy of experimental results for hydrocolloids, was not found for any of the proteins investigated in this study (cf., Table 4). This effect is thought to be associated with the amphipathic nature of the proteins used in this study (by comparison to non-amphiphilic polysaccharides), yielding negative values of k_H and k_K. Similar results have been reported in the literature for other amphiphilic polymers (Curvale et al., 2008; O'Sullivan, Arellano, et al., 2014; Yilgor, Ward, Yilgor, & Atilla, 2006). In addition, the values of k_H and k_K tend to decrease after ultrasound treatment, indicating improved solvation of the proteins (Delpech & Oliveira, 2005).
Comparison of the emulsifying properties of untreated and ultrasound treated BG, FG, EWP, PPI, SPI and RPI
Oil-in-water emulsions were prepared with 10 wt.% rapeseed oil and an aqueous continuous phase containing either untreated or ultrasound irradiated (2 min at 20 kHz, ~34 W cm⁻²) BG, FG, EWP, PPI, SPI and RPI, or a low molecular weight surfactant, Brij 97, at a range of emulsifier concentrations (0.1–10 wt.%). Emulsions were prepared using high-pressure valve homogenisation (125 MPa for 2 passes) and droplet sizes as a function of emulsifier type and concentration are shown in Fig. 5. The emulsion droplet sizes were measured immediately after emulsification, and all exhibited unimodal droplet size distributions.
Emulsions prepared with sonicated BG (cf., Fig. 5a), EWP (cf., Fig. 5c) and PPI (cf., Fig. 5d) at concentrations <1 wt.% yielded a significant (P < 0.05) reduction in emulsion droplet size by comparison to their untreated counterparts. At concentrations ≥1 wt.% the emulsions prepared with untreated and ultrasound treated BG, EWP and PPI exhibited similar droplet sizes. The decrease in emulsion droplet size after ultrasound treatment at concentrations <1 wt.% is consistent with the significant reduction (P < 0.05) in protein size (increase in surface area-to-volume ratio) upon ultrasound treatment of BG, EWP and PPI solutions (cf., Table 3), which allows for more rapid adsorption of protein to the oil–water interface, as reported by Damodaran and Razumovsky (2008). In addition, the significant increase in hydrophobicity of ultrasound treated BG, EWP and PPI and the decrease in intrinsic viscosity (cf., Table 4; Khan et al., 2012) would lead to an increased rate of protein adsorption to the oil–water interface, reducing the interfacial tension and allowing for improved facilitation of droplet break-up. The submicron droplets obtained for untreated PPI are in agreement with the droplet sizes measured by Donsì, Senatore, Huang, and Ferrari (2010), in the order of ~200 nm for emulsions containing pea protein (4 wt.%).
Emulsions prepared with the tested concentrations of untreated and ultrasound treated FG (cf., Fig. 5b), SPI (data not shown) and RPI (data not shown) yielded similar droplet sizes, where emulsions prepared with 0.1 wt.% FG yielded emulsion droplets of ~5 μm, and both SPI and RPI yielded ~2 μm droplets at the same concentration. Furthermore, at similar concentrations PPI yielded smaller emulsion droplets than those prepared with SPI, making SPI the poorer emulsifier, in agreement with the results of Vose (1980). This behaviour was anticipated for RPI, where no significant reduction (P > 0.05) in protein size was observed (cf., Table 3), yet unexpected when considering the significant reduction (P < 0.05; increase in surface area-to-volume ratio) of protein size observed for both sonicated FG and SPI (cf., Table 3). Moreover, the significant increase in hydrophobicity of ultrasound treated FG and SPI, expressed by the decrease in intrinsic viscosity (cf., Table 4; Khan et al., 2012; Tanner & Rha, 1980), would also be expected to result in faster adsorption of protein to the oil–water interface; however, it appears that the rate of protein adsorption of ultrasound treated FG and SPI to the oil–water interface remains unchanged regardless of the smaller protein associate sizes and increase in hydrophobicity, when compared with untreated FG and SPI. Even though ultrasound treatment reduces the aggregate size of SPI, proteins possessing an overall low molecular weight, such as EWP (ovalbumin is ~44 kDa), are capable of forming smaller emulsion droplets than larger molecular weight proteins (glycinin is 360 kDa), as lower molecular weight species have greater molecular mobility through the bulk for adsorbing to oil–water interfaces (Beverung et al., 1999; Caetano da Silva Lannes & Natali Miquelim, 2013). The submicron droplets achieved for untreated FG are consistent with droplet sizes obtained by Surh, Decker, and McClements (2006), in the order of ~300 nm for emulsions containing either low molecular weight (~55 kDa) or high molecular weight (~120 kDa) fish gelatin (4 wt.%). At protein concentrations >1 wt.%, emulsions prepared with either untreated or ultrasound treated EWP (cf., Fig. 5c), SPI and RPI formed micron sized entities (>10 μm). Unexpectedly, emulsions prepared with PPI did not exhibit the formation of these entities, even though the structure of PPI is similar to that of SPI. The degree and structure of the denatured component of PPI likely varies from that of SPI and accounts for the non-aggregating behaviour of PPI. Emulsions processed using high pressure homogenisation experience both increases in temperature and regions of high hydrodynamic shear, and both of these mechanisms can result in denaturation of proteins. These micron sized entities are attributed to denaturation and aggregation of protein due to the high levels of hydrodynamic shear present during the homogenisation process, as thermal effects were minimised by ensuring that the emulsions were processed at a temperature of 5 °C, and the outlet temperature was less than 45 °C in all cases, lower than the thermal denaturation temperatures of EWP, SPI and RPI (Ju, Hettiarachchy, & Rath, 2001; Sorgentini et al., 1995; Van der Plancken, Van Loey, & Hendrickx, 2006). Hydrostatic pressure induced gelation of EWP, SPI and RPI has been reported in the literature (Messens, Van Camp, & Huyghebaert, 1997; Molina et al., 2002; Tang & Ma, 2009; Zhang-Cun et al., 2013), and the formation of these entities is attributed to the high shear forces exerted upon the proteins under high shear conditions, whereby the excess of bulk protein allows for greater interpenetration of protein chains under high shear, yielding the formation of discrete entities composed of oil droplets within denatured aggregated protein.
Unexpectedly, emulsions prepared with a higher concentration of protein (10 wt.%) yielded a significant (P < 0.05) reduction in entity size in comparison to those prepared with the lower concentration (5 wt.%). This behaviour is ascribed to an increased rate of formation and number of aggregates formed at higher concentrations during the short time within the shear field.
Emulsion droplet sizes for all animal and vegetable proteins investigated (cf., Fig. 5) are smaller than the sizes of the untreated proteins (cf., Table 3). Be that as it may, the reported protein sizes (cf., Table 3) represent aggregates of protein molecules and not discrete protein fractions. Native ovalbumin and glycinin have hydrodynamic radii (Rh) of approximately 3 nm and 12.5 nm, respectively (García De La Torre, Huertas, & Carrasco, 2000; Peng, Quass, Dayto, & Allen, 1984), whereas EWP and SPI have Dz values of approximately 1.6 and 1.7 µm, respectively (cf., Table 3). This disparity in size is due to the preparation of these protein isolates, whereby shear and temperature result in the formation of insoluble aggregated material, in contrast to the soluble native protein fractions. Proteins in aqueous solutions associate to form aggregates due to hydrophobic and electrostatic interactions (O'Connell, Grinberg, & de Kruif, 2003); however, in the presence of a hydrophobic dispersed phase (i.e. rapeseed oil) the protein fractions which comprise the aggregate dissociate and adsorb to the oil-water interface (Beverung et al., 1999; O'Connell & Flynn, 2007), which accounts for the fabrication of the submicron droplets presented in this study.
The emulsion droplet sizes presented in Fig. 5, which were shown to depend on emulsifier type, can be interpreted by comparing the interfacial tension of the studied systems. Fig. 6 presents the interfacial tension between water and rapeseed oil for untreated and ultrasound treated BG, FG, PPI and SPI, and Brij 97, all at an emulsifier concentration of 0.1 wt.%. In order to assess the presence of surface active impurities within the dispersed phase, the interfacial tension between distilled water and rapeseed oil was also measured. Fig. 6 shows that the interfacial tension of all systems decreases continually as a function of time. In light of these results, the decrease of interfacial tension with time is attributed primarily to the nature of the dispersed phase used, and to a lesser degree to the type of emulsifier. Gaonkar (1989, 1991) explained that the time dependent nature of the interfacial tension of commercially available vegetable oils against water was due to the adsorption of surface active impurities present within the oils at the oil-water interface. Gaonkar (1989, 1991) also reported that after purification of the vegetable oils (percolation through a synthetic magnesium silicate bed), the time dependency of interfacial tension was no longer observed.
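Equilibrium values of the kind discussed below are typically estimated by fitting a relaxation model to the dynamic interfacial tension curve. A minimal sketch is given here, assuming a single-exponential decay toward a plateau; the data points are illustrative placeholders, not measurements from this study:

```python
# Sketch: fitting a single-exponential relaxation to dynamic interfacial
# tension data to estimate the equilibrium value. Assumes
# gamma(t) = gamma_eq + (gamma_0 - gamma_eq) * exp(-t / tau);
# the data below are illustrative, not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def gamma_model(t, gamma_eq, gamma_0, tau):
    return gamma_eq + (gamma_0 - gamma_eq) * np.exp(-t / tau)

t = np.array([0, 60, 120, 300, 600, 1200, 1800])              # s
gamma = np.array([24.0, 21.5, 19.8, 16.9, 14.8, 13.2, 12.8])  # mN/m (illustrative)

popt, _ = curve_fit(gamma_model, t, gamma, p0=(12.0, 24.0, 300.0))
gamma_eq, gamma_0, tau = popt
print(f"equilibrium interfacial tension ~ {gamma_eq:.1f} mN/m, tau ~ {tau:.0f} s")
```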
No significant differences (P > 0.05) were observed in the obtained values of interfacial tension between untreated and ultrasound treated FG (cf., Fig. 6b) and RPI (data not shown). These results are consistent with the droplet size data, where no significant difference in droplet size was observed. Significant differences were shown for the initial rate of decrease of interfacial tension when comparing untreated and ultrasound treated PPI (cf., Fig. 6c). Ultrasound treated PPI aggregates are smaller than untreated PPI (cf., Table 3) and have greater hydrophobicity (i.e. reduction in [η]; cf., Table 4), accounting for the significant reduction of initial interfacial tension, enhancing droplet break-up during emulsification. Significant differences (P < 0.05) in the equilibrium interfacial tension values were observed when comparing untreated and sonicated BG (cf., Fig. 6a), EWP (data not shown) and SPI (cf., Fig. 6d). These results are consistent with the observed significant reduction (P < 0.05) in emulsion droplet size for BG (cf., Fig. 5a) and EWP (cf., Fig. 5c) and add evidence to the hypothesis that aggregates of sonicated BG and EWP adsorb faster to the interface due to a higher surface area-to-volume ratio (cf., Table 3; smaller protein size) and increased hydrophobicity (i.e. reduction in [η]; cf., Table 4), significantly reducing the equilibrium interfacial tension and yielding smaller emulsion droplets. No significant reduction (P > 0.05) in emulsion droplet size was noted for SPI, despite the observed reduction in the equilibrium interfacial tension of SPI (cf., Fig. 6d), which may be a consequence of alternative protein conformations at the oil-water interface. These hypotheses were explored by cryo-SEM of pre-emulsions, to allow for visualisation of the emulsion droplet interface, prepared with untreated and ultrasound treated BG and SPI at an emulsifier concentration of 1 wt.% (cf., Fig. 7).
Emulsion droplets of pre-emulsions prepared with untreated BG (cf., Fig. 7a) show fibres of gelatin tracking around the surface of the droplets, whereas emulsion droplets of pre-emulsions prepared with ultrasound treated BG (cf., Fig. 7b) show the smaller fibrils of gelatin at the interface of the droplets, yielding improved interfacial packing of protein and accounting for the lower equilibrium interfacial tension (cf., Fig. 6a) and the decrease in droplet size (cf., Fig. 5a). The droplet surfaces of pre-emulsions prepared with ultrasound treated SPI (cf., Fig. 7d) appear smoother by comparison to the seemingly more textured droplet interfaces observed for pre-emulsions prepared with untreated SPI (cf., Fig. 7c). These findings are consistent with the interfacial tension data (cf., Fig. 6), where a significant reduction (P < 0.05) of the equilibrium interfacial tension upon sonication of BG and SPI was observed and accounted for by visualisation of the improved interfacial packing of protein.
The stability of oil-in-water emulsions prepared with untreated and sonicated BG, FG, EWP, PPI, SPI and RPI, and Brij 97 for comparative purposes, was assessed over a 28 day period. Fig. 8 shows the development of droplet size (d3,2) as a function of time for emulsions prepared with untreated and ultrasound irradiated BG, FG, PPI and SPI, as well as Brij 97, at an emulsifier concentration of 0.1 wt.%.
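The d3,2 tracked here is the surface-weighted (Sauter) mean diameter; a minimal sketch of its computation from a binned droplet size distribution (with made-up counts, purely for illustration) is:

```python
# Sketch of the Sauter mean diameter d(3,2) used to track emulsion
# stability: d32 = sum(n_i * d_i^3) / sum(n_i * d_i^2).
# Bin centres and counts below are hypothetical.
import numpy as np

diameters = np.array([0.2, 0.5, 1.0, 2.0, 5.0])  # um, bin centres
counts    = np.array([500, 300, 120, 40, 5])     # droplets per bin (hypothetical)

d32 = np.sum(counts * diameters**3) / np.sum(counts * diameters**2)
print(f"d(3,2) = {d32:.2f} um")
```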
Emulsions prepared with untreated BG (cf., Fig. 8a) exhibited a growth in droplet size, and this coalescence was also observed for emulsions prepared with 0.5 wt.% untreated BG, while emulsions prepared with higher concentrations (≥1 wt.%) of untreated BG were stable for the 28 days of the study (data not shown). However, it can also be seen (cf., Fig. 8a) that emulsions prepared with ultrasound treated BG were resistant to coalescence over the 28 days of the study, and had the same stability as Brij 97. The behaviour exhibited by 0.1 wt.% ultrasound treated BG was observed at all concentrations investigated in this study (data not shown). This improved stability of ultrasound treated BG by comparison to untreated BG is thought to be associated with an increase in hydrophobicity (i.e. decrease in the intrinsic viscosity; cf., Table 4) and improved interfacial packing of ultrasound treated BG by comparison to untreated BG, as observed by a decrease in the equilibrium interfacial tension (cf., Fig. 6a) and cryo-SEM visualisation (cf., Fig. 7a and b). In contrast, results in Fig. 8b show that emulsions prepared with both untreated and ultrasound treated FG display coalescence, yet ultrasound treated FG displayed a notable decrease in emulsion stability by comparison to untreated FG. The emulsion stability of untreated and ultrasound treated FG is analogous to untreated BG, where coalescence was observed at a concentration of 0.5 wt.%, and stable emulsions were achieved with higher emulsifier concentrations (≥1 wt.%; data not shown). This decrease in emulsion stability after ultrasound treatment of FG is thought to be associated with a weaker interfacial layer of ultrasound treated FG by comparison to untreated FG, allowing for a greater degree of coalescence. Emulsions prepared with either untreated or sonicated EWP (data not shown), PPI (cf., Fig. 8c), SPI (cf., Fig. 8d) and RPI (data not shown), and Brij 97 (cf., Fig. 8) were all stable against coalescence and bridging flocculation over the 28 days of this study. This stability was observed for all concentrations (≥0.5 wt.%) of untreated and ultrasound treated EWP, PPI, SPI and RPI investigated, as well as for Brij 97 (data not shown). In all cases no phase separation was observed in the emulsions, whilst emulsions with droplet sizes >1 µm exhibited gravitational separation, with a cream layer present one day after preparation. Furthermore, the d3,2 at an emulsifier concentration of 0.1 wt.% is in all cases lower for ultrasound treated proteins by comparison to their untreated counterparts, as previously discussed.
Conclusions
This study showed that ultrasound treatment (20 kHz, ~34 W cm⁻² for 2 min) of animal and vegetable proteins significantly (P < 0.05) reduced aggregate size and hydrodynamic volume, with the exception of RPI. The reduction in protein size was attributed to the hydrodynamic shear forces associated with ultrasonic cavitation. In spite of the aggregate size reduction, no differences in molecular weight profile (primary structure) were observed between untreated and ultrasound irradiated BG, FG, EWP, PPI, SPI and RPI.
Unexpectedly, emulsions prepared with the ultrasound treated FG, SPI and RPI proteins had the same droplet sizes as those obtained with their untreated counterparts, and were stable at the same concentrations, with the exception of emulsions prepared with ultrasound treated FG, which exhibited reduced emulsion stability at lower concentrations (<1 wt.%). These results suggest that sonication did not significantly affect the rate of FG or RPI surface denaturation at the interface, as no significant (P > 0.05) reduction in the equilibrium interfacial tension between untreated and ultrasound irradiated FG or RPI was observed. By comparison, emulsions fabricated with ultrasound treated BG, EWP and PPI at concentrations <1 wt.% had smaller droplet sizes than their untreated counterparts at the same concentrations. This behaviour was attributed to a reduction in protein size (i.e. increased mobility through the bulk) and an increase in hydrophobicity (reflected by a decrease in the intrinsic viscosity) of sonicated BG, EWP and PPI. Furthermore, emulsions prepared with ultrasound treated BG had improved stability against coalescence for 28 days at all concentrations investigated. This enhancement in emulsion stability is attributed to improved interfacial packing, as observed by a lower equilibrium interfacial tension and cryo-SEM micrographs.
Ultrasound treatment can thus improve the solubility of previously poorly soluble vegetable proteins (PPI and SPI) and, moreover, is capable of improving the emulsifying performance of other proteins (BG, EWP and PPI).
Fig. 6. Interfacial tension between water and pure vegetable oil as a function of emulsifier type: (a) untreated BG, ultrasound treated BG and Brij 97; (b) untreated FG, ultrasound treated FG and Brij 97; (c) untreated PPI, ultrasound treated PPI and Brij 97; (d) untreated SPI, ultrasound treated SPI and Brij 97.
Table 1
Composition and pH (measured at a concentration of 1 wt.% and a temperature of 25 °C) of bovine gelatin (BG), fish gelatin (FG), egg white protein (EWP), pea protein isolate (PPI), soy protein isolate (SPI) and rice protein isolate (RPI).
Table 2
Effect of sonication time on the pH of BG, FG, EWP, PPI, SPI and RPI solutions at a concentration of 0.1 wt.%. The standard deviation for all pH measurements was <0.04.
Table 3
Average protein size (Dz) and span of untreated and ultrasound treated BG, FG, EWP, PPI, SPI and RPI at a concentration of 0.1 wt.%.
Urban land cover mapping, using open satellite data. Case study of the municipality of Thessaloniki.
The use of open satellite data has become a valuable tool for monitoring the urban environment due to high temporal, spectral and spatial resolution. Planners and stakeholders can rely on these data since they offer an alternative point of view of a very complex urban scenery. The processing of satellite images with the combined use of different types of data (statistical, in situ etc.) can extract information about the present situation in urban and suburban areas. This study uses image data from the Sentinel satellite mission in order to extract urban land cover characteristics in the city of Thessaloniki. The Sentinel mission started image acquisition in 2014, and the data sets are freely available for downloading. The use of high resolution satellite images (e.g., IKONOS) in urban studies has proved very useful over the years. On the other hand, a medium spatial resolution satellite such as Sentinel-2, with a 10 m pixel size, is a very promising optical earth-observation platform, and it is tested in this study. Image processing techniques were used to produce a classified map of urban characteristics and land cover classes (built up areas, vegetation etc.). The contribution of a radar image from Sentinel-1 to the classification process is also tested. All the above processing is done with an automatic procedure, giving additional value to the proposed methodology. The final classified map will give planners a management tool for decision-making concerning sustainable urban development and will be the basis for implementing hydrology studies, planning, land use change detection and other applications.
Introduction
Urbanization is considered a spatial transformation of the economy, where the population moves through migration from an agricultural, rural based existence to one where production occurs in cities of endogenous numbers and size [1]. This transformation of the landscape, due to urban expansion, alters the natural land cover types to impervious urban land. As a consequence, it produces both short and long term changes in the urban environment. Long term changes deal with the impacts of urbanization on the climate conditions of urban and suburban areas [2]. Departments of urban planning need this information in order to take the right decisions and address the problems that arise [3]. Monitoring the urban environment, recording details about the spatial changes of land cover and land use at the level of a few square meters of pixel size, can be implemented with the use of remote sensing images and image processing techniques. Landsat was one of the first satellite systems used for change detection in urban studies [4], [5]. The most critical factors in monitoring the urban environment from space are the cost of the satellite image and how easily these data can be found. The spatial analysis and the temporal and spectral resolution of the sensor are also very important parameters, since they can detect different types of changes. In recent years the term open data has been widely used and the growth of the "openness" movement has attracted the scientific community. Open data are data that can be freely used, re-used and redistributed by anyone. In this study, open satellite data from the Sentinel-2 mission of the Copernicus Earth Observation Programme are used in order to extract urban land cover characteristics, such as green space, urban (impervious) land and other cover types, in the municipality of Thessaloniki, through supervised classification of the satellite image [6]. The contribution of a radar image from the Sentinel-1 satellite, of the same acquisition date, to the classification accuracy is investigated. Finally, the NDVI index of the presence of vegetation is compared with the corresponding class from the classifications. Qiu et al [7] studied the performance of Vegetation Red-Edge bands in improving land cover classification. By comparing classified satellite images of different dates, the loss and gain of different land covers can be monitored. The processing of the satellite images was done in open-source software. Both the satellite images and the software were downloaded from the site of the European Space Agency (ESA).
Data
Two different satellite products were used in this study: a Sentinel-2 level 1C product, with acquisition date 18/02/2019 (Figure 1), and a Sentinel-1A, C-band synthetic aperture radar (SAR) level-1 GRD product of the same date, in dual polarisation (VV+VH) and descending mode (Figure 2). The Sentinel-2 product is a multispectral image with 13 spectral bands and 3 different spatial resolutions (Table 1; spatial resolution and band number with central wavelength of the Sentinel-2 sensor). Figure 1 shows part of the region of Central Macedonia, Greece, including the prefectures of Serres, Kilkis and Thessaloniki; the city of Thessaloniki and the Thermaikos Gulf are at the south part. Figure 2 shows a broader area of Central Macedonia with no orientation, due to the satellite passage. Data provision is available for downloading from the Copernicus Open Access Hub (https://scihub.copernicus.eu/dhus/home). Level-1C processing of the Sentinel-2 product is both radiometrically and geometrically corrected, including orthorectification and spatial registration on a global reference system with sub-pixel accuracy [8]. The Sentinel-2 image is a top of atmosphere (ToA) image.
On the other hand, the Sentinel-1 instrument is able to transmit horizontal (H) or vertical (V) linear polarizations, and to receive, on two separate receiving channels, both H and V signals simultaneously. Dual-polarisation products are provided in the form of two images, which have the same product characteristics and are co-registered [9]. The range pixel spacing is 10 m.
Preprocessing of Sentinel-1 and Sentinel-2 images
The preprocessing of the Sentinel-1 and Sentinel-2 images was done in the Sentinel Application Platform (SNAP) software. SNAP is a free software package which can be downloaded from the ESA website and is composed of toolboxes that can manage, process, analyse and visualize both multispectral and radar images. For the Sentinel-2 image, the processing steps included an atmospheric correction with the AT2COR routine in SNAP, in order to reduce the top of atmosphere (ToA) image to bottom of atmosphere (BoA) reflectance. The atmospheric correction is an effort to reduce the noise from the interaction (diffusion, attenuation) of the radiance path with the atmosphere. The different bands were then resampled to a 10 m pixel size, the image was cropped to a smaller area of interest so as to allow better management of the process, and a reprojection to the Greek Reference System of 1987 followed. The final multispectral (MS) image is presented in Figure 4. As mentioned in § 2.1, Sentinel-2 level-1C already includes radiometric correction and orthorectification, and for that reason only the above processing steps were implemented. A principal component transformation was applied to the final MS image; the second band (PC2) of the transformed image (Figure 6) distinguished the impervious land better. On the other hand, vegetation areas play an important role in land surface temperature [10]. For the detection of the vegetation areas, the NDVI index was applied to the final MS image, using the Red and NIR bands and equation (1) (Figure 7):

NDVI = (NIR − Red) / (NIR + Red)    (1)
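A minimal sketch of equation (1) applied band-wise in Python is given below; the reflectance arrays are small dummies, whereas in practice they would be read from the resampled BoA product:

```python
# Sketch of the NDVI computation in equation (1), applied band-wise to
# Sentinel-2 reflectance arrays (band 4 = Red, band 8 = NIR at 10 m).
import numpy as np

red = np.array([[0.08, 0.10], [0.25, 0.30]])  # dummy Red reflectances
nir = np.array([[0.45, 0.50], [0.28, 0.32]])  # dummy NIR reflectances

ndvi = (nir - red) / (nir + red + 1e-12)  # small epsilon avoids 0/0
print(ndvi)  # vegetated pixels approach +1, bare/urban pixels sit near 0
```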
The processing steps for the Sentinel-1 image included radar multilooking in order to produce square pixels, radiometric calibration and, finally, terrain correction with the use of a global digital elevation model (SRTM). The terrain correction left the sea blank, due to the absence of the DEM in that area. Finally, the image was reprojected to the Greek Reference System of 1987. The final radar image is presented in Figure 5. A critical task for the continuation of the study was the classification of the final images and the extraction of land cover. More specifically, two images were tested: the final MS image and a synthetic, layer-stacked (LS) image composed of the bands of the MS image, the PC2 band and the radar image bands [11]. Both images were classified through a supervised procedure, using the Random Forest algorithm [12], in the SNAP software. The Random Forest classifier is a non parametric machine learning technique, capable of using continuous and categorical data sets. It is also statistically unbiased, can handle an increasing data sample size, and has several advantages over traditional remote sensing classification algorithms. In Random Forests several decision trees are created (grown) and the response is calculated based on the outcome of all of the decision trees [13], [14]. The processing chain of the Sentinel-1,2 satellite images is presented in Figure 3.
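A minimal sketch of the pixel-based Random Forest step is given below, here with scikit-learn rather than the SNAP implementation, and with synthetic band values and labels; X stacks one row per training pixel and one column per predictor band (13 MS bands, optionally plus PC2 and the 4 radar bands for the LS image):

```python
# Sketch: supervised Random Forest classification of a flattened band stack.
# Band values and class labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_bands = 900, 13
X_train = rng.random((n_pixels, n_bands))   # synthetic training band stack
y_train = rng.integers(0, 9, n_pixels)      # 9 land cover classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

X_scene = rng.random((10_000, n_bands))     # flattened scene pixels
labels = clf.predict(X_scene)               # one class id per pixel
print(labels[:10])
```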
Supervised classification
For the classification of the two images (MS and LS), the number and type of the classes had to be selected, taking into consideration the urban environment and the 10 m spatial analysis of the Sentinel images [15]. Nine classes were selected: one for urban/impervious surfaces (Urban), two for vegetation (Healthy vegetation and Shrub vegetation), one for soil (Bare soil), one for rooftops (Concrete/metal roof), one for tiles (Tile roof), one for shadow and two for water (Shallow and Deep water). After the determination of their number and type, the training data for each class were selected from the MS image, in order to proceed to the supervised classification with the Random Forest algorithm. The predictor variables for the classification of the MS image were its 13 bands, and for the LS image the 13 bands from the MS image, the PC2 band and the 4 bands from the radar image [11]. The final classified images are presented in Figures 8, 9. The accuracy of the classified images was estimated with the calculation of the accuracy matrix and the kappa coefficient. For the accuracy matrix, 10 samples per class were chosen and the corresponding statistics were calculated (Tables 2, 3). Finally, the classified images were cut exactly at the boundaries of the municipality of Thessaloniki (Figures 10, 11); the digital boundaries of the municipality were exported from the OpenStreetMap dataset in shapefile format. Based on the results of the supervised classifications with the Random Forest algorithm, both positive and negative observations can be made about the two classified images (MS and LS). The classification of the MS image was better, with an overall accuracy equal to 95.06%; in particular, the green areas were better classified. The presence of the "salt and pepper" effect was strong, due to the pixel based approach and the heterogeneous urban environment, and 5 samples remained unclassified. On the other hand, the overall accuracy of the LS classified image was equal to 91.46%, the classification of urban areas had lower accuracy than in the MS image, the "salt and pepper" effect was slightly improved, and the unclassified samples were 2.
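The accuracy assessment described above (overall accuracy and Cohen's kappa from reference versus predicted labels) can be sketched as follows; the labels are synthetic, with 10 samples per class as in the study:

```python
# Sketch of the accuracy assessment: confusion (accuracy) matrix, overall
# accuracy and Cohen's kappa. Labels are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score, accuracy_score

rng = np.random.default_rng(1)
y_ref = np.repeat(np.arange(9), 10)                    # 9 classes x 10 samples
y_pred = y_ref.copy()
flip = rng.choice(y_ref.size, size=5, replace=False)   # a few misclassifications
y_pred[flip] = (y_pred[flip] + 1) % 9

print(confusion_matrix(y_ref, y_pred))
print(f"overall accuracy = {accuracy_score(y_ref, y_pred):.2%}")
print(f"kappa = {cohen_kappa_score(y_ref, y_pred):.3f}")
```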
Land cover analysis of the classified images
From the classified images of the municipality of Thessaloniki (Figures 10, 11), it was easy to compute the corresponding areas of the classes, exporting land cover values in square meters. These values can be used either grouped, in order to discriminate the impervious and non impervious land of the municipality, or standalone for the calculation of indices, such as square meters of green areas per capita. More specifically, from the pie chart of Figure 12, it is clear that the impervious surfaces, composed of the classes Urban, Tile roofs and Concrete/metal, are equal to 18. The population according to the Census was 352182. With a simple division of the areas of Healthy and Shrub vegetation by the population, it appears that the green area per capita is equal to 7.4 m². It should be mentioned that the green areas include both the part of the suburban forest of Seich Sou that is within the boundaries of the municipality and the football stadiums; the exclusion of these areas would decrease the green area per capita index. The impervious areas and the land cover are also vital to hydrological studies. Bhaskar and Suribabu [16] estimated surface run-off for an urban area with the use of remote sensing, while Guo et al [17] produced land cover maps and assessed the impacts of land cover change with the L-THIA model.
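The area accounting itself is straightforward: with 10 m pixels each pixel covers 100 m², so class areas follow directly from pixel counts. A minimal sketch is given below; the pixel counts are placeholders, not the study's figures, and only the census population is taken from the text above:

```python
# Sketch of the land cover accounting: class area = pixel count x pixel area.
# Pixel counts below are hypothetical placeholders.
PIXEL_AREA_M2 = 10 * 10
class_counts = {"Healthy vegetation": 18_000, "Shrub vegetation": 8_000,
                "Urban": 120_000}                 # hypothetical pixel counts
areas = {k: v * PIXEL_AREA_M2 for k, v in class_counts.items()}

population = 352_182                              # census population used above
green_m2 = areas["Healthy vegetation"] + areas["Shrub vegetation"]
print(f"green area per capita = {green_m2 / population:.1f} m^2")
```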
Vegetation areas from NDVI image
In order to evaluate the results of the classification, and especially the vegetation areas, a simple rule was applied to the NDVI image. The values of the NDVI range from −1 to +1. As stated by Gonçalves et al [18], values between 0.64 and +1 represent dense vegetation, 0.56 to 0.64 open vegetation, 0.26 to 0.56 herbaceous vegetation, −0.14 to 0.29 urban, and below −0.14 water. For the present study, values greater than 0.4 are considered as vegetation, and a threshold was accordingly applied to the NDVI image of Figure 7 (NDVI > 0.4). The resulting image is presented in Figure 14, where only the vegetation areas of the municipality of Thessaloniki appear. The vegetation area from Figure 14 is 3370649 m². The same area from the supervised MS image is 2620073 m². The difference between the two vegetation areas is 749637 m², or 7191 pixels. This difference expresses the ability of the Random Forests algorithm to classify the vegetation areas. The misclassification of vegetation, according to the chosen NDVI threshold, is due to the complex and heterogeneous urban environment and the 10 m pixel resolution: small areas of vegetation were not correctly classified, because of mixing with other urban classes, while larger areas had better performance. Another source of the discrepancy between the NDVI and the classification is the chosen threshold value of 0.4, which has to be tested against in situ data; perhaps another value would give results closer to the classification. Also, the Random Forests classification parameters could be varied, the classification re-run, and the results compared again.
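The threshold check reduces to a mask comparison; a minimal sketch (with tiny dummy masks rather than the study's rasters) is:

```python
# Sketch of the NDVI-threshold evaluation: pixels with NDVI > 0.4 are taken
# as vegetation and compared against the classifier's vegetation mask.
import numpy as np

ndvi = np.array([[0.7, 0.3], [0.5, 0.1]])                 # dummy NDVI raster
classified_veg = np.array([[True, False], [False, False]])  # dummy RF mask

ndvi_veg = ndvi > 0.4
agreement = ndvi_veg == classified_veg
diff_pixels = int(np.sum(ndvi_veg) - np.sum(classified_veg))
print(f"NDVI flags {diff_pixels} more vegetation pixel(s); "
      f"{agreement.mean():.0%} of pixels agree")
```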
Conclusions
In this study, a methodology was presented where, at no cost and with the use of open satellite data in a free software environment, an advanced spatial analysis of the land cover of an urban area was performed. More specifically, the open satellite data of the Sentinel-1,2 missions from the Copernicus Earth Observation Programme, under the European Commission and the European Space Agency, were used. Both multispectral and radar images were processed in free software, and supervised classifications of the images with the Random Forests classifier were produced. The whole processing chain was carried out in the SNAP environment, in a short time and with great reliability, giving the opportunity to non expert users to implement such an analysis. With the current methodology, different indices and maps can be extracted, such as the impervious surfaces of a city and the green areas. Monitoring these parameters on a short and long time basis and correlating them with in situ data (meteorological etc.) might lead to valuable information about climate change in an urban and suburban environment. The accuracy of the proposed methodology can be improved. More precisely, the supervised classification of the satellite images with the Random Forests algorithm can be done using more training areas, chosen more carefully. Also, in the design of the accuracy matrix, the samples in each class should number 40-50 in order to be statistically significant, and a random sampling approach should be chosen. From the current results, the classification of the MS image was better, with an overall accuracy equal to 95.06%, than that of the LS image at 91.46%. The green areas of the MS image were better classified than those of the LS image, while the "salt and pepper" effect was slightly improved in the LS image. The NDVI threshold approach showed that in both classified MS and LS images the mixed urban green areas were not classified correctly. This was due to the complex and heterogeneous urban environment, where small areas of vegetation were mixed with the neighbouring land cover, as a result of the 10 m pixel size; larger green areas were classified better. Although that problem could be mitigated with additional data inputs in the classification process (digital surface model etc.), this methodology can already help in understanding the impervious and non impervious surfaces, which are extremely important for the rise of temperature and other problems in an urban environment.
Comparative Transport of Legionella and E. coli through Saturated Porous Media in a Two-Dimensional Tank
This study investigated bacterial transport in a two-dimensional (2-D) tank to evaluate the behavior of Legionella pneumophila as compared to Escherichia coli under saturated flow, simulating aquifer conditions. The experiments were performed in a 2-D tank packed with 3700 in³ (60,632 cm³) of commercially available bagged play sand under saturated conditions. The tank was disinfected by backwashing with 10% chlorine solution and subsequently neutralized by backwashing with tap water containing sodium thiosulphate (Na₂S₂O₃) to ensure no chlorine residual. Bacterial transport was measured using samples collected from ports located at vertical transport distances of 5, 15 and 25 inches (12.7, 38.1 and 63.5 cm, respectively) below the sand surface along two vertical sections in the tank. An influent concentration of 10⁵ CFU/mL was used for bacterial cells and the vertical fluid transport rate was 10.3 in/day (26.2 cm/day), controlled using a peristaltic pump at the bottom outlet. Legionella breakthroughs were recorded at 8, 22 and 35 h for the ports on the right side and 9, 24 and 36 h for the ports on the left side, at 5, 15 and 25 inch depths, respectively. At the same depths, E. coli breakthroughs were recorded at 5, 17 and 30 h for the ports on the right side and 7, 19 and 31 h for the ports on the left side. The delay in Legionella transport compared to E. coli is consistent with Legionella's pleomorphic nature. This study provides evidence of the mobility of both E. coli and Legionella in saturated aquifer conditions at a scale more representative of actual aquifer conditions, and a substantive basis for the premise that cell characteristics affect transport characteristics under those conditions.
Introduction
Worldwide, many cities are heavily dependent on groundwater for their water supplies, with more than one billion people living in such locales. In particular, most large cities in arid and semiarid regions are highly dependent on extracting groundwater to meet their rising water demand. This practice is not restricted to arid regions; some large cities in humid regions (e.g., Tokyo, Osaka, Taipei, Manila, and Jakarta) also depend on large extractions of groundwater as part of their water supply [1]. In the United States (U.S.), over 150 million people are estimated to directly depend on groundwater for their water supply [2].
Consumption of untreated groundwater has been associated with increased risk of infection by Escherichia coli O157:H7 [3]. Microbiological and water quality data for 30 public water-supply wells in Worcester and Wicomico Counties, Maryland, showed bacterial presence in all wells at some point during a year-long sampling campaign [4]. In a nationwide study in the U.S., 15% of groundwater wells tested positive for a broad range of bacteria, including coliforms, enterococci, and Clostridium [5].
A study investigating microbial contamination in groundwater wells found that 29.6% (347/1174) of wells tested positive for fecal indicator bacteria (total coliform, E. coli, or fecal coliform) [6]. A meta-analysis of data from 12 international groundwater studies of 718 public drinking-water systems located in a range of hydrogeological settings found that 36%, 12%, 15%, 52% and 26% of wells were positive for total coliform, E. coli, Enterococci, aerobic spores, and anaerobic spores, respectively [7].
Depleting groundwater aquifers are naturally and/or artificially replenished by infiltration and percolation, which may transport microbial and chemical contaminants. Pathogen transport in soil aquifers has been extensively studied using different groups of microbial surrogates. However, the relevance of using surrogate organism transport data for predicting the transport of pathogens, e.g., Legionella, is uncertain [8]. The factors contributing to the transport of pathogens to groundwater aquifers include hydrogeology and characteristics of microbial cells such as survival, size, shape and mobility [6]. The significance of these factors is further complicated by transient flow and transport conditions, including seasonal, annual and long-term variability in recharge as well as transient interactions between climatic factors [9]. In addition, disparities between longitudinal and lateral dispersion in aquifers and transport of contaminants have been reported [10,11]. In the context of the discussion above, the unique features of Legionella warrant investigation of its transport properties in aquifers.
Legionella is ubiquitous in water systems, and Legionella pneumophila is responsible for the majority of the waterborne disease outbreaks (drinking water and non-recreational) in the U.S. [12]. Legionella has a high proclivity for growing in reclaimed water [13] and is present not only in reclaimed water but also in surface water [14]. For example, Legionella was detected in 60% of the samples (3/5) collected from recharge basins in California [15,16]. In Arizona, which is among the leading states practicing groundwater recharge, the presence of Legionella is increasing the risk of exposure and public health concerns.
Legionella is a unique bacterium with high lipopolysaccharide content in the cell membrane [17] and a pleomorphic nature, which can potentially impact its fate and transport. Under normal conditions, Legionella is a Gram-negative bacterium measuring 2 to 20 µm, but it can transform both its size and shape under different environmental conditions. For example, Legionella can transform into a facultative intracellular stage, wherein it requires a host such as an amoeba, which serves as a protective shell for Legionella cells. This plasticity of Legionella cells warrants in-depth study of their transport through different types of aquifers.
The objective of this study was to investigate and compare the transport characteristics of Legionella (a human opportunistic pathogen) and E. coli (a bacterial surrogate) in a two-dimensional packed porous media tank under saturated conditions. These bacteria were selected based on the public health significance of Legionella and the immense value of E. coli as a surrogate for bacterial pathogens (based on the availability of an extensive amount of historical data). This approach more closely resembles aquifer conditions, with intrinsic, multidimensional heterogeneities, in comparison to more common one-dimensional column testing.
Preparation of Bacterial Stocks
Legionella pneumophila (ATCC® 33153™) and E. coli (ATCC® 25922™) were obtained from the American Type Culture Collection (ATCC®, Rockville, MD, USA) and propagated using the methods recommended by the ATCC. The pure cultures of frozen stocks (stored at −80 °C) were thawed at 37 °C. A sterile inoculation loop was used to streak Legionella and E. coli onto their selective media: buffered charcoal yeast extract (BCYE, Thermo Fisher Scientific) for Legionella and brilliant green agar (Sigma Aldrich, St Louis, MO, USA) for E. coli. Legionella was incubated at 37 °C for 96 h before harvesting by flooding the uniform bacterial lawn with 1X PBS buffer containing 10% glycine and scraping the colonies using a cell scraper. E. coli was incubated at 37 °C overnight before transferring an isolated colony to 15 mL tryptic soy broth (TSB) (Sigma Aldrich) and incubating at 37 °C for 24 h.
Configuration, Preparation and Operation of the 2-D Tank
The tank used for this study was a 72 inch (182.88 cm) tall, 24 inch (60.96 cm) wide, 4 inch (10.16 cm) deep, rigid stainless steel frame that held 72 inch (182.88 cm) tall, 24 inch (60.96 cm) wide, 0.5 inch (1.27 cm) thick clear acrylic panels, front and back. The stainless-steel side panels and base of the tank were ported to allow flow control as desired. For this study, bilateral ports at 42 inches (106.7 cm) from the base of the tank were used for influent feed control, and a French drain at the base of the tank provided uniform drainage to a single port for outlet flow control. Lateral bracing across the acrylic panels was used to prevent the panels from bowing under the pressure of a packed, saturated tank. The front acrylic face of the tank was fitted with two vertical series of sampling ports, 18 inches (45.72 cm) apart and centered across the tank. Vertical spacing of the ports in each column was on 2 inch (5.08 cm) centers (Figure 1) through the packed zone of the tank.

Figure 1. A schematic (not to scale) and a photo of the 2-D tank packed with Sakrete© play sand media with sampling ports. Note: a 3 inch gravel pack is not visible at the base of the sand pack but is present to provide uniform effluent drainage across the tank.
The tank was packed with approximately 3750 in³ (61,451 cm³) of dry Sakrete© play sand (7-64661-15650-5). The sieve analysis for the product is shown in Table 1. The product presented as a fairly heterogeneous mix of filter media consisting of particles ranging from 10 to 80 mesh (>2.0 mm to <0.18 mm). The porosity of the product was 31%. The total depth of the sand pack was 35 inches (88.9 cm). Packing the tank involved the introduction and packing of sequential layers of dry sand to consolidate the sand pack and minimize settling during testing. Since play sand is not a uniform sieve grade, this packing technique resulted in a heterogeneous pack with some visually apparent, isolated areas of horizontal layering of fines and/or coarse materials. The character of the sand and the heterogeneous nature of the pack were considered advantageous as being more representative of natural conditions. The top of the sand pack was covered with a 3 inch (7.62 cm) gravel pack to preserve the integrity of the packed sand surface while allowing a uniform influent across the lateral profile of the tank. The base of the tank was packed with a 3 inch (7.62 cm) gravel pack to create a French drain providing uniform drainage across the base of the tank to a single port for outlet flow control.
Prior to use, the tank was disinfected with a 10% (by volume) bleach solution followed by 4 flushes with dechlorinated tap water (containing 600 mg/L sodium thiosulfate). During each tank fill, water was delivered through the base of the tank at a rate of 3.6 mL/min to minimize air entrapment in the saturated media.
Tank flow was vertically downward for both experiments. Vertical transport during tank operation was managed via an influent pool at the packed media surface and a flow controlled outlet at the base of the tank. Tank flow was outlet controlled with a peristaltic pump operated at a flowrate of 3.5 mL/min, providing a fluid transport velocity for the tank of 10.3 in/day (26.2 cm/day) if undisturbed, uniform flow is assumed. However, the process of sampling created focused points of water withdrawal, creating disturbances in the uniform flow field at each sampling point and increasing the flow. While it was not possible to specifically define the flow field disturbance, the overall increase in flowrate during the period of tank operation can be defined and was 3.7 mL/min, for a fluid transport velocity of 10.9 in/day (27.7 cm/day) over the period of operation.
To maintain a uniform head and influent across the horizontal aspect of the tank, a 1.5 inch (3.8 cm) pool of water was maintained above the gravel surface. To ensure the delivery of fresh, dechlorinated, microbially enriched media, a peristaltic pump was used to deliver 15 mL/min from a carboy containing the influent water spiked with 10⁵ colony forming units (CFU) per mL of Legionella and E. coli, the suspension of which was maintained by continuous stirring on a magnetic stir plate. Excess influent was allowed to passively drain from the tank at the same vertical level as the influent.
Sample Collection, Sample Processing and Assay Methods
Along with the influent (from the carboy), tank sampling was performed at ports 5 inches (12.7 cm), 15 inches (38.1 cm) and 25 inches (63.5 cm) below the sand surface along both the right (R) and left (L) columns of sampling ports to determine the mobility of the bacteria in-line with flow and to determine migration/transport times. Since the tank had both a left and right sampling array, sample locations were identified by their depth beneath the sand surface and whether that port was along the R or L sampling array (Figure 1). Hence, a sample taken from the right array at the 15 inch depth was denoted as 15R and from the left array as 15L. Preliminary data regarding tank flow velocity, collected during basic tank operation, were used to estimate expected breakthrough times for each sampling depth to guide sample collection. Sampling for each interval was on an hourly basis until breakthrough was achieved at both ports. Samples were collected by placing an 18G needle directly into the tank media through the port septa. Prior to sample collection, 5-10 mL of water was flushed, followed by the collection of a 10 mL sample. For this study, the first detection of target bacteria at a sampling port was considered the breakthrough point.
Legionella Analysis by Spread Plate Method
Samples for Legionella were analyzed by the spread plate method using BCYE media containing the antibiotics Polymyxin B (100 units/mL), Vancomycin (5 µg/mL) and Cycloheximide (80 µg/mL), and L-cysteine HCl (0.4 g/L). The samples were collected at specified time intervals, and 0.1 mL from each sample was transferred onto a petri dish and uniformly spread throughout. The petri dish was incubated at 37 °C for 96 h. Legionella on the BCYE media appeared as gray-white colonies.
E. coli Analysis by Spread Plate Technique
E. coli samples were analyzed using the spread plate method on selective media (Brilliance media, Oxoid CM1046, or Brilliant media, Sigma Aldrich 27815). Each sample was collected in a 15 mL tube and 0.1 mL was transferred onto a petri dish containing selective agar. A flame sterilized spreader (first dipped in ethanol and flamed) was used to evenly spread the inoculum throughout the petri dish, which was then incubated at 37 °C for 24 h. The E. coli colonies appeared pink on the Brilliant media.
E. coli Transport Experimental Results
E. coli transport was studied by collecting hourly samples at the 5 inch ports between 4 and 9 h, the 15 inch ports between 16 and 20 h, and the 25 inch ports between 28 and 32 h. In order to accurately capture breakthrough, sampling began 1 h prior to the expected breakthrough time, and in all cases those samples were negative for the target cells. The accuracy of the reported breakthrough times was within one hour. All experiments were repeated 5 times and variations in breakthrough concentrations and times at the respective ports were observed. Breakthrough transport times for E. coli are shown in Table 2. Based on the intervals 0 to 5 inches, 0 to 15 inches, and 0 to 25 inches, the number of pore volumes for breakthrough at 5, 15, and 25 inches was 0.46, 0.52, and 0.53, respectively. Breakthrough concentrations for E. coli were also tracked at all of the ports, with sampling for several hours after initial breakthrough. Concentration vs. time is shown in Figure 2. Note that concentration showed an increase with time after breakthrough followed by a decline. This behavior is consistent with a previous study in saturated porous media, which showed E. coli concentrations initially increasing after breakthrough followed by a decline [18].
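As a rough cross-check of the pore volume bookkeeping, a minimal Python sketch is given below, assuming uniform flow; the pore volume to depth L is (cross-section x L x porosity) and the volume delivered by breakthrough is (flowrate x time), with inputs taken from the text (3.7 mL/min during operation, 31% porosity, 24 x 4 inch cross-section, right array times). Small deviations from the reported 0.46, 0.52 and 0.53 are expected:

```python
# Sketch: pore volumes delivered at breakthrough, assuming uniform flow.
IN_TO_CM = 2.54
AREA_CM2 = (24 * IN_TO_CM) * (4 * IN_TO_CM)   # tank cross-section
POROSITY = 0.31
FLOW_ML_MIN = 3.7                              # flowrate during operation

for depth_in, t_break_h in [(5, 5), (15, 17), (25, 30)]:  # E. coli, right array
    pore_vol_ml = AREA_CM2 * depth_in * IN_TO_CM * POROSITY
    delivered_ml = FLOW_ML_MIN * t_break_h * 60
    print(f"{depth_in} in: {delivered_ml / pore_vol_ml:.2f} pore volumes")
```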
Legionella Transport Experimental Results
Following the E. coli tests, Legionella transport was studied by collecting hourly samples at the 5, 15, and 25 inch ports between 5 and 11 h, 17 and 26 h, and 28 and 38 h, respectively. As with E. coli testing, Legionella sampling began 1 h prior to the expected breakthrough time, and in all cases those samples were negative for target cells. Again, the accuracy of the reported breakthrough times was within one hour, all experiments were repeated 5 times, and variations in breakthrough concentrations and times at the respective ports were observed. Breakthrough transport times for Legionella are shown in Table 3. Based on the intervals 0 to 5 inches, 0 to 15 inches, and 0 to 25 inches, the number of pore volumes for breakthrough at 5, 15, and 25 inches was 0.54, 0.54, and 0.53, respectively. Breakthrough concentrations vs. time for E. coli and Legionella are shown in Figures 2 and 3, respectively. These graphics indicate that, for each time period tested, Legionella concentrations continuously increased with time, unlike the decrease observed for E. coli. This might be due to a difference in the behavior of E. coli and Legionella transport in the plume. In plume form, bacteria move in high concentrations, which allows greater cell-to-cell interaction. Bacterial clumping is a well known phenomenon which can occur in culture or in plume environments. Factors that can influence bacterial clumping include (a) the number of autoagglutinins (agglutinating proteins) on the cell surface and (b) the presence of divalent cations such as Ca²⁺. It is known that E. coli cells can have up to eight different autoagglutinins compared to only one type on Legionella cells [19]. The difference in the number of surface-active agglutinating proteins might account for the difference in retention because of cations in the filter media.
Tank Characteristics
As indicated previously, the tank was packed with play sand and the heterogeneities associated with the pack were visually apparent as isolated areas of horizontal layering of fines or coarser sand. While no dye or chemical tracer tests were performed to determine overall flow characteristics, previous tank experience would indicate that while such packing idiosyncrasies introduce some irregularity to the flow profile, no significant vertical flow heterogeneities would exist laterally across the tank. Via calculations based on the controlled flowrate and the volume and porosity of the sand pack, the overall tank velocity assuming uniform flow across the tank was 10.3 in/day (26.2 cm/day), but sampling created an increase in tank velocity to 10.9 in/day (27.7 cm/day) for the period of operation. Using the increased flowrate, fluid travel times from the top of the sand pack to the 5, 15, and 25 inch sampling ports were 11, 33, and 55 h, respectively, with an overall travel velocity of 0.45 in/h (1.14 cm/h; see Table 4). Travel time between the 5 and 15 inch (or 15 and 25 inch) ports was 22 h.
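The travel time estimate follows directly from the seepage velocity, Q / (A x porosity); a minimal sketch with the values quoted above reproduces the 11, 33 and 55 h figures:

```python
# Sketch: seepage velocity and fluid travel times to the sampling ports,
# assuming uniform flow (Q = 3.7 mL/min during sampling, 31% porosity).
IN_TO_CM = 2.54
AREA_CM2 = (24 * IN_TO_CM) * (4 * IN_TO_CM)
POROSITY = 0.31
Q_ML_MIN = 3.7

v_cm_per_h = Q_ML_MIN * 60 / (AREA_CM2 * POROSITY)   # ~1.16 cm/h (~27.7 cm/day)
for depth_in in (5, 15, 25):
    t_h = depth_in * IN_TO_CM / v_cm_per_h
    print(f"fluid reaches the {depth_in} in port after ~{t_h:.0f} h")
```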
Of additional note regarding tank characteristics is the apparent lag in the microbial transport rate for left array versus right array ports. This is most likely related to the tank sand pack or a sample timing issue. The lag of up to 2 h from 0 to 5 inches for E. coli was not much different than the cumulative lag of 2 h from 0 to 15 inches for Legionella (±1 h for both), and might simply be an artifact associated with sample timing. This is compelling when one considers the overall differential in travel time from 0 to 25 inches of only one hour. However, since the lag is consistently noted on the left array, there could be some issue associated with left side tank flow. Given that the transport lag predominates in the top 5 inches of travel, the most plausible explanations include the following:
• The surface of the sand pack was irregular and was only 4 inches from the 5 inch port on the right side. That one inch difference would yield a 2.3 h differential in transit time.
• The influent pool was filled from the right side of the tank with outflow on the left, with a residence time of 253 min. This ponding could have resulted in preferential deposition of microbes on the right side of the tank, ultimately affecting transit times to the 5 inch port.
In either case, this idiosyncrasy is irrelevant to the overall purpose of this study, which was to investigate microbial transport characteristics.
Microbial Transport Characteristics
In vertically downward, saturated flow conditions in sand media, both E. coli and Legionella showed significant mobility and were readily transported across the 25 in (63.5 cm) zone of observation. Most notably, both E. coli and Legionella showed transport rates that were greater than the apparent fluid velocity (see Table 4). While preferential flow and/or mobility of the species could be argued, the consistency of travel times between the right and left arrays and from one port to the next reduces the likelihood of this explanation. In any case, both E. coli and Legionella showed significant mobility in simulated aquifer conditions, which is a major point of significance for this study.
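The zone-by-zone transport rates discussed next follow from the breakthrough times reported above; a minimal sketch averaging the right and left arrays recovers the quoted values:

```python
# Sketch: zone transport rate = (depth difference) / (breakthrough time
# difference), averaged over the right and left arrays. Breakthrough
# times (h) are those reported in the results above.
breakthrough = {                       # depth_in: (right_h, left_h)
    "E. coli":    {5: (5, 7),  15: (17, 19), 25: (30, 31)},
    "Legionella": {5: (8, 9),  15: (22, 24), 25: (35, 36)},
}

for organism, times in breakthrough.items():
    prev_d, prev_t = 0, 0.0
    for d in sorted(times):
        t = sum(times[d]) / 2          # mean of right/left arrays
        rate = (d - prev_d) / (t - prev_t)
        print(f"{organism}: {prev_d}-{d} in at {rate:.2f} in/h")
        prev_d, prev_t = d, t
```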
Legionella transport characteristics differed from those of E. coli, with Legionella exhibiting retarded breakthrough times for the 0 to 5 and 5 to 15 inch zones. While the E. coli rate of transport was relatively consistent from 0 to 25 inches at 0.80 to 0.83 in/h (2.03 to 2.11 cm/h), Legionella showed increasing rates of transport at 0.59 in/h (1.50 cm/h) from 0 to 5 inches, 0.69 in/h (1.75 cm/h) from 5 to 15 inches, and 0.80 in/h (2.03 cm/h) from 15 to 25 inches. Several factors can influence the transport of bacteria through saturated porous media. Attachment of bacterial cells to media surfaces is influenced by cell surface electrostatic charge, hydrophobic interaction with the media, cell size, and the presence of surface structures such as flagella, fimbriae, and extracellular lipopolysaccharides [20]. Each of these parameters adds complexity to building a deeper understanding of bacterial transport through the subsurface, such that microbial subsurface transport is not fully understood [21]. For example, electrostatic charges are influenced not only by the microorganism's surface (e.g., the presence and configuration of proteins and lipopolysaccharides), but also by the granular media characteristics and the water matrix itself. Specifically, the pH and the ionic strength of the solution affect the surface charge of the bacterial cell and soil particle, thereby dictating electrostatic interactions. Direct assessment of surface charges was beyond the scope of this study. However, it has been reported that the electrophoretic mobility of some Legionella pneumophila serogroups varies with solution pH while other serogroups remain constant between pH 6 and 9 [22]. These pH values encompass common water ranges, including the water tested in this study.
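A quick way to see how much faster than the water both organisms moved (the point made at the start of this section) is to normalize these zone-wise rates by the apparent fluid velocity. The sketch below uses only the values quoted above; the zone assignment of the two E. coli figures is not specified in the text, so E. coli is represented by its overall range.

```python
# Normalize the reported transport velocities by the apparent fluid
# velocity (0.45 in/h); all figures are taken from the text.

fluid_v = 0.45
legionella = {"0-5 in": 0.59, "5-15 in": 0.69, "15-25 in": 0.80}

for zone, v in legionella.items():
    print(f"Legionella {zone}: {v:.2f} in/h -> {v / fluid_v:.2f}x fluid velocity")

lo, hi = 0.80, 0.83                       # E. coli overall range, 0-25 in
print(f"E. coli overall: {lo:.2f}-{hi:.2f} in/h "
      f"(~{lo / fluid_v:.1f}x fluid velocity)")
```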
Different types of lipopolysaccharides (LPSs) are located on the outside of bacterial cells, and they are considered a key factor in cell attachment to mineral surfaces and in microbially induced precipitation/dissolution reactions. The E. coli LPSs are composed of three different components: (i) hydrophobic lipid A anchored in the outer membrane, (ii) a phosphorylated, nonrepetitive hetero-oligosaccharide known as the core oligosaccharide, and (iii) a polysaccharide that extends from the cell surface and forms the O antigen detected in serotyping [23]. Alternatively, LPSs on Legionella pneumophila cells are composed of a very hydrophobic lipid A acylated by long chain fatty acids and an O-antigen-specific chain consisting of homopolymeric legionaminic acid [24]. The variation in transport profiles between E. coli and Legionella might be due to the significant differences in the LPSs on cell surfaces, resulting in reversible or irreversible attachment of the Legionella cells to the tank media. No study has yet investigated a direct link between bacterial cell transport in porous media and the different types of lipopolysaccharides on bacterial cell membranes.
In addition, filamentation in response to different environmental stresses has been observed for numerous bacterial species [25][26][27]. Under stress and nutrient deficient conditions, Legionella tends to become long and filamentous [28]. The size and shape of the bacteria are known to be determining factors in their transport under saturated conditions [29,30]. Therefore, it is hypothesized that the pleomorphic nature of Legionella might be a factor in its slower transport characteristics as compared to E. coli, which does not exhibit pleomorphic characteristics.
Regarding Legionella's apparent increase in transport rate across successive zones, this could simply be a function of filtration or size selection, with preferential transport rates favoring those microbes with smaller profiles or less filamentous development.
Conclusions
While it is accepted that E. coli is mobile in saturated aquifer environments, the scale of this study provided a basis for the premise that Legionella is also mobile in saturated aquifer conditions. In addition, data from this study suggested that microbial cell types and characteristics in conjunction with aquifer characteristics might have impacted the transport of those pathogens. Legionella's pleomorphic nature and/or the differences in the LPSs on cell surfaces, which may result in reversible or irreversible attachment of the Legionella cells to the media of the tank, may both have affected its transport properties. Further, vadose zone conditions are typically heterogeneous, ranging from the micro to the macro scale, which can lead to preferential transport affecting dispersion in both the lateral and vertical directions.
Future studies should investigate the extent of Legionella contamination in groundwater impacted directly or indirectly by aquifer recharge practices, and Legionella's transport characteristics in the vadose zone. An assessment of the overall data indicated parallel trends in transport of E. coli and Legionella under the experimental conditions studied. Considering the delay in Legionella transport, historical data for E. coli may serve as a useful, reasonable predictor of Legionella's transport under recharge conditions.
Figure 1.
Figure 1. A schematic (not to scale) and a photo of the 2-D tank packed with Sakrete© play sand media with sampling ports. Note: A 3 inch gravel pack is not visible at the base of the sand pack but is present to provide uniform effluent drainage across the tank.
Figure 2.
Figure 2. Breakthrough time (±1 h) and concentration of E. coli as a function of time in three time periods.
Figure 3.
Figure 3. Breakthrough time (±1 h) and concentration of Legionella as a function of time in three time periods.
Table 2.
Breakthrough times for E. coli.
Table 3.
Breakthrough times for Legionella.
Table 4.
Travel times for fluid vs. E. coli vs. Legionella.
Detection and Quantification of Stagonosporopsis cucurbitacearum in Seeds of Cucurbita maxima Using Droplet Digital Polymerase Chain Reaction
Stagonosporopsis cucurbitacearum is an important seedborne pathogen of squash (Cucurbita maxima). The aim of our work was to develop a rapid and sensitive diagnostic tool for detection and quantification of S. cucurbitacearum in squash seed samples, to be compared with blotter analysis, which is the current official seed test. In blotter analysis, 29 of 31 seed samples were identified as infected, with contamination from 1.5 to 65.4%. A new set of primers (DB1F/R) was validated in silico and in conventional, quantitative real-time PCR (qPCR) and droplet digital (dd) PCR. The limit of detection of S. cucurbitacearum DNA for conventional PCR was ∼1.82 × 10⁻² ng, with 17 of 19 seed samples positive. The limit of detection for ddPCR was 3.6 × 10⁻³ ng, which corresponded to 0.2 copies/μl. Detection carried out with artificial samples revealed no interference in the absolute quantification when the seed samples were diluted to 20 ng. All seed samples that showed S. cucurbitacearum contamination in the blotter analysis were highly correlated with the absolute quantification of S. cucurbitacearum DNA (copies/μl) in ddPCR (R² = 0.986; p ≤ 0.01). Our ddPCR protocol provided rapid detection and absolute quantification of S. cucurbitacearum, offering useful support to the standard procedure.
INTRODUCTION
Pumpkin, squash, and gourds (Cucurbita spp.) are grown throughout the world, with a total production of 22.9 million tonnes in 2019, of which 20.9% was produced in Europe (FAOSTAT, 2021). Italy ranks 9th in the world in terms of total production, with 569,120 tonnes (FAOSTAT, 2021).
In the Mediterranean areas as well as in Asia, S. cucurbitacearum has been described as the main pathogenic fungus of Cucurbita maxima seedlings related to gummy stem blight (GSB), able to reduce both squash yield and quality (Moumni et al., 2019, 2020; Zhao et al., 2020). In warm and humid environments, infections by these fungi can result in 15 to 50% reductions in yield and rapid death of the cucurbit plants (Keinath et al., 1995; Boughalleb et al., 2007; Keinath, 2011; Li et al., 2015; Yao et al., 2016; Moumni et al., 2019; Zitter and Kyle, 1992). These diseases are known to spread in the greenhouse during the growth season through airborne ascospores and by conidia transported in water on the plant surfaces, with further spread by contact between plants, or between plants and man, and onto the host plant (De Neergaard, 1989; Keinath, 1996, 2011). Thus, under favorable environmental conditions, a low level of latently infected seedlings can potentially result in a major disease epidemic (Ling et al., 2010). Moreover, S. cucurbitacearum is both an external and internal seedborne pathogen, which can thus carry inoculum from the seeds to the plants, although it is found mainly on the seed coat (Sudisha et al., 2006).
For these reasons, the production and the movement of seeds represent a particularly efficient vehicle to disperse such seedborne pathogens (Zhang et al., 2018;Moumni et al., 2020;Zhao et al., 2020). Also, as global trade has increased, so has the movement of seeds and other plant materials between countries, thus increasing the risk of the transport and transfer of plant pathogens (Epanchin-Niell and Hastings, 2010;Mancini and Romanazzi, 2014;Mancini et al., 2016). The introduction of exotic pathogens in this way can have catastrophic effects on both natural and agricultural ecosystems, and can result in large economic losses from lost ecosystem services and reduced crop yields (Atallah et al., 2010;Short et al., 2015;Cunniffe et al., 2016;Epanchin-Niell, 2017). Many countries have formulated legislation to limit or prevent the introduction of exotic pathogens into new areas, and these are generally supported by detection techniques (Choudhury et al., 2017). However, the low inoculum levels and the varied distribution of a pathogen within seed lots make the testing of seeds a difficult task. Moumni et al. (2020) reported that the selection of healthy fruit is not sufficient to reduce the infection by this seedborne pathogen. On the other hand, cultural practices and fungicide application can have important roles in GSB management, although this pathogen has developed resistance to many fungicides that were previously very effective (Finger et al., 2014;Gimode et al., 2020;Newark et al., 2020). Therefore, early diagnosis of this pathogen is a fundamental step in the management of these crop diseases (Ora et al., 2011).
The most common methods currently used for rapid detection of the agents related to GSB are based on molecular detection tools. These include conventional polymerase chain reaction (PCR), PCR-enzyme-linked immunosorbent assays (PCR-ELISA) (Somai et al., 2002; Keinath et al., 2003), and quantitative real-time PCR (qPCR) (Somai et al., 2002; Ha et al., 2009; Ling et al., 2010). Recently, loop-mediated isothermal amplification assays have also been designed to detect S. cucurbitacearum in cucurbit seeds (Tian et al., 2016) and for infections in young muskmelon leaves with suspected early symptoms of GSB (Yao et al., 2016).
Droplet digital PCR (ddPCR) is a recent technology that provides both detection and quantification of DNA targets. Unlike other methods, ddPCR does not require standard curves of known concentrations for quantification (Huggett and Whale, 2013). Additionally, ddPCR has been shown to be more sensitive for the detection of low levels of inoculum or unevenly distributed pathogens in infected plants (Selvaraj et al., 2018; del Pilar Martínez-Diz et al., 2020). ddPCR also shows high resistance to inhibitors compared to qPCR (Rački et al., 2014). Thus, ddPCR represents an ideal choice for infection testing of nursery propagation materials and seeds (Dingle et al., 2013; Rani et al., 2019).
Considering the emerging importance of S. cucurbitacearum for squash seeds, the focus of our research was to set up a rapid and sensitive protocol based on conventional PCR, validated also in qPCR and ddPCR. Such systems would speed up the diagnosis of GSB infection, which to date has generally been carried out using the blotter test, the official diagnostic method, which requires several weeks before the results can be obtained.
Blotter Analysis and Morphological Identification
Thirty-one seed samples of squash (Cucurbita maxima Duchesne, cv. Bjaoui) were collected from the same number of fruits showing clear symptoms of GSB, mild symptoms, or no symptoms. The fruit samples were collected in nine fields in the northwest of Tunisia in October 2015, 2016, and 2017. Each seed sample was analyzed to verify the presence of S. cucurbitacearum using the standard blotter method of the International Seed Testing Association (Mathur and Kongsdal, 2003). Two hundred seeds per sample were soaked for 5 min in 1% sodium hypochlorite solution, and then triple-rinsed with sterile distilled water. The seeds were dried for 2 min on sterile paper towels under a laminar flow hood. They were then placed in Petri dishes (diameter, 110 mm) on eight overlapping sterile filter paper layers (Whatman No. 4) that were moistened with 5 ml sterile distilled water, and incubated at 25 °C under 12 h/12 h day/night artificial light cycles (Master TL-D Super 80 58W/830).
For each sample of seeds, 20 Petri dishes were used, each of which contained 10 seeds. From days 7 to 15 after plating, the Petri dishes were examined daily under a stereomicroscope for the presence of S. cucurbitacearum fungal structures. The pycnidia on the seeds were excised, and morphological identification was carried out under a compound microscope. For each sample, the proportion (%) of the seeds infected by S. cucurbitacearum was calculated. Statistical analyses were performed using the software SPSS (version 20; IBM, Armonk, NY, United States). The data were first tested for normality, and for homogeneity of variance by Levene's test. Welch's ANOVA was performed to determine any differences between seed samples, and means were separated using the Games-Howell post hoc test (P < 0.05).
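For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows the same test sequence using the Python pingouin package. The data frame is hypothetical (per-replicate infection percentages for three illustrative samples); the study's own replicate-level data are not reproduced here.

```python
# Hypothetical sketch of the described workflow (Welch's ANOVA followed
# by Games-Howell post hoc comparisons) using pingouin instead of SPSS.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "sample":   ["T85"] * 4 + ["T90"] * 4 + ["T8"] * 4,
    "infected": [64.0, 66.5, 65.0, 66.1,      # % infected seeds (made up)
                 1.0, 2.0, 1.5, 1.6,
                 44.2, 43.1, 45.0, 43.9],
})

# Welch's ANOVA does not assume equal variances between groups
print(pg.welch_anova(data=df, dv="infected", between="sample"))

# Games-Howell pairwise post hoc comparisons
print(pg.pairwise_gameshowell(data=df, dv="infected", between="sample"))
```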
The fungal structures on the seeds that were morphologically identified as S. cucurbitacearum were transferred onto potato dextrose agar (Liofilchem Srl, Roseto degli Abruzzi, Italy) in Petri dishes. After 10 days on the potato dextrose agar at 23 ± 2 °C, morphological identification was carried out according to the color and shape of the colonies, combined with the characteristics of the pycnidia and spores.
DNA Extraction and Molecular Characterization of Natural Seed Inoculum
Total DNA was initially extracted from the mycelia of isolate D33, previously identified as S. cucurbitacearum by morphological and internal transcribed spacer sequence analysis (Moumni et al., 2019, 2020). In order to obtain a preliminary molecular characterization of the natural inoculum, the colonies picked from the blotter test, grown on PDA and identified by morphological characters, were subjected to DNA extraction and analyzed by a multiplex PCR able to distinguish the three morphologically similar species (S. cucurbitacearum, S. citrulli, and S. caricae).
For the seeds, according to the results from the blotter analysis and molecular characterization of the natural inoculum, the following were selected for further analysis: 17 seed samples as representative of S. cucurbitacearum infection; 2 seed samples (T18 and T101) that appeared not to be infected using the conventional method; and 1 seed sample certified as healthy (IHS) (Seminis, Monsanto Agricoltura, Italy). From each of 19 samples, 100 seeds/sample were pulverized in liquid nitrogen, with DNA extraction carried out starting from 300 mg seed tissue homogenized in 5 ml extraction buffer (3% cetyl trimethylammonium bromide, 100 mM Tris-HCl, 1.4 M NaCl, 20 mM EDTA, 2% polyvinyl pyrrolidone, and 2% sodium metabisulfite) in tissue extraction bags (12 × 15 cm; Bioreba, Switzerland). The lysates were washed with phenol/chloroform (1:1), and then chloroform. The total nucleic acids were precipitated in 1 vol. cold isopropanol, and immediately centrifuged at 13,000 × g for 25 min. The pellets were dried at room temperature, and then resuspended in 60 µl sterile water. The quality and quantity of the extracted DNA were evaluated using a biophotometer (Eppendorf, Hamburg, Germany).
Validation of Stagonosporopsis cucurbitacearum Identification in Conventional Polymerase Chain Reaction and Sequencing
In a previous study based on random amplification of polymorphic DNA markers, Ling et al. (2010) developed primers based on sequence-characterized amplified regions (i.e., the DB17 primer set) with broad-spectrum specificity for S. cucurbitacearum.
Here, the two 559 bp nucleotide sequences from S. cucurbitacearum that are representative of the RGI and RGII molecular types (GenBank accession Nos. GQ872461 and GQ872462) were downloaded in FASTA format and aligned using ClustalX (version 1.83) (Thompson et al., 1994). Based on the conserved sequence region common to both of these genotypes (i.e., RGI and RGII), we designed a new set of primers using the Primer3 Plus software, defined here as DBF1 (5′-TCGAATGGCTCAGAGAAGGT-3′) and DBR1 (5′-AAGTCCACGTCAGACCCATC-3′), which were then synthesized (Sigma Aldrich Merck, Darmstadt, Germany). The primers were first validated in silico using NCBI Primer-BLAST (Ye et al., 2012). The specificity of the primers was then confirmed and tested in PCR with reference strains, previously identified by multisequence analysis of calmodulin, β-tubulin, histone H3, translation elongation factor and internal transcribed spacer regions, and available in the NCBI database (Moumni et al., 2019, 2020). Preliminary conventional PCR tests were set up to define the optimal concentrations of MgCl₂ (0.8, 1.0, 1.2, 1.5, and 2 mM), primers (0.5 and 1.0 µM), and template. The amplification cycling was: 95 °C for 2 min, followed by 35 cycles of 95 °C for 30 s, 55–60 °C for 30 s, and 72 °C for 30 s, with a final extension cycle at 72 °C for 7 min. The PCR products were visualized on 1.5% agarose gels in Tris-acetate buffer (40 mM Tris-acetate, 1 mM EDTA, pH 8.0) after staining with GelRed (Biotium, United States), to confirm the specificity, the expected size of the PCR product, and the sensitivity of the diagnostic system.
The specific amplicons were sequenced in both directions by Genewiz (Germany) according to the Sanger method. Nucleotide sequences were carefully checked by reading the chromatograms in order to exclude ambiguous peaks, and then formatted in FASTA format in order to carry out Megablast searches, optimized for highly similar sequences, against the NCBI nucleotide collection (nr/nt) database.
The total DNA extracted from S. cucurbitacearum strain D33 was diluted to different concentrations (285 to 2.9 × 10⁻⁵ ng). In addition to this determination of the right quantity of DNA for amplification, serial dilutions were set up for seed sample T85 (285, 57, and 20 ng), as a seed sample that was highly contaminated by S. cucurbitacearum (65.4%) according to the blotter analysis.
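The concentrations quoted here and in the Results all fall on a five-fold dilution series starting at 285 ng; the five-fold factor is an inference from the reported endpoints and detection limits, not something the text states explicitly. A short sketch reproduces the series:

```python
# Five-fold dilution series from 285 ng (the dilution factor is inferred
# from the reported values, not stated in the text).
series = [285 / 5**k for k in range(11)]
for k, c in enumerate(series):
    print(f"step {k:2d}: {c:.3g} ng")
# step 1  -> 57 ng (used in the qPCR validation),
# step 5  -> 9.12e-02 ng (the conventional-PCR detection limit),
# step 6  -> 1.82e-02 ng (ambiguous amplification),
# step 7  -> 3.65e-03 ng (~the ddPCR limit of detection),
# step 10 -> 2.92e-05 ng (the stated lower endpoint).
```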
Molecular Survey of Stagonosporopsis cucurbitacearum on Naturally Infected Seeds by Conventional Polymerase Chain Reaction
The total DNA extracted from 19 of the seed samples with different S. cucurbitacearum infestation levels was processed through conventional PCR using the DBF1/R1 primer pair, at the optimized mix concentrations and thermocycling conditions according to the preliminary tests. DNA of isolate D33 was used as the positive control. After the amplification, five specific amplicons were sequenced and compared with the database.

(Table 1 note: for each sample, two replicates of 100 seeds were tested using the blotter analysis; means followed by different letters indicate significant deviation based on Welch's ANOVA and post hoc means separation using the Games-Howell test, P < 0.05.)
Validation, Molecular Survey, and Absolute Quantification of Stagonosporopsis cucurbitacearum on Naturally Infected Seeds by Droplet Digital Polymerase Chain Reaction

The primer pair DBF1/R1, previously validated in conventional PCR and qPCR, was tested in ddPCR with the same samples previously analyzed for the qPCR validation. The ddPCR inhibitors, the optimal concentration of DNA template, the LOD, and the limit of quantification (LOQ) were determined.
Here, 20 µl of the reaction mixture, containing 1× QX200 ddPCR EvaGreen supermix (Bio-Rad), 150 or 300 nM each primer, and 20 ng of template, was transferred to a DG8 cartridge for droplet generation (QX200 droplet generator; Bio-Rad, Hercules, CA, United States). Then 70 µl droplet generation oil (Bio-Rad) was added to the cartridge, which was placed into the droplet generator. The generated droplets (40 µl) were carefully transferred to ddPCR 96-well PCR plates (Bio-Rad), and the plates were sealed at 180 °C using a PCR plate sealer (PX1; Bio-Rad).
The amplification was performed in a thermal cycler (iCycler; Bio-Rad), with a ramp rate of 2 °C/s, with the following protocol: initial denaturation at 95 °C for 5 min; then 45 cycles of denaturation at 95 °C for 30 s and annealing at 60 °C for 45 s (temperature ramp, 2 °C/s); and finally, incubation at 98 °C for 10 min and storage at 4 °C. After the cycling, the 96-well plates were fixed into a plate holder and positioned in the droplet reader (QX200; Bio-Rad). The droplets of each sample were analyzed sequentially, and the fluorescent signals of each droplet were measured individually by a detector. The droplets were read in the droplet reader, and the ddPCR data were then analyzed using QuantaSoft version 1.7, with either an automatic threshold or a manually defined threshold applied. This incorporated the calculation of the basic parameters of the ddPCR (e.g., concentrations, mean amplitudes of positive and negative droplets), and the mean copies per partition and total volume of the partitions measured, as defined by the digital MIQE guidelines. Two positive droplets were enough to determine a sample as positive, and only the reactions with more than 10,000 accepted droplets were used for analysis.
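The absolute quantification that QuantaSoft reports rests on Poisson statistics over the droplet counts: the mean copies per partition is estimated from the fraction of negative droplets and then divided by the droplet volume. The sketch below shows this calculation; the ~0.85 nL droplet volume is the commonly cited nominal value for the QX200 and is an assumption here, not a figure from the paper.

```python
# Poisson back-calculation behind ddPCR absolute quantification.
# droplet_vol_nl ~0.85 nL is an assumed nominal QX200 value.
import math

def ddpcr_copies_per_ul(n_positive: int, n_total: int,
                        droplet_vol_nl: float = 0.85) -> float:
    """Copies per microliter of reaction, from droplet counts."""
    n_negative = n_total - n_positive
    lam = -math.log(n_negative / n_total)   # mean copies per droplet
    return lam / (droplet_vol_nl * 1e-3)    # convert nL -> uL

# Example: 100 positive droplets out of 15,000 accepted
print(f"{ddpcr_copies_per_ul(100, 15_000):.1f} copies/ul")   # ~7.9
```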
After a preliminary validation of the technique, the DBF1/R1 primers were used at a concentration of 150 nM with 20 ng of total DNA extracted from 13 of the seed samples, which were representative of the different levels of pathogen contamination (as previously defined with the blotter analysis and conventional PCR), and the samples were processed by ddPCR (QX200 system; Bio-Rad, Hercules, CA, United States) according to the manufacturer's instructions.
Incidence of Stagonosporopsis cucurbitacearum Using the Blotter Method
The level of squash seed infection detected for S. cucurbitacearum using the blotter test ranged from 0 to 65.4% (Table 1). S. cucurbitacearum was detected for 29 of the 31 seed samples collected (93.6% of samples). More than 20% incidence of seedborne S. cucurbitacearum was detected for 12 of these seed samples (Table 1).
The molecular approach allowed us to corroborate the data emerging from the morphological characterization, and sped up the process of identification. By a multiplex PCR able to distinguish the three morphologically similar species, the natural inoculum of the seeds in our study was shown to be represented only by S. cucurbitacearum (data not shown).

(Table 2 note: DNA of healthy seed (0, 20, 57, and 285 ng per qPCR reaction) spiked with serial dilutions of fungal DNA; experiments were assessed in duplicate over three independent experiments (n = 6). Cq, quantification cycle; SD, standard deviation; na, not amplified (at these dilutions, the probability of replicate detection of the last dilution was absent or lower than 50%); *, two of six replicates amplified.)
Optimization of Amplification Conditions and Primer Specificity by Conventional Polymerase Chain Reaction
The preliminary tests defined the optimal analysis conditions as 200 µM dNTP mixture, 0.5 µM each primer, 1.2 mM MgCl₂, 1.25 U Taq polymerase (Promega Corporation, Madison, WI, United States), and 20 ng template, in a total reaction volume of 25 µl. We set up the following cycling conditions: 95 °C for 3 min; 35 cycles of 95 °C for 30 s, 58 °C for 30 s, and 72 °C for 30 s; followed by a final extension step at 72 °C for 5 min. The specificity tests amplified a 208-bp specific fragment in isolates D33, D12, D49, ID1, and ID3, which had been previously identified as S. cucurbitacearum by multilocus sequence analysis.
Limit of Detection for Conventional Polymerase Chain Reaction
The sensitivity of the DBF1/R1 primers was evaluated using serial dilutions of DNA extracted from S. cucurbitacearum mycelia. The minimum concentration of target DNA that could be detected with these primers was 9.12 × 10⁻² ng (Supplementary Figure 2). An ambiguous amplification was recorded at 1.82 × 10⁻² ng. The PCR amplification fragments were strongly visualized when the DNA concentration of S. cucurbitacearum ranged from 40 to 10 ng, and weakly visualized with 1 ng template. Very low DNA concentrations (<1.82 × 10⁻² ng) were not amplified by the primers (Supplementary Figure 2).
Once the specificity and sensitivity of the primers for S. cucurbitacearum were established, the 19 seed samples were analyzed. In 17 of these 19, a specific fragment of ∼208 bp was detected using the DBF1/R1 primer pair, as for the reference strain D33 of S. cucurbitacearum (positive control). These primers amplified the DNA from the seeds naturally contaminated by S. cucurbitacearum down to a threshold of 4.5% incidence in 200 seeds, as indicated in the blotter analysis. A faint band was obtained when the seed samples were contaminated at ∼1.5% in the blotter analysis (Figure 1). The PCR primers consistently showed strong bands for the seed samples with incidence from 44.0 to 65.4%, moderate bands between 21.5 and 29.1%, weak bands between 4.2 and 4.5%, and a very faint band with 1.5% infection (Figure 1). No amplification was detected in samples T18 and T101, or in the water control and the certified healthy seed sample (IHS).
The amplicons obtained from the DNA extracted from the mycelia (ID3 and ID9) and seed samples (T4 and T7) were sequenced, and have been deposited in the NCBI database as MZ218113-MZ218116.
Validation of Stagonosporopsis cucurbitacearum Identification by qPCR
The reliable detection of DNA extracted from S. cucurbitacearum isolate D33 ranged from 57 ng/reaction (Cq 22.01 ± 0.16) to 0.0182 ng/reaction (Cq 33.91 ± 0.8) (Table 2). The spiked samples obtained with 20 ng/reaction of healthy seed DNA showed consistent results. For these concentrations, the Cq values were correlated with those obtained during the analysis of the DNA extracted from the mycelia of S. cucurbitacearum (Table 2). A single peak in positive samples suggests a single-size product, with a melting temperature (Tm) of 85.5 °C (Supplementary Figure 3). Under these conditions, it is reasonable to set the qPCR LOD at Cq < 33 (Table 2). Inconsistent results were observed for S. cucurbitacearum mycelial DNA spiked with 57 and 285 ng of DNA from healthy seed.
Detection of Stagonosporopsis cucurbitacearum by Droplet Digital PCR
The limit of detection of S. cucurbitacearum by ddPCR was 3.6 × 10⁻³ ng, below which there was no linear quantification of S. cucurbitacearum (Figures 2A,B). From the spiked samples analysis, a concentration of DNA > 20 ng had a negative influence on the detection of S. cucurbitacearum and on the accuracy of the absolute quantification (Figures 2A,B). The optimum condition was seen for 20 ng DNA from healthy seeds, at which the quantification of S. cucurbitacearum was highly related to the concentration of S. cucurbitacearum DNA added.
After optimization of the amplification conditions, the ddPCR was applied to 13 seed samples that were naturally infected with S. cucurbitacearum and had been analyzed previously using the blotter analysis and the conventional PCR. The total number of events, corresponding to the number of droplets generated by the ddPCR, ranged from 8,375 (sample T93) to 16,574 (sample T8) (Figures 3A,B). The quantifications derived from these events ranged from 7.5 copies/µl (sample T85) to 0.2 copies/µl (sample T90). For sample T101, the water control, and the healthy seeds sample (IHS), no amplifications were recorded (Figures 3A,B).
Comparisons Between the Blotter Analysis, Conventional Polymerase Chain Reaction, and Droplet Digital PCR
The data from the blotter analysis were analyzed in relation to the data obtained for the ddPCR (Table 2), with high correlation seen (R² = 0.986, p ≤ 0.01). The concentration of the amplified DNA target, expressed as copies/µl, was in accordance with seed contamination, in terms of the proportions of seeds infected by S. cucurbitacearum recorded in the blotter analysis. For the water control and the healthy seeds sample (IHS), where no infection was detected in the blotter analysis, no amplicons or positive events were seen for either conventional PCR or ddPCR. In sample T90, which showed seed infection of 1.5% in the blotter analysis, and where conventional PCR showed a faint, ambiguous band, the ddPCR showed an absolute quantification of 0.2 copies/µl. Finally, sample T101, which was negative in the blotter analysis and conventional PCR, showed only 1 positive event for ddPCR, so this was also considered as negative in the ddPCR, along with the water control and the healthy seeds sample (Table 3).
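A correlation of this kind is straightforward to reproduce with a least-squares fit. In the sketch below, the incidence values are those quoted for the band-intensity comparison above, and only the two endpoint copy numbers (7.5 and 0.2 copies/µl) are from the text; the intermediate copy numbers are illustrative placeholders, so the printed R² will not match the study's 0.986 exactly.

```python
# Least-squares fit of ddPCR quantification against blotter incidence.
# Only the endpoint copy numbers come from the text; the rest are
# placeholders for illustration.
from scipy.stats import linregress

blotter_pct = [65.4, 44.0, 29.1, 21.5, 4.5, 4.2, 1.5]   # % infected seeds
copies_ul   = [7.5,  5.2,  3.5,  2.7, 0.8, 0.7, 0.2]    # ddPCR copies/ul

fit = linregress(blotter_pct, copies_ul)
print(f"slope = {fit.slope:.3f} copies/ul per %, R^2 = {fit.rvalue**2:.3f}")
```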
DISCUSSION
Healthy seeds are the start of healthy plants, and this is an essential requirement to safeguard the productivity of crops. GSB is a widespread disease and leads to significant losses in yield and quality for cucurbit crops worldwide (Keinath, 2011; Li et al., 2015; Yao et al., 2016; Zhao et al., 2020). In addition, the use of grafted cucurbits further increases the risk of GSB development from seedborne inoculum. Indeed, GSB was observed for grafted watermelon in Tunisia, where it caused severe yield losses (Boughalleb et al., 2007). Stagonosporopsis cucurbitacearum was recovered most frequently from Cucurbita spp. (Rennberger and Keinath, 2018; Zhao et al., 2018), and for squash seed it is the only fungal pathogen related to GSB in Tunisia and Italy (Moumni et al., 2019, 2020). Many other studies have demonstrated that infected seeds are the primary inoculum for GSB (De Neergaard, 1989; Sudisha et al., 2006; Keinath, 2011). The use of highly sanitized quality seeds decreases the primary inoculum in the field (Ciampi-Guillardi et al., 2020). Therefore, the detection of seedborne fungal pathogens is an important aspect of disease management.
The blotter method was appropriate in this study for the detection of S. cucurbitacearum in these seed samples. However, a drawback of this conventional method is that the morphological identification requires mycological skills and is time consuming; also, fungal contaminants can often mask the development of a pathogen. Indeed, the morphological characteristics of S. cucurbitacearum, as well as of S. citrulli, S. caricae and Phoma spp., are similar, and so distinguishing between these can be difficult (Keinath et al., 1995; Rennberger and Keinath, 2018). For this reason, in our study, after the blotter tests and morphological identification, a preliminary molecular analysis carried out on S. cucurbitacearum, as proposed by Brewer et al. (2015), allowed the identity to be confirmed. Rapid and accurate detection of pathogens transmitted by seeds should improve integrated disease management strategies, to control and prevent the spread of diseases caused by these pathogens. Several molecular methods have now been reported to detect pathogens on seeds (Lee et al., 2001; Pryor and Gilbertson, 2001; Samac et al., 1998). To set up efficient substitutes for the more traditional techniques, these methods need to be specific, sensitive, rapid, and adaptable to routine analysis. In the present study, the DBF1/DBR1 primers were designed to detect S. cucurbitacearum in these squash seeds. This set of primers successfully amplified the predicted size of the DNA fragment in infected material. One specific advantage of this PCR detection protocol is that it requires 1 day for completion, compared to the 10 days required for the blotter method. Thus, it can be used to examine both greater numbers and larger sample sizes with high reliability. Indeed, several studies have already reported primer pairs for the identification of S. cucurbitacearum in plant fragments after isolation to purity (Somai et al., 2002; Keinath et al., 2003; Brewer et al., 2015). Our analysis showed that the ddPCR method had a higher sensitivity than qPCR, and it is more reliable for the detection of the pathogen even at the lowest titers.
Droplet digital PCR represents an innovative application in the diagnostic field: a user-friendly quantification technology that does not require a standard curve for calibration (Bustin et al., 2009), that can be broadly used in several scientific fields, and whose application to plant pathology is growing. The present study represents the first approach to assess ddPCR as a reliable tool to detect and quantify pathogenic fungi associated with seeds. Many studies have reported that ddPCR is beneficial in terms of improved sensitivity of pathogen detection, and reduced effects of PCR inhibitors on PCR efficiency (Rački et al., 2014; Dupas et al., 2019; Bae et al., 2020).
The present study was thus designed to assess the diagnostic potential and sensitivity of ddPCR for absolute quantification of S. cucurbitacearum in the seeds of squash, as also compared to conventional PCR. ddPCR and the blotter test showed a high degree of correlation here (R² = 0.986, p ≤ 0.01). del Pilar Martínez-Diz et al. (2020) showed that ddPCR was more sensitive than qPCR for detection and quantification of the fungus Ilyonectria liriodendri. Dupas et al. (2019) demonstrated that ddPCR can improve the detection of the bacterium Xylella fastidiosa at low levels of infection, and identified positive samples among those defined as negative by real-time PCR. Liu et al. (2020) suggested the use of ddPCR for detection of the fungus Tilletia controversa in soil samples and demonstrated that ddPCR was 100 times more sensitive than conventional PCR. Similarly, the data in the present study show that this ddPCR assay is a reliable alternative for quantification of S. cucurbitacearum on squash seeds. At the moment, for several important seedborne pathogens of vegetable crops, including S. cucurbitacearum, the International Seed Health Initiative and the American Seed Trade Association are attempting to compile pragmatic minimum thresholds, based on experimental data as well as empirical evidence and experience, that can be applied to seed in commerce. Considering that S. cucurbitacearum is transmitted by seeds and that even low seed infection can cause medium to high economic losses in the field (Keinath, 2011), in our study we applied, on the one hand, the blotter test, an official method suggested by ISTA, and, on the other, explored the application of an absolute quantification method (ddPCR). To the best of our knowledge, this study is the first report of ddPCR for detection of such seedborne fungi. Further studies are required to evaluate and validate this new technology for routine use in the diagnosis of this and other seedborne pathogens. In addition, improved knowledge of the epidemiology of pathogens initiated from seeds could lead to improved management of such diseases.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/; MZ218113-MZ218116.
FUNDING
This project was partially funded by UNIVPM Strategic Project 2016 "Control of plant diseases by natural compounds with quantification of plant pathogens and microbiological biodiversity, for a sustainable production of high fruit quality" and by the project "Strategies for management of diseases of seed-bearing vegetable crops for integrated pest management and organic agriculture (CleanSeed)" funded by Marche Region.
Spinal Infections? mNGS Combined with Microculture and Pathology for Answers
Introduction This study evaluates the efficacy of metagenomic next-generation sequencing (mNGS) in diagnosing spinal infections and developing therapeutic regimens that combine mNGS, microbiological cultures, and pathological investigations. Methods Data were collected from 108 patients with suspected spinal infections between January 2022 and December 2023. Lesion tissues were obtained via C-arm assisted puncture or open surgery for mNGS, conventional microbiological culture, and pathological analysis. Personalized antimicrobial therapies were tailored based on these findings, with follow-up evaluations 7 days postoperatively. The sensitivity and specificity of mNGS were assessed, along with its impact on treatment and prognosis. Results mNGS showed a significantly higher positive detection rate (61.20%) compared to conventional microbiological culture (30.80%) and PCT (28%). mNGS demonstrated greater sensitivity (79.41%) and negative predictive value (63.16%) than cultures (25% and 22.58%, respectively), with no significant difference in specificity and positive predictive value. Seven days post-surgery, a significant reduction in neutrophil percentage (NEUT%) was observed, though decreases in white blood cell count (WBC), erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) were not statistically significant. At the last follow-up, significant improvements in Visual Analogue Scale (VAS) scores, Oswestry Disability Index (ODI), and Japanese Orthopaedic Association (JOA) scores were noted. Conclusion mNGS outperforms traditional microbiological culture in pathogen detection, especially for rare and critical pathogens. Treatment protocols combining mNGS, microbiological cultures, and pathological examinations are effective and provide valuable clinical insights for treating spinal infections.
Metagenomic next-generation sequencing (mNGS) enables the identification of pathogens in cases of infection, and has been reported to allow for unbiased sampling, broad and rapid identification of known pathogens, and even the discovery of new microorganisms.9,10 It has been shown that mNGS has applications in the diagnosis and treatment of infectious diseases, including spinal infections.11 mNGS helps spine surgeons at the diagnostic stage, helping them to identify appropriate treatment options as early as possible.12 However, research on pathogen detection using mNGS in spinal infections remains limited, and its therapeutic value needs further clarification. Therefore, our study aimed to assess the capability of mNGS to identify the etiology of spinal infections, explore its impact on treatment planning when combined with microbiological culture and pathology, and investigate post-treatment changes in blood test markers and clinical efficacy.
Methods of Study
Inclusion criteria: (1) Patients preliminarily diagnosed with spinal infection based on clinical signs, laboratory findings, and imaging studies.7,13 (2) Samples obtained using C-arm X-ray guided puncture or surgery. (3) At least two different diagnostic methods used for analyzing tissue samples. Exclusion criteria: (1) Only one diagnostic method used for sample testing. (2) Samples evidently contaminated during submission. (3) Incomplete clinical data or patient lost to follow-up. (4) Follow-up period less than three months. According to the above inclusion and exclusion criteria, a total of 46 patients were included in this study (Figure 1). The study consisted of 23 males and 23 females, with a median age of 63 years (range: 12-86 years). This study was approved by the Ethics Committee of the Second Affiliated Hospital of Guangxi Medical University.
Detailed clinical data were collected from the patients, including WBC, neutrophil ratio, CRP, ESR, PCT, initial Visual Analogue Scale (VAS) scores, and imaging findings. Lesion tissue, peri-lesional soft tissue, or pus samples were obtained via C-arm X-ray assisted puncture or open surgery, then sealed in sterile culture tubes and sent immediately after the procedure for mNGS, routine microbiological culture, and pathological analysis. Customized antibacterial treatment plans were devised for infected patients based on their clinical symptoms, imaging results, mNGS, bacterial culture, or histopathological findings. A follow-up assessment was conducted on day 7 postoperatively. The sensitivity and specificity of mNGS for detecting spinal infection pathogens were evaluated, as well as its impact on the treatment process and prognosis, according to the final clinical outcomes. Demographic and clinical information was sourced from the electronic medical records of the Second Affiliated Hospital of Guangxi Medical University. mNGS testing, routine microbiological cultures, and pathological analyses were performed in-house by our laboratory.
mNGS Testing and Analysis
The samples were stored at low temperature, and the DNA was extracted and purified by the magnetic bead method according to the Microbial DNA Extraction Kit (Yugo Zhizhi Technology Co., Ltd., China); the metagenomic library was then constructed according to the Library Construction Kit (Yugo Zhizhi Technology Co., Ltd., China) (library size: 330-350 bp), and quantified using the Qubit 4.0 nucleic acid quantification system (Thermo Fisher Scientific, USA). The libraries with different sequence tags were mixed in equal quantities, and high-throughput sequencing was completed using the Illumina NextSeq CN500 (Illumina Inc., USA) sequencing platform. The data were filtered using the FastQC software, removing sequences containing sequencing adapters, sequences with more than 10% undetermined (N) bases, and sequences with more than 50% low-quality bases (Q value ≤ 10); the filtered data were then aligned with BWA against the human genome reference sequences to remove human-related sequences. Microbial sequences were then compared and annotated against an optimized pathogen database provided by Yugo Zhizhi Technology Co., Ltd., completing the result analysis. The laboratory procedure for the samples is shown in Figure 2.
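Adapter trimming and the BWA-based host subtraction are done by dedicated tools, but the per-read thresholds described above are easy to express directly. The sketch below is a hypothetical illustration of those two stated criteria (>10% N bases or >50% bases at Q ≤ 10), not the vendor's actual pipeline code.

```python
# Hypothetical illustration of the stated per-read filtering thresholds.

def passes_quality_filter(seq: str, quals: list, max_n_frac: float = 0.10,
                          low_q: int = 10, max_low_q_frac: float = 0.50) -> bool:
    """Keep a read unless >10% of bases are N or >50% have Phred Q <= 10."""
    n_frac = seq.upper().count("N") / len(seq)
    low_q_frac = sum(q <= low_q for q in quals) / len(quals)
    return n_frac <= max_n_frac and low_q_frac <= max_low_q_frac

# Example read: 2% N content and 20% low-quality bases -> kept
seq = "ACGT" * 24 + "NNAC"                 # 100 bases, 2 of them N
quals = [30] * 80 + [8] * 20               # 20% of bases at Q8
print(passes_quality_filter(seq, quals))   # True
```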
Statistical Analysis
The final clinical diagnosis was used as the gold standard. The McNemar test was employed to assess significant differences in sensitivity, specificity, PPV, and NPV. The detailed calculations are provided in the Supplementary Material (Supplement 1). Data adhering to a normal distribution were presented as mean ± standard deviation and compared using the t-test. Conversely, data not following a normal distribution were described by the median and interquartile range and assessed with the Mann-Whitney U-test for comparison. A p-value of <0.05 was set for statistical significance. Data analyses were conducted using SPSS software version 23.0, GraphPad Prism 10, and R version 4.3.2.
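The test-selection rule described here is mechanical enough to sketch. The example below (SciPy rather than SPSS/R, with made-up marker values) checks each group for normality with the Shapiro-Wilk test and then picks the t-test or the Mann-Whitney U-test accordingly.

```python
# Hypothetical sketch of the described test-selection logic using SciPy.
from scipy import stats

def compare_groups(a, b, alpha: float = 0.05):
    """t-test if both groups look normal (Shapiro-Wilk), else Mann-Whitney U."""
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

# Made-up CRP values (mg/L) for infected vs. non-infected patients
infected    = [58.2, 72.4, 44.9, 66.1, 83.5, 51.7]
noninfected = [6.1, 9.4, 4.8, 11.2, 7.5, 8.3]
print(compare_groups(infected, noninfected))
```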
General Clinical Data Comparison (Table 1)
A total of 46 patients suspected to have spinal infection in our hospital between January 1, 2022 and December 30, 2023 were included. They comprised 23 males and 23 females, aged between 12 and 86 years, with a mean age of 61.67 ± 14.48 years. Pus or tissue specimens were obtained by C-arm X-ray guided puncture in 26 cases, purulent tissue or pus specimens were obtained by open surgery in 18 cases, and specimens were obtained by spinal endoscopy in the other two cases. By evaluating the history, clinical symptoms, physical examination findings, laboratory test data, imaging data, and surgical findings, 32 cases were diagnosed as spinal infections, while 11 were diagnosed as noninfectious, 1 as a tumor, and 2 could not be diagnosed. Fourteen of the included cases had been treated with antibiotics within 30 days prior to admission. Patient TB-33 eventually died. At the final follow-up, all other patients demonstrated favorable recovery outcomes.14,15

Comparison of mNGS, Microbial Culture, and PCT

A total of 52 samples were analyzed, including 10 pus and secretion samples and 42 tissue samples. Of the 49 samples submitted for mNGS, 30 tested positive and 19 negative, yielding a positivity rate of 61.2% (30/49) (Table 2). Seven samples were only tested with mNGS and not with conventional microbial culture due to limited tissue and pus availability. The positivity rate of conventional microbial culture was 30.8% (12/39). The positivity rate of mNGS was notably greater compared to conventional microbial culture and PCT, with a statistically significant difference. In clinically diagnosed specimens, the positivity rates for mNGS of tissue and pus samples were 79.3% and 80%, respectively, showing no statistically significant difference (p > 0.99). The positivity rates for conventional microbial culture of tissue and pus samples were 25% and 62.5%, respectively (p = 0.088), which is not a statistically significant difference. These results indicate that the type of sample did not affect the positivity rates of mNGS and conventional microbial culture. Detailed test results are contained in Supplement 2.
Comparison of Diagnostic Efficacy
Comparative analysis showed mNGS with a sensitivity of 79.41% and specificity of 80%, outperforming the conventional microbial culture's sensitivity of 25%, albeit with the latter's specificity of 100% (Table 3). This indicates mNGS's superior sensitivity. Out of 32 patients diagnosed with spinal infection, mNGS identified 27 positive cases (accounting for 62.8%), while conventional culture detected only 8 positives (18.6%). Complete concordance was observed in 4 patients. Additionally, 4 patients showed partial concordance, with at least one pathogen identified by mNGS also confirmed by culture, and with no discrepancies between the two methods' results (Figure 3A). Furthermore, the chart depicts the time required to acquire results from mNGS, culture, and pathology (Figure 3B).
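The headline metrics can be reconstructed from a 2×2 table against the final clinical diagnosis. In the sketch below, the mNGS counts (TP 27, FN 7, FP 3, TN 12, which sum to the 49 mNGS-tested samples) are back-calculated from the reported sensitivity (79.41%), specificity (80%), and NPV (63.16%); the paired table for McNemar's test is partly hypothetical, since the exact paired counts are not given.

```python
# Diagnostic metrics from a 2x2 table, plus McNemar's test for paired
# method comparison. The mNGS counts are inferred from the reported
# percentages; the McNemar table below is partly hypothetical.
from statsmodels.stats.contingency_tables import mcnemar

def diagnostics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}

print(diagnostics(tp=27, fp=3, fn=7, tn=12))
# -> sensitivity 0.794, specificity 0.80, PPV 0.90, NPV 0.632

# Paired comparison: rows = mNGS (+/-), columns = culture (+/-).
# The 8 both-positive and 0 culture-only cells follow from the stated
# concordance; 19 mNGS-only and 12 both-negative are assumptions.
table = [[8, 19],
         [0, 12]]
print(mcnemar(table, exact=True).pvalue)
```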
Detection Outcomes
A total of 49 mNGS tests were performed in 46 patients (in 2 of these patients, tissue specimens obtained by puncture on admission were negative on mNGS testing, but tissue specimens taken during open surgery subsequently tested positive). Surprisingly, among these microorganisms, Mycobacterium tuberculosis was detected the most often, 6 times, accounting for 20% (6/30) of the total number of positive detections. Among pyogenic bacteria, the predominant Gram-positive and Gram-negative species were Staphylococcus aureus (detected 5 times) and Escherichia coli (also detected 5 times), respectively. Additionally, Brucella ovis was identified 4 times. Other less common bacteria, fungi, and viruses, such as Aspergillus fumigatus, Malassezia furfur, Hepatitis E virus, and Human herpesvirus 5, were also detected (detailed data available in Figure 4A and B). In routine microbiological assays, the highest detection rate was for the Gram-positive bacterium Staphylococcus aureus. Of 7 patients diagnosed with Mycobacterium tuberculosis infection, all tested positive via T-spot. Six underwent histopathological examination, with all six yielding positive results. mNGS results were positive in six cases and negative in one, with no positive outcomes from culture. A patient with Malassezia furfur infection initially tested negative in routine microbial culture, but after a positive mNGS result, subsequent samples were confirmed positive with targeted microbial culture.
Follow-Up Status
In this study, of the patients with confirmed spinal infections, 26 underwent surgical treatment. Based on mNGS results, microbial cultures, and pathological analysis, we tailored antimicrobial treatment plans for the patients (Figure 5). Specific imaging and pathology of typical cases can be found in the Supplementary Material (Supplement 3 and Figures S1-S6). Patients infected with Mycobacterium tuberculosis were treated with "quadruple therapy" (isoniazid, rifampicin, pyrazinamide, ethambutol) for at least 12 months.16 Patients infected with Staphylococcus aureus were treated with cefotaxime sodium, vancomycin, linezolid, or moxifloxacin. Patients infected with Brucella suis received doxycycline, streptomycin, or rifampin. One additional patient with Aspergillus fumigatus infection was treated with voriconazole. Patients with Streptococcus suis infection were treated with ceftriaxone sodium, levofloxacin tablets, and linezolid tablets, while patients with Serratia mucinosa and human cytomegalovirus infection were treated with meropenem and compound sulfamethoxazole tablets. The duration of antibiotic therapy was maintained for at least 6 weeks in all patients. Infected patients were followed up after seven days of drug or surgical treatment. As shown in Table 4, there were decreases in leukocyte, erythrocyte sedimentation rate, and C-reactive protein values in patients with spinal infection after seven days of treatment; the decrease in neutrophil ratio was statistically significant. All patients were followed up after treatment, with a follow-up period of 8.5 ± 5 months. VAS, ODI, and JOA scores improved significantly after treatment, and the changes were statistically significant.
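The day-7 marker comparison is a paired pre/post test. The sketch below shows the paired t-test form with made-up neutrophil-percentage values; the study's per-patient values are not reproduced here, so the printed statistic is illustrative only.

```python
# Paired pre- vs post-treatment comparison (illustrative NEUT% values).
from scipy.stats import ttest_rel

neut_pre  = [78.2, 81.5, 74.9, 80.3, 76.8, 79.1]   # made-up day-0 values
neut_post = [68.4, 72.1, 66.2, 70.9, 67.5, 69.8]   # made-up day-7 values

res = ttest_rel(neut_pre, neut_post)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```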
Discussion
Conclusive evidence of spinal infection is predicated upon the successful isolation of pathogens via conventional microbiological culturing techniques. Nonetheless, the efficacy of these cultures is compromised by their low yield, the extended duration required for pathogen identification, and the possible influence of preceding antibiotic treatments.17,18 Even with the methodological advancements proposed by Peel17 and Schafer,19 such as prolongation of culture duration and refinement of detection techniques, certain pathogens continue to evade identification. Pathological examination is regarded as the "gold standard" for the confirmation of spinal infection diagnoses, but it is insufficient for detecting low-virulence microbial infections and cannot provide specific information on pathogenic bacteria.20 Specific PCR assays have been reported to have high sensitivity but are unable to cover rare and emerging pathogens.10,21 This poses a challenge to spine surgeons: how to identify pathogens early and quickly, enabling rapid and precise treatment.25,26 In this paper, the positive detection rate of mNGS (61.2%) was significantly higher than that of routine microbiological culture (30.8%) and the procalcitonin assay (28%), in line with the findings of a previous study.27 Notably, 17 culture-negative patients presented positive results by mNGS. mNGS detected Mycobacterium tuberculosis, Brucella suis and Aspergillus fumigatus, which confirms that mNGS surpasses conventional microbiological cultures in terms of detection efficacy. In terms of detection time, conventional microbiological culture took 3.09 ± 1.16 days and pathology took 2.68 ± 1.85 days, while mNGS took 1.54 ± 0.75 days, showing its obvious advantage in time efficiency. Specific spinal infections include tuberculosis, Brucella, fungal, and viral infections. However, in this study, only one case of infection with Malassezia sympodialis was culture-positive, indicating the low sensitivity of conventional culture methods in diagnosing specific spinal infections.
Fortunately, mNGS detection rates exceed 90%, demonstrating its significant potential as an effective tool for diagnosing specific spinal infections. Additionally, the research highlights Staphylococcus aureus as the most common pathogen in pyogenic spinal infections, with other opportunistic pathogens, such as Stenotrophomonas maltophilia and Streptococcus intermedius, also detected, as previously reported in spinal infections.28,29 It is noteworthy that in this study multiple patient samples were found to contain Veillonella parvula, which is considered to be associated with spondylodiscitis;30 mNGS identified it as a background microorganism, and these results underscore the unique advantage of mNGS in identifying rare pathogens. mNGS can be used as an aid to provide additional information on the pathogen to help doctors make more comprehensive diagnosis and treatment decisions. In the early stages of disease diagnosis, mNGS offers rapid and accurate information, assisting clinicians in devising early, targeted antibiotic treatments to prevent antibiotic misuse.31,32 In this study, antibiotic regimens were adjusted for 27 patients diagnosed with infections based on mNGS results. Furthermore, studies indicate that combining mNGS with microbiological culture and pathological examination can more effectively bolster clinicians' confidence when making decisions, as compared to relying solely on mNGS results. Initially suspected of tuberculosis, patient T-30 tested negative in both puncture culture and mNGS, while pathological examination revealed signs of metastatic prostate cancer. Consequently, relevant tumor markers were further investigated, leading to the final diagnosis of the tumor. Patient T-31's postoperative culture results were negative, pathological examination showed the presence of inflammatory cell infiltration in the bone marrow cavity, and mNGS testing suggested Brucella infection. Based on these results, we ultimately developed a targeted antibiotic regimen for the patient with a combination of doxycycline and rifampicin. In a case of Malassezia furfur infection, initial microbiological culture was negative, mNGS testing was positive, and histopathology showed extensive plasmacytic and lymphocytic infiltration in bone tissue. Following these outcomes, cultures were redone under specific conditions, ultimately confirming a positive result. This study also examined the effectiveness of treatment regimens combining mNGS, microbiological culture, and pathological findings in the management of spinal infections, and their impact on prognosis. For these reasons, all patients in the study were followed up, revealing significant improvement in prognostic indicators. The application of mNGS currently has certain limitations. Firstly, the method and location of sample collection during the preparation stage may affect the outcomes. Patients T18 and T46 had negative mNGS results on tissue samples from the first puncture and positive results on tissue samples from the subsequent open surgery. This suggests that sampling method and site may have a potential impact on mNGS outcomes, a topic not yet thoroughly investigated in the literature. This study attempted to explore the impact of pus and tissue sample types on the positive rate of mNGS, but the findings were not statistically significant, aligning with previous research.33 Additionally, the high sensitivity of mNGS may also lead to a higher false-positive rate, which might contribute to the lower specificity of mNGS compared to traditional microbial cultures observed in this study. Finally, there is a time lapse from sample collection to result analysis, rendering mNGS potentially unsuitable in certain urgent scenarios. Despite current limitations in mNGS application, ongoing technological advancements and improvements are expected to progressively resolve these issues.
Conclusion
mNGS demonstrates enhanced sensitivity in detecting pathogens in spinal infections, especially rare and critical ones, compared to traditional microbial culture. Based on these findings, we propose that mNGS can serve as a valuable adjunct in enhancing the diagnostic process of spinal infections. Although mNGS cannot fully replace microbial culture, its integration with conventional methods offers a comprehensive approach to diagnosing and treating spinal infections. This combined strategy can lead to more personalized and effective therapeutic regimens, ultimately improving patient outcomes. This study's single-center, retrospective design with a limited sample size may introduce bias. Future research across multiple centers with larger sample sizes is anticipated to corroborate our findings, and an extended follow-up period is desired to more comprehensively assess the impact of mNGS-guided treatment on patient recovery and clinical outcomes.
Data Sharing Statement
The datasets used and/or analyzed during the present study are available from the corresponding author on reasonable request.
Figure 2 Flow chart of mNGS.
Figure 3 (A) The concordance of mNGS and microbial culture in detecting pathogenic microorganisms. (B) Time cost of mNGS, culture and pathology.
Figure 4 Pathogenic microorganisms detected by mNGS. (A) Pathogenic microorganisms detected in all samples. (B) Background microbial distribution.
Figure 5 Application of mNGS in clinical diagnosis and therapeutic decision-making. In 32 confirmed infection cases, treatment plans were formulated or adjusted for 27 patients based on mNGS results.
Normally distributed data were compared using the t-test. Conversely, data not following a normal distribution were described by the median and interquartile range and assessed with the Mann-Whitney U-test for comparison. A p-value of <0.05 was set for statistical significance. Data analyses were conducted using SPSS software version 23.0, GraphPad Prism 10, and R version 4.3.2.
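A minimal sketch of the test-selection logic just described, a t-test for normally distributed data and the Mann-Whitney U-test otherwise, is shown below. The study used SPSS, GraphPad Prism and R; this Python/SciPy version with randomly generated placeholder values is purely illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(50, 10, 40)   # hypothetical lab values, group A
group_b = rng.normal(55, 10, 40)   # hypothetical lab values, group B

# Shapiro-Wilk normality check on each group (alpha = 0.05).
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

if normal:
    stat, p = stats.ttest_ind(group_a, group_b)      # independent-samples t-test
else:
    stat, p = stats.mannwhitneyu(group_a, group_b)   # non-parametric alternative
print(f"p = {p:.4f}; significant at 0.05: {p < 0.05}")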
Table 1 Baseline Characteristics of Participants by Infection Status
Table 2 Comparison of mNGS, Microbial Culture, and PCT Positivity Rates
Table 4 Comparison of Patients with Confirmed Infection Pre-Treatment and After Treatment
Epidermolysis Bullosa-Associated Squamous Cell Carcinoma: From Pathogenesis to Therapeutic Perspectives
Epidermolysis bullosa (EB) is a heterogeneous group of inherited skin disorders determined by mutations in genes encoding for structural components of the cutaneous basement membrane zone. Disease hallmarks are skin fragility and unremitting blistering. The most disabling EB (sub)types show defective wound healing, fibrosis and inflammation at lesional skin. These features expose patients to serious disease complications, including the development of cutaneous squamous cell carcinomas (SCCs). Almost all subjects affected with the severe recessive dystrophic EB (RDEB) subtype suffer from early and extremely aggressive SCCs (RDEB-SCC), which represent the first cause of death in these patients. The genetic determinants of RDEB-SCC do not exhaustively explain its unique behavior as compared to low-risk, ultraviolet-induced SCCs in the general population. On the other hand, a growing body of evidence points to the key role of tumor microenvironment in initiation, progression and spreading of RDEB-SCC, as well as of other, less-investigated, EB-related SCCs (EB-SCCs). Here, we discuss the recent advances in understanding the complex series of molecular events (i.e., fibrotic, inflammatory, and immune processes) contributing to SCC development in EB patients, cross-compare tumor features in the different EB subtypes and report the most promising therapeutic approaches to counteract or delay EB-SCCs.
Introduction
Inherited epidermolysis bullosa (EB) comprises a clinically and genetically heterogeneous group of rare genetic diseases characterized by skin fragility and blister formation following minor trauma [1]. Four major EB types are distinguished based on the level of blister formation within the skin: EB simplex (EBS), junctional EB (JEB), dystrophic EB (DEB), and Kindler syndrome (KS) (Figure 1). According to the current "onion skin" classification approach, each EB type then comprises several subtypes characterized by different modes of inheritance, causative genes and mutations, phenotypic and molecular features [1]. EB clinical spectrum ranges from early-lethal forms with extensive cutaneous and extracutaneous involvement to mild phenotypes with localized skin lesions only. Disease-causing variants in at least 20 different genes account for the genetic heterogeneity of EB [2] (Figure 1).
Figure 1. Schematic representation of the epidermis depicting levels of cleavage sites and mutated proteins for each epidermolysis bullosa (EB) type. Epidermal cell layers, from the stratum basale to the stratum corneum (flat, orange boxes), and the underlying papillary and reticular dermis are depicted. Basal keratinocytes are attached to the dermis by multiprotein complexes linking keratin intermediate filaments to anchoring fibrils through hemidesmosomes and the epithelial laminin isoform, laminin-332. Focal adhesions also contribute to stabilizing the cutaneous basement membrane zone (BMZ). The skin level where blisters arise in each EB type (red lines and dots), and the corresponding mutated proteins, are indicated. Inset magnification shows the BMZ, with proteins mutated in Kindler syndrome, junctional EB (JEB) and dystrophic EB (DEB) shown in red. In EB forms compatible with survival to adulthood, the risk of cutaneous squamous cell carcinoma (SCC) occurrence correlates with EB severity (green arrow turning into red; green = low/mild-severity EB type with a low risk to develop SCC, red = severe EB type with a high risk to develop SCC).
In recessive DEB, JEB and KS, blistering occurs within or below the cutaneous basement membrane zone (BMZ) and leads to chronic wounds with fibrosis and inflammation that heal with scarring sequelae. These patients have an increased risk of developing one or more cutaneous squamous cell carcinomas (SCCs) [1]. Usually, the tumors are localized at sites of chronic, hard-to-heal wounds and scarring; they are frequently multiple, and, specifically in recessive DEB, highly aggressive, representing the first cause of death in this EB subtype. Though the age of onset, cancer localization and carcinogenetic processes may be to some extent different across the various EB (sub)types, EB-related SCC (EB-SCC) represents an important model towards a more complete understanding of the mechanisms responsible for carcinogenesis occurring within a fibrotic and inflamed microenvironment. In this review, we will discuss SCC clinical and molecular features in each cancer-prone EB type, focusing on the latest findings unveiling novel and potentially relevant pathomechanisms underlying tumor development in EB patients and highlighting gaps in the current literature. In addition, the most promising direct and indirect therapeutic strategies to counteract EB-SCC, with special reference to the devastating SCCs occurring in recessive DEB, will be addressed.
SCC in the General Population
Cutaneous SCC (hereafter indicated as SCC) represents the second most common nonmelanoma skin cancer (NMSC) after basal cell carcinoma (BCC). While the incidence ratio of BCC to SCC is traditionally considered to be 4:1, a recent study reported an age-weighted incidence ratio of 1.4:1 in the USA population [3]. However, statistics on the incidence of SCC are difficult to calculate.
Wound Healing and SCC
In the general population, alongside the most common risk factors underlying SCC development, severely burned areas and hard-to-heal, chronic wounds (e.g., long-standing venous stasis ulcers) represent susceptibility sites for the development of a rare and aggressive form of skin cancer, defined in the early 1800s as Marjolin ulcer (MU). Although MU embraces a histologically heterogeneous set of malignancies, well-differentiated SCCs account for the majority of them (≈70% of cases). Of note, MU-SCCs are more aggressive than skin SCCs of different etiology, and metastasize in more than 27% of cases [13]. Though a number of theories have been proposed to explain the relation between skin damage and MU development [13], a common denominator is the complex series of events determined by the derailed/prolonged wound healing process at the lesional area [14,15]. Indeed, the aberrant activation of wound healing pathways is a relevant, well-known event able to establish an inflamed, fibrotic stroma which represents the scaffold for tumor initiation and progression [14,16]. In this regard, the connection between the risk of epithelial cancer occurrence and the unremitting, altered wound healing process of severe EB patients is clear-cut. Whatever the primary genetic defect underlying each inherited EB type, all EB patients share skin and/or mucosal fragility and blistering. However, in the most severe and disabling EB (sub)types, mutations in genes coding for specific components of the epidermal-dermal junction strongly compromise the healing process, and, in turn, skin erosions and blisters evolve into chronic wounds, inflammation and fibrosis [17]. These events are responsible for the onset of highly disabling and even life-threatening disease complications, such as esophageal strictures, deformities of hands and feet, and aggressive epithelial cancers [18].
Clinical Features
DEB is the second most common EB type, and the most disabling one. The estimated prevalence of DEB ranges from approximately 6 per million in the USA and Spain to 20 per million in Scotland, the latter probably reflecting greater capture rather than a truly higher prevalence [19][20][21]. DEB is caused by mutations in the COL7A1 gene that encodes collagen VII (COL7), the major component of anchoring fibrils, ensuring adhesion of stratified epithelia to the underlying mesenchyme. Loss of the structural function of COL7 causes lifelong blistering and impaired wound healing, leading to chronic wounds characterized by increased bacterial colonization, fibrosis and inflammation, and to progressive scarring, which in turn can evolve into a systemic disease with secondary multiorgan involvement and a propensity to early skin cancer development [1,17,[22][23][24].
In particular, the recessive DEB subtype termed severe generalized (RDEB-SG) strongly predisposes patients to the development of multiple SCCs. RDEB-associated SCCs (RDEB-SCCs) are more aggressive than UV-SCCs in the general population and are characterized by high morbidity and mortality: SCC represents the first cause of death in patients suffering from RDEB-SG. The cumulative risk of developing at least one SCC for patients with RDEB-SG increases with age, being already 67.8% by age 35 and attaining 90.1% by 55 years in the USA National EB Registry [25]. The risk of developing SCC is also increased in dominant DEB (DDEB) and in other RDEB subtypes, but tumors are less common than in severe RDEB and occur later in adulthood.
Typically, SCCs develop at sites of chronic wounds and scarring, in particular on the extremities [18,25]. Though the large majority of EB-SCCs are histologically well-differentiated, they have a high propensity to local relapse and metastasis [18]. Early detection is relevant for effective surgical excision, which remains the treatment of choice [26]. However, early diagnosis of SCC in RDEB patients remains a challenge, since the presence of numerous large chronic wounds and scar sites, together with a far from straightforward choice of biopsy site, can require histopathologic evaluation of multiple biopsies [26]. In addition, by histopathology RDEB-SCC may be difficult to differentiate from granulation tissue or pseudoepitheliomatous hyperplasia [26]. All these criticalities contribute to the delay in diagnosis and management of RDEB-SCC. Late diagnosis and the aggressive features of SCC are the major determinants of the poor prognosis in these patients. Indeed, the cumulative risk of death from SCC in RDEB-SG patients who developed at least one SCC was 57.2% by age 35 and rose to 87.3% by age 45 in the USA National EB Registry [25].
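Registry figures such as the cumulative risks quoted above are typically estimated with the Kaplan-Meier method (cumulative risk = 1 - survival). The following sketch is a minimal pure-Python implementation; the ages and event indicators are hypothetical illustrations, not registry data.

def kaplan_meier(times, events):
    """times: age at first SCC or at last follow-up; events: 1 = SCC, 0 = censored.
    Returns (event_times, cumulative_risk), where risk = 1 - S(t)."""
    data = sorted(zip(times, events))
    survival = 1.0
    out_times, out_risk = [], []
    for t in sorted(set(tt for tt, _ in data)):
        d = sum(e for tt, e in data if tt == t)    # events observed at age t
        n = sum(1 for tt, _ in data if tt >= t)    # subjects still at risk at t
        if d > 0:
            survival *= 1 - d / n                  # Kaplan-Meier product term
            out_times.append(t)
            out_risk.append(1 - survival)
    return out_times, out_risk

ages   = [22, 25, 28, 30, 30, 33, 35, 40, 45, 50]  # hypothetical follow-up ages
events = [1,  1,  0,  1,  1,  1,  0,  1,  1,  0]   # hypothetical SCC events
for t, r in zip(*kaplan_meier(ages, events)):
    print(f"cumulative SCC risk by age {t}: {r:.1%}")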
DEB-SCC Genetics
The skin is the body's outermost barrier and represents the main target for a variety of external challenges, ranging from chemical to physical, mechanical and biological insults. As a result, genetic and epigenetic hits accumulate in keratinocyte DNA as part of a physiological, naturally occurring process. In particular, exposure to UV rays produces a specific signature of C > T and CC > TT mutations, which represent the majority of the somatic mutations in the skin [27]. However, UV-derived mutations do not necessarily lead to malignant transformation of keratinocytes in chronically sun-exposed skin areas [28].
This evidence highlights that the acquisition of the hallmarks of cancer [29] is a complex process where multiple mutation-dependent and independent events, such as the skin microenvironment, cooperate to determine tumor development and aggressiveness. In this respect, the case of RDEB-SCC molecular etiology is striking.
Although RDEB-SCC is typified by a surprisingly early age of onset and aggressiveness as compared to UV-SCC affecting non-RDEB patients, the genetic profiles of these tumors are quite similar and only partly explain their different features [30,31]. Indeed, whole-exome sequencing analyses revealed that RDEB-SCCs share with UV-SCCs a heterogeneous set of mutated genes and a number of cytogenetic alterations. In particular, RDEB-SCCs display mutations in TP53, NOTCH1, NOTCH2, CDKN2A, HRAS, and FAT1, a set of genes also mutated in aggressive cutaneous SCCs and considered as potential drivers of the tumor [32,33] (Table 1). The genetic landscape of RDEB-SCCs presents a high occurrence of inactivating mutations in NOTCH family members, suggesting a relevant role for the NOTCH pathway in RDEB keratinocyte transformation. Though loss-of-function mutations in NOTCH1 are the most represented in RDEB patients and play a well-established role in mouse skin tumorigenesis [34], mechanistic studies in RDEB-SCC models are missing.
RDEB-SCCs show a very high mutational burden in relation to the early age of onset. Moreover, these changes are not related to UV exposure. Recently, mutations in RDEB-SCC have been shown to be endogenously determined by the cytosine deaminase activity of the APOBEC (apolipoprotein B mRNA editing enzyme, catalytic polypeptide-like) family of enzymes [31]. APOBEC enzymes are relevant gene editors owing to their ability to deaminate cytidines into uridines in 5′-TCW contexts (where W = A or T), which generates C-to-T and C-to-G mutations (APOBEC signature). In RDEB-SCC, APOBEC activity is strongly enhanced and contributes a high percentage of mutations in typical RDEB-SCC-associated driver genes (e.g., HRAS, NOTCH1, TP53). Although the APOBEC signature has been found, to a different extent, in several cancer types, in RDEB-SCC the amount of mutations determined by APOBEC activity is significantly higher than that detected in non-RDEB SCCs (42% vs. 1.7-2%) and even in HPV-positive SCCs that have abundant APOBEC-driven mutations (≈30%) [7,31].
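To make the signature definition concrete, the sketch below classifies single-base substitutions as APOBEC-like when a reference cytosine in a 5′-TCW context mutates to T or G. The reference fragment and mutation list are hypothetical, and for brevity the reverse-complement (strand-collapsed) case handled by real signature tools is omitted.

def is_apobec(ref_seq, pos, alt):
    """True if the substitution at 0-based position pos matches the APOBEC
    signature: a C in a 5'-TCW context (W = A or T) mutating to T or G."""
    if ref_seq[pos] != "C" or alt not in ("T", "G"):
        return False
    if pos == 0 or pos == len(ref_seq) - 1:    # need both flanking bases
        return False
    return ref_seq[pos - 1] == "T" and ref_seq[pos + 1] in ("A", "T")

ref = "GATCAGTCTGGTCA"                         # hypothetical reference fragment
mutations = [(3, "T"), (7, "G"), (12, "T"), (1, "T")]  # (position, alt base)
apobec = [m for m in mutations if is_apobec(ref, *m)]
print(f"APOBEC-signature fraction: {len(apobec)}/{len(mutations)}")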
In physiological conditions, APOBEC-derived nucleotide changes are important in keratinocyte differentiation [35], lipid metabolism, adaptive immunity and anti-viral defense [36]. However, if dysregulated, APOBEC activity leads to genomic instability and contributes to cancer development. APOBEC members are over-expressed in RDEB-SCC and other cancer types [37] in response to several environmental factors, such as microbial insults and skin-injury-dependent cell stress and inflammation. Interestingly, in RDEB patients, the up-regulation of the APOBEC3A, APOBEC3B and APOBEC3H members is particularly prominent in areas of chronic tissue damage [31]. These findings expand our knowledge of RDEB-SCC pathomechanisms and could trigger the development of genomically-driven treatments, such as anti-APOBEC therapies.

Table 1 footnotes. [7]: Significantly mutated genes were identified by at least two out of three algorithms used (MutSigCV, Oncodrive-FM and Oncodrive-CLUST). [33]: Targeted sequencing using the OncoPanelv2 platform; significantly mutated genes were determined by MutSigCV (q-value ≤ 0.1). N/P = Not Profiled. [32]: Genes were identified as significant by the MutSigCV algorithm or by at least two out of three other algorithms.
The Wound-Healing Process
Wound healing is a complex biological phenomenon able to re-establish tissue integrity after an injury. A plethora of cell types, extracellular matrix (ECM) proteins, and soluble factors (cytokines, growth factors, hormones) are involved in a well-orchestrated cascade of events that can be classically summarized in three sequential stages: (1) inflammation, (2) new tissue formation, and (3) remodeling [38]. In the later "tissue formation" phase, dermal fibroblasts and other precursor cells are stimulated to differentiate into a cell type called the myofibroblast, typified by contractile and secretory abilities. Myofibroblasts are readily distinguished from tissue fibroblasts by the production of specific contractile proteins (the "myo" attribute), such as α-smooth muscle actin (α-SMA) or transgelin (TAGLN), by stress fiber assembly, and by the secretion of specific matricellular proteins, such as the fibronectin isoform ED-A, cellular communication network factor 2 (CCN2), periostin (POSTN) and tenascin-C (TNC) [39]. A central role in myofibroblast development and maintenance is played by the prototypic fibrotic cytokine, transforming growth factor-β1 (TGF-β1), whose activation depends on ECM composition and mechanical state (matrix stiffness), as well as on ECM interaction with different cell types, including myofibroblasts themselves. Following wound re-epithelialization, myofibroblasts are physiologically cleared through cell death via apoptosis, or de-activated and converted into a different cell lineage [40]. Finally, the complex processes underpinning wound healing must be finely tuned to be properly completed and to avoid pathological states such as fibrosis.
Fibrosis
In RDEB patients, fibrosis is a regular and severe disease complication resulting from the impaired healing of chronic wounds [41]. Dermal fibroblasts in RDEB patients are continuously exposed to the detrimental effects of several pro-inflammatory cytokines and growth factors that alter the transcriptional profile [42] and force fibroblasts to remain in the "myofibroblast state". Indeed, myofibroblasts chronically reside in the dermis of RDEB patients and contribute to ECM stiffness, an event fueling the fibrotic process in a self-renewing cycle. TGF-β1 signaling plays a key role in establishing and maintaining the fibrotic process in RDEB patients [43] and animal disease models [44,45]. TGF-β1 enhances fibroblast-to-myofibroblast conversion and promotes dermal contractility and ECM stiffness via the activation of both SMAD-dependent and SMAD-independent signaling cascades. The molecular mechanisms underlying the continuous activation of TGF-β signaling in RDEB lie in a complex series of events, mainly driven by COL7 loss, that determine the enzymatic or mechanical release of latent TGF-β1 from ECM-bound complexes. Notably, the matricellular proteins decorin (DCN) [43,46] and thrombospondin 1 (TSP1) are emerging as important regulators of TGF-β1 activity in RDEB patients and could represent relevant targets for innovative therapeutic approaches to counteract fibrosis progression [47]. DCN is an interstitial proteoglycan characterized by multiple binding partners and multifaceted activities in the context of the ECM [48]. In particular, it plays a key anti-fibrotic role through the blockade of TGF-β1 bioavailability and activation by direct sequestration and indirect mechanisms. DCN expression levels are reduced in RDEB patients and COL7 hypomorphic mice (RDEB mice), negatively correlate with disease severity, and strongly affect disease manifestations in RDEB mice (e.g., survival and development of mitten deformities) [43,46]. In addition, numerous studies demonstrate that DCN regulates a variety of cancer-related processes in a context-dependent fashion, playing a dual role as pro- and anti-tumorigenic factor. In the tumor microenvironment, DCN is a potent anti-angiogenic molecule, and its levels are reduced in the stroma of many solid malignancies [48], including RDEB-associated SCC [49]. In contrast to DCN, the glycoprotein TSP1 is a TGF-β1 activator up-regulated in fibroblasts from non-tumoral and tumoral stroma of RDEB patients in response to COL7 deficiency [47].
A growing body of evidence [50] supports the concept that fibrosis plays a crucial role in SCC development in RDEB patients by creating a permissive tumor microenvironment. Indeed, injury-driven fibrosis and inflammation lead to RDEB fibroblast conversion into cells resembling carcinoma-associated fibroblasts (CAFs) (Figure 2). CAFs represent a heterogeneous cell type similar to myofibroblasts, but able to promote the development of cancer through the production of a set of cytokines, chemokines, signaling molecules and ECM proteins sustaining tumor cell growth and migration [14,51]. Besides their role in wound healing, typical markers of activated fibroblasts/myofibroblasts, such as α-SMA, ED-A fibronectin and TNC, possess a pro-tumorigenic function and their expression levels can be used as prognostic factors in several cancer types [14].

Figure 2. Left panel. Basal keratinocytes firmly adhere to the basement membrane zone (BMZ) through hemidesmosome protein components (blue ovals) and are also the main producers of laminin-332, an essential component of epithelial BMZs, and of type VII collagen (COL7), which assembles into anchoring fibrils (AFs). AFs extend from the lower part of the BMZ into the upper dermis (papillary dermis), ensuring dermal-epidermal cohesion. Middle panel. In RDEB patients, COL7 deficiency impairs anchoring fibril formation and leads to skin fragility and blistering (red asterisk) after minor traumas. At sites of chronic blistering, the dermis is enriched in inflammatory cells (neutrophils, macrophages and T-cells) and myofibroblasts: both cell types produce high amounts of transforming growth factor (TGF)-β1, a master regulator of fibrosis, in an unremitting and self-renewing cycle. In addition, myofibroblasts abundantly produce extracellular matrix components, contributing to dermal stiffening. Chronic wounds (black asterisks) also show high levels of bacterial colonization that contribute to exacerbating inflammation. In the dermis of RDEB patients, the derailed production of cytokines, growth factors and ECM members creates a permissive environment for keratinocyte transformation. Right panel. RDEB-SCC microenvironment. Stromal inflammation and fibrosis represent the scaffold for tumor development and progression. Cells with features of cancer-associated fibroblasts (CAF-like cells) populate the tumor stroma and contribute to tumor growth. Keratinocytes undergo epithelial-mesenchymal transition (EMT) and convert to carcinoma cells. The SCC microenvironment is characterized by extensive inflammation and fibrosis.
A seminal study published by Ng and coll. in 2012 analyzed gene expression in dermal fibroblasts from healthy subjects, RDEB patients and CAFs from RDEB-SCC and UV-SCC tumor matrices. The mRNA profiling showed that in all disease conditions, genes involved in ECM and cell adhesion are the most deregulated. In addition, fibroblasts from non-tumor RDEB are characterized by a transcriptome profile similar to that of CAFs from UV-SCC rather than normal fibroblasts, suggestive of a stromal-driven predisposition to SCC development in RDEB patients. On the other hand, gene expression analysis failed to identify a signature of deregulated genes in CAFs from RDEB patients with SCC, but revealed a stepwise progression in gene dysregulation magnitude from healthy fibroblasts to RDEB-SCC CAFs [49]. Proteomic studies confirmed that loss of COL7 in RDEB fibroblasts alters the extracellular proteome and its post-translational modification status [52].
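For readers unfamiliar with how such expression-profiling comparisons work, the sketch below runs a per-gene two-group test with Benjamini-Hochberg false-discovery-rate control, the generic core of a differential-expression screen. The matrices are random placeholders, not the published datasets, and real pipelines (e.g., limma or DESeq2) add normalization and moderated statistics.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_genes = 500
healthy = rng.normal(0, 1, size=(n_genes, 6))   # 6 hypothetical healthy samples
rdeb    = rng.normal(0, 1, size=(n_genes, 6))   # 6 hypothetical RDEB samples
rdeb[:25] += 2.0                                # spike in 25 "deregulated" genes

pvals = stats.ttest_ind(healthy, rdeb, axis=1).pvalue   # one p-value per gene

# Benjamini-Hochberg step-up: reject the k smallest p-values, where k is the
# largest index with p_(k) <= (k/m) * alpha.
order = np.argsort(pvals)
m, alpha = len(pvals), 0.05
passed = pvals[order] <= (np.arange(1, m + 1) / m) * alpha
k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
significant = order[:k]
print(f"{len(significant)} genes pass FDR < {alpha}")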
In addition, RNA-seq analysis showed that primary skin fibroblasts from patients affected with three cancer-prone genodermatoses (i.e., KS, RDEB and xeroderma pigmentosum complementation group C, XPC) share a similar transcriptional profile despite the unrelated primary genetic defects [53]. The deregulated genes drive the acquisition of an activated and synthetic fibroblast phenotype resulting in fibrosis and related complications, such as tumor growth [53].
The key role of dermal changes in RDEB-SCC pathogenesis has also been highlighted by a proteomic study which evaluated the biological processes commonly deregulated in two high-risk tumors, i.e., RDEB-SCC and metastasizing UV-SCC, as compared to low-risk (non-recurrent and non-metastatic) UV-SCC. Quantitative mass spectrometry analysis showed that in RDEB-SCC and metastasizing UV-SCC the proteomic profile is enriched in proteins related to bacterial challenge and ECM remodeling, in accordance with their invasive and metastatic abilities [54]. Specifically, in RDEB-SCC the Gene Ontology (GO) analysis of the differentially expressed proteins showed: (i) the enrichment of terms related to tissue inflammation and humoral immunity, two known drivers of tumor transformation [55,56], and (ii) the increased expression of stromal proteins, such as collagens I, XII and XIV and lumican. Notably, a work by Thriene and collaborators investigated the transcriptomic and proteomic profiles of primary keratinocytes from RDEB patients, filling a gap in the literature, in which studies of the fibroblast contribution to RDEB-related fibrotic processes had substantially underestimated the "epithelial" role [57]. Here too, alterations in the expression levels of specific ECM components (e.g., the genes encoding laminin-332 and LTBP1, respectively down- and up-regulated) appear to be relevant to disease progression. Despite heterogeneity due to inter-individual variability, the ECM produced by RDEB keratinocytes includes a cluster of up-regulated proteins related to inflammation and response to wounding, and a cluster of down-regulated proteins made of COL7 interactors [57].
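The GO-term enrichment just mentioned boils down to a hypergeometric (one-sided Fisher) test: is a term over-represented among the differentially expressed proteins relative to the background? A minimal sketch follows; all counts are hypothetical.

from scipy.stats import hypergeom

M = 20000   # background: all annotated genes/proteins
n = 150     # genes annotated to the GO term (e.g., "response to wounding")
N = 400     # differentially expressed proteins in the experiment
k = 18      # overlap: DE proteins annotated to the term

# P(X >= k): probability of observing at least k annotated hits by chance.
p_enrich = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value = {p_enrich:.3g}")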
Intracellular Signaling
Notwithstanding the ever-growing number of studies, very little is known about the molecular mechanisms responsible for the development of SCC in RDEB patients and for its unusually aggressive course. As summarized above, the genetic landscape of RDEB-SCC is similar to that found in the less aggressive UV-SCC in the general population, and mutations in the COL7A1 gene represent the only major difference between these two tumor types. For this reason, the absence of COL7 in RDEB patients is considered one of the main molecular determinants for disentangling RDEB-SCC behavior. Several in vitro and in vivo studies have contributed to unveiling how COL7 loss in RDEB keratinocytes, fibroblasts and extracutaneous tissues leads to a complex series of molecular events determining the progressive cancerization of the skin microenvironment via the activation of typical pro-tumorigenic processes, such as inflammation, angiogenesis and tumor cell invasion (Figure 3).
In detail, COL7 loss in non-RDEB SCC keratinocytes enhances migration and invasion, impairs epithelial differentiation, and promotes epithelial-mesenchymal transition (EMT) and vascularization through different mechanisms [58,59], including the activation of TGF-β1 signaling, a known regulator of tumorigenic processes [60]. Indeed, lack of COL7 in non-RDEB-SCC cells (SCC-IC1 cell line) xenografted onto SCID mice determines an increased expression of the active form of TGF-β1, its receptor TβRI, and its downstream targets. Mittapalli and collaborators [61] highlighted the role of COL7 deficiency in carcinogenesis by demonstrating that RDEB mice treated with DMBA/TPA develop skin lesions highly reminiscent of invasive RDEB-SCCs, while wild-type littermates show non-invasive, benign papillomas. The skin of tumor-primed RDEB mice is typified by stiffness and activation of several pro-tumorigenic pathways [61]. COL7 deficiency in RDEB-SCCs impairs front-to-rear keratinocyte polarity in 2D cultures and 3D spheroids through the deregulation of SLCO1B3, a gene encoding the membrane-bound organic anion transporting polypeptide OATP1B3, and of other members of adhesion complexes [58,62]. In addition, OATP1B3 levels in RDEB-SCCs negatively correlate with COL7 abundance. Of the two OATP1B3 transcripts, the cancer-related isoform (cancer-type, Ct) is up-regulated in a variety of cancers, where it modulates the clinical phenotype. Ct-OATP1B3 is overexpressed in RDEB-SCC keratinocytes as compared to non-tumoral RDEB and healthy cells. Recently, Ct-OATP1B3 mRNA has been found in extracellular vesicles (EVs) released into the culture medium by RDEB-SCC cells and in the bloodstream of tumor-bearing immunodeficient mice upon injection with human RDEB-SCC cells. These findings draw attention to the role of SCC-derived EVs and their molecular cargo in RDEB-SCC pathogenesis, and to their potential use as diagnostic and prognostic factors for the disease [63].
Recently, it has been reported that COL7 deficiency perturbs keratinocyte lysosomal activity and determines the accumulation of S100A8 and S100A9, two markers of acute and chronic inflammation, in the pathological ECM of RDEB patients. Lack of COL7 also increases the levels of cathepsins B and Z, two lysosomal proteases, and boosts lysosome activity and autophagic flux, all events potentially able to weaken the ECM at the BMZ [57]. In contrast, Kuttner and coll. showed that the reduction of transglutaminase 2 (TGM2) lessens autophagic flux in primary fibroblasts from RDEB patients, and correlated this event with enhanced fibrogenesis [64]. In both physiological and pathological states, autophagy performs a multitude of functions, even opposite ones, depending on the cell context. Defective autophagy predisposes cells to different diseases, such as malignant transformation [65] and fibrosis of the skin and other organs [66]. Taken together, these findings indicate a role of lysosomes and autophagy in RDEB pathogenesis and foster further investigations in the field of RDEB-associated SCC.
Finally, microRNAs (miRNAs), a class of small non-coding RNAs with pleiotropic functions, are emerging as novel players in RDEB fibrosis [42,67] and as potential regulators of tumor stroma cancerization. Deregulation of miRNA expression and activity has been demonstrated to play a significant role in a variety of human diseases, including fibrotic skin disorders and SCC [68,69], but their involvement in RDEB and its complications is almost unexplored. Recently, we demonstrated the pro-fibrotic role of miR-145-5p in primary skin fibroblasts from RDEB patients (RDEBFs). In RDEBFs, miR-145-5p is up-regulated as compared to cells from healthy subjects, and its inhibition reduces typical fibrotic behaviors (i.e., contractile force, proliferation and migration) and fibrotic markers through direct and indirect modulation of multiple, partially overlapping signaling cascades, such as the NOTCH pathway [42].
Inflammation
An increasing amount of experimental evidence, obtained in animal disease models and RDEB patients, pinpoints the relevance of inflammatory processes in EB pathomechanisms and disease complications. An imbalance in cytokine levels has been described in vitro in cells derived from EB animal models and patients, as well as in vivo, and suggests that inflammation deeply alters the dermal microenvironment while, at the same time, contributing to worsening systemic disease manifestations. In detail, COL7 hypomorphic mice show an unremitting inflammatory state in the upper dermis [44], and col7−/− mice display increased serum concentrations of the pro-inflammatory cytokine interleukin (IL)-6 [70]. Moving to RDEB patients, a study by Esposito and coll. demonstrated that circulating levels of antibodies against skin proteins and of pro-inflammatory cytokines, in particular IL-6, are significantly higher than in healthy controls, and correlate with disease severity [71]. Of note, IL-6 levels were significantly associated with EB extension (localized or generalized disease), disease severity and anti-skin antibody levels [71]. Accordingly, comparative analysis of primary fibroblasts from a pair of RDEB monozygotic twins with different disease manifestations showed increased levels of IL-6 in the conditioned medium of fibroblasts derived from the individual with the more severe phenotype [43]. IL-6 has been implicated in the pathogenesis of systemic scleroderma (SSc), an autoimmune disease leading to fibrosis of the skin and internal organs, and of fibrosis in two mouse models, the bleomycin model (BLM) and the tight-skin mouse (Tsk-1). IL-6 signals through STAT3, a transcription factor up-regulated in SSc patients and BLM/Tsk-1 mice and able to control fibroblast-to-myofibroblast conversion, in cooperation with TGF-β [72].
Beyond its well-known role in fibrosis, IL-6 mediates cross-talk between CAFs and tumor cells [73], and represents a key player in the growth and metastatic evolution of several epithelial tumors, such as head and neck SCC and esophageal SCC [74][75][76]. Interestingly, preliminary findings show that signal transducer and activator of transcription 3 (STAT3), a downstream effector of IL-6, is constitutively activated in untransformed RDEB-derived keratinocytes and in RDEB-derived SCC, both in basal conditions and after stimulation with TGF-β [70]. The hyper-activation of the IL-6 signaling cascade could at least partly explain the increased risk of RDEB patients of developing aggressive SCCs and, in turn, represents a novel prognostic and therapeutic target in RDEB-SCC [72,77].
High mobility group box 1 (HMGB1) is a non-histone DNA-binding protein with multifaceted, context-dependent functions. Secreted HMGB1 exerts a cytokine-like function, regulating the inflammatory state and immunity through different modalities. HMGB1 also has a tumor-promoting role and has been found at high levels in different types of cancer, including SCC [78][79][80]. Serum levels of HMGB1 are elevated in RDEB patients and correlate with disease severity [81]. In RDEB, circulating HMGB1 is likely released by COL7-deficient keratinocytes in response to skin injury to recruit bone marrow cells to skin lesional sites and promote epithelial regeneration [82]. In accordance with the elevated serum levels and the pro-inflammatory function of its extranuclear form, cytosolic HMGB1 is strongly up-regulated in RDEB lesional skin, and even more so in RDEB-SCC, as compared to control skin [83]. Finally, the observation that TLR5, the leukocyte receptor for flagellin, induces HMGB1 in a mouse model of wound-induced cancer introduces a recently identified topic in RDEB-SCC development: microbial infection of the wound [83].
Microbial Infection
Experimental evidence suggests a relation between inflammatory processes, microbial infection and SCC development [83]. A typical feature of skin and mucosal lesions in RDEB patients is their colonization with Staphylococcus aureus and one or more additional commensal bacteria [84,85]. The presence of large chronic wounds favors bacterial colonization, but cannot be the only cause of the high bacterial burden in RDEB patients; additional factors, such as an impaired immune response, must be involved in the aberrant wound microbial colonization. Indeed, non-RDEB patients with large, severe burn wounds resembling those of RDEB patients display a considerably lower infection rate by Staphylococcus aureus than RDEB patients [85,86]. In addition, RDEB mice show elevated Staphylococcus aureus colonization of unwounded skin and an increased, though ineffective, antimicrobial response as compared to wild-type mice [84]. Nyström and coll. recently demonstrated that the increased susceptibility to bacterial colonization in RDEB wounds is independent of skin integrity, and instead results from the absence of COL7 in the ECM of lymphoid conduits of the spleen and lymph nodes. In lymphoid conduits, COL7 binds and sequesters cochlin (COCH), a modulator of innate immunity. In response to bacterial infection, COCH is processed by aggrecanase to release the circulating LCCL domain, which activates macrophages and neutrophils and stimulates bacterial clearance at infection sites. Thus, COL7 loss in the lymphoid ECM of RDEB patients impairs COCH localization and determines a reduction of the LCCL domain, an event that results in an increased bacterial burden [84] (Figure 3).
In contrast, human papillomavirus (HPV) infections, which represent a well-known risk factor for mucosal and cutaneous SCC development in the normal population, do not seem related to SCC onset in RDEB patients [87].
Immunity
The relation between immunity and cancer is well-established. Cancer cells express an abnormal set of proteins, or abnormal levels of normal cellular proteins, that can function as tumor antigens and can be detected and eliminated by immunosurveillance. At the same time, some cancer cells typified by low immunogenicity can escape immunosurveillance and, over a more or less prolonged period, are shaped by their intrinsic genetic instability and by dynamic interactions with immune cells before proliferating and creating a permissive tumor microenvironment. Given the role of immunity in selecting and "sculpting" tumor cells, this process is defined as immunoediting. Several studies have described the role of the immune system in cutaneous SCC development [88], but knowledge about the role of immunity in RDEB-SCC is scanty and mainly indirect. Within this fragmented scenario, Riihilä and coll. recently described the up-regulation of the complement system members C1r and C1s in non-RDEB-SCC and RDEB-SCC compared to normal skin, in situ SCCs and actinic keratoses, and demonstrated their role in cell viability, apoptosis resistance and migration [89]. Of note, the augmented C1s staining in RDEB tumor cells could correlate with the activation of the phosphoinositide 3-kinase (PI3K) and mitogen-activated protein kinase (MAPK) pathways, and contribute to the elevated migratory and metastatic abilities of cancerous cells in RDEB patients [89].

Figure 3 legend. Red up arrows indicate increase/up-regulation; green down arrows indicate decrease/down-regulation [42,44,49,52,[57][58][59][61][62][63][64],67,84]. Abbreviations: ECM, extracellular matrix; ELMO2, engulfment and cell motility 2; FAK, focal adhesion kinase; ITGA6, integrin subunit alpha 6; LM332, laminin-332; LOX, lysyl oxidase; LTBP1, latent-transforming growth factor beta-binding protein 1; MMP2, metalloproteinase 2; OATP1B3, organic anion transporting polypeptide 1B3; PAR3, partitioning defective 3; PI3K, phosphoinositide 3-kinase; PLC-β4, phospholipase C-β4; RDEB, recessive dystrophic epidermolysis bullosa; SCC, cutaneous squamous cell carcinoma; RDEB-SCC, RDEB-related SCC; TNC, tenascin-C; TβR1, transforming growth factor β receptor 1; TGF-β1, transforming growth factor-β1; TGM2, transglutaminase 2; VEGF, vascular endothelial growth factor.
Current SCC Therapies
According to current best clinical practice guidelines for EB-SCC treatment, strict follow-up of skin wounds and scars, biopsies of clinically suspicious lesions, and wide local surgical excision of tumors represent the standard of care in EB patients [26]. On the other hand, there is no evidence that radiotherapy and conventional chemotherapy are definitively effective, and their side effects may outweigh the benefits in these fragile patients. Thus, they are only recommended as palliative modalities for locally advanced inoperable and metastatic EB-SCC [26]. In addition, transitory progression-free disease has been reported in very few RDEB patients with locally advanced or metastasized SCCs treated with cetuximab, a monoclonal antibody against the epidermal growth factor receptor (EGFR) approved in Europe and the USA as an adjuvant treatment for locally advanced and metastasized head and neck SCCs [90][91][92][93]. On the other hand, the complexity of oncologic therapies in RDEB-SCC is well illustrated by the ineffectiveness, and even the paradoxical effect on the development of new SCCs, of immunotherapy with the anti-PD-1 molecule pembrolizumab, which instead is FDA-approved for advanced progressing head and neck SCC [93]. Overall, current figures, which report a mean survival time after the first SCC of 4 years in RDEB-SG, clearly show the urgent need for novel, more effective therapeutic approaches for SCCs in RDEB patients [94].
Therapeutic Perspectives
The clinical and experimental evidence concerning the pivotal role of the injury- and inflammation-driven fibrotic process in severe EB complications has focused the efforts of the scientific and clinical EB community on anti-fibrotic and anti-inflammatory therapeutic strategies able to lessen disease manifestations (symptom-relief therapies). Of course, the most severe, cancer-prone EB subtypes (RDEB and JEB) represent the main targets of experimental drugs aimed at counteracting disease symptoms and, in turn, at delaying SCC onset.
(1) Symptom-relief therapies

As for RDEB, the current anti-fibrotic treatment approaches mainly converge on the attenuation of the TGF-β signaling cascade, and the majority are under investigation at the preclinical level [45,46,95]. On the other hand, losartan, a repurposed drug already approved for the treatment of hypertension, represents the most advanced investigational molecule for the treatment of RDEB fibrosis and has entered a phase I/II clinical trial [45] (Reflect study, EudraCT no. 2015-003670-32). Though losartan's anti-fibrotic properties have been demonstrated in primary fibroblasts from RDEB patients and in RDEB mice [45], it is too early to draw conclusions about its efficacy and safety in fragile RDEB patients. However, regardless of its outcome, the "losartan experience" focuses our attention on the relevance of drug repositioning as a fast, low-risk, and efficient approach to finding and proposing novel therapeutic options for rare diseases, such as EB, with an urgent need for therapies [96].
(2) Curative therapies

Alongside the above symptom-relief therapies, curative interventions based on molecular (gene- and protein-therapy) and cellular approaches have been developed.
(i) Gene therapies

Gene therapies aim at replacing or correcting disease-causing gene mutations in ex vivo patient cells, including induced pluripotent stem cells (iPSCs) [97], fibroblasts [98,99], and keratinocytes with high growth potential, termed holoclones. Different strategies, ranging from retroviral-mediated gene transfer to genome editing (e.g., TALENs and CRISPR/Cas9 systems) [100][101][102][103][104][105], can be used for gene correction in patient cells. In the approaches already in trials, primary keratinocytes from patients are transduced in vitro with retroviral vectors encoding a normal protein, expanded in the laboratory, and transplanted as gene-corrected epidermal sheets (i.e., autologous cultured epidermal grafts) in patients [102][103][104][105]. These therapeutic interventions have entered clinical trials for RDEB (ClinicalTrials.gov Identifier: NCT01263379 and NCT02984085) [103,104] and JEB (ClinicalTrials.gov Identifier: NCT02984085), with exciting results in the latter [105]. However, viral-based strategies for gene correction encompass experimental challenges, e.g., the methodology for gene delivery, editing at off-target sites and the duration of the therapeutic effects [102][103][104][105], as well as safety issues such as the risk of malignancies due to adverse mutagenic events.
(ii) Cell therapies

Cell-based therapies aim at restoring dermal-epidermal adhesion mainly through intradermal/intravenous injections of the following healthy allogeneic cell types: (i) fibroblasts, (ii) mesenchymal stromal stem cells (MSCs), and (iii) bone marrow (BM)-derived stem cells. These cells are able to localize in the skin and correct the disease-specific biochemical defect by producing COL7. Intradermal injection of wild-type fibroblasts and MSCs in RDEB mice increases COL7 along the cutaneous BMZ and improves skin integrity and resistance to mechanical forces with minimal adverse effects [106,107]. Similarly, preclinical studies showed that BM cells infused in the RDEB mouse model migrate to sites of injury and contribute to lessening disease manifestations [108,109]. As for other cells with stem functionality, human umbilical cord blood-derived unrestricted somatic stem cells (USSCs) have shown anti-fibrotic effects and amelioration of the disease phenotype in col7a1−/− mice through the up-regulation of two relevant TGF-β1 antagonists: DCN and TGF-β3 [110][111][112]. However, pilot cell therapy trials in RDEB patients revealed modest to absent clinical efficacy and improvement in patients' quality of life, low tolerability, or severe side effects [113][114][115][116][117].
(iii) Protein- and RNA-based therapies

Protein-based therapeutics (PBTs) rest on the immediate advantage of administering to the patient, without cell- or vector-based delivery, the correct form of the defective protein. The major problem of PBTs is their immunogenicity: the tendency to generate an immune response against the exogenous proteins, with loss of effectiveness and potential systemic complications. As for RDEB, recombinant type VII procollagen (PCOL7) has been successfully injected intradermally/intravenously into col7a1−/− mice [118]. PBTs in RDEB patients are challenging, due to the high amount of clinical-grade recombinant protein needed for lifelong administration and its potential immunogenicity [119]. However, some interesting answers are expected from PTR-01, a human recombinant COL7 for the treatment of RDEB that recently entered a phase I/II clinical trial (ClinicalTrials.gov Identifier: NCT03752905).
RNA-based therapies (RBTs) can be applied to RDEB patients bearing COL7A1 mutations in specific in-frame exons, whose deletion does not lead to major structural changes at the protein level. In vitro and in vivo preclinical studies demonstrated that mutated exons can be skipped/deleted by antisense oligonucleotides (AONs), leading to the synthesis of a COL7 protein similar to the wild-type but lacking the defective region [120,121]. A phase I/II multicenter clinical trial is assessing the safety and effects of the topical administration of QR-313, an AON determining the exclusion (skipping) of exon 73 from the COL7A1 mRNA, in DEB patients bearing at least one pathogenic mutation in exon 73 (ClinicalTrials.gov Identifier: NCT03605069).
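The requirement that a skipped exon be "in-frame" reduces to a simple arithmetic check: removing the exon preserves the downstream reading frame only if its length is a multiple of 3. The sketch below illustrates this; the exon names and lengths are hypothetical, not COL7A1 annotations.

def skippable_in_frame(exon_length_nt):
    """True if removing the exon preserves the downstream reading frame."""
    return exon_length_nt % 3 == 0

# Hypothetical exon lengths in nucleotides.
for exon, length in {"exon A": 201, "exon B": 118, "exon C": 36}.items():
    status = "in-frame (skippable)" if skippable_in_frame(length) else "out-of-frame"
    print(f"{exon}: {length} nt -> {status}")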
(iv) PTC read-through strategy

Another curative approach for RDEB and JEB patients bearing nonsense mutations consists of forcing read-through of premature termination codons (PTCs) by the topical or systemic administration of molecules with nonsense-mutation suppression activity, such as aminoglycoside antibiotics (e.g., gentamicin B1) [122] or specific anti-inflammatory drugs (e.g., amlexanox) [123]. Of note, gentamicin B1 treatment of RDEB patients has been investigated at the clinical level, with encouraging results (ClinicalTrials.gov Identifier: NCT03012191). Antibiotic toxicity and immunogenicity of the newly formed COL7 are the main potential drawbacks of this type of intervention.
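A premature termination codon is simply an in-frame stop triplet upstream of the natural terminator; the sketch below scans a coding sequence for one. The sequence is a hypothetical toy example, not a COL7A1 transcript.

STOPS = {"TAA", "TAG", "TGA"}

def first_premature_stop(cds):
    """Return the 0-based codon index of the first in-frame stop before the
    final codon, or None if the only stop is the natural terminator."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    for idx, codon in enumerate(codons[:-1]):   # exclude the natural stop
        if codon in STOPS:
            return idx
    return None

cds = "ATGGCTTGAAAACCCGGGTAA"    # hypothetical CDS with a TGA PTC at codon 2
print(first_premature_stop(cds))  # -> 2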
(3) SCC-targeted therapies

The omics sciences and the strongly translational approach of the EB research field underpin an experimental ferment aimed at identifying novel deregulated pathways and therapeutic targets in RDEB-SCC. Among them, polo-like kinase-1 (PLK-1) is emerging as a promising candidate. PLK-1 is a member of the serine/threonine protein kinase family with important roles in the mitotic process, and its over-expression is a common feature in a great number of tumor types, including RDEB-derived SCC [124]. For these reasons, PLK-1 represents a well-established target for cancer therapy [125]. Recently, Atanasova and coll. demonstrated that rigosertib, a PLK-1 inhibitor, exerts a strong and selective pro-apoptotic effect on RDEB-derived SCC keratinocytes [126]. These experimental findings supported rigosertib's admission to a phase II clinical trial to evaluate its safety and efficacy in RDEB patients with locally advanced or metastatic SCC that is unresectable or unresponsive to standard care (ClinicalTrials.gov Identifier: NCT03786237).
Clinical Features
Junctional epidermolysis bullosa (JEB) is less common than DEB and EBS: its incidence has been estimated at just over 2 per million live births in the USA [127]. The two commonest JEB subtypes, JEB generalized intermediate (JEB-GI) and JEB generalized severe (JEB-GS), are recessively inherited and due to mutations in any of the three genes, LAMA3A, LAMB3 and LAMC2, encoding the three chains of the major epithelial laminin isoform, laminin-332 (LM332), or, for JEB-GI, to mutations in the COL17A1 gene encoding a structural component of hemidesmosomes, collagen XVII (COL17), also known as 180-kD bullous pemphigoid antigen (BP180). JEB-GS is early lethal, usually within the first 12 months of life, due to extensive skin and mucosal involvement leading to failure to thrive, upper airway obstruction and sepsis [128]. It is characterized by mutations resulting in a complete lack of LM332, which is essential for the adhesion of stratified and also simple epithelia. On the other hand, JEB-GI is compatible with life and associated with variably reduced LM332 amounts or with absent or reduced COL17, which is expressed in stratified epithelia. JEB-GI presents with phenotypes of variable severity as to the extent of skin and mucous membrane involvement. In adulthood, the development of chronic wounds which heal with atrophic scarring is typical. Data from small patient cohorts have suggested that adult JEB patients with defective LM332 are at increased risk of developing SCC starting from their third decade of life [129].
LM332 and COL17 in SCC in the General Population
LM332 is a multidomain glycoprotein and the major adhesion ligand of epithelial cells. In the skin, it is synthesized and assembled as a high-molecular-weight heterotrimeric precursor within the endoplasmic reticulum of basal keratinocytes. The LM332 heterotrimer is composed of the α3A, β3 and γ2 polypeptides, encoded by the LAMA3A, LAMB3 and LAMC2 genes, respectively. The precursor molecule is secreted and deposited into the ECM, where the α3 and γ2 chains undergo proteolytic maturation to smaller forms. C-terminal processing of the α3 chain can be mediated by different enzymes and consists of cleavage of the laminin globular (LG) domains 4 and 5 (LG45) within the linker region between LG3 and LG4 [130]. Outside the cell, LM332 simultaneously binds cell surface receptors and ECM components, such as integrins α6β4 and α3β1, syndecans-1 and -4, COL17 and COL7, exerting a critical role in skin integrity, as well as in multiple biological processes, including keratinocyte survival and migration [24]. Important functions have been assigned to the α3 chain and its processing, in both physiological and pathological conditions. The processed LM332 lacking LG45 (LM332-α3^165) is mainly found in mature BMZs, where it orchestrates the formation of anchoring structures through α3β1 and α6β4 interactions [130]. Increased synthesis and processing can be detected in chronic wounds in response to inflammation and infection [131]. In contrast, LM332 with unprocessed LG45 (LM332-α3^200) is detectable in migratory/remodeling situations, such as wound repair [132], and in SCCs from the general population [133].
Importantly, lack of LM332 halts SCC tumorigenesis of HRAS/IκBα-transformed human epidermis grafted onto immunodeficient mice, while restoration of its expression in the same model rescues SCC tumorigenesis [134]. In this process, LM332 interactions with its ECM ligand COL7 and the cell receptor integrin α6β4 are crucial for tumor invasion via activation of PI3K/AKT pro-tumorigenic signaling [135,136]. Subsequently, in vitro and in vivo studies revealed that the LG45 subdomain of LM332-α3-200 promotes invasion of transformed human keratinocytes by activating the matrix metalloproteases MMP-9 and MMP-1, and triggers the PI3K and ERK pathways [133,137]. Interestingly, targeting LG45 with a specific antibody counteracts tumorigenesis in vivo [133].
The role of LM332 in cancer is also illustrated by its ability to promote CAF differentiation and maintenance [138], as well as tumor spreading, as shown by the presence of specific LM332 chains, mainly γ2, at the leading edge of invading carcinomas and their relationship with tumor invasiveness and patient prognosis [139][140][141]. However, it remains to be clarified whether the increased staining of specific LM332 chains in cancer specimens reflects a disease-specific mechanism of synthesis and processing.
Interestingly, COL17 is also enhanced in carcinogenesis similarly to its ligand LM332 [142]. Increased expression and shedding of its ectodomain from the cell surface have been observed at the tumor-stroma interface during SCC invasion and metastasis, while shedding inhibition prevents SCC progression [142].
LM332 and COL17 in SCC in JEB Patients
Since the expression of LM332 and COL17 positively correlates with tumorigenesis of non-EB SCC, the role of LM332 and COL17 in JEB-related SCC tumorigenesis is not easily interpretable: in JEB-GI patients with COL17A1 mutations COL17 is often absent, and in JEB-GI patients with mutations in either the LAMB3, LAMC2 or LAMA3 gene LM332 expression is reduced. Nevertheless, data from case reports and case series indicate that adult JEB patients have an increased risk (1:4) of developing SCC starting from their third decade of life [18,129,143]. Reported cases more frequently harbor mutations in genes encoding LM332 chain subunits, more rarely in COL17A1. The first SCC develops at a younger age compared to non-EB individuals [18]. SCCs can be multiple, histologically well or moderately differentiated, and can have an aggressive course. Notably, they almost exclusively arise on the lower extremities, mostly in the pretibial region and within areas of chronic blistering, long-standing erosions/ulcers, or atrophic scarring [129]. This suggests that in JEB, as in RDEB, chronic wounds induced by repeated mechanical traumas lead to tissue inflammation, subsequent ECM remodeling/dermal fibrosis and skin microenvironment alterations fueling SCC development and recurrence (see above) [54]. However, research in these fields, at least with regard to JEB, is almost lacking.
The pathogenesis of SCCs might also be related to the induction of cell migration and/or increased integrin-mediated signaling consequent to reduced levels and altered functions of LM332. Indeed, the amount of deposited LM332 inversely correlates with the rate of keratinocyte migration [144]. Notably, a reduction of LM332 is detected in SCCs developed in RDEB individuals [57]. Lack of COL17 also enhances both the keratinocyte propensity to migrate and PI3K signaling [145,146]. Primary keratinocytes from a JEB-GI patient with a naturally occurring mutation that truncates the LG45 subdomain show increased migration in vitro [143]. In this patient, the secreted and deposited mutant LM332 from skin and keratinocytes is reduced by about 50%. Interestingly, this individual developed an extensive number of keratoacanthomas and well-differentiated, locally invasive SCCs, which did not metastasize over 20 years. Thus, the maintenance of a sufficient amount of protein (≈50% or more), together with its qualitative defects, might allow intrinsic pro-tumorigenic properties of LM332 to be conveyed, promoting SCC progression and recurrence. This case study, however, indicates that LM332 with truncated LG45 promotes, rather than inhibits, cell migration. Overall, these data clearly show the need for further investigation of the effects on cell signaling of LM332 mutations associated with SCC tumorigenesis in humans.
Clinical Features
Kindler Syndrome (KS) is the rarest EB type, with a few hundred patients described worldwide. It is caused by biallelic mutations in the FERMT1 gene, which encodes kindlin-1, a cytoplasmic component of focal adhesions involved in integrin signaling and in linkage of the actin cytoskeleton to the ECM [147]. The majority of FERMT1 mutations lead to premature termination of translation and to loss of the kindlin-1 protein [147]. In addition to skin fragility, the hallmark of the disease is photosensitivity, not present in other EB types. With advancing age, KS patients show an improvement in skin blistering, but develop progressive and generalized skin atrophy and, at photoexposed areas (face and neck), a mixture of atrophy, dyspigmentation and telangiectasia known as poikiloderma, as well as hand and foot pseudosyndactyly [147].
Several case reports and a case series indicate that KS patients in adulthood have an increased susceptibility to SCC development [147][148][149]. Recently, Guerrero-Aspizua and colleagues analyzed a cohort of 91 KS patients, 69 previously published [147,149] and 22 unpublished cases, in order to evaluate the incidence of SCC in KS at different ages [150]. Of the patients, 14.3% (13 out of 91) developed one or more well-differentiated SCCs, for a total of 26 SCCs (25 in the skin and 1 in the oral mucosa). The cumulative risk of developing at least one SCC for patients with KS increases with age and reaches 66.7% by age 60. Seven of the 13 KS patients with SCC presented metastases. Similar to other EB-related SCCs, KS-SCCs are aggressive and represent the cause of death in 38.5% of patients [150].
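To make the cohort figures above concrete, the following short Python sketch recomputes them; the absolute number of deaths (5) is inferred from the reported 38.5% of 13 SCC patients and is therefore an assumption.

```python
# Recomputing the quoted KS cohort figures (Guerrero-Aspizua et al. [150]).
# deaths = 5 is inferred from "38.5% of patients" (0.385 * 13 ~= 5): an assumption.
patients, with_scc, metastatic, deaths = 91, 13, 7, 5

print(f"SCC incidence: {with_scc / patients:.1%}")                 # -> 14.3% (13/91)
print(f"Metastatic among SCC cases: {metastatic / with_scc:.1%}")  # -> 53.8% (7/13)
print(f"SCC as cause of death: {deaths / with_scc:.1%}")           # -> 38.5% (5/13)
```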
Pathways Involved in KS-Related SCC Development
In KS, the molecular mechanisms underlying SCC development are peculiar, since KS is the only EB type in which UV-induced photodamage could contribute to tumor onset. Emmert and colleagues [151] demonstrated that loss of kindlin-1 in SCC cells from a mouse model determines an unbalanced endogenous oxidative state, as shown by the reduced glutathione/glutathione disulphide (GSH/GSSG) ratio and by increased levels of reactive oxygen species (ROS) as compared to wild-type, kindlin-1-expressing SCC cells. Absence of kindlin-1 sensitizes keratinocytes to oxidative stress- and UV-induced damage, determining an impaired activation of the ERK pathway. In addition, preliminary findings show that, in primary human keratinocytes, kindlin-1 deficiency leads to cyclin-dependent kinase-1 (CDK-1) inhibition and DNA damage in response to oxidative stress [152]. In the context of cancer, ROS have been reported to have both pro- and anti-survival functions, but the possible relation between SCC onset in KS patients and ROS-induced mutagenesis following UV exposure remains to be established.
Keratinocytes from KS patients exhibit premature senescent features [153]. Senescent cells may modify the stromal microenvironment and influence the redox state of neighboring cells through paracrine signaling [154]. Notably, senescence associated with oxidative damage could represent a tumor-promoting mechanism in epithelial cells [155]. Recently, Michael and colleagues demonstrated that the absence of kindlin-1 in primary keratinocytes from KS patients is responsible for increased targeting of EGFR for lysosomal degradation. This process leads to a marked reduction in EGFR protein levels, its mislocalization, and an impaired response to EGF stimulation, as shown by the decreased phosphorylation of EGFR and its downstream target ERK1/2 [156]. In keratinocytes, attenuation of the EGFR signaling cascade has implications for multiple biological processes, such as migration [156], immunity [157], and inflammation [158].
On the other hand, the pathomechanisms responsible for SCC development in KS patients could be recapitulated, at least in part, by fibrosis- and inflammation-driven alterations in the stromal microenvironment similar to those described in the other EB-derived SCCs. Indeed, in vitro studies revealed that KS keratinocytes express increased amounts of growth factors and pro-inflammatory cytokines, in particular IL-20 and IL-24, in response to stress agents such as UVB irradiation [159]. Soluble factors secreted by KS keratinocytes target dermal fibroblasts and activate them to express α-SMA and to produce high amounts of collagen I and tenascin [159]. Of note, this fibrotic and inflammatory background was confirmed in KS skin in vivo [159]. In addition, loss of kindlin-1 in a mouse model of KS promotes αvβ6 integrin-mediated TGF-β activation and inhibits Wnt-β-catenin signaling, enlarging different stem cell (SC) compartments and increasing SC proliferation [160].
Finally, preliminary findings on the molecular features and genetic profiles of 48 SCCs from patients affected with RDEB (n = 10), JEB (n = 1) and KS (n = 7) [161] show a common molecular signature in all SCC samples. EB-related SCCs were typified by up-regulation of EGFR and cytochrome c oxidase subunit II (COX2), a marker of inflammation, and by the expression of at least one immune checkpoint among CTLA-4, PD-1 and PD-L1. Mutational signatures were very similar between EB-SCCs and UV-SCCs. However, KS-SCCs showed a mutational burden and profiles distinct from those found in RDEB-SCCs [161]. Overall, these findings point to the existence of partly shared pathomechanisms in EB-SCC development, which could be relevant for the identification of common therapeutic targets.
Conclusions
Inherited EB is a group of rare and life-threatening skin blistering disorders for which no curative therapies are yet available. The most severe EB subtypes expose patients to highly disabling disease complications, including the development of aggressive cutaneous SCCs at lesional skin sites (EB-SCCs). In RDEB patients, SCCs are recurrent, metastasizing and therapy-resistant, and represent the first cause of death and reduced life expectancy in these fragile subjects. Their unique behavior and adverse outcome make RDEB-SCC the most investigated EB-related tumor, at the expense of JEB- and KS-SCC, which remain poorly explored both clinically and molecularly. As for RDEB, the last ten years of basic research and omics studies (e.g., genomics, transcriptomics and proteomics) in primary cells from patients, skin biopsies and mouse models revealed the key role of chronic tissue damage in creating a permissive tumor microenvironment and brought out a considerable number of molecules deregulated in RDEB-associated fibrosis and inflammation. However, despite the growing amount of data, knowledge of RDEB-SCC is often incomplete, as validation of results in tumor models is missing. Alongside the need for a better understanding of the genetic and molecular bases of all EB-SCCs, also with a view to obtaining efficient and patient-tailored therapies, resources and efforts should be directed toward the findings already obtained, planning long-term, multidisciplinary and translational SCC-focused studies.
In conclusion, we highlight the potentially relevant impact of the findings concerning (i) the mutagenic process driven by APOBEC family members in response to chronic tissue damage; (ii) the role of NOTCH1 mutations/NOTCH pathway in SCC development; (iii) the action of inflammatory mediators, in particular IL-6, in tumor progression and spreading; (iv) the impact of wound bacterial colonization and immunity in carcinogenesis; (v) the use of circulating molecules and extracellular vesicles as novel, minimally invasive diagnostic and prognostic factors of the disease.
Evolution of Flavone Synthase I from Parsley Flavanone 3β-Hydroxylase by Site-Directed Mutagenesis
Flavanone 3β-hydroxylase (FHT) and flavone synthase I (FNS I) are 2-oxoglutarate-dependent dioxygenases with 80% sequence identity, which catalyze distinct reactions in flavonoid biosynthesis. However, FNS I has been reported exclusively from a few Apiaceae species, whereas FHTs are more abundant. Domain-swapping experiments joining the N terminus of parsley (Petroselinum crispum) FHT with the C terminus of parsley FNS I and vice versa revealed that the C-terminal portion is not essential for FNS I activity. Sequence alignments identified 26 amino acid substitutions conserved in FHT versus FNS I genes. Homology modeling, based on the related anthocyanidin synthase structure, assigned seven of these amino acids (FHT/FNS I: M106T, I115T, V116I, I131F, D195E, V200I, L215V, and K216R) to the active site. Accordingly, FHT was modified by site-directed mutagenesis, creating mutants encoding from one to seven substitutions, which were expressed in yeast (Saccharomyces cerevisiae) for FNS I and FHT assays. The exchange I131F in combination with either M106T and D195E or L215V and K216R replacements was sufficient to confer some FNS I side activity. Introduction of all seven FNS I substitutions into the FHT sequence, however, caused a nearly complete change in enzyme activity from FHT to FNS I. Both FHT and FNS I were proposed to initially withdraw the β-face-configured hydrogen from carbon-3 of the naringenin substrate. Our results suggest that the 7-fold substitution affects the orientation of the substrate in the active-site pocket such that this is followed by syn-elimination of hydrogen from carbon-2 (FNS I reaction) rather than the rebound hydroxylation of carbon-3 (FHT reaction).
Flavones and flavonols are the predominant flavonoids found in tissues of Apiaceae species (Harborne, 1971; Harborne and Williams, 1972; Harborne and Baxter, 1999). Significant functions were ascribed to these metabolites for growth and propagation of plants, as well as for adaptation to ecological niches. Flavonoids have been shown to protect from UV radiation, provide pigmentation, mediate the plant's interaction with insects or microbes, and act as feeding deterrents and phytoalexins (Harborne and Williams, 2000; Martens and Mithöfer, 2005). Flavones (i.e. apigenin) are formed by direct 2,3-desaturation of natural flavanones such as (2S)-naringenin (Fig. 1). In Apiaceae, this reaction is catalyzed by a soluble Fe2+/2-oxoglutarate-dependent dioxygenase (2-ODD), flavone synthase I (FNS I), whereas FNS II, a cytochrome P450-dependent monooxygenase, was found in all other flavone-producing plants investigated so far. Flavonols are formed from flavanones by sequential hydroxylation of carbon-3 and 2,3-dehydration, involving flavanone 3β-hydroxylase (FHT) and flavonol synthase (FLS; Fig. 1), although in vitro FLS has the capability of catalyzing both steps. FHT and FNS I or II thus compete for flavanones as common substrates, and the product of FHT may be delivered to the anthocyanidin branch pathway instead of desaturation by FLS (Fig. 1). FHT also belongs to the superfamily of 2-ODDs and is closely related to FNS I. Both enzymes had been proposed to attack the flavanone substrate in identical fashion and to withdraw initially the β-configured hydrogen (trans to B-ring substitution) from carbon-3 (Fig. 1).
Considerable research has been dedicated to the mechanism of 2-ODD catalysis because of the relevance of these enzymes in the metabolism of microorganisms (antibiotics), plants (hormones, pigments), or mammals (connective tissue diseases, hypoxia-inducible factor). Two electrons are gained from the decarboxylation of 2-oxoglutarate and transferred to Fe(II) in the enzyme-active center, forming a highly reactive ferryl intermediate, which mobilizes molecular oxygen for hydroxylation, desaturation, epoxidation, ring closure, or expansion reactions (Prescott and Lloyd, 2000). Members of the 2-ODD superfamily do not always show close sequence identity, but rather appear to cluster in one of three groups of related enzymes or fall into a fourth group of unrelated sequences (Hogan et al., 2000; Prescott and Lloyd, 2000). FHT and FNS I were assigned to group I of this superfamily together with, for example, microbial isopenicillin N synthase (IPNS) and deacetoxycephalosporin C synthase (DAOCS), having in common an HXDX~55HX10RXS motif (Borovok et al., 1996; Prescott and Lloyd, 2000). These residues are particularly important for enzyme activity, as revealed by site-directed mutagenesis (Lukacin and Britsch, 1997; Myllyharju and Kivirikko, 1997). Furthermore, IPNS (Roach et al., 1995) and DAOCS (Valegard et al., 1998; Lloyd et al., 1999; Wilmouth et al., 2002), as well as anthocyanidin synthase (ANS) from Arabidopsis (Arabidopsis thaliana; Wilmouth et al., 2002) as another group I dioxygenase, were crystallized and revealed that these residues comprise the metallocenter and 2-oxoglutarate binding site. For detailed reviews of 2-ODDs, see Prescott and Lloyd (2000) and Clifton et al. (2006).
The evolution of FNS I in species of Apiaceae ascribes an essential role to flavones for the plant's existence and propagation. Moreover, the confinement of FNS I to one evolutionarily advanced plant family, as compared to the more abundant expression of FHTs, suggested that FNS I developed much later than FHT. The close relationship of the FNS I polypeptide with those of FHTs and alignments with other 2-ODD sequences cloned from Apiaceae thus led to the hypothesis of gene duplication and subsequent change of function (Gebhardt et al., 2005). Very few conserved differences became apparent on alignment of FNS I and FHT sequences from parsley (Petroselinum crispum), which were likely to determine divergent catalytic activity. Apiaceae presumably benefit from stable maintenance of the new FNS I gene, which leads to accumulation of flavones, and the selective advantage has precluded any further diversification of 2-ODD functionality toward the destructive turnover of flavones. However, experimental evidence for this hypothesis has been lacking. To identify the amino acid residues essential for FNS I activity, we constructed chimeras from fully functional parsley FHT and parsley FNS I and generated step-by-step mutants of the FHT. The chimeras functionally expressed in yeast (Saccharomyces cerevisiae) revealed the primary importance of the N-terminal enzyme portion for FNS I catalysis, and site-directed mutagenesis identified a minimal requirement of three amino acid substitutions to shift the FHT toward FNS I activity.
Cloning of 2-ODDs from Apiaceae and Sequence Alignment
In addition to the FHT and FNS I sequences already identified from various species of Apiaceae (Gebhardt et al., 2005), full-length FNS I from Aethusa cynapium (DQ683350), Angelica archangelica (DQ683352), and Cuminum cyminum (DQ683349), as well as FHT from A. cynapium (DQ683351), were cloned by PCR amplification and verified by functional expression. Alignments of all these translated polypeptides corroborated the previous finding of 27 amino acids differently conserved in FNS I versus FHT, albeit A. archangelica FNS I is exceptional with Val-312 instead of the otherwise common Ile (Fig. 2). Nevertheless, this conservative exchange is unlikely to bear functional consequences. Alignments of Apiaceae polypeptides with FHTs from other plant families recognized some of the exchanges conserved in FNS I also in these FHTs; these residues, however, cannot be disregarded for further study because they might negligibly affect FHT activity but essentially support FNS I activity. The most striking difference among Apiaceae enzymes is a C-terminal triplet of FHT (Gln-348/Glu-349 or Asp-349/Trp-350 or Ala-350 or Val-350) deleted in FNS I (Fig. 2). However, this triplet is not conserved in non-Apiaceae FHTs, presumably due to weak conservation of the entire C terminus, and insertion of the triplet in Daucus carota FNS I did not change the activity (data not shown). Provided that FNS I has evolved from FHT, the triplet that does not affect FHT or FNS I activity was likely deleted shortly after gene duplication. The remaining conserved differences between FNS I and FHT noted on the alignments are scattered over the entire sequence; chimeric proteins were therefore constructed by swapping about 40% of the C-terminal portion between parsley FHT and FNS I.
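As an illustration of how such conserved substitutions can be pulled out of an alignment, the sketch below scans alignment columns for positions where all FHT sequences share one residue and all FNS I sequences share a different one; the five-residue sequences are toy placeholders, not the real Apiaceae polypeptides.

```python
# Find alignment columns that are invariant within each group but differ
# between groups, reporting them in the paper's "I131F"-style notation.
def conserved_differences(group_a, group_b):
    hits = []
    for i, (col_a, col_b) in enumerate(zip(zip(*group_a), zip(*group_b)), 1):
        if len(set(col_a)) == 1 and len(set(col_b)) == 1 and col_a[0] != col_b[0]:
            hits.append(f"{col_a[0]}{i}{col_b[0]}")
    return hits

fht_seqs  = ["MIVDL", "MIVDL", "MIVEL"]    # hypothetical aligned FHT fragments
fnsi_seqs = ["TFVDL", "TFVDL", "TFVEL"]    # hypothetical aligned FNS I fragments
print(conserved_differences(fht_seqs, fnsi_seqs))   # -> ['M1T', 'I2F']
```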
Significance of the C Terminus
Previous studies concerning the relevance of the C-terminal portion of 2-ODDs on enzyme activity did not provide a coherent picture. Deletion of six amino acids from the C terminus of Aspergillus nidulans IPNS (Sami et al., 1997) or Streptomyces clavuligerus DAOCS (Valegard et al., 1998) significantly diminished the activity and it was suggested that the enzymes, which accommodate the active site in a b-sheet barrel, use the C terminus as a protective lid to maintain a hydrophobic environment and to enable proper cofactor binding (Lloyd et al., 1999). In the case of chimeric GA 20-oxidases, a pronounced influence of the C terminus on product selectivity was noticed (Lange et al., 1997). However, penicillin ring expansion by Acremonium chrysogenum DAOCS/deacetylcephalosporin C synthetase was not affected by the C-terminal deletion of 20 amino acids (Chin et al., 2003). The specificity of Petunia hybrida FHT was also retained on C-terminal truncation of 5, 11, or 24 amino acid residues, albeit the specific activity dropped by 56% to 72.6% (Wellmann et al., 2004). Petunia FHT activity was nearly lost, however, on deletion of 29 C-terminal amino acids (0.4% activity of wild type) or on swapping the C-terminal portion of 52 amino acids by the corresponding sequence from Citrus unshiu FLS (0.3% activity of wild type) without a change in specificity (Wellmann et al., 2004). Thus, the contribution of the C terminus to enzyme activity is variable for different 2-ODDs and this aspect was examined for parsley FHT and FNS I.
Two FHT/FNS I chimeras (Pet_criChim I and II) were constructed from pYES2.1 clones harboring fully functional parsley FNS I or FHT sequences (Martens et al., 2001; Supplemental Fig. S1). Pet_criChim I was composed of the N-terminal 219 amino acids of FNS I ligated to the C-terminal 149 residues of FHT, whereas in Pet_criChim II the C-terminal 146 amino acid residues of FNS I were joined to the N-terminal portion of 219 amino acids from FHT. Notably, however, amino acids 217 to 296 are highly conserved in FNS I and FHT, except for position 231 in FHTs outside Apiaceae; therefore, only the last 72 amino acids of the chimeras differed from the wild-type enzymes. The constructs were overexpressed in yeast and the FHT or FNS I activity of crude extracts was determined in standard assays employing 14C-labeled naringenin as substrate followed by thin-layer chromatography (TLC) separation and autoradiography. The effects of these swapping experiments on FHT versus FNS I activity (Fig. 3) differed considerably because recombinant Pet_criChim I mostly retained FNS I activity, converting naringenin to apigenin with little FHT side activity forming dihydrokaempferol, whereas Pet_criChim II showed weak FHT activity compared to the wild-type enzyme without a trace of FNS I activity. The data rule out an essential contribution of the C-terminal enzyme portion to FNS I activity, which is supported also by the fully functional truncated FNS I reported recently from D. carota, Apium graveolens, and A. cynapium (Gebhardt et al., 2005), although an effect of the C terminus on activity cannot be neglected without kinetic evidence. It is obvious, furthermore, that the C-terminal portion of the enzyme is not strictly required for FHT activity, but the side activity of Pet_criChim I and the suppression of Pet_criChim II suggest a significant beneficial contribution to FHT activity. Taken together, a potential gain of FNS I functionality likely requires amino acid substitutions in the N-terminal portion (positions 1-216) of the FHT sequence.
Figure 2. Schematic overview of conserved substitutions in FHT and FNS I sequences. Amino acids assigned by homology modeling to the active site are underlined. I312V is shown in gray because Ang_arcFNS I retains Val at this position. The BamHI restriction site used for construction of Pet_criChim I and II is marked by an arrow. Black bars indicate sequence regions that are either identical or not conserved, and white bars mark the positions of amino acids responsible for cofactor binding conserved in all 2-ODDs (Lukacin and Britsch, 1997).
Homology Modeling and Choice of Mutations
The structures of some crystallized 2-ODDs have been solved and reviewed by Clifton et al. (2006). These include mostly microbial enzymes (e.g. IPNS [Roach et al., 1995], DAOCS [Valegard et al., 1998], clavaminic acid synthase [Zhang et al., 2000], carbapenem synthase [Clifton et al., 2003] or Pro 3-hydroxylase [Clifton et al., 2001], and taurine/a-ketoglutarate dioxygenase [Elkins et al., 2002]). Factor-inhibiting hypoxia-inducible factor (Elkins et al., 2003), phytanoyl-CoA 2-hydroxylase (McDonough et al., 2005), and ANS (Wilmouth et al., 2002) are examples from mammalian and plant sources. Each of these enzymes keeps its active-site iron center in a hydrophobic environment enclosed by a double-stranded b-helix or jelly roll topology. However, the extent and the periphery of the jelly rolls may vary and the enzymes show only little sequence similarity. Regardless of these limitations, a-helices and b-sheets appear to be analogously assembled, leading to almost identical circular dichroism spectroscopic profiles for P. hybrida FHT (Lukacin et al., 2000), C. unshiu FLS (Wellmann et al., 2002), or IPNS (Borovok et al., 1996;Durairaj et al., 1996). Alignment of the 2-ODDs that have been examined by x-ray scattering with parsley FHT and FNS I polypeptides revealed the highest sequence similarity of approximately 30% with ANS.
In an attempt to denote more closely those amino acids responsible for substrate binding, model calculations were done based on the ANS structure, although sequence similarity exceeding 30% had been postulated for reliable homology modeling (Sanchez and Sali, 1997). Nevertheless, in the case of UDP-glucosyltransferases from Sorghum bicolor, for example, 15% similarity was sufficient (Thorsøe et al., 2005). ANS had been cocrystallized with quercetin or naringenin (Wilmouth et al., 2002; Welford et al., 2005) because the natural leucoanthocyanidin substrates are unstable. The structure of the ANS-naringenin complex (2brt) was preferred for modeling because FHT and FNS I use naringenin as substrate. Due to little sequence similarity of FNS I and FHT with ANS in the N- and C-terminal regions, model calculations are based on residues 30 to 305, excluding four short N-terminal α-helices (α-helices 1-4) and two C-terminal α-helices (α-helices 16 and 17) of ANS. ANS is characterized by a jelly roll topology (β5-β12) with a long α-helical backbone (α-helix 12), as observed earlier for IPNS or DAOCS. Furthermore, the jelly roll motif of ANS is extended by two β-sheets (β3 and β4). Homology model regions corresponding to ANS β3 to β6 are represented as almost straight loops (Fig. 4) because the Protein Data Bank and SWISS-MODEL software use slightly different protocols for the assignment of secondary structure, but those regions adopt a similar orientation as the corresponding β-sheets of ANS. The positions of residues for iron binding are strictly conserved in the homology models generated and revealed His-218, His-276, and Asp-220 in parsley FHT or FNS I, as compared to His-232, His-288, and Asp-234 in ANS, for almost octahedral coordination in conjunction with the C1 and C2 carboxyls of 2-oxoglutarate.
Figure 3. Conversion of 14C-labeled (2S)-naringenin with crude protein extracts from yeast transformed with Pet_criFNS I, Pet_criFHT, Pet_criChim I, or Pet_criChim II. Enzyme assays were carried out as described previously. Substrate and product positions vary in the presentation because the assays were separated on different thin-layer plates; however, products were unequivocally identified by cochromatography with authentic standards: 1, apigenin; 2, naringenin; 3, dihydrokaempferol.
Substrate binding in ANS is facilitated through π-stacking of the naringenin A-ring with Phe-304, corresponding to Phe-292 in FHT or FNS I (Fig. 4). Furthermore, the 7-hydroxyl of naringenin can form a hydrogen bond with the side chain of Glu-306, whereas the equivalent position in FHT or FNS I is held by Asn-294, which does not engage in hydrogen bonding. ANS fixes the B-ring of the substrate through hydrophobic interaction with Phe-144 and hydrogen bonding of the 4′-hydroxyl to Tyr-142. These residues are lacking from FHT or FNS I, which encode Ala-133 and Ile-131 (FHT) or Thr-133 and Phe-131 (FNS I) instead. Moreover, Lys-213 was proposed to participate in protonation and deprotonation in ANS catalysis (Wilmouth et al., 2002), whereas this residue is lacking in FHT and FNS I due to a gap of three amino acids between residues 200 and 201 (Fig. 5). These data suggest substrate binding in the active-site cavity of FHT and FNS I different from that in ANS and corroborate the previous proposal of α-face specificity of ANS and FLS versus β-face specificity of FHT and FNS I (Turnbull et al., 2004; Welford et al., 2005). High-affinity binding of substrate is essential in both FHT and FNS I, as well as in ANS catalysis, because of the radical mechanisms, and is supported by the narrow substrate specificity of FHT and FNS I and the absence of side reactions. Conceivably, additional side chains enforce substrate affinity, but low sequence similarity of FHT and FNS I with ANS and the putative inaccuracy of modeled side chain orientations generally associated with homology modeling ruled out more informative docking calculations. Thus, the projection of naringenin in FHT and FNS I is shown as determined for ANS (Fig. 4).
Both FHT and FNS I were proposed to initiate the loss of the β-configured hydrogen from carbon-3 of naringenin, and the parameters of substrate binding are unlikely to explain the difference in product formation. However, subtle sequence differences must determine the fate of the remaining radical. Most of the differences conserved in the parsley FHT or FNS I polypeptide were recognized in the periphery, except for seven residues at or close to the active-site cavity, assigning the substitutions M106T, I115T, V116I, I131F, D195E, V200I, L215V, and K216R as potential causes of FHT-to-FNS I conversion (Fig. 5). The M106T exchange concerns a flexible loop corresponding to ANS α-helix 9 near the enzyme surface, and the substitutions I115T, V116I, and I131F (corresponding to Tyr-142 in ANS) are part of β-sheets 3 and 4, whereas D195E in α-helix 14 and V200I in β-sheet 5 are exposed to the catalytic pocket. L215V and K216R are located in β-sheet 6 close to the Fe2+/2-oxoglutarate center. Conservative exchanges (i.e. V200I) are commonly irrelevant for enzyme function, but the exchanges V116I and L215V were examined further because of their proximity to the sites of the I115T and K216R substitutions. Homology models based on the structure of the ANS-quercetin complex suggested some additional impact of the F320Y and R326K exchanges on enzyme activity (data not shown), but the low reliability of C-terminal modeling and the results obtained with Pet_criChim I and II excluded these residues from further investigation.
Figure 4. A, Structure of ANS complexed with naringenin (2brt; Welford et al., 2005). The jelly roll motif (β-sheet 5 to β-sheet 12) and its extending β-sheets are represented in yellow. B, Homology model of Pet_criFHT based on the ANS-naringenin structure (2brt). C, Homology model of Pet_criFNS I based on the ANS-naringenin structure (2brt). α-Helices and β-sheets in the homology models were numbered according to the model template. Due to the limited sequence similarities and mechanistic differences between template and FNS I or FHT, the substrate naringenin likely adopts spatially different positions in the active-site pockets. The substrate was not fitted in B and C, but the models clearly resolve the conserved differences between FNS I and FHT concerning the active-site residues. Residues conserved differently in Pet_criFHT and Pet_criFNS I and chosen for mutagenic studies are shown as sticks.
Site-Directed Mutagenesis
Each of the amino acids Met-106, Ile-115, Val-116, Ile-131, Asp-195, Leu-215, and Lys-216 conserved in FHTs was independently replaced by the corresponding amino acid found in the parsley FNS I sequence. Single and double mutants (M2-8; Table I) were constructed from Pet_criFHT-pYES2.1 and used as templates to generate multiple mutants (M9-15; Table I). For all single and double mutants (M2-8; Table I), expression in yeast cells resulted in extracts with FHT activity and without any significant FNS I activity. Consequently, the substitution of one or two amino acids is insufficient to shift the naringenin 3β-hydroxylase activity toward flavone (apigenin) formation. Two of the triple mutants harboring M106T-I131F-D195E or I131F-L215V-K216R substitutions (M9 and M10) showed reduced FHT activity in comparison to the wild-type parsley FHT, concomitant with the formation of a second product that was distinguished by TLC (Fig. 6A). This product was identified as apigenin by cochromatography with a reference sample in three solvent systems (Martens et al., 2001). However, the triple mutant D195E-L215V-K216R (M11; Fig. 6A) did not gain FNS I activity, which emphasizes the essential role of Phe-131 for FNS I activity, although this substitution on its own (M3; Table I) was inefficient. Four or five substitutions, including I131F, were introduced in mutants 12 to 14 (Fig. 6A), which predominantly exhibited FHT activity with FNS I side activity. Finally, the full set of seven mutations inferred from homology modeling was introduced in FHT (M15; Fig. 6A). This recombinant mutant enzyme showed primarily FNS I activity, albeit at a reduced level as compared to wild-type FNS I, with very little residual FHT activity detected after two-dimensional TLC in two solvent systems (Fig. 6B).
Figure 5. Alignment of Ara_thaANS and partial FNS I and FHT sequences. Similar amino acids are indicated by dots, identical amino acids by stars. Colored stars mark conserved amino acids necessary for cofactor or substrate binding. Conserved substitutions of FNS I and FHT are shown in pink (positioned in the active site) and green (peripheral position). The BamHI restriction site used for the construction of Pet_criChim I and II is indicated by an arrow. Secondary structure elements of ANS are underlined; α-helices are labeled in black, 3₁₀-helices in bold gray, β-sheets of the jelly roll motif in bold black (Wilmouth et al., 2002).
It is thus obvious that amino acid residues in the N terminus or beyond residue 305 also contribute to FNS I activity. Swapping of the C-terminal domain of parsley FNS I by that of FHT had introduced some FHT activity, suggesting FHT-relevant residues in that region. Alignments of published FNS I and FHT sequences identified Asp-331 as strictly conserved in FHTs from Apiaceae and other plants, which is replaced by His in FNS I. The D331H substitution in Pet_criFHT (M16; Table I) and Pet_criChimI (M17) confirmed the importance of this residue for FHT activity, because recombinant M16 displayed no enzymatic activity (data not shown) and M17 retained FNS I activity but lost residual FHT activity (Fig. 6B).
DISCUSSION
Flavonoids are abundant plant secondary metabolites that have been reported even from primitive taxa, such as liverworts (Conocephalum conicum; Feld et al., 2003) and horsetails (Equisetum arvense; Oh et al., 2004). Their classification is based on the flavane skeleton and comprises a spectrum of compounds with flavones, flavonols, and anthocyanins as major components. The principles of flavonoid biosynthesis have been thoroughly studied regarding biochemistry, genetics, and molecular biology, but there is little information concerning the evolution of committed enzymes (Harborne and Williams, 2000). Sessile plants have to cope with multiple environmental changes that act as a driving force for the adaptation and evolution of enzymes. This process is believed to follow basically one of two routes. An existing structural gene may change and gain the capability of encoding an enzyme with broader substrate/product specificity or multifunctionality. Alternatively, gene duplication can lead to cumulative mutations in one of the copies due to relaxed functional constraints, and often those copies are eliminated later, after pseudogenization. In some instances, however, mutated copies might be retained provided that the expression is of particular benefit, such as dosage effects, subfunctionalization, or the creation of a completely new function (Hughes, 1994; Lynch and Force, 2000; Ober, 2005).
Both concepts received support from investigations on the enzymology of secondary metabolites. For example, multifunctional 2-ODDs or terpene synthases are able to catalyze more than one step of a given biosynthetic pathway (Steele et al., 1998; Prescott, 2000). FLS from C. unshiu or ANS from Arabidopsis and G. hybrida exhibit several activities in vitro, although the significance in vivo remained uncertain (Martens et al., 2003; Turnbull et al., 2004). On the other hand, duplications have been documented for 2-ODDs of glucosinolate biosynthesis in Arabidopsis and various enzymes of flavonoid biosynthesis, such as chalcone synthase from G. hybrida and Ipomoea, chalcone isomerase and dihydroflavonol 4-reductase from Lotus, and dihydroflavonol 4-reductase from Ipomoea (Helariutta et al., 1996; Hoshino et al., 2001; Kliebenstein et al., 2001; Shimada et al., 2003, 2005). The phenomenon of gene duplication is not restricted to secondary metabolism because genes of primary metabolism have also been recruited (i.e. deoxyhypusin synthase for the evolution of homospermidine synthase catalyzing the first committed step in pyrrolizidine alkaloid biosynthesis; Ober and Hartmann, 1999). Also, the evolution of FNS I from FHT by gene duplication was suggested, but the prime ancestor gene remains to be identified (Gebhardt et al., 2005). Due to sequence similarity and substrate specificity, flavonoid dioxygenases were grouped into 2-ODDs with low substrate specificity, which attack the α-face of the substrate, such as ANS and FLS, and 2-ODDs with high substrate specificity like FHT and FNS I, which attack the β-face (Martens et al., 2003; Turnbull et al., 2004). Both FHT and FNS I withdraw the β-configured hydrogen from carbon-3 of naringenin, but then proceed on different routes despite their high sequence similarity (Fig. 7). FHT catalyzes 3β-hydroxylation through a rebound process, whereas FNS I affords the syn-elimination of hydrogen from carbon-2 in a cage-like setting without intermediate hydroxylation (Fig. 7B). The proposed FNS I reaction clearly differs from the mechanisms assumed for FLS or ANS, which likely hydroxylate carbon-3 or -2 of the substrate followed by antiperiplanar water elimination, as indicated by small amounts of dihydrokaempferol and kaempferol byproducts (Welford et al., 2001), but is compatible with the previous finding that FNS I neither converts 2-hydroxynaringenin nor dihydroflavonols to flavones (Britsch, 1990; Martens et al., 2003). Indirect experimental support for the syn-elimination mechanism was provided recently by incubation of ANS with naringenin diastereomers (Welford et al., 2005) because the selectivity of ANS for substrates with a particular carbon-2 stereochemistry is greatly diminished in the absence of a carbon-3 hydroxy group. Mostly, dihydrokaempferol and kaempferol, besides traces of apigenin, were formed from (2S)-naringenin, whereas almost equivalent amounts of dihydrokaempferol and apigenin with little kaempferol resulted from unnatural (2R)-naringenin, which exposes the 2α- and 3α-configured hydrogens to the catalytic ferryl species in ANS (Welford et al., 2005; Fig. 8A). The crystal complex revealed that the 3α-hydrogen is closer to the ferryl species and presumably attacked first to release apigenin by syn-elimination (Welford et al., 2005). Overall, the precision of naringenin fixation with respect to the ferryl species in the active-site pocket of ANS determines whether syn-elimination is preferred over hydroxylation. These findings can be extrapolated to the FNS I and FHT reactions. Both enzymes exclusively accept flavanone substrates exposing a β-face hydrogen (Martens et al., 2003; Turnbull et al., 2004). Following the argument for apigenin formation from (2R)-naringenin by ANS-catalyzed syn-elimination (Welford et al., 2005), the stereoconfiguration at carbon-2 of (2R)-naringenin interferes with FNS I catalysis and, in fact, FNS I does not convert (2R)-naringenin (Britsch, 1990). Thus, FNS I and FHT conceivably approach the common substrate (2S)-naringenin from the opposite side of the ring plane (Fig. 8B) as compared to ANS (Fig. 8A), which requires a mirror-image orientation of substrate and active-site residues. Two combinations of I131F with M106T/D195E (M9) or L215V/K216R (M10) have been shown to confer FNS I side activity; hence, these substitutions likely influence product specificity by adjusting the substrate position rather than actively participating in the reaction mechanism (e.g. through acidic or basic amino acids). Necessarily, carbon-3 should be closer to the ferryl species than carbon-2 for hydroxylation by FHT, whereas the protons at carbon-2 and carbon-3 may be equally distant from the ferryl species in FNS I. Overall, the conserved differences in FNS I appear to fit the substrate into the active-site pocket with maximal proximity of H-2 and βH-3 to the catalytic ferryl species. The essential voluminous Phe-131, as compared to Ile-131 in FHT, supports this assumption.
Table I. Pet_criFHTs and preferential product formation. Standard 2-ODD activity assays were carried out in duplicate with 5,000 dpm 14C-labeled naringenin (approximately 45 pmol) and 500 or 1,000 µg total protein as described previously. The substrate specificity of wild-type and mutant FHTs is compared by the ratio of dihydrokaempferol (FHT activity) to apigenin (FNS I activity) formation, which together represent the total product in each assay. Mutant enzymes showing both enzymatic activities were verified by expression of another clone carrying the same mutation and repeated activity assays. Asterisk (*) indicates that wild-type FNS I produced exclusively apigenin from (2S)-naringenin.
Figure 6. Enzyme assays with crude protein extracts from yeast transformed with Pet_criFNS I, Pet_criFHT, or Pet_criFHT mutants, carried out as described previously. Substrate and product positions vary in the presentation because the assays were separated on different thin-layer plates; however, flavonoids were unequivocally identified through cochromatography with authentic standards. 1, apigenin; 2, naringenin; 3, dihydrokaempferol. A, Radio scan of one-dimensional TLC separation in 30% acetic acid. B, Radio scan of two-dimensional TLC separation in chloroform:acetic acid:water (50:45:5; CAW) and 30% acetic acid.
Figure 7. Reaction mechanism of FHT (A) and FNS I (B). Depending on the substrate orientation, syn-elimination and formation of flavones may proceed via a radical mechanism with initial attack at either carbon-3 or carbon-2, or at both simultaneously in a concerted mechanism.
FHT and FNS I are phylogenetically closely related and adopt a tertiary structure similar to ANS (Martens et al., 2001; Gebhardt et al., 2005). Although the detailed effects of selective amino acid substitutions on the overall parsley FHT structure are unknown, this article assigns those residues proximal to the active site that control FHT and FNS I activity. Obviously, minor mutations of parsley FHT are sufficient to shift the activity, and significant FNS I activity was conferred already by a triple mutation (M9 and M10), which still retained the capacity for dihydrokaempferol formation (FHT activity). Concomitantly, however, a severe loss in specific enzyme activity was observed. Replacement of seven amino acids caused a nearly complete change toward FNS I activity. The process accomplished here by site-directed mutagenesis defines the minimal conditions for directed evolution in vivo to broaden the flavonoid spectrum. Plants following this route, like Apiaceae, might have gained the capacity of flavone accumulation without losing their flavonols, provided that gene duplication had occurred. It is likely that the efficiency and selectivity of the newly formed FNS I have improved with time through additional mutations, concomitantly with the complete loss of FHT activity in the gene copy. The ease of change of function by only three mutations of FHT suggests that flavone biosynthesis may have evolved independently on this route more than once; however, other enzymes exhibiting FNS I activity have not been observed outside Apiaceae. The accumulation of flavones conceivably provides an advantage to the plant because expression of FNS I has been maintained in Apiaceae and other plants developed FNS II for the same purpose.
The capacity to form flavonoids is supposed to have developed gradually because the first flavonoid enzymes were probably not as effective or selective as today, and the initially low flavonoid concentrations likely served in plant signaling rather than UV protection or defense (Stafford, 1991). In any case, the conservation of flavone biosynthesis indicates an advantage that eventually led to flavone concentrations sufficient also for UV protection of the plant, and the environmental impact likely furthered the evolution of FNS I in Apiaceae (Logemann et al., 2000; Solovchenko and Schmitz-Eiberger, 2003). Early ontogenetic expression of FNS I in parsley and the gradual replacement of flavonols by flavones in more advanced members of Apiaceae (Harborne, 1971; Gebhardt et al., 2005) support this assumption. The advantage of functional FNSs for plant families and the identification of FNS II in non-Apiaceae are a clear indication of convergent evolution. It remains to be established whether FNS I has also evolved in non-Apiaceae, taking into account that few mutations are sufficient to confer this activity on an FHT. The search for non-Apiaceae FNS I is an interesting challenge.
Whereas flavonols and flavones have been isolated from spermatophytic and primitive plants, anthocyanidins are confined to the more advanced gymnosperms and angiosperms. This seems to suggest that FHT and FNS I developed early, followed much later by FLS and ANS (Prescott and John, 1996). However, Prescott and John (1996) also excluded FHT as a direct progenitor of ANS because of low sequence similarity and divergent gene structure. There is no experimental evidence so far for an early evolution of FNS I because confinement to Apiaceae and high sequence similarity with FHT suggest a fairly recent duplication event. It appears more likely that flavonoid 2-ODDs developed from a common multifunctional ancestor gene because FLS and ANS show low substrate specificity, which could be attributed to either incomplete or spreading evolution. The low stringency might be explained by channeling of substrates in multienzyme complexes (Winkel-Shirley, 2001), releasing the evolutionary pressure for enzymes of narrow substrate specificity. Under these premises, the physiological function of ANS must be reevaluated because in vitro ANS predominantly converted leucoanthocyanidin to dihydroquercetin, (2S)-naringenin to dihydrokaempferol, and dihydroquercetin to quercetin (Welford et al., 2001). The lack of anthocyanidins in less advanced plants could thus be a consequence of poor complex formation rather than lack of ANS. The capability of catalyzing several steps in the flavonoid pathway might furthermore qualify ANS as a progenitor candidate of other flavonoid 2-ODDs, and it is essential to determine the evolutionary distance of the various flavonoid 2-ODDs. The functional flexibility of these 2-ODDs by very few mutations was demonstrated in this study and highlights the evolutionary importance of 2-ODDs for the introduction of new enzymatic functions.
Figure 8. A, Positioning of naringenin in the active site of ANS (Welford et al., 2005). B, Proposed positioning of naringenin in the active site of FNS I.
Yeast Strains and Growth Conditions
Yeast (Saccharomyces cerevisiae) INV Sc1 (Invitrogen) was used for standard cloning and overexpression. Medium and growth conditions are described elsewhere. All plasmids for chimeric FHT/FNS I constructs and mutagenesis were derived from pYES2.1 containing the parsley (Petroselinum crispum) FHT (Pet_criFHT) or FNS I (Pet_criFNSI; Martens et al., 2001, 2003).
Molecular Techniques and Cloning of 2-ODDs
Plasmid DNA was isolated as described in Engebrecht et al. (2001). For sequencing, DNA was further purified by being passed through a NucleoSpin column according to the manufacturer's instructions (Macherey-Nagel). Restriction digestions were performed as described by the enzyme suppliers (MBI Fermentas). Yeast transformations were performed according to the Easy-Comp protocol (Invitrogen). Ligation reactions and agarose gel electrophoresis were performed by standard procedures (Sambrook and Russell, 2001).
Further putative FHT and FNS I cDNA clones were isolated and functionally verified as described in Gebhardt et al. (2005).
Chimeric Gene Construction
Highly conserved regions of the FHT and FNS I genes were identified by multiple sequence alignment of a number of 2-ODDs (Gebhardt et al., 2005). Chimeric constructs (Pet_criChim I and II) are based on functionally verified Pet_criFNSI and FHT pYES2.1 clones. A BamHI site is conserved between Thr-219 and Asp-220 in both sequences and was used in combination with XbaI (located 3′ of the insert in the multiple cloning site of pYES2.1) to digest the cDNA clones, resulting in a long (pYES2.1 and N-terminal part of the insert) and a short (C-terminal part of the insert) fragment. All fragments were gel purified after restriction digestion, isolated from the gel via the NucleoSpin extract kit, and added to the ligation reaction. The long fragment of the FNS I digest was combined with the short fragment of the FHT digest, resulting in Pet_criChim I, and vice versa, resulting in Pet_criChim II, as illustrated in Supplemental Figure S1. Full-length sequencing of the inserts confirmed successful ligation of the fragments and intact open reading frames for both chimeras.
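As a rough illustration of this cloning strategy, the Biopython sketch below locates BamHI and XbaI recognition sites in a made-up open reading frame; the real parsley cDNA sequences are not reproduced here, so the positions printed are purely illustrative.

```python
# Simulate the restriction analysis behind the chimera construction.
# The insert below is a placeholder ORF carrying a single BamHI (GGATCC) site.
from Bio.Seq import Seq
from Bio.Restriction import BamHI, XbaI

insert = Seq("ATGGCT" * 20 + "GGATCC" + "GCTAAA" * 20)   # hypothetical ORF
print("BamHI cut positions:", BamHI.search(insert))     # one shared internal site
print("XbaI cut positions:", XbaI.search(insert))       # none within this toy ORF

# Swapping the fragments downstream of the common BamHI site between the two
# digested clones yields the two chimeric constructs described in the text.
```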
Site-Directed Mutagenesis of Pet_criFHT
Single and multiple amino acid substitutions were generated by site-directed mutagenesis of Pet_criFHT or its previously mutated variants in pYES2.1 with the Stratagene QuikChange and QuikChange Multi systems according to the manufacturer's instructions (Stratagene). All primers and templates for mutagenesis are listed in Supplemental Table S2.
In detail, the plasmid Pet_criFHT/pYES2.1 was used as the primary template together with the respective mutation primer (Supplemental Table S2). For additional specific mutations, the confirmed mutants of previous rounds were used as templates, as indicated in Table I. Some mutagenic oligonucleotides were designed with a silent mutation to generate a new restriction site or destroy an existing one (Supplemental Table S2), facilitating the identification of clones carrying the desired mutation. All mutant FHTs were sequenced over their full length to ensure that the correct residue was changed and to confirm that no other unintended mutation was introduced (MWG-Biotech).
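The mutant naming scheme used throughout (e.g. M106T, I131F) can be made mechanical. The following sketch applies such substitution strings to a protein sequence and verifies the expected wild-type residue at each position; the sequence is a padded placeholder, not the actual Pet_criFHT polypeptide.

```python
# Apply "M106T"-style substitutions to a 1-indexed protein sequence string.
def apply_substitutions(seq: str, subs: list[str]) -> str:
    s = list(seq)
    for sub in subs:                                  # e.g. "I131F"
        old, pos, new = sub[0], int(sub[1:-1]), sub[-1]
        assert s[pos - 1] == old, f"expected {old} at {pos}, found {s[pos - 1]}"
        s[pos - 1] = new
    return "".join(s)

# Placeholder sequence with M at position 106 and I at position 131.
wild_type = "X" * 105 + "M" + "X" * 24 + "I" + "X" * 100
mutant = apply_substitutions(wild_type, ["M106T", "I131F"])
print(mutant[105], mutant[130])                       # -> T F
```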
Expression of Cloned Genes and Analysis of Catalytic Properties
Enzymatic activities of the wild-type, chimeric, and mutated proteins were determined as previously described by heterologous expression in yeast (Gebhardt et al., 2005), with 500 and 1,000 µg total protein, respectively, in duplicate assays. Protein concentrations were determined according to Bradford (1976) with bovine serum albumin as a standard.
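For readers unfamiliar with the Bradford step, the sketch below fits a BSA standard curve and reads off an unknown sample; all absorbance values are invented for illustration and do not come from this study.

```python
# Bradford quantification: linear fit of a BSA standard curve, then inversion.
import numpy as np

bsa_ug = np.array([0, 2, 4, 6, 8, 10])                    # standards, µg protein
a595 = np.array([0.00, 0.11, 0.21, 0.33, 0.42, 0.52])     # hypothetical A595 readings
slope, intercept = np.polyfit(bsa_ug, a595, 1)            # least-squares line

unknown_a595 = 0.27                                       # hypothetical sample
print((unknown_a595 - intercept) / slope, "µg in the assayed volume")
```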
Sequence Comparison
Related sequences were initially detected by BLAST and PSI-BLAST (Altschul et al., 1997) analysis. Multiple sequence alignments were generated with the ClustalW algorithm (Thompson et al., 1994).
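The sequence-identity figures quoted in this work (e.g. roughly 80% between FHT and FNS I) come down to counting matching alignment columns. A minimal Python illustration, on toy fragments rather than the real ClustalW alignments:

```python
# Percent identity over two aligned, equal-length sequences ('-' marks a gap).
def percent_identity(aligned_a: str, aligned_b: str) -> float:
    pairs = [(a, b) for a, b in zip(aligned_a, aligned_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

fht_fragment = "MAPTLTALAKEKTL"    # hypothetical aligned fragment
fnsi_fragment = "MAPTTTALAKERTL"   # hypothetical aligned fragment
print(f"{percent_identity(fht_fragment, fnsi_fragment):.1f}% identity")  # 85.7%
```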
Homology Modeling
To identify putative amino acids responsible for the different catalytic behavior of FNS I and FHT, homology modeling was performed for parsley FNS I and FHT. Homology models were generated using the Web-based SWISS-MODEL server (Schwede et al., 2003). The crystal structure of ANS in complex with Fe2+, 2-oxoglutarate, and naringenin (Protein Data Bank entry 2brt) was kindly provided in advance by the authors (Welford et al., 2005) and served as the template structure. Even though the homology models obtained were truncated with respect to the N as well as the C terminus, these models clearly represented the substrate-binding pocket and, accordingly, allowed for the structural location of sequence mismatches within the binding cavities. Because SWISS-MODEL recognizes only protein atoms, naringenin, 2-oxoglutarate, and the iron ion were added to the models at positions similar to those observed in the template structure. However, this step has to be regarded with some care because the binding mode of naringenin to ANS is certainly different from that in FNS I and FHT. Thus, insertion of the substrate was performed to evaluate putative interaction sites between naringenin and its binding pockets. Amino acid residues of the two models involved in iron and cosubstrate binding were manually adjusted using the program O (Jones et al., 1991). Figures were created by means of PyMOL (DeLano, 2002).
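A present-day way to reproduce the active-site assignment on the template would be a neighbor search around the catalytic iron of 2brt. The Biopython sketch below assumes the structure file has already been downloaded as 2brt.pdb and that the iron appears as a heteroatom with element FE; the 8 Å cutoff is an arbitrary illustrative choice, not the value used by the authors.

```python
# List residues whose atoms lie near the catalytic iron of the template.
from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("2brt", "2brt.pdb")
atoms = list(structure.get_atoms())
ns = NeighborSearch(atoms)

fe_atoms = [a for a in atoms if a.element == "FE"]   # assumed heteroatom naming
for fe in fe_atoms:
    near = ns.search(fe.coord, 8.0)                  # atoms within 8 Å of the metal
    residues = {a.get_parent().get_resname() + str(a.get_parent().id[1])
                for a in near}
    print(sorted(residues))                          # candidate active-site residues
```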
Supplemental Data
The following materials are available in the online version of this article.
Discovery of Knowledge in the Incidence of a Type of Lung Cancer for Patients through Data Mining Models
This paper presents research results on the contribution of user-centered data mining based on standard principles, focusing on the analysis of survival and mortality of lung cancer cases. Researchers used anonymized data from previously diagnosed instances in the health database to predict the condition of new patients who do not yet have their results. Medical professionals specializing in this field provided feedback on the usefulness of the new software, which was constructed using WEKA data mining tools and the Naive Bayes method. The results of this article provide elements of interest for discussing the value of identifying or discovering relationships in apparently "hidden" information in order to propose strategies to counteract health problems or prevent future complications, and thus contribute to improving the quality of life of the population. Data mining in the health area has shown applicability in the early detection and prevention of diseases and in the analysis of genetic markers to determine the probability of a satisfactory response to medical treatment. The most accurate model was Naive Bayes (91.1%); its closest competitor, bagging, came in second with 90.8%, while the ZeroR algorithm had the lowest success rate at 80%.
Introduction
Defining the causes by which a disease is generated has become a task involving different areas of knowledge, such as medicine, biology, applied mathematics, and computer science, generating new studies based on large amounts of medical information that help discover causes relevant to the behavior of the disease. Lung cancer is the most common cancer affecting both men and women [1]. The incidence has increased rapidly during the last four decades. It is more frequent in men; however, the number of cases in women continues to increase. Mortality tends to be higher in men, although it can vary according to different geographical areas. Treatment for lung cancer is multidisciplinary and varies according to histological type, mutation profile, and clinical stage. A multidisciplinary evaluation with various specialists is necessary due to the complexity of the cancer patient, together with prevention measures such as eliminating smoking and screening methods in the population at risk [1]. About 70% of lung cancer patients present with advanced disease at the time of diagnosis, and most are not suitable for curative treatment. Molecular characterization has led to the definition of new subgroups, such as lung cancer mutated for the epidermal growth factor receptor, lung cancer rearranged for anaplastic lymphoma kinase or the ROS1 kinase domain, and PD-L1 expression, which need specific treatments and strategies [2].
Data mining is the processing of data [3] to find behavior patterns useful for decision making; it is closely related to statistics, using sampling, data visualization, and purification techniques, with databases as raw material. Data analysis can thus provide knowledge that helps in decision making. In a 2015 study [4], classification approaches were proposed to determine the degree of malignancy of small lung nodules, using the Lung Image Database Consortium (LIDC) database offered by the American Cancer Institute. The LIDC database [5,6] has been evaluated by radiologists from four different institutes and includes radiographic descriptive information as well as the degree of nodule malignancy. In the statistical significance tests, it was observed that the Random Forest (RF) based ensemble classifier, which had the highest classification performance, was superior to other methods [7][8][9]. In 2018, an automatic nodule region detection method based on artificial intelligence and image processing techniques was developed using lung tomography; in this context, a method and system providing automatic detection and diagnosis of nodule regions on lung CT cross-section images were developed [10,11].
In the clinical scope, data mining results aid in identifying and diagnosing pathologies and in discovering possible correspondences between various diseases. The patient with lung cancer presents alterations in other health issues that must be considered; the management of pain and other symptoms represents only part of the help that can be given to improve the patient's quality of life. In this study, the data set of lung cancer disease was first obtained by taking the physicians' opinions, and then models were created with this data set. The most successful algorithm among the models created was then determined.
Methodology
In the study, the opinions of doctors working in the field of lung cancer were first obtained, and preliminary research was carried out to find a data set suitable for this field. Data mining steps were then applied to the selected data set: preprocessing and data cleaning, data reduction, data transformation, and data mining operations were performed in turn, after which an evaluation was made, conclusions were drawn, and the study was completed.
(1) Stage 1: the study started by taking the opinions of doctors working on lung cancer and researching a data set suitable for this field. After the appropriate data set was found, data mining steps were applied before processing it: data cleaning, data reduction, data conversion, and selection of appropriate data mining software, respectively. Afterward, the feature analysis was made, the model creation part was started, and the most successful of the compared models was selected and used in the software that produces the prediction result for diagnosis.
(2) The first step in the data mining process is data collection. The personal information of patient candidates who applied to hospitals with suspected Malignant Neoplasm of Bronchus and Lung (C34) cancer, held in the health database in Baghdad, Iraq, was obtained anonymously, on the condition that their personal information be kept confidential. Many tests are used to diagnose lung cancer, but in this study a conclusion was drawn from the parameters of the hemogram (blood count) test.
(3) The next stage, data cleaning, is extremely important for the success of data mining, and the success of this stage directly affects the success of the result. After the data collection phase, incompatible data, null values, and extreme values were removed from the data obtained. While the number of patient records drawn from the database was 700, the number used in the project is 404; data with missing or outlier values were excluded during the cleaning phase to make the result more reliable. After the cleaning process, the project proceeded with 404 patient records, of which 81 were C34 and 323 were not C34 (Control).
(4) In the data reduction phase, redundant data, such as having both birth date and age in the data set, which would adversely affect computation time and reduce the quality of the results while expressing the same information, were removed from the data set.
(5) In the data conversion phase, inconsistent data types were corrected and incorrectly entered extreme values were normalized so that the data set gained a standard structure. The data set, which was later transferred to Excel format, was converted to CSV format for easy use by data mining algorithms; at this stage, string-type data were also converted into numeric data. After these corrections were made, studies were carried out to select the appropriate data mining algorithm and to begin the model creation phase. Various software tools are available for data mining applications; the most commonly used in this field are R, RapidMiner, and WEKA.
(i) R: R provides improvements over the S language through different applications. It has linear and nonlinear modeling, classical statistical tests, time-series analysis, and classification and clustering algorithms, and it can run on many operating systems. (ii) RapidMiner (RM): RapidMiner also has a user-friendly interface and supports all types of files. (iii) WEKA: WEKA is the most widely used data mining program today; it is an open-source program developed on the Java platform. It is preferred because of its compatibility with all operating systems, and it includes data processing, classification, clustering, and data association features. This study preferred WEKA as the data mining tool since WEKA's integration with Java works better; a sketch of how the cleaned data set is handled with WEKA's Java API is given below.
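As an illustration of how such a data set could be loaded and reduced with WEKA's Java API (Weka 3.8), the minimal sketch below reads a CSV file and drops a redundant attribute. The file name and the attribute index are hypothetical placeholders, since the paper's patient data are not publicly distributed.

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Remove;

public class LoadAndReduce {
    public static void main(String[] args) throws Exception {
        // Load the preprocessed patient file (hypothetical file name).
        Instances data = new DataSource("lung_c34.csv").getDataSet();
        // The class attribute (C34 status) is assumed to be the last column.
        data.setClassIndex(data.numAttributes() - 1);

        // Data reduction: drop a redundant attribute, e.g. birth date
        // when age is already present (the index below is illustrative).
        Remove remove = new Remove();
        remove.setAttributeIndices("2"); // 1-based index of the redundant column
        remove.setInputFormat(data);
        Instances reduced = Filter.useFilter(data, remove);

        System.out.println("Instances after reduction: " + reduced.numInstances());
    }
}
```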
Application and Results
In this section, information about the lung cancer data set used, the features to be used in the study, the data mining algorithms used in the model creation phase, the software developed for this purpose, and the output of this software are explained.
Summary of the Lung Cancer (C34) Dataset Used
Data mining is extracting and interpreting data from a digital environment, and the data source must be created correctly and thoroughly [12]. The data of patients and patient candidates who sought treatment for lung cancer in Baghdad, Iraq, were acquired anonymously from the lung cancer database and used for data mining. Besides the governorate, municipality, gender, and age used in the study, the remaining attributes directly impact the findings. No matter how regular and dependable the data set is, it must be preprocessed before use; the data set was therefore preprocessed and made suitable for model construction to improve the study's accuracy and success rate. Patient records with too many missing values were purged prior to the study to obtain a relevant result, leaving a new data set with 404 patient records. This component of the study includes information concerning nine patient features (Figure 1). Of the software tools available for data mining applications, each has its own advantages and disadvantages; some were not preferred in this study for the following reasons: Yale is not easily accessible, and R is not widely used on UNIX machines, requires expert help on Windows systems, and does not have enough algorithms. Since the WEKA data mining tool is Java-based, WEKA was preferred as the data mining software in this study, on the expectation that its integration would be more efficient.
Attribute Analysis.
After completing the basic stages such as data transformation and data cleaning, the feature analysis stage was started. Since the WEKA data mining program was preferred here, the data was converted into a format suitable for WEKA, namely CSV or ARFF. The cleaned data is shown in Figure 1. The data saved in Excel (*.xls) format was opened with a text editor and converted to CSV format so that it could be read as WEKA's input data; during the conversion, the field separator was replaced by "," and the decimal "," between numbers was replaced by ".". The data obtained after this stage, referred to as the edited data set, was suitable for working with WEKA.
Generating an Appropriate Model with Data Mining Algorithms
Various classification algorithms frequently mentioned in the literature, which will be explained in detail in the following sections, were applied to the data set. The data set is divided into training and test sets to test the success of the method applied in data mining studies. This separation can be done in various ways: one method is to reserve 66% of the data set for training and 33% for testing, and to assess the test set after the system has been trained on the training set; random assignment of training and test sets is another. In this study, however, k-fold cross-validation was used with k = 10, the value most preferred in the literature. The evaluation then proceeds as follows. Obtaining and evaluating model results: algorithms frequently used in this field in the literature were applied to the data set; screenshots and evaluation results of these will be explained in detail in the following sections, and in this context the success results for each algorithm will be compared.
Choosing the most suitable model: while choosing the appropriate model, algorithms frequently mentioned in the literature were included, and the most successful algorithm was decided based on the correct prediction rate. Models were created one by one with the selected algorithms, and as a result it was decided to apply the Naive Bayes algorithm, which has the highest accuracy with 91.09%, to the data set. While choosing this algorithm, the accuracy value, run time, and mean absolute error were taken into account.
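A minimal sketch of this model-selection step with WEKA's Java API is given below: it runs the 10-fold cross-validation described above for the Naive Bayes classifier and prints the accuracy and confusion matrix. The file name is a hypothetical placeholder; in the actual study each of the ten algorithms would be evaluated this way before the best one is selected.

```java
import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidateNaiveBayes {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("lung_c34.csv").getDataSet(); // hypothetical
        data.setClassIndex(data.numAttributes() - 1);

        Evaluation eval = new Evaluation(data);
        // 10-fold cross-validation, the k value used in the study.
        eval.crossValidateModel(new NaiveBayes(), data, 10, new Random(1));

        System.out.printf("Accuracy: %.2f%%%n", eval.pctCorrect());
        System.out.println(eval.toMatrixString("Confusion matrix"));
    }
}
```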
Algorithm Design and Software Development Suitable for the Selected Model.
As a result of using the algorithms in the WEKA data mining software on the data set, detailed information was first obtained about the algorithm that can make the most appropriate prediction. Afterward, this algorithm was accessed from the developed software, the model was created, and a meaningful result was obtained [13]. During the development of the software, attention was paid to ensuring that it met the following criteria: (i) user-friendliness of the software; (ii) the user (doctor or health personnel) can enter blood value test results into the system simply; (iii) some model values used, such as F-Measure, Precision, and Recall, are displayed on the interface for informational purposes; (iv) the user is asked to enter nine attributes so that a result that can be understood and easily interpreted is produced according to the entered values. As a result of these entered values, an estimate of the C34 disease diagnosis of the patient candidate is obtained. This is an informative result for doctors and healthcare professionals.
Creating an Interface with the Developed Software and Operating the System
This study aims to produce the most accurate prediction by using the data mining method for the diagnosis of lung cancer. With the most successful of the examined algorithms, an interface was created for health personnel that takes the nine input values needed to run the algorithm and produces a meaningful prediction. After studies and research on the interface, it was decided to develop it in Java. Nine attribute values are entered on this interface screen, and the system then produces a prediction result for the diagnosis of lung cancer via the evaluate button. In the developed tool, a user-friendly interface was designed so that the test can be applied easily and understood by every doctor or healthcare worker. After the requested data are entered into the system, the developed algorithm runs, and the prediction about the disease is presented to the healthcare personnel in the line indicated as C34 Status. This information is of great importance in giving doctors and healthcare professionals an idea.
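The core of such an interface, reduced to its classification step, might look like the sketch below: a model is trained once, a single instance is assembled from the nine values entered by the user, and the predicted C34 status is returned. The file name and all attribute values shown are placeholders, not real patient data.

```java
import weka.classifiers.bayes.NaiveBayes;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class PredictC34 {
    public static void main(String[] args) throws Exception {
        Instances train = new DataSource("lung_c34.csv").getDataSet(); // hypothetical
        train.setClassIndex(train.numAttributes() - 1);

        NaiveBayes model = new NaiveBayes();
        model.buildClassifier(train);

        // Assemble one instance from the nine values entered in the interface.
        Instance patient = new DenseInstance(train.numAttributes());
        patient.setDataset(train);
        for (int i = 0; i < train.numAttributes() - 1; i++) {
            patient.setValue(i, 1.0); // placeholder for the i-th entered value
        }

        double label = model.classifyInstance(patient);
        System.out.println("C34 Status: " + train.classAttribute().value((int) label));
    }
}
```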
Models and Performance Measures Created on the Lung Cancer Dataset with WEKA
Factors such as the features selected in the data preprocessing process, the completion of missing data, the correction of extreme values, and the removal of many rows from the data set due to a large number of NULL values directly affect the model extraction; a different preprocessing process will likewise affect the success of the model [14].
In this study, many models were created with different algorithms to predict whether a person has lung cancer, using the data set consisting of records with and without a lung cancer diagnosis. In this section, the performance levels of the models created are compared. Models were created with the decision tree algorithms J48 and RandomTree (based on the ID3 and C4.5 algorithms), the Bayesian classification algorithms Naive Bayes and BayesNet, the instance-based classification algorithm KStar, the regression-based algorithms Logistic Regression and Multilayer Perceptron, and the Bagging, OneR, and ZeroR algorithms, and the performance grades of these models were compared [15]. After the data set was converted to the CSV format that the WEKA program can read, model creation was carried out. The statistics and confusion matrix of the test results of the model created with the Naive Bayes algorithm, a statistical algorithm that classifies data sets and classifies unknown data on the basis of probability, are shown in Figure 2, and the comparison criteria of the algorithm's model are shown in Figure 3. In Table 1, 62 of 81 pieces of C34 data were classified correctly, and 306 of 323 pieces of other data were classified correctly, resulting in an accuracy rate of 91.1%.
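As a check, the quoted accuracy follows directly from the confusion matrix counts:

\[
\text{Accuracy} = \frac{62 + 306}{404} = \frac{368}{404} \approx 0.911 = 91.1\%.
\]

The accuracy rates quoted for the remaining algorithms below can be verified from their confusion matrix counts in the same way.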
Logistic Regression Algorithm and Performance Measure
Another algorithm applied for comparison purposes is the logistic regression algorithm, one of the regression-based methods. After applying this algorithm to the data set, Figure 3 and Table 1 were obtained: 53 of 81 pieces of C34 data were classified correctly, and 306 of 323 pieces of other data were classified correctly, resulting in an accuracy rate of 88.9%.
(1) Comparison of Created Models. Naive Bayes, BayesNet, Logistic Regression, Multilayer Perceptron, KStar, Bagging, OneR, ZeroR, J48, and finally Random Tree algorithms were applied to the preprocessed data source, and models were created. Statistical information, confusion matrices, and comparison criteria of these models are shown in detail in the previous sections and in Figure 3. To make a better comparison, the accuracy, precision, sensitivity, and F-Measure values of each model are shown in Figure 3. When the values in Figure 3 are examined, it can be said that the Naive Bayes algorithm produces the best result, with an accuracy rate of 91.1%. The accuracy criterion is the most basic and, at the same time, the most important criterion. According to this criterion, the Naive Bayes algorithm is followed by the Bagging, Multilayer Perceptron, KStar and BayesNet, Random Tree, Logistic Regression, J48, OneR, and ZeroR algorithms, respectively.
Multilayer Perceptron Algorithm and Performance Criteria
The Multilayer Perceptron consists of an input layer with the input neurons, an output layer with the output neurons, and one or more hidden layers. The input layer receives the inputs of the multilayer network and transmits them to the middle layer; the processing elements in each layer are connected to all the processing elements in the next layer, and the algorithm works in this way. The Multilayer Perceptron algorithm, generally used to solve nonlinear problems, was applied to the data set used in the study, and a model was created. The confusion matrix of this model is shown in Figure 2, and the comparison criteria in Figure 3. In Table 1, 62 out of 81 pieces of C34 data were classified correctly, and 302 out of 323 pieces of other data were classified correctly, resulting in an accuracy rate of 90.1%. KStar algorithm and performance criteria: in this section, the model was created with the KStar algorithm, an instance-based learning algorithm widely used in the literature and included in WEKA. The confusion matrix of this model is given in Figure 2, and the comparison criteria in Figure 3. In Table 1, 76 out of 81 pieces of C34 data were classified correctly, and 287 out of 323 pieces of other data, corresponding to an accuracy rate of 89.9%.
Bagging Algorithm and Performance Criterion.
Another data mining algorithm applied to the data set is the Bagging algorithm, found among the subheadings of the Meta tab in WEKA [7][8][9][16][17][18][19][20]. In Bagging, a training set with N samples is produced from the original training set of N samples by random selection with replacement. In this case, some training samples are not included in the new training set (approximately 37%, since the probability that a given sample is never drawn is (1 − 1/N)^N → e^−1 ≈ 0.37), while some are included more than once. Each base learner in the ensemble is trained with training sets containing different examples produced in this way, and the results are combined by majority voting. The confusion matrix of the model created with the Bagging algorithm is presented in Figure 2, and the comparison criteria in Figure 3. In Table 1, 58 of 81 pieces of C34 data and 309 of 323 pieces of other data were classified correctly, resulting in an accuracy rate of 90.8%.
OneR Algorithm and Performance Criterion
The OneR ("one rule") algorithm is one of the algorithms found under the subheadings of the Rules tab in WEKA; it creates rule trees by testing a single feature and is frequently mentioned in the literature. For this reason, it was preferred for creating and comparing models in the solution phase of the problem. The confusion matrix of the model created with this algorithm is given in Figure 2, and the comparison criteria in Figure 3. In Table 1, 47 of 81 pieces of C34 data were classified correctly, and 304 of 323 pieces of other data were classified correctly, resulting in an accuracy rate of 86.9%.
ZeroR Algorithm and Performance Criterion.
The ZeroR algorithm is a simple algorithm that estimates the average value of numerical test data or the mode of nominal test data, applying the rules of basic covering algorithms [21]. ZeroR only tries to detect the majority class distribution, which is assumed as a rule: the arithmetic mean is estimated for numeric class attributes and the mode class value for nominal class attributes. ZeroR produces no rules other than this [22]. The confusion matrix of the model created with this algorithm is given in Figure 2, and the comparison criteria in Figure 3. In Table 1, all 81 pieces of C34 data were misclassified, and all 323 pieces of other data were classified correctly, resulting in an accuracy rate of 80%. J48 algorithm and performance criteria: the confusion matrix of the model created with this decision tree algorithm is shown in Figure 2, and the comparison criteria in Figure 3. In Table 1, 59 out of 81 pieces of C34 data were classified correctly, and 297 out of 323 pieces of other data were classified correctly, resulting in an accuracy rate of 88.1%.
Random Tree Algorithm and Performance Criteria.
The Random Tree algorithm, another decision tree algorithm frequently used in the literature, was used for comparison purposes during model creation. The confusion matrix of the model created with this algorithm is shown in Figure 2, and the comparison criteria in Figure 3. In Table 1, 64 out of 81 pieces of C34 data were classified correctly, and 297 out of 323 pieces of other data were classified correctly, corresponding to an accuracy rate of 89.4%.
Discussion and Findings
Data mining aims to extract meaningful information from large data piles in the digital environment and to evaluate this information. The biggest problem in studies conducted for this purpose is that data sources contain incorrect data or incompletely entered attribute values. These problems were encountered in the data set used in this study. For the reliability of the application results, the data set was subjected to a detailed preprocessing process, involving many operations such as cleaning the data, removing inconsistent and noisy data, deleting rows with too many null values, pruning extreme values, and data reduction.
In data mining, different methods are used in the stage of accessing information, and there are many algorithms for these methods. There are many studies on which of these algorithms are more successful, and their results differ from each other. The main reason is that success depends on the data set used, the preprocessing of the data, and the selection of algorithm parameters; it is normal for different researchers to obtain different results with different parameters on different data sets. In this study, the Naive Bayes, BayesNet, Logistic Regression, Multilayer Perceptron, KStar, Bagging, OneR, ZeroR, J48, and Random Tree algorithms were applied to the data set, and a model was created with each algorithm. The most successful result was obtained with the Naive Bayes algorithm.
Because of its 91.1% success rate, the Naive Bayes algorithm was used in the developed software, which is accordingly 91.1% successful in giving ideas to physicians.
Survey Study on Physician's Opinions.
A questionnaire with evaluation criteria between 1 and 5 was sent to professionals in the field to get their feedback on the system. Twenty physicians from several hospitals were asked for their opinions, and the average score for each question was calculated. In this context, the survey questions and the physicians' responses were evaluated; in the results table, scores of 4 and 5 were marked green, 1 and 2 red, and 3 yellow. Question 3, which asks about the number of attributes used in the study, received a lower score than the other questions, with the lowest average of 3.4: some doctors felt that more attributes would be more useful in assessing the condition, while others felt the number was adequate.
Conclusion and Recommendations
This study used a suggestion system to help diagnose lung cancer. Various algorithms were applied to the data set to construct models. Java software was developed using the most successful algorithm, yielding the software presented to healthcare professionals. After the data was entered by the data entry employees, it was examined and converted to a single standard, and models were constructed on the data set and compared to see which algorithm produced the best model. The models were built using WEKA data mining. Ten algorithms were chosen from the WEKA data mining algorithms for comparison based on their popularity and on literature studies: Naive Bayes, BayesNet, Logistic Regression, Multilayer Perceptron, KStar, Bagging, OneR, ZeroR, J48, and Random Tree. The Naive Bayes algorithm, a statistical algorithm, provided a better model than the other algorithms when these models were compared. The most accurate model was Naive Bayes (91.1%); its closest competitor, Bagging, came in second with 90.8%; and the analysis found that the ZeroR algorithm had the lowest success rate at 80%.
Using the provided interface, the software creates an estimate of the C34 diagnosis for the user based on the values entered. It has an adaptable, updateable, user-friendly interface and an accessible structure that can keep up with technological developments. Using the Naive Bayes method, this study achieved a 91.1% success rate, comparing favorably with other studies in this field.
Recommendations.
This study can be extended in several directions: it can be repeated on data sets in different categories, used to identify different cancer types, compared using more algorithms, expanded by using a data mining tool other than WEKA, or enlarged in the number of features and data used, and the resulting accuracy can then be evaluated.
Data Availability. The data underlying the results presented in the study are available within the manuscript.
Convergence of the finite element method applied to an anisotropic phase-field model
We formulate a finite element method for the computation of solutions to an anisotropic phase-field model for a binary alloy. Convergence is proved in the H1-norm. The convergence result holds for anisotropy below a certain threshold value. We present some numerical experiments verifying the theoretical results. For anisotropy below the threshold value we observe optimal order convergence, whereas in the case where the anisotropy is strong the numerical solution to the phase-field equation does not converge.
Introduction
In this paper we study a finite element method for the numerical computation of an anisotropic phase-field model for a binary alloy. For details on the modelling and physical background to this model we refer to Warren and Boettinger [12] and Kessler et al. [7]. Existence for the anisotropic model was proved by Burman and Rappaz in [1], and convergence of a finite element method in the isotropic case was proved in Kessler and Scheid [8]. Other work on convergence of the finite element method for isotropic phase-field models includes Chen and Hoffman [3], Feng and Prohl [5], and Chen et al. [2]. The anisotropy (as introduced in Kobayashi [9]) permits the modelling of branches in models of dendritic growth but makes the second order operator strongly nonlinear. However, we show that this operator is strongly monotone, under a certain convexity condition, and Lipschitz continuous. The convergence of the finite element method is proved under regularity assumptions close to the regularity proved for the isotropic model.
We consider a binary alloy of two pure elements in both liquid and solid states inside a Lipschitz domain Ω ⊂ R². The system is characterized by a relative concentration c = c(x, t), where the value c = 1 corresponds to the situation with only one element present and c = 0 with only the other, and by an order parameter φ = φ(x, t) (the phase-field), which takes values between 0 and 1. The value φ = 0 corresponds to a solid region and the value φ = 1 to a liquid region. The nonlinear parabolic system then takes the following form (see Burman and Rappaz [1] for details):

\[
\frac{\partial \varphi}{\partial t} - \operatorname{div}\big(A(\nabla\varphi)\nabla\varphi\big) - S(c,\varphi) = 0 \quad \text{in } \Omega\times(0,+\infty), \tag{1.1}
\]
\[
\frac{\partial c}{\partial t} - \operatorname{div}\big(D_1(\varphi)\nabla c + D_2(c,\varphi)\nabla\varphi\big) = 0 \quad \text{in } \Omega\times(0,+\infty), \tag{1.2}
\]
\[
A(\nabla\varphi)\nabla\varphi\cdot n = 0 \quad \text{on } \partial\Omega\times(0,+\infty), \tag{1.3}
\]
\[
\big(D_1(\varphi)\nabla c + D_2(c,\varphi)\nabla\varphi\big)\cdot n = 0 \quad \text{on } \partial\Omega\times(0,+\infty), \tag{1.4}
\]
\[
\varphi(0) = \varphi_0, \quad c(0) = c_0 \quad \text{in } \Omega, \tag{1.5}
\]

where ∂Ω is the boundary of Ω and n is the unit normal to ∂Ω. We define the anisotropy matrix A(ξ) for ξ ∈ R² by

\[
A(\xi) = a(\theta_\xi)^2 I + a(\theta_\xi)\,a'(\theta_\xi)\,R, \qquad R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},
\]

where θ_ξ denotes the angle between the x-axis and the vector ξ, and the function a(θ) is given by a(θ) = 1 + ā cos(kθ), with k > 1 an integer corresponding to the number of branching directions. The functions S, D₁ and D₂ appearing in (1.1)–(1.5) are Lipschitz continuous, with first derivatives with respect to φ and c uniformly bounded, satisfying S(c, 0) = S(c, 1) = 0, 0 < D_s ≤ D₁(φ) < D_l, and D₂(c, φ) = 0 for c = 0 and 1 and φ = 0 and 1. In practice, for the numerical computations we choose the explicit forms (1.6)–(1.12) of the nonlinear functions given in Kessler et al. [7], valid for 0 ≤ φ, c ≤ 1; outside this interval all these functions are extended continuously by a constant. We remark that by integrating equation (1.2) on Ω and by using (1.4) we obtain conservation of mass. In Burman and Rappaz [1] we proved existence of a weak solution for this strongly nonlinear parabolic system under certain assumptions on the parameter ā. To fix notation, we denote by V = H¹(Ω), by V′ the dual space of V, and by T the final time; the L²-scalar product is denoted (·,·)_Ω, the corresponding norm ‖·‖_Ω, and Q_T = Ω × (0, T). In [1] the mappings S and D₂ are moreover extended by zero outside the unit square (0, 1) × (0, 1).
A finite element method
We discretize the above system of equations using P₁ Lagrangian finite elements in space and a semi-implicit Euler scheme in time. Let T be a triangulation of Ω. For any triangle K ∈ T, we denote by h_K its diameter and set h = max_{K∈T} h_K. Let V_h be the finite element space

\[
V_h = \{\, v \in C^0(\bar\Omega) : v|_K \in P_1(K) \ \text{for all } K \in T \,\},
\]

where P₁(K) denotes the set of polynomials of degree 1 on K. For an integer N > 0 we introduce τ = T/N and tⁿ = nτ, n = 0, 1, 2, .... We consider a fully discrete scheme in which, given (φⁿ_h, cⁿ_h), the pair (φⁿ⁺¹_h, cⁿ⁺¹_h) is computed for n = 0, 1, 2, .... Note that to compute (φⁿ⁺¹_h, cⁿ⁺¹_h) we only need to solve a nonlinear system for φⁿ⁺¹_h. We can prove, in the same way as in [1], that given (φⁿ_h, cⁿ_h) both equations have a unique solution (φⁿ⁺¹_h, cⁿ⁺¹_h) for time steps τ sufficiently small. Existence of φⁿ⁺¹_h is proved using direct methods in the calculus of variations (see Dacorogna [4]), and the existence of cⁿ⁺¹_h follows by a standard application of the Lax–Milgram lemma.
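A semi-implicit scheme of the kind described, implicit in the anisotropic operator and explicit in the coupling terms, can be sketched in weak form as follows; this is a sketch consistent with the remarks above, not necessarily the exact scheme (2.1). Given (φⁿ_h, cⁿ_h) ∈ V_h × V_h, find φⁿ⁺¹_h ∈ V_h and then cⁿ⁺¹_h ∈ V_h such that for all v_h ∈ V_h:

\[
\Big(\frac{\varphi_h^{n+1}-\varphi_h^{n}}{\tau}, v_h\Big)_\Omega
+ \big(A(\nabla\varphi_h^{n+1})\nabla\varphi_h^{n+1}, \nabla v_h\big)_\Omega
= \big(S(c_h^{n},\varphi_h^{n}), v_h\big)_\Omega,
\]
\[
\Big(\frac{c_h^{n+1}-c_h^{n}}{\tau}, v_h\Big)_\Omega
+ \big(D_1(\varphi_h^{n})\nabla c_h^{n+1} + D_2(c_h^{n},\varphi_h^{n+1})\nabla\varphi_h^{n+1}, \nabla v_h\big)_\Omega = 0.
\]

With this structure only the first equation is nonlinear (in φⁿ⁺¹_h), while the second is linear in cⁿ⁺¹_h, matching the existence arguments cited above.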
The nonlinear operator
To prove the convergence of the finite element scheme we need to extend the analysis concerning boundedness and continuity of the nonlinear operator. First we recall some fundamental results from [1], which we state here without proofs.
Lemmas 3.1 and 3.2 of [1] concern the Ginzburg–Landau potential associated with the anisotropic operator: the potential is well defined on V, and its Gateaux derivative exists for each φ ∈ V. Lemma 3.3 of [1] states that the anisotropic operator satisfies upper and lower bounds. We also recall some results on Eulerian operators derived from Gateaux-differentiable functionals.
Definition 3.4 introduces the Eulerian operator derived from a Gateaux-differentiable functional, where (·,·) denotes the duality pairing between V′ and V. In addition to these results we need the following lemma, stating that the nonlinear operator defined by (3.3) is strongly monotone and Lipschitz continuous with respect to the H¹(Ω)-seminorm, |·|_V.

Lemma 3.5: If the convexity condition ā < 1/(k² − 1) holds, then for all φ, ψ ∈ V,

\[
\big(A(\nabla\varphi)\nabla\varphi - A(\nabla\psi)\nabla\psi,\ \nabla(\varphi-\psi)\big)_\Omega \;\ge\; \mu_{\bar a}\,|\varphi-\psi|_V^2, \tag{3.4}
\]
\[
\big\|A(\nabla\varphi)\nabla\varphi - A(\nabla\psi)\nabla\psi\big\|_\Omega \;\le\; L\,|\varphi-\psi|_V, \tag{3.5}
\]

where L is a constant independent of φ and ψ, and ‖·‖_Ω is the L²-norm of vectorial functions.
Proof: The first inequality is a consequence of the fact that the Eulerian operator derived from a convex functional is monotone. We consider the perturbed Ginzburg–Landau functional. It is easy to see that for some sufficiently small µ_ā > 0 this functional remains convex: we consider the corresponding Hessian matrix H(ξ) of ξ ↦ a(θ_ξ)²|ξ|²/2 − (µ_ā/2)|ξ|² in polar coordinates, for ξ ≠ 0, where O_θ denotes the matrix of rotation by the angle θ. It is easy to show [1] that if ā < (k² − 1)⁻¹, then a(θ) + a″(θ) > 0, and H(ξ) remains positive definite for ξ ≠ 0 when µ_ā is small enough.
Hence, by the monotonicity of the corresponding Eulerian operator, we obtain (3.4); here · denotes the scalar product in R², η^T is the transpose of η, and H is the above Hessian matrix with µ_ā = 0. Since the spectral norm of H is bounded independently of ξ, we easily obtain inequality (3.5). The convexity of the functional is of essential importance for the well-posedness of the system. To illustrate how convexity is lost, we plot the contour lines of the integrand of the functional (3.2) with k = 4 for the cases ā = 0.05 and ā = 0.15 in Fig. 1. Note the non-convex zones appearing around θ = nπ/2, n = 0, 1, 2, 3, when the anisotropy parameter is larger than 1/15. These non-convex zones correspond to "forbidden" gradient directions and will give rise to corners and rapidly oscillating gradients in the finite element approximation (2.1) of the equation for φ, as we will see in the numerical section.
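The threshold ā < 1/(k² − 1), and the value 1/15 quoted for k = 4, can be checked directly from a(θ) = 1 + ā cos(kθ):

\[
a(\theta) + a''(\theta) = 1 + \bar a\cos(k\theta) - \bar a k^{2}\cos(k\theta)
= 1 - \bar a\,(k^{2}-1)\cos(k\theta) \;\ge\; 1 - \bar a\,(k^{2}-1),
\]

so a(θ) + a″(θ) > 0 for all θ exactly when ā < 1/(k² − 1). For k = 4 branching directions this gives ā < 1/15 ≈ 0.067, consistent with the loss of convexity visible in Figure 1 for ā = 0.15 but not for ā = 0.05.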
Regularity hypothesis
The strong nonlinearity in the anisotropic operator makes a priori estimates on higher order derivatives very difficult to prove, especially since A(ξ)ξ is not differentiable at ξ = 0. For the isotropic problem, on the other hand, quite extensive regularity results were proved in Rappaz and Scheid [10], provided that the initial data are sufficiently regular, that is to say φ₀ ∈ H²(Ω), ∂φ₀/∂n = 0 on ∂Ω and c₀ ∈ H¹(Ω). This, however, does not suffice to show convergence of the finite element method. In the sequel we will assume that there exists a unique solution (φ, c) of the system (1.1)–(1.5) such that both φ and c enjoy the regularity proved for φ in the isotropic case and, in addition, that the gradients are bounded on the space-time domain; we write (φ, c) ∈ W for this regularity class. This assumption is reasonable as long as the anisotropic functional remains strictly convex, so that the strong monotonicity (3.4) holds. To make these assumptions sufficient for the convergence proof we still need to show that they imply sufficient regularity of the time derivatives. For this we assume that the nonlinear terms S, D₁, D₂ have bounded first derivatives in φ and c. We show formally in the following lemma that this implies the necessary regularity of the time derivatives.
Lemma 4.1: Under the above regularity hypothesis, the time derivatives ∂φ/∂t and ∂c/∂t satisfy the bounds (4.1) required in the convergence proof.
Proof: We only give the proof for the strongly nonlinear equation for φ, since the proof for the concentration is similar. Formally differentiating equation (1.1) with respect to t, we study the resulting equation in weak form. A straightforward calculation using the definition of the nonlinear operator then yields a bound on the time derivative; taking the square of this inequality and integrating in time yields (4.1) for φ.
Convergence of the finite element method and error estimate
Theorem 5.1 states that, under the regularity hypothesis of Section 4 and the convexity condition ā < 1/(k² − 1), the finite element solution of scheme (2.1) converges with order O(h + τ) in the L²(0,T;H¹(Ω))-norm.

Proof: In the sequel, L_A, L_S, L_{D₁} and L_{D₂} denote the Lipschitz constants associated with the operators A(∇φ)∇φ, S(φ, c), D₁(φ) and D₂(c, φ), and D_max the upper bound of the diffusion coefficients. Using the finite element formulation (2.1), with the notation φⁿ_Δ = φⁿ_h − φ(tⁿ) and the equality (5.2), we obtain an error equation valid for all test functions in V_h. It then follows by Lemma 3.5 that the error in the phase-field satisfies a discrete energy inequality. Multiplying by τ and summing over n we obtain, using summation by parts, a bound on the accumulated error. We proceed by adding and subtracting φ(tⁿ⁺¹) in the two last terms on the right-hand side and using the Cauchy–Schwarz inequality in combination with Young's inequality. We then eliminate the second term on each side of the inequality and use the Lipschitz continuity of the source terms, together with a duality argument for the second derivative in time, to obtain (5.6) and (5.7), applying the Cauchy–Schwarz and Young inequalities repeatedly. Using (5.6) and (5.7) in (5.5) and collecting terms, we obtain (5.8), where cⁿ_Δ = cⁿ_h − c(tⁿ). We turn to the equation for the concentration c and obtain, in the same fashion, (5.9). We use the formulation (2.1) to replace the cⁿ_h in cⁿ_Δ by an arbitrary function in V_h. Proceeding as above, we bound the terms I₁ and I₅ using Young's inequality.
We treat I₂ to I₄ in the same spirit as (5.6), leading to analogous bounds. Multiplying by τ and summing over n in equation (5.10) yields (5.12). Finally, we multiply (5.12) by η = µ_ā D_s/(64 D_max²), add (5.8) and (5.12), and apply the discrete Gronwall lemma. Since our regularity assumptions in particular allow interpolation, we may choose the comparison functions as interpolants, where π_h denotes the interpolation operator. The theorem now follows by a standard interpolation estimate.
Remark 5.2: Note that the exponential factor α is of the order of ‖∇φ‖², which under the regularity hypothesis should be of the order δ⁻² if δ denotes the interface thickness. This is the typical worst-case estimate for phase-field equations (see for instance Kessler and Scheid [8] or Chen and Hoffman [3]). However, in a recent paper Feng and Prohl [5] show that for the isotropic thermal phase-field equation, an estimation of the smallest eigenvalue of the linearized operator permits a priori estimates which grow only at low polynomial order in δ⁻¹, provided that all interior layers are developed in the initial data.
Numerical tests
Implementation of the numerical scheme (2.1) was done using the finite element package ALBERT developed by Schmidt and Siebert [11]. We have set up tests to obtain the experimental numerical convergence order of the scheme in the L²(0,T;H¹(Ω)) norm and compare it with the theoretical result of Theorem 5.1. We have also measured experimental orders of convergence in the L∞(0,T;L²(Ω)) norm. Tests have been run using both low and high anisotropy.
Implementation of numerical tests
For the tests, the parameters of the nonlinear functions defined in (1.6)–(1.12) are set to fixed values. We have treated the nonlinearities of the numerical scheme (2.1) with just one step of a fixed-point method. Quadrature is exact for polynomials of degree 3, except for the mass matrices, for which we used mass lumping.
By adding extra artificial source terms, we have imposed exact solutions on both equations, with which to compare the numerical solutions. We chose solutions that reproduce some features expected of the solutions of system (1.1)–(1.5). Namely, across the solid-liquid interface the phase-field is known to have a hyperbolic-tangent-like profile while its values change from 0 to 1, while the concentration goes smoothly from values close to a small constant c_s in the solid region to a large constant c_l on the liquid side of the interface, and then down to an intermediate value c₀ in the liquid bulk phase [6]. Also, we assume that the propagation velocity of the interface varies with the interface's normal direction, proportionally to the anisotropy function a(θ).
Since we would like to see all interface directions in our test, we choose exact solutions whose initial conditions represent a circular interface separating bulk solid and liquid phases with different concentrations.The transition is smooth as described above.The system then evolves, and the interface moves outward, with local velocities depending on the interface's normal direction, assimilated in the definition of the test solutions to the angle in local polar coordinates.
Let us define polar coordinates (ρ, θ) associated with the position x, where ρ = ‖x‖ and θ is the polar angle of x. In the sequel we will always be working in the space-time domain [0, 1]³.
Let ρ₀ and v be given constants representing the radius of the initial circular interface and the solidification front velocity, in terms of which we define an auxiliary profile function and the imposed phase-field solution (6.8). Let c₀, c_s, c̄_l and ρ_Δ be given constants representing the values of the concentration in the liquid and solid bulks, a value proportional to the concentration on the liquid side of the interface, and a δ-rescaled shift from the center of the interface; from two further auxiliary functions we define the imposed concentration solution (6.10). For the tests, we fix these numerical constants to ρ₀ = 0.2, v = 0.6, c₀ = 0.4, c_s = 0.2, c̄_l = 0.8 and ρ_Δ = 1.
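For orientation, a profile with the qualitative features described above, a hyperbolic-tangent transition across a circular interface of initial radius ρ₀ expanding at speed v·a(θ), is

\[
\varphi(\rho,\theta,t) \;=\; \frac{1}{2}\Big(1+\tanh\Big(\frac{\rho-\rho_0 - v\,a(\theta)\,t}{\delta}\Big)\Big),
\]

which should be read only as an illustration of the shape of the imposed solution (6.8), not as its exact definition.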
As an illustration, radial profiles of the imposed solutions (6.8) and (6.10) for t = 0.5 and θ = 0 are shown in Figure 2, as well as level sets of φ for both low and high anisotropy at the final time t = 1 in Figure 3.
In the implementation, exact solutions (6.8) and (6.10) are used as initial and Dirichlet boundary conditions, and their derivatives are combined to define artificial source terms added to both equations, ensuring that they are then solutions of the differential system.
Results of numerical tests for low anisotropy
We now present numerical results for a low anisotropy ā = 0.05, which is within the scope of the theory presented in this paper. We performed two series of tests: one in which the timestep size was decreased linearly with the space mesh size, and another where the timestep size was decreased quadratically with the mesh size. We are interested in the experimental orders of convergence (OC) for the L²(0,T;H¹(Ω)) norm of the error, which is covered by the theory, and also for the L∞(0,T;L²(Ω)) norm, for which the convergence rate predicted by our theoretical result is expected to be suboptimal. From the results in Tables 1–4, we verify experimentally that the order of convergence h + τ predicted by Theorem 5.1 for ‖e‖_{L²(0,T;H¹(Ω))} is optimal. However, we can also conclude that this order of convergence is suboptimal for ‖e‖_{L∞(0,T;L²(Ω))}, which converges faster, at a rate h² + τ. The experimental orders of convergence are also illustrated by Figures 4–5, made in log-log scale using the data from Tables 1–4.
Results of numerical tests for high anisotropy
For the high anisotropy, we take ā = 0.10. We observed in numerical tests with the imposed solutions (6.8) and (6.10) that the numerical solution is unable to reproduce the features of a presumably H¹-regular solution in regions where the normal direction to the level sets of the solution points at an angle close to 0, π/2, π or 3π/2. This corresponds to non-convex portions of the Frank diagram, i.e. the graph of a level set of the anisotropy energy as a function of ∇φ. These are the angles that physicists call "forbidden angles". Qualitatively, the level sets of the numerical solution seem to avoid "forbidden angles" in the imposed solution by zigzagging at "permitted angles", much like a sailboat zigzags in an effort to sail upwind, using only directions in which it is possible to sail.
We have also tried imposing different solutions, planar-front equivalents of (6.8) and (6.10), which contain only one direction. We have chosen a forbidden direction (θ = 0) and a permitted direction (θ = π/4) as examples. The corresponding imposed solutions are the same as (6.8) and (6.10), with a change of definition for ρ and θ, which are no longer the polar coordinates: the angle θ is instead the constant 0 or π/4, whereas ρ is defined as x₀ or (x₀ + x₁)/√2, respectively. The qualitative behavior of these solutions can be seen in Figure 6. We now present numerical evidence of convergence in the L² norm even for the "forbidden direction" test, and in the H¹ norm only in the case of a "permitted direction". Apparently, the zigzagging behaviour still allows the solid-liquid front to evolve with a correct average velocity, but with wrong local gradients. For the sake of brevity, in this section we present only results for φ, with the timestep decreasing quadratically with the mesh size. We have observed that in the high anisotropy regime c always converges better than φ, and its level sets never present the zigzagging behavior. From the results presented in Tables 5–6 and Figure 7, we conjecture that the result of Theorem 5.1 still holds in the high anisotropy case whenever the level sets of the solution have normals in the region where the anisotropy operator is still convex.
Figure 6: Level sets of φ for forced solutions with high anisotropy.
Shot-noise measurements of single-atom junctions
Current fluctuations related to the discreteness of charge passing through small constrictions are termed shot noise. This unavoidable noise provides both advantages - being a direct measurement of the transmitted particles' charge, and disadvantages - a main noise source in nanoscale devices operating at low temperature. While better understanding of shot noise is desired, the technical difficulties in measuring it result in relatively few experimental works, especially in single-atom structures. Here we describe a local shot-noise measurement apparatus, and demonstrate successful noise measurements through single-atom junctions. Our apparatus, based on a scanning tunneling microscope operates at liquid helium temperatures. It includes a broadband commercial amplifier mounted in close proximity to the tunnel junction, thus reducing both thermal noise and the input capacitance that limit traditional noise measurements. The full capabilities of the microscope are maintained in the modified system and a quick transition between different measurement modes is possible.
I. INTRODUCTION
Due to the statistical nature of the transmission of particles through non-transparent junctions, current fluctuates in time. These fundamental fluctuations, related to the discreteness of charge-carrying particles, lead to the so-called shot noise in small constrictions [1]. It is precisely this sensitivity to the charge of the transmitted particles (or quasi-particles) which entails great potential for the study of electron-electron correlation effects and exotic electronic states (for a review see Ref. [2]). For instance, shot-noise measurements have provided evidence of fractional charges in the quantum Hall effect [3][4][5] and of Cooper-pair transport in superconductors [6][7][8]. They may further yield information on the scattering processes at Kondo impurities [9][10][11][12][13][14][15][16][17].
Hence, shot noise provides valuable information on fundamental physical processes beyond other experimental techniques. Combined with atomic-scale imaging, it could open the door to the characterization of electron correlations in nanoscale materials. Unfortunately, the measurement of shot noise is a challenging and demanding task. Firstly, other noise sources are ever-present in real systems and need to be disentangled. Secondly, the shot-noise signal is very small (∼ 100 fA/√Hz). As a result, its measurement requires both minimizing other sources of noise and simultaneously amplifying the shot-noise component.
The main unavoidable sources of noise are thermal [18,19] and 1/f (also called pink) noise [20]. To reduce thermal noise, the junction itself needs to be placed at low temperatures. Yet, this is not sufficient, since all electronic components in the setup also contribute to the total thermal noise. It is thus important that measurement-related electronics are also placed in a low temperature environment. The second prerequisite is to measure at high enough frequencies, where the amplitude of the 1/f noise can be disregarded, typically well above 1 kHz. At higher frequencies, however, there is a natural cutoff due to the system's finite resistance and capacitance to ground, which form an effective low-pass filter. Finally, one has to address 50-(or 60-)Hz noise, radiated from all grid-powered electronic devices and coupled to the measurement setup. This 50-Hz noise is practically unavoidable, but can be reduced by properly grounding the setup (adopting a star-shaped grounding scheme and providing an independent ground), supplying power using batteries, and connecting isolation transformers to buffer those devices that are not battery powered.
To interpret the shot-noise signal and to draw conclusions on the physical properties of the junction, we briefly describe the fundamental properties of transport through mesoscopic systems [2]. The transmission of non-interacting electrons passing through a narrow constriction is expected to follow a Poisson distribution. The associated noise in the current is determined by the total conductance: S_I ≡ S_P = 2eV G₀ Σ_n τ_n = 2e⟨I⟩, where e is the electron charge, G₀ = 2e²/h is the conductance quantum, τ_n the transmission probability of the n-th channel, I the current, and h the Planck constant. The total conductance is defined by the sum over all transmission channels, G = G₀ Σ_n τ_n. Here we assumed zero temperature (T), that the current follows Ohm's law (I = V G), and τ_n ≪ 1 for all channels. This description of noise holds true when each electron passing through the junction finds an empty state on the other side. This is not the case for large junction transparencies, when a forward-traveling electron finds an occupied state in the other lead and needs to be back-scattered so as not to violate Pauli's exclusion principle. This process effectively leads to a suppression of shot noise. The resulting noise can be described by the sum of products of transmissions and reflections: S_I = 2eV G₀ Σ_n τ_n(1 − τ_n) ≤ S_P. The degree of shot-noise suppression is most conveniently expressed as the ratio between the measured noise and the Poissonian value. It is termed the Fano factor and defined as F ≡ S_I/S_P = Σ_n τ_n(1 − τ_n) / Σ_n τ_n. It ranges between zero, when all conduction channels are fully open, and 1 in the limit of τ_n ≪ 1, as the noise approaches the Poissonian distribution. The Fano factor can be generalized for the transport of other charge carriers carrying a charge q; in this case, the Fano factor also stores information regarding the charge of the tunneling particle, F ∝ q/e. At finite temperatures, adopting a full quantum-mechanical treatment results in a single equation for the current noise that includes both thermal- and shot-noise contributions,

\[
S_I = 4 k_B T\, G_0 \sum_n \tau_n^2 \;+\; 2eV \coth\!\Big(\frac{eV}{2 k_B T}\Big)\, G_0 \sum_n \tau_n(1-\tau_n), \tag{1}
\]

where k_B is the Boltzmann constant and the energy dependence of τ_n is neglected [2]. At low voltages, eV ≪ k_B T, Eq. 1 reduces to the well-known thermal noise S_Th = 4k_B T G. In contrast, the high-voltage limit is temperature independent, restoring the T = 0 shot-noise dependence S_I ≈ 2e⟨I⟩F. This suggests that measuring at higher voltage values can separate shot noise from thermal noise. However, this solution is of limited use, because the system may exhibit a nonlinear current-voltage characteristic. Furthermore, since the 1/f noise increases quadratically with the applied voltage, at higher voltages it is likely to dominate the signal at larger frequencies.
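Two limiting cases of the Fano factor are worth keeping in mind for the measurements below:

\[
\tau_n \ll 1 \;\Rightarrow\; F = \frac{\sum_n \tau_n(1-\tau_n)}{\sum_n \tau_n} \approx 1
\quad\text{(tunneling regime, Poissonian noise)},
\]
\[
\tau_1 = 1 \;\Rightarrow\; F = 0
\quad\text{(a single fully open channel, full suppression)},
\]

while a single channel with τ₁ = 1/2 maximizes the suppression term with F = 1/2. The second case is the one expected for a gold single-atom contact, whose transport is carried by a single s-channel.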
Similarly to measurements of mesoscopic systems, STM-based ones follow one of two main schemes. The first approach uses a room-temperature broadband amplifier connected in parallel to the current line to detect the shot-noise signal [14,28]. This technique is relatively easy to implement. The broadband noise amplifier is generally insensitive to the input resistance and allows one to measure well above the 1/f noise. Its main drawback is that it requires post-measurement data correction to compensate for signal loss due to the low-pass filter formed by the tunnel junction resistance (R_J) and the relatively large capacitance at the input of the room-temperature amplifier, related to the current wire's length. Furthermore, since many of the electronic components necessary for the measurement are situated in ambient conditions, thermal noise contributions are significant. The second approach uses a low-temperature resistance-inductance-capacitance (RLC) tank circuit and amplifier to sample the noise at the circuit's resonance frequency [21,38,39]. The circuit design is typically such that the resonance is in the MHz regime, well above the 1/f noise. This technique is very flexible in terms of the input resistance and can be used both in tunneling and contact (STM tip touching the surface) modes. Moreover, due to the high frequencies used, averaging times are very short, thus reducing the influence of mechanical instabilities and drifts. The drawback of this circuit design is that it requires a calibration of the amplification factor for each specific junction conductance value, as R_J affects the amplification and resonance width.
Here, we combine the advantages of amplification at low temperature and close to the junction using a broadband amplifier, following developments in break-junction noise spectroscopy [33]. We insert such a circuit into a low-temperature STM, which allows us to measure single-atom junctions and to map the noise signal with atomic resolution. Our technique permits measurement at variable junction-resistance values, while profiting from a low thermal-noise background and avoiding the need for data corrections thanks to the low input capacitance, estimated to be 20 pF (see dashed black fit in Fig. 4). We are thus able to measure shot noise in tunnel junctions with conductance values (G_J = 1/R_J) as low as 0.01 G₀ (see Fig. 4).
The article is structured as follows: we start by presenting our measurement setup in Sec. II. Next, we describe our measurement procedure in Sec. III, and demonstrate the apparatus capabilities by measuring shot noise on the (111) surface of gold. In Sec. IV we finish with a short discussion of the results.
II. SHOT-NOISE MEASUREMENT SETUP
Our shot-noise setup is integrated into a modified CreaTec STM operating under ultra-high vacuum (UHV) conditions and at a temperature of 4.3 K, sustained by a liquid-helium bath cryostat. Normal scanning and spectroscopy modes of the STM remain fully operational in this integrated design. The main component of the modification is a low-temperature commercial broadband, dual-channel amplifier (Stahl Electronics, model CX-4) installed close to the STM junction. (A block diagram of our setup is presented in Fig. 1.) The amplifier is mounted on a cold finger thermally shorted to the helium reservoir. A very good thermal contact is required between the amplifier and the cold finger in order to assure sufficient cooling power during operation. This is achieved by a thin indium foil. The dual-channel amplifier is connected via two parallel wires between the STM tip and a shunt resistor, R_S, shunting the current drain line. This shunting and the close proximity between tip and amplifier provide the required low input capacitance of the amplifier. An additional advantage of the setup geometry is that the close distance of the amplifier to the junction minimizes any pick-up of external noise prior to signal amplification. The amplifier effectively senses the voltage noise generated at the STM junction.
While the use of a shunt resistor is essential in our setup, it can potentially impair standard STM operation. It acts as a voltage divider for large tunnel-junction conductance values and limits the current flow necessary for tip treatments (an input protection stage is designed into the low-temperature amplifier to protect it from the rapid high-voltage sweeps occurring during tip treatments). Therefore, we have installed a low-temperature magnetic-latching RF relay (RF180) to bypass the shunt resistor.
The two outputs of the low-temperature amplifier are transmitted via 50-Ω flexible coaxial lines to a room-temperature, variable-gain post-amplifier and cryo biasing unit (Stahl Electronics, model A3-5). The insulator in these lines is graphite coated to reduce friction-related electrostatic noise. The two amplified signals are then recorded using a 14-bit National Instruments signal analyzer. A cross-correlation procedure is applied to remove uncorrelated noise picked up in the two parallel signal lines.
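The rationale behind the cross-correlation step can be sketched as follows (a standard argument, not specific to this setup): writing each recorded channel as the common junction signal plus independent amplifier noise, V_{1,2}(t) = V_J(t) + δV_{1,2}(t), the averaged cross spectrum

\[
S_{12}(f) = \big\langle \tilde V_1(f)\,\tilde V_2^{*}(f) \big\rangle
= S_{V_J}(f) + \big\langle \delta\tilde V_1(f)\,\delta\tilde V_2^{*}(f) \big\rangle
\;\longrightarrow\; S_{V_J}(f),
\]

since the uncorrelated amplifier contributions and the cross terms average to zero, leaving only the junction's voltage noise.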
The total amplification of the noise signal and its frequency dependence are measured in situ. This is done by fully retracting the tip, grounding the voltage line, and introducing a small known AC excitation to the current drain line that is shorted to the low-temperature amplifier's input (the shunt resistor's bypass is closed to reduce damping during gain calibration). We measure the amplified signal (in units of V / √ Hz) using our cross-correlation scheme at different gain settings of the post amplifier and divide the signal by the input amplitude to extract the overall gain (see inset of Fig. 2). We also sweep the input-signal frequency and measure the amplifier's response in both channels, see Fig. 2. The frequency response of both channels is in very good agreement, and is constant above ∼ 40 kHz.
The measured noise signal corresponds to a voltage noise at the STM junction. The corresponding current noise follows from the equivalent circuit diagram as

S_I = S_V / R_P^2,

where R_P is the parallel resistance of the junction and shunt resistor, R_P = (1/R_S + dI/dV_J)^(-1), and dI/dV_J is the differential conductance of the tunnel junction (dI/dV_J ≡ G_J in the Ohmic regime). The determination of the noise signal thus requires knowledge of the applied voltage, the current, and the differential conductance (dI/dV). The current is measured using a room-temperature transimpedance amplifier (Femto DLPCA-200), while the differential conductance is measured using a standard two-terminal lock-in technique. The noise conversion also requires precise knowledge of the shunt resistance. We therefore used a chip resistor whose resistance nominally varies by less than 5% between room temperature and liquid-helium temperature, and measured its low-temperature value by crashing the tip into the sample, finding R_S = 210.8 kΩ.
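To make this conversion concrete, a minimal Python sketch is given below. It assumes the Ohmic regime, where dI/dV_J ≈ G_J, and the numerical inputs (the example S_V value in particular) are purely illustrative.

```python
import numpy as np

def current_noise(S_V, R_S, dIdV):
    """Convert voltage noise S_V (V^2/Hz) measured at the junction into
    current noise S_I (A^2/Hz) via S_I = S_V / R_P^2, where R_P is the
    parallel resistance of the shunt resistor and the junction."""
    R_P = 1.0 / (1.0 / R_S + dIdV)  # parallel resistance in ohms
    return S_V / R_P ** 2

# Illustrative numbers: R_S = 210.8 kOhm (measured value quoted above)
# and a junction resistance of 15.1 kOhm, i.e. dI/dV ~ 1/R_J.
S_I = current_noise(S_V=1e-16, R_S=210.8e3, dIdV=1.0 / 15.1e3)
```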
As the shot-noise signal is usually very small, the system has to exhibit extremely low external noise levels. Mechanical noise is reduced by standard STM noise-reduction schemes such as pneumatic feet below the STM chamber and eddy-current damping of the STM head hanging on springs. Special care is further taken in filtering the electronic noise of all input signals. All cables connected to the STM are equipped with additional low-pass Π-filters, with the exception of the current and high-frequency lines. All UHV windows are covered with aluminum foil. Since the cleanest voltage signals are provided by batteries, we introduced a custom-built voltage-bias box that is remotely controlled and has a single input and two electrically separated, identical outputs. The two outputs are used for biasing the tunnel junction while independently monitoring the signal using the STM controller. The bias box can be switched between the output voltage from the STM controller, for normal STM operation, and a battery-powered low-noise voltage source that generates discrete values between ±300 mV, used for noise measurements. A second remote-controlled grounding box is installed at the tip side (current drain), which grounds the current line during noise measurements.
III. MEASUREMENT PROCEDURE
In this section we describe our measurement procedure and demonstrate our capabilities on a single gold atom placed on a Au(111) surface. Gold is chosen due to its well-known electronic structure, having a single s-electron available for transport [43]. We prepare the Au(111) surface using several sputtering-annealing cycles until a smooth surface exhibiting the well-known "herring-bone" reconstruction is apparent [44] (see inset of Fig. 3).
Successful shot-noise measurements require stringent stability of the STM tip-sample junction. As the shot-noise amplitude depends directly on the current, an unstable junction resistance would lead to strongly fluctuating noise levels. The stability of an STM junction can be improved by adding an adatom on the surface. Contacting such an adatom effectively minimizes the forces on the tip's apex compared with contacting a surface atom.
Adatoms are created either by controlled tip indentation into the surface (400-500 pm using a set-point of R_J = 1 GΩ) or by adsorption of a dilute amount of adatoms from an external evaporator. A gold adatom deposited from a gold-coated tungsten tip is depicted in the inset of Fig. 3. To check for stable and reproducible junction properties prior to noise measurements, we repetitively approach and retract the STM tip to/from the adatom and record the current during this procedure (see Fig. 3). Upon tip approach the tunneling current increases exponentially, as expected for a tunnel junction. A sudden increase in the current signifies a "jump-to-contact". Further tip approach leaves the current nearly constant [43]. If the junction is stable, the I-z dependence during retraction is similar to that recorded during approach, except for a small hysteresis in the transition from contact to tunnel regime.

FIG. 1. The STM junction and amplifier are mounted in UHV and are thermally connected to a liquid-helium bath cryostat. Normal STM operation is controlled via a Nanonis SPM controller, which can be disconnected for noise measurements; the voltage is then supplied from batteries in a voltage-bias box. The noise signal is detected between the tip-sample tunnel junction and a shunt resistor R_S, shunting the current drain line. The amplified noise is then transmitted via 50-Ω low-noise coaxial lines to a room-temperature post amplifier. To further reduce external noise during noise measurements, a remote-controlled grounding box is connected at the current output. In the bottom left we plot our relevant measurement window, set by the low-temperature amplifier's frequency response, which is constant above the black line (see Fig. 2), and the -3 dB cutoff (blue line) related to the input circuit's effective RC filter. The blue line follows f_-3dB = 1/(2πR_P C), where C = 20 pF. We can reliably measure shot noise within the shaded blue region between the two lines.

FIG. 2. Noise amplification and frequency response. Frequency-dependent amplification measured on the two channels of the low-temperature, broadband amplifier (labeled channel A (blue) and B (red)) as a function of a variable frequency supplied by a signal generator. Constant maximal gain is achieved above 40 kHz. Inset: Voltage gain of the noise amplification line vs. the different gain settings of the variable-gain post amplifier.
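The measurement window sketched in the bottom left of Fig. 1 follows directly from the two bounds named in the caption. The short Python sketch below reproduces it, assuming only the quoted values C = 20 pF, R_S = 210.8 kΩ, and the ~40 kHz gain onset.

```python
import numpy as np

G0 = 7.748e-5  # conductance quantum 2e^2/h in siemens

def measurement_window(G_J, R_S=210.8e3, C=20e-12, f_min=40e3):
    """Usable frequency window at junction conductance G_J (S): bounded
    below by the onset of full amplification and above by the RC cutoff
    f_-3dB = 1/(2*pi*R_P*C) of the input circuit."""
    R_P = 1.0 / (1.0 / R_S + G_J)
    f_3dB = 1.0 / (2.0 * np.pi * R_P * C)
    return f_min, f_3dB

# At the lowest conductance quoted in the text, 0.01 G_0, the RC cutoff
# sits just above the 40 kHz gain onset, so a narrow window remains.
lo, hi = measurement_window(0.01 * G0)
```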
After testing the junction's stability, we set a desired junction resistance by approaching the STM tip to the corresponding z position. Following a few minutes' delay to avoid drift, we commence the shot-noise measurement cycle. As explained above, the Fano factor contains information about the suppression of shot noise, allowing us to deduce effects of correlations. To extract the Fano factor we measure the current dependence of the noise signal. This is done by recording the noise at a discrete set of equidistant voltage steps (positive and negative) supplied by the bias box. At each step we measure the current and voltage before grounding the current drain to measure the noise power (2^18 samples taken at a rate of 1 MS/s). We repeat the noise measurement and average the resulting power spectrum at least 10 times (larger averaging reduces the standard deviation, but not the mean value, of the signal). Thus, a single measurement cycle requires about 30 minutes. Once a measurement cycle is completed, the tip is retracted and a topographic image is recorded to confirm that the adatom's position and the tip's shape did not change during the measurement. This is done by cross-correlating the topographic images recorded before and after the shot-noise acquisition routine. The measurement can then be repeated for different contact resistances (different z positions). To optimize the measurements, a LabVIEW program was developed to automatically execute the procedure described above.

FIG. 3. Approaching an adatom. I, in log scale, vs. z curves for approaching (blue) and retracting (red) the tip to a gold adatom placed on the Au(111) surface. The initial exponential increase, linear in such a log-linear plot, is terminated by a discontinuous "jump-to-contact", after which I is nearly independent of z. Retracting the tip results in a hysteretic behavior of the current discontinuity, but an identical exponential dependence in the tunneling regime. The dashed black line represents the maximal possible current, which is limited by the shunt resistor, I_max = V/R_S. Inset: Adatom placed on the Au(111) surface, protruding 1.1 Å from the surface. The gray stripes are the result of the "herring-bone" reconstruction of the gold surface. A set-point of I = 200 pA, V = 200 mV is used for both the approach curves and the topography measurements.
During data acquisition, the noise signal is recorded in the two parallel channels of the low-temperature amplifier, both shorted to the tip. The cross-correlation of these two channels is computed directly in the LabVIEW program during the measurement's runtime. The full frequency-dependent cross-correlated S_V acquired at V = 0 at one particular junction is shown in Fig. 4. Here, the tip was approached by 4 Å starting from a set-point of I = 200 pA and V = 200 mV. Since shot noise is frequency independent, a constant noise plateau should be observable whenever the gain of the low-temperature amplifier is constant. This is the case above 40 kHz (in agreement with Fig. 2) and below ∼500 kHz. The high-frequency cutoff is related to the -3 dB point of the low-pass filter formed by the junction's resistance and the unavoidable input capacitance. It is determined by fitting S_V to the expected frequency response of a low-pass RC filter, S_V ∝ [1 + (2πf R_P C)^2]^(-1) (see sketch within Fig. 4). We find, as previously mentioned, C ≈ 20 pF. In the bottom left of Fig. 1 we use this capacitance value to sketch our measurement window as a function of junction conductance. The blue line in the figure follows the -3 dB point of an RC filter, f_-3dB = 1/(2πR_P C), and the black line the onset of full amplification. Since the signal loss at the -3 dB point is already significant, we average the noise signal at much lower frequencies (usually between 90 and 100 kHz). Within such a junction-dependent frequency window, S_V is evaluated with less than 3% deviation. Note that the 1/f noise contribution is indeed suppressed within our measurement window. We further make sure to avoid the influence of the narrow peaks observed in the noise curves; these peaks normally stem from power supplies and are created by their switching-mode operation.
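The cross-correlation step described above can be sketched as follows. This is a simplified stand-in for the LabVIEW routine, using SciPy's cross-spectral-density estimator and ignoring the gain calibration; the segment length and averaging window are illustrative.

```python
import numpy as np
from scipy.signal import csd

def cross_spectrum(chA, chB, fs=1e6, nperseg=2**14):
    """Cross power spectral density of the two amplifier channels.
    Noise picked up independently in the two lines averages toward zero
    in Re{S_AB}; the correlated junction noise survives."""
    f, S_AB = csd(chA, chB, fs=fs, nperseg=nperseg)
    return f, np.real(S_AB)

def plateau_level(f, S, f_lo=90e3, f_hi=100e3):
    """Average the shot-noise plateau in a junction-dependent window,
    e.g. 90-100 kHz, above the gain onset and below the RC cutoff."""
    mask = (f >= f_lo) & (f <= f_hi)
    return S[mask].mean()
```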
As explained above, the Fano factor follows from S_I = 2e⟨I⟩F in the high-voltage limit, which is valid whenever k_B T ≪ eV. Hence, shot noise is expected to increase linearly with the DC current, with a slope determined by the Fano factor. In Fig. 5a we plot S_I vs. I measured on different single-gold-atom junctions. The curves, shifted vertically for visibility, are colored according to the junctions' resistance values and follow the expected linear trend. We note that, while the thermal noise measured at V = 0 is linearly dependent on 1/R_P, we observe a small offset: S_I − S_Th ≈ 6.1 × 10^−27 A²/Hz, evaluated at R_P = 13.5 kΩ. This offset cannot be explained by the small input voltage noise of the low-temperature amplifier (nominally v_n ≈ 0.6 nV/√Hz). We may speculate that the temperature at the junction is slightly higher than the one measured (at the base plate of the STM). A temperature increase of 1.5 K would be in very good agreement with the measured offset, but might not necessarily be its origin. In any case, this offset does not affect the determination of the Fano factor, as the latter is given by the slope, which is temperature independent.

FIG. 4. Noise data. S_V vs. frequency measured in contact conditions (R_J = 15.1 kΩ) at V = 0. The data in red are the 10-time average of the two parallel noise measurements after cross-correlation. The gray curve is the same data after Gaussian smoothing, presented as a guide to the eye. The black dashed line is our best fit to the data above 40 kHz assuming only an RC cutoff of the input noise (see sketch). From the fit we approximate the input capacitance of the low-temperature amplifier to be 20 pF. The shaded gray region indicates where the low-temperature amplifier's gain is not constant (see Fig. 2).
For each curve we evaluate the Fano factor for positive and negative currents and plot the averaged value in Fig. 5b. The error in evaluating the Fano factor is estimated as half the difference between the two independent fit values. The error in the normalized conductance (G_J/G_0) is estimated as δG_J/G_J = 100 Ω/R_J, with 100 Ω being our precision in measuring the resistance. The black line in the figure corresponds to the theoretical dependence of the Fano factor in the case of single-channel, spin-degenerate transport, as expected for single-gold-atom junctions [2,43]. We find very good agreement between experiment and theory. This shows that our setup is fully operational for shot-noise measurements.
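A minimal sketch of this Fano-factor extraction is shown below. The linear fit absorbs the thermal offset into the intercept, and the function names are illustrative rather than taken from our analysis code.

```python
import numpy as np

e = 1.602176634e-19  # elementary charge in coulombs

def fano_from_slopes(I, S_I):
    """Fit S_I = 2e*F*|I| + const separately for positive and negative
    bias; return the averaged Fano factor and half the fit difference
    as its error, as described in the text."""
    fanos = []
    for sel in (I > 0, I < 0):
        slope, _ = np.polyfit(np.abs(I[sel]), S_I[sel], 1)
        fanos.append(slope / (2.0 * e))
    return 0.5 * (fanos[0] + fanos[1]), 0.5 * abs(fanos[0] - fanos[1])

def fano_single_channel(G_over_G0):
    """Theoretical Fano factor F = 1 - tau for a single spin-degenerate
    channel with transmission tau = G_J/G_0 (black line in Fig. 5b)."""
    return 1.0 - G_over_G0
```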
IV. DISCUSSION
We presented a measurement scheme enabling local measurements of shot noise, integrated into a fully functional low-temperature UHV STM apparatus. We have demonstrated our capabilities by measuring the Fano factor of single-atom junctions formed by contacting gold adatoms on a Au(111) surface. The great advantage of our setup is its flexibility in R_J, which will allow future measurements on non-linear junctions, e.g., superconducting junctions. More generally, since shot noise is a direct probe of the charge of the tunneling quasiparticle, this measurement technique holds great promise for providing "smoking gun" evidence of the existence of exotic quasiparticle states.
Finally, while shot noise is a valuable observable for the study of fundamental physical processes, it may also be considered a nuisance for nanoscale electronic devices. Its independence of temperature and frequency makes it the main source of noise in devices operating at low temperatures. Hence, besides the fundamental insights that may be gained from shot-noise measurements, its characteristics in nanoscopic junctions also have technological implications.
ACKNOWLEDGMENTS
We would like to thank Jan van Ruitenbeek and Manohar Kumar for discussions at the very beginning of the project, and Stefan Stahl, who designed the low-temperature amplifier and contributed valuable input during the setup stages of the system. I.T. acknowledges funding from the Alexander-von-Humboldt foundation in the framework of a Humboldt Research Fellowship for Postdoctoral Researchers, and from the DFG in the framework of the Walter Benjamin Position (TA 1722/1-1). We also acknowledge funding by the European Research Council through the Consolidator Grant "NanoSpin" (project number 616623) (K.J.F.).
DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Chronic gastritis may predict risk of cerebral small vessel disease
Background and purpose Chronic gastritis, especially that caused by helicobacter pylori (HP) infection, has been associated with increased risk of ischemic stroke. But the relationship between chronic gastritis and cerebral small vessel disease (CSVD) remains largely undetermined. This study aimed to determine the potential predictors for CSVD, with chronic gastritis and its proxies as alternatives. Method Patients aged 18 years or older with indications for electronic gastroscopy were enrolled. Presence of CSVD was evaluated with brain magnetic resonance imaging (MRI) results. Degree of CSVD was scored according to established criteria. Logistic regression analysis was used for identifying possible risk factors for CSVD. Results Of the 1191 enrolled patients, 757 (63.6%) were identified as with, and 434 (36.4%) as without CSVD. Multivariate analysis indicated that patients with chronic atrophic gastritis had an increased risk for CSVD than those without (adjusted odds ratio = 1.58; 95% CI, 1.08–2.32; P < 0.05). Conclusions Chronic atrophic gastritis is associated with the presence of CSVD. We should routinely screen the presence of CSVD for patients with chronic atrophic gastritis. Supplementary Information The online version contains supplementary material available at 10.1186/s12876-023-03009-6.
Introduction
The neuroimaging markers of cerebral small vessel disease (CSVD) include white matter hyperintensities (WMH), lacunes, enlarged perivascular spaces (EPVS), and cerebral microbleeds (CMBs) [1]. As a subtype of ischemic stroke, CSVD is responsible for about a fifth of stroke incidence. Coexistence of CSVD usually deteriorates stroke outcomes of other subtypes [2,3]. CSVD is a major cause of cognitive decline and dementia in the elderly, second only to Alzheimer disease [4]. However, the etiology of CSVD remains far from determined.
CSVD has been associated with markers of inflammation. Previous studies associated WMH and EPVS with vascular inflammation and endothelial dysfunction in stroke patients [5][6][7]. A cross-sectional study showed that an infectious burden consisting of multiple common pathogens was associated with CMBs [8]. On the other hand, some studies failed to associate lacunes or their markers with systemic inflammation [9,10]. Recent studies indicated that chronic gastritis, especially that caused by HP infection, is related to ischemic stroke [11][12][13]. HP can lead to gastric mucosal injury and other gastric diseases, both of which may enhance the systemic inflammatory reaction and therefore increase the risk of stroke [13]. On the other hand, HP infection may influence gastric physiology and the absorption of micronutrients such as folate and vitamin B12. Deficiency of folate and vitamin B12 may increase serum homocysteine levels and cause vascular damage [14].
Although chronic gastritis has been shown to affect the risk of stroke occurrence and recurrence, whether chronic gastritis increases the risk of CSVD is largely undetermined. This study aimed to explore the relationship between chronic gastritis and the risk of CSVD.
Data source
Patients were screened from the Affiliated Jiangning Hospital of Nanjing Medical University. The present study is part of a longitudinal study on the long-term mortality of middle-aged and elderly adults from Jinling Hospital, Nanjing Medical University. Patients aged 18 years or older, hospitalized for physical examination, with indications for electronic gastroscopy (including gastralgia, gastric distention, nausea, vomiting, acid reflux, constipation, and diarrhea, or voluntary gastroscopy) were enrolled from January 1, 2011 to May 18, 2020, and patients who agreed to undergo brain magnetic resonance imaging (MRI) examination within 48 h after admission were finally included in the study. Patients with acute gastrointestinal bleeding, acute cardiovascular diseases, pulmonary insufficiency, coagulation disorders, and cancer were excluded (Fig. 1).
Risk factor definitions
Risk factors were defined as follows. In this study, hypertension was defined as a blood pressure exceeding 140/90 mmHg. Diabetes was defined as either a documented diagnosis of diabetes or a fasting glucose level exceeding 7.0 mmol/L. Hyperlipidemia was defined as a documented diagnosis of hyperlipidemia or being on lipid-lowering medications. A previous episode of coronary heart disease, or an attack of coronary heart disease at the time of enrollment, was considered a history of coronary heart disease. History of stroke was defined as a previous ischemic stroke. The diagnosis of atrial fibrillation was made based on electrocardiographic evidence or self-reported physician diagnosis.
Clinical laboratory tests
Demographic and clinical data were collected. Red blood cell counting and biochemical examinations were performed before the gastroscopy examination. The presence of gastric diseases was determined according to clinical characteristics, pathological changes, and gastroscopy results. Endoscopy combined with histopathological examination was used to diagnose two types of gastritis: chronic non-atrophic gastritis and chronic atrophic gastritis [15]. HP infection was determined with the carbon-14 urea breath test.
Neuroimaging evaluation
Enrolled patients underwent brain MRI examination with a 3.0 T scanner (Philips Medical Systems, Netherlands) with an 8-channel receiver array head coil. Head motion and scanner noise were reduced using foam padding and earplugs. Standardized MRI sequences, including T1-weighted, T2-weighted, and fluid-attenuated inversion recovery images, were obtained. The burden of CSVD was graded 0-4 based on imaging markers (WMH, lacunes, EPVS, and CMBs) on MRI according to established criteria [16][17][18]. Briefly, one point is assigned for each of the following findings: more than 10 EPVS in the basal ganglia, presence of a lacune, periventricular WMH with a Fazekas score of 3 or deep WMH with a Fazekas score of 2 or 3, and presence of deep CMBs. Patients were then grouped as with CSVD (1-4 points) or without CSVD (0 points).
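As a minimal sketch, the scoring rule above can be written as a short Python function; the argument names are illustrative and not taken from the study's actual analysis pipeline.

```python
def csvd_score(epvs_bg, has_lacune, pv_fazekas, deep_fazekas, has_deep_cmb):
    """Total CSVD burden (0-4): one point per imaging marker, following
    the criteria described above."""
    score = 0
    score += int(epvs_bg > 10)                          # >10 EPVS in basal ganglia
    score += int(bool(has_lacune))                      # presence of a lacune
    score += int(pv_fazekas == 3 or deep_fazekas >= 2)  # severe WMH
    score += int(bool(has_deep_cmb))                    # deep cerebral microbleeds
    return score

# Grouping used in the study: 1-4 points = with CSVD, 0 points = without.
has_csvd = csvd_score(12, True, 2, 2, False) >= 1
```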
Gastroscopy examination
Gastroscopy examination was performed with an endoscope (GIF-HQ290, GIF-H290Z; Olympus Medical Systems, Tokyo, Japan) with video processors (EVIS LUCERA ELITE CV290/CLV290SL, Olympus Medical Systems). Five gastric mucosa tissue specimens, two from the gastric antrum, two from the gastric body, and one from the gastric corner, were clamped during the gastroscopy examination for biopsy. Chronic inflammation, atrophy, and intestinal metaplasia were diagnosed according to the Sydney system [19].
Statistics
Continuous data were summarized as mean values with SDs for normal distributions or as median values with interquartile ranges for skewed distributions. Categorical data were presented as frequencies with proportions. The two-sample t-test was used to compare continuous data. Categorical data were analyzed by the chi-square test. Univariate and multivariate logistic regression analyses were used for comparing group differences and identifying the risk factors of CSVD. All statistical analyses were performed using SPSS 25.0 (IBM, Armonk, NY).
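The study reports analyses performed in SPSS; as a hedged illustration only, an equivalent multivariate model could be fit in Python with statsmodels as below, where the DataFrame column names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def adjusted_or(df):
    """Adjusted odds ratio (with 95% CI) for chronic atrophic gastritis
    from a multivariate logistic regression of CSVD presence; covariates
    mirror those listed for regression model 1 in Fig. 2."""
    covars = ["atrophic_gastritis", "age", "hypertension", "diabetes",
              "hyperlipidemia", "coronary_artery_disease",
              "previous_stroke", "atrial_fibrillation", "hemoglobin"]
    X = sm.add_constant(df[covars])
    fit = sm.Logit(df["csvd"], X).fit(disp=0)
    odds_ratio = np.exp(fit.params["atrophic_gastritis"])
    ci_low, ci_high = np.exp(fit.conf_int().loc["atrophic_gastritis"])
    return odds_ratio, (ci_low, ci_high)
```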
Results

When patients were stratified, the results showed that patients with chronic atrophic gastritis presented significantly higher total CSVD scores than those without (1.8 ± 0.8 vs 1.5 ± 0.9, P < 0.01). The proportions of EPVS (37.9% vs 23.1%; P < 0.01) and lacunes (70.6% vs 56.5%, P < 0.01) in patients with chronic atrophic gastritis were significantly higher than in patients without. This result suggests that the difference in the total CSVD burden is mainly derived from EPVS and lacunes (Fig. 3).
Discussion
This study found that chronic atrophic gastritis was related to CSVD and that the difference in the total CSVD burden was mainly derived from EPVS and lacunes.
Compared with patients with CSVD, the proportion of hyperlipidemia was higher in non-CSVD patients, which is contrary to previous studies [20,21]. Hyperlipidemia has been identified as a risk factor for atherosclerosis and a potential candidate risk factor for CSVD; the discrepancy may be related to the intake of lipid-lowering drugs and the influence of different daily life habits and activities. Traditional risk factors only partially explain the presence of CSVD [22], and the incomplete understanding of the pathogenesis of CSVD limits prevention and treatment efforts. Therefore, obtaining more predictors of progression is a rational goal. In recent years, some potential new risk factors for CSVD, including chronic infection and substance abuse, have attracted the attention of scholars [23]; the discovery of these new risk factors provides new thinking for the prevention and treatment of CSVD patients.
Our study revealed a higher prevalence of CSVD in patients with chronic atrophic gastritis than in those without, even after adjustment for multiple confounding factors. This may be related to the following factors. Firstly, chronic atrophic gastritis is an organ-specific autoimmune disease, which affects the corpus-fundus gastric mucosa [24]. The decrease or disappearance of parietal cells results in reduced or absent acid production and loss of intrinsic factor, which interferes with the absorption of folate and vitamin B12 [25] and further leads to anemia and hyperhomocysteinemia. Deficiency of folate can result in hyperhomocysteinemia, a possible risk factor for cardiovascular diseases [26]. One meta-analysis indicated a 10% lower risk of stroke and a 4% lower risk of overall CSVD with folate supplementation [27]. The results of the China Stroke Primary Prevention Trial (CSPPT), a randomized clinical trial among adults with hypertension in China, showed that folate supplementation could significantly reduce the risk of first stroke [28]. Moreover, hyperhomocysteinemia has been recognized as an important risk factor for cardiovascular diseases. It may also be involved in the development of dementia, diabetes mellitus, and renal disease [29]. A controlled study showed that hyperhomocysteinemia may increase the risk of lacunar infarction and severe white matter lesions [30].

Fig. 2. Association of chronic atrophic gastritis and CSVD presence. OR, odds ratio; CI, confidence interval. Risk of CSVD was analyzed with logistic regression models, and the OR was generated. We adjusted for age, hypertension, diabetes, hyperlipidemia, coronary artery disease, previous ischemic stroke, atrial fibrillation, and hemoglobin concentration in regression model 1.

Fig. 3. Comparison of total CSVD scores in patients with and without chronic atrophic gastritis.
There are several limitations in the current study. First, this is an observational study without further follow-up or dynamic observation of the progression of CSVD. Second, this study is based on clinical observation; further studies are needed to explore the possible mechanisms. Third, this is a single-center study involving individuals from the Han population. Further multicenter studies are needed to overcome these limitations.
Table 1. Clinical characteristics of patients with and without CSVD.
Characterization of Volatile Compounds of Bulgur (Antep Type) Produced from Durum Wheat
Bulgur is enjoyed and rediscovered by many people as a staple food because of its color, flavor, aroma, texture, and nutritional and economical values. There is more than one type of bulgur around the world, depending on production techniques and raw materials. The volatile compounds of bulgur have not been explored yet. In this study, Headspace Solid Phase Microextraction (HS-SPME) and Gas Chromatography-Mass Spectrometry (GC-MS) methods were used to determine the volatile flavor compounds of bulgur (Antep type, produced from Durum wheat). Preliminary trials were used to optimize the extraction conditions and to distinguish the compounds responsible for the flavor of bulgur. In total, 47 and 37 important volatile compounds were determined for Durum wheat and bulgur, respectively. The study showed that there was a great diversity of volatiles in bulgur produced using Durum wheat and the Antep production method. With further research, these results can lead to a better understanding of the combination of compounds that gives bulgur its unique flavor.
Introduction
Bulgur is a wheat product that is cleaned, cooked, dried, tempered, debranned, milled, optionally polished, and finally size-classified. Bulgur is a national food in most Middle Eastern countries and today is enjoyed internationally (USA, Europe, Australia, Japan, China, and Russia). Recently, scientific studies related to bulgur have been increasing. Additionally, its production and consumption have increased due to its low cost, long shelf life, ease of preparation, taste, and high nutritional and economic values.
The bulgur production technique is briefly described as "bulguration" [1]. In bulguration, the combination of cooking and drying operations affects the important properties of wheat, and this combination (cooking + drying) is unique in food processing.
The general composition of bulgur is 9-13% water, 10-16% protein, 1.2-1.5% fat, 76-78% carbohydrate, 1.2-1.4% ash, and 1.1-1.3% fiber. The protein, calcium, iron, vitamin B1, and niacin contents of bulgur are higher than those of other cereal products like bread and pasta. Many nutrients leach out of the wheat, but they are absorbed back into the grain during the cooking operation, so losses of water-soluble nutrients such as vitamins are prevented. Bulgur digestibility increases due to the coagulation of protein and gelatinization of starch. Excess nitrogenous substances are retained by the hard structure of starch fused with protein; this hard structure is a desirable feature in bulgur because of its resistance to insects, mites, and microorganisms and its long shelf life [2,3]. Additionally, bulgur is a natural food because no chemicals or additives are used during processing.
Recently, the bulgur industry has changed all over the world. According to the report published by the International Grain Council [4], the production amount is around 1 million tons. As mentioned in the report, the bulgur production of Turkey, which was 722 thousand tons in 1984, increased to 856,000 tons in 1992. It is estimated that the bulgur industry in Turkey has developed rapidly and has reached a production of 1 million tons in the last 10 years. According to the data of the Turkish Grain Board, there were 99 large bulgur factories in Turkey as of 2014. While the installed capacity of these factories is around 1,595,421 tons/year, the actual capacity is about 900,544 tons/year. The number of bulgur plants was around 500 some 30-40 years ago. Today, the number has decreased to around 100, but the production capacity of each plant has increased dramatically.
Aroma compounds strongly shape perceived quality and are mainly responsible for the characteristic flavor of foods. These substances are among the most significant factors shaping quality and affecting consumer behavior [5]. The aroma and flavor characteristics of various cereals such as corn, rye, triticale, wheat, roasted barley, malted barley, and rice have been investigated from the standpoint of volatile compound composition, mainly using laborious and expensive solvent extraction techniques [6]. A solvent-free, fast, and inexpensive alternative is the Solid Phase Microextraction (SPME) method, which is based on the absorption of volatile compounds onto a coated fused-silica fiber. SPME offers the possibility of detecting compounds at trace levels. As most cereal grains are characterized by very low concentrations of flavor compounds, SPME has opened up new avenues for researchers studying cereal flavor. The SPME method for headspace analysis of volatile compounds was successfully applied for the identification of volatiles in processed oats [7], distiller's grains [8], and bread crumbs [9].
Industrially, two bulgur production techniques are used around the world: Antep and Karaman (Mut) [10]. Additionally, village-type and sun-dried bulgur are available. Antep-type bulgur was geographically indicated (certified) by the Gaziantep Commodity Exchange in 2017 via the Turkish Patent Institute to protect its taste, technique, and specification.
There are many unproven stories (urban legends) about bulgur taste and flavor depending on the production methods and raw materials. Traditional consumers prefer sun-dried bulgur. Some consumers prefer Antep bulgur due to its taste, while others prefer Karaman (Mut) bulgur due to its color, and all have different opinions about bulgur taste and flavor. New bulgur plant investors are confused regarding the best production method and raw material. Additionally, producers, academia, consumers, and quality controllers do not know the differences between the two bulgur types in terms of flavor and raw material. Therefore, this study focuses on this issue to clarify the taste and flavor of bulgur depending on the raw material and production technique. In the literature, there is no information about the volatile flavor compounds of Durum wheat and bulgur. The objective of this study is to identify and quantify the volatile flavor compounds of Antep-type bulgur by using SPME/GC-MS as a newly adapted method.
Bulgur Production.
In general, different wheat varieties are used in commercial bulgur production. Additionally, each plant uses different processing parameters and equipment (different motor powers, different water properties, different water ratios, etc.). These differences in parameters and varieties would cause significant fluctuations in the results. In order to prevent uncontrollable errors and obtain a standard bulgur for the analyses, and in order to follow the changes in the volatile flavor compounds from the raw material (wheat) to the finished product (bulgur), the samples were produced in the laboratory using the commercial Antep-type bulgur production technique.
In the study, Durum wheat (Zivego) was obtained from the Simaş bulgur factory (Gaziantep, Turkey) and stored at 8 °C in a dark place. Bulgur was produced using the Antep method, which is shown in Figure 1. In the commercial Antep-type production method, Durum wheat is generally used; it is first cleaned and then washed very rapidly before cooking. Cooking is carried out under atmospheric conditions until all the starch is gelatinized. After cooking, drying is performed, followed by short tempering (15-30 min) with added water, debranning (using an emery-type debranner), and milling. Polishing is then optionally performed. As the final stage, size classification is carried out.
In this study, following the Antep bulgur production method explained above, the cleaned Durum wheat was rapidly washed. Then, atmospheric cooking was applied to cook the wheat. After that, an artificial dryer (MK II, Sherwood Scientific, UK) was used to dry the product. After the drying operation, tempering, debranning, redrying, milling, screening, and polishing were performed (Figure 1). The details of each stage are explained as follows.
Cleaning and Washing of Wheat.
The raw material (Durum wheat) was screened using a 3.2 mm screen to separate small and foreign materials. Then, the wheat was aspirated to remove dust and light foreign particles using an aspiration system (Merba Co., Mersin, Turkey). After that, the samples were stored in a refrigerator at 8 °C for further experiments. Before each experiment, the sample was rapidly washed with distilled water for 30 s to remove dust and foreign materials from the surface of the wheat kernels.
Atmospheric Cooking. According to the Antep production method (Figure 1), distilled water was boiled (96 °C, according to the altitude of the laboratory where the experiments were performed), and then the wheat was added to the boiling water. The cooking operation was continued until all the starch in the wheat was gelatinized. The gelatinization and cooking times were determined using the method explained by Bayram [2]. During cooking, wheat kernels were periodically collected and cut with a blade to inspect the kernel center. When all the starch in the endosperm of the kernel appeared translucent (loss of opaqueness), indicating gelatinized starch, the cooking was stopped. The cooking time was determined to be nearly 50 min, and this cooking time was used throughout the experiments.
During cooking, the ratio of wheat to water was 1/1.75. After cooking, the moisture content of wheat was 54.48% (d.b.). Traditionally, the cooked wheat is called "hedik."
Drying.
After the cooking operation, the cooked samples were dried as soon as possible. Drying was performed using a packed-bed dryer (MK II, Sherwood Scientific, UK). The drying air temperature and velocity were 40 °C and 2.5 m/s, respectively, and the drying column diameter was 150 mm. Drying was continued until the moisture content reached 12% (d.b.). Traditionally, the dried and cooked wheat is called "diri bulgur."
Tempering.
The main difference between the Antep production method and the other technique is its short (15 min), low-moisture (17%, d.b.) tempering operation. Before the debranning operation, the moisture content of the cooked and dried wheat was increased by tempering to 17% (d.b.) to facilitate removal of the bran from the surface of the wheat kernel. A hand spray pump was used to obtain a homogeneous distribution of distilled water on the surface of the wheat kernels, and the kernels were mixed during spraying. After the tempering [11], the samples were left for 15 min (tempering time).
2.1.6. Redrying. After the debranning operation, the moisture content was decreased to 14% (d.b.) using a packed-bed dryer (MK II, Sherwood Scientific, UK) at 40 °C.
Screening.
After the milling operation, the bulgur was classified into different particle sizes using 2.8, 1.6, and 1.0 mm screens (ASTM E11, Aramtest Trade Co. Ltd., Turkey). The fractions obtained between the 2.8 and 1.0 mm screens were used for further analysis.
2.1.9. Polishing. As an optional operation in the Antep production method, a polishing step has recently been adopted in industry to obtain polished, yellow bulgur. In this study, in parallel with industrial practice, a polishing operation was used.
According to Balci and Bayram [11], a mechanical polishing system was used (lab-scale mechanical kneading polisher, Biltek Eng., Gaziantep, Turkey). Before polishing, a small amount of distilled water was added to obtain 17% (d.b.) moisture and allow gentle polishing of the kernel surfaces.
After all operations, the product was called Antep bulgur. The moisture content of bulgur was around 12% (d.b.). The bulgur samples were stored in a refrigerator at 8 ± 1 ∘ C for the analysis.
All chemicals used in the experiments were purchased from Sigma-Aldrich (Sigma Co., Steinheim, Germany).
Extraction of the Volatile Compounds. A Solid Phase Microextraction (SPME) apparatus (Model 57330-U, Supelco, USA) was used for the extraction of volatile compounds, with a Divinylbenzene-Carboxen-Polydimethylsiloxane (DVB/CAR-PDMS) fiber (gray; Model 57328-U, Supelco, Bellefonte, USA) of 50/30 µm coating thickness as the absorbent. This method has been used to extract volatile compounds of other cereals by several researchers [23,24]. The fiber was conditioned before use and thermally cleaned after each analysis at 250 °C in the injector port of the gas chromatograph.
For the extraction of volatile compounds, the bulgur sample (6 g) was ground and placed in a 30-ml vial. The vial was sealed with a silicone septum, and the needle of the SPME device (Supelco, Bellefonte, USA) was inserted into the vial. The vial was placed in a water bath, and the fiber was pushed out of the housing to absorb volatile compounds from the headspace of the vial. The best combination of temperature and time for the extraction of volatile compounds in bulgur was determined to be 70 °C for 120 min by preliminary experiments. Two hours later, the fiber was pulled back into the needle housing and the SPME device was removed from the vial. The SPME device was then inserted into the GC-MS injection port, and the fiber was taken out of the needle housing and left for 5 min at 250 °C for thermal desorption [25].
GC-MS Analysis for Volatile Compounds.
Gas Chromatography-Mass Spectrometry (GC-MS) (Perkin Elmer Clarus 500, USA) was used for the analyses. The separation of volatile compounds was carried out on a Supelcowax 10 capillary column (30 m length × 0.25 mm ID × 0.25 µm film thickness) (N316551, Perkin Elmer, USA). The carrier gas was helium at a flow rate of 1.5 ml/min. The oven temperature program started at 40 °C, held for 4 min, then increased to 90 °C at a rate of 3 °C/min, then to 130 °C at a rate of 4 °C/min and held for 4 min, and finally to 240 °C at a rate of 5 °C/min and held for 8 min. The injection port was operated in splitless mode at 250 °C. The electron energy of the MS was 70 eV, operated in EI+ mode. The source temperature was 180 °C, with a mass scan range of m/z 30 to 350. The Wiley and NIST/EPA/NIH libraries (May 2005, Perkin Elmer, USA) were used for the identification of peaks. After identification, the concentrations of volatile compounds were calculated as percentages.
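As a quick consistency check of the temperature program, the cumulative segment times can be computed as in the simple sketch below; the total run time works out to about 64.7 min.

```python
def oven_program():
    """Cumulative timeline (min) of the GC oven program described above."""
    segments = [("hold at 40 C", 4.0),
                ("ramp 40 -> 90 C at 3 C/min", (90 - 40) / 3.0),
                ("ramp 90 -> 130 C at 4 C/min", (130 - 90) / 4.0),
                ("hold at 130 C", 4.0),
                ("ramp 130 -> 240 C at 5 C/min", (240 - 130) / 5.0),
                ("hold at 240 C", 8.0)]
    t = 0.0
    for name, dt in segments:
        t += dt
        print(f"{name}: ends at {t:.2f} min")
    return t  # total run time, about 64.7 min

oven_program()
```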
Statistical Analysis.
The results were analyzed by one-way ANOVA at the P ≤ 0.05 significance level. Standard deviations were calculated. Duncan's multiple range test was carried out to determine differences and homogeneous groups using SPSS Statistical Software (version 20) (IBM Co., Chicago, Illinois, United States). All experiments were performed in triplicate.
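For illustration, the one-way ANOVA step could be reproduced outside SPSS as in the sketch below; the triplicate values are invented placeholders, and Duncan's post-hoc test itself is not available in SciPy and would require a dedicated package.

```python
import numpy as np
from scipy.stats import f_oneway

# Invented triplicate values (%) for one volatile compound in three groups.
group_a = np.array([17.8, 17.6, 18.0])
group_b = np.array([32.8, 32.5, 33.1])
group_c = np.array([13.7, 13.9, 13.5])

stat, p = f_oneway(group_a, group_b, group_c)
significant = p <= 0.05  # significance level used in the text
```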
Physical and Chemical Analysis of Raw Material (Durum Wheat) and Bulgur. The moisture, ash, and protein contents as well as the L*, a*, b*, and YI values of Durum wheat and Antep bulgur are given in Table 1.

The average moisture, ash, and protein contents of the wheat were found to be 6.13, 1.39, and 12.32 (%, d.b.), respectively. The average color values (L*, a*, b*, and YI) of the Durum wheat were determined as 52.70, 7.88, 23.39, and 69.61, respectively. The other properties of the Durum wheat (starch content, fat content, pH, wet gluten, dry gluten, gluten index, sedimentation (cm), delayed sedimentation, falling number, fungal falling number, and Alveoconsistograph resistance and elasticity values) were measured to characterize the most important properties of the raw material (Table 2).
According to the results (Table 1), the overall average moisture content of bulgur roughly doubled due to the cooking operation. The obtained experimental results comply with the Bulgur Codex [26]. The ash contents of the Durum wheat and bulgur were found to be 1.39 and 1.20% (d.b.), respectively. The ash content of bulgur is related to its bran content, which is mainly affected by the debranning, milling, and polishing operations. The protein content of bulgur is related to the protein content of the raw material and the processing yield. The protein content decreased by about 0.5 percentage points due to the leaching of protein into the water during cooking, an important result showing that protein loss occurs during cooking. In this study, the protein content of Antep bulgur was between 11.92 and 12.01% (d.b.), which is similar to the study of Toufeili et al. [27]. In addition, Singh et al. [28] found significantly (P ≤ 0.05) different protein contents for different wheat varieties. The L*, a*, b*, and YI values of bulgur were determined as 59.94-62.10, 6.57-6.85, 29.70-30.10, and 71.51-77.78, respectively. These color values are similar to the results of the study of Balci [29].
Volatile Compounds of Durum Wheat and Antep Bulgur.
As a result of the increase in consumer demand for bulgur, its volatile compounds have started to gain importance. The characteristics of bulgur of most interest to consumers are flavor and color.
There is no previous study on the volatile flavor compounds of bulgur. The method used in this study is simple, rapid, and economical, and does not use solvents. The volatile flavor compounds of Durum wheat are also not available in the literature. Therefore, this study additionally presents information on the volatile flavor compound composition of Durum wheat using this new, simple method.
Typical GC-MS chromatograms of Durum wheat and Antep bulgur are given in Figures 2 and 3, respectively. The detected volatile flavor compositions of Durum wheat and Antep-type bulgur are given in Table 3. The molecular weight (g/mol) and retention time (min) of each component were determined. Table 3 indicates that more than one compound can be responsible for the flavor of Durum wheat and Antep bulgur. The results are expressed as the means of GC-MS analyses of triplicate experiments. In total, 47 volatile components were detected in Durum wheat. 1-Hexanol, styrene, hexanoic acid, heptadecane, and dodecane were found at the highest concentrations: 17.82, 7.06, 5.96, 5.83, and 5.72%, respectively. Relative to the other compounds detected, these may supply the sweet floral taste mixed with the flavor of grass that can be perceived while eating cooked wheat. In addition, these volatile flavor compounds are critical for the formation of the flavors of pasta, spaghetti, Durum bread, semolina, sweets, couscous, and related Durum wheat products, so the results can also be used in studies of these products. Thirty-seven volatile flavor compounds were detected in bulgur (Table 3). The compounds dodecane, nonanal, styrene, decane, and nonanoic acid had the highest concentrations: 32.81, 13.74, 7.20, 6.44, and 4.86%, respectively. Relative to the other compounds detected, these may support the sweet rose, orange, and floral taste mixed with the aroma of fatty grass that can be perceived while eating Antep bulgur.
When the volatile compounds found in Durum wheat and Antep bulgur (Table 3) were compared, numerous differences were found. Sixteen volatile compounds present in Durum wheat were lost during processing into bulgur: acetic acid, 1-nonenal, 1-butanol, 4-ethylcyclohexanol, 2,6-bis(1,1-dimethylethyl)-4-methylphenol, 2-pentyl ester, 4-methylcyclopentene, benzene, heptadecane, 1-octene, 2-butanone, 3-octen-2-one, acetophenone, beta-linalool, methane, and cyclopropane. Meanwhile, new compounds were formed in bulgur, such as benzoic acid, 2-heptenal, furfural, decanal, 2-methoxy-4-vinylphenol, decane, heptacosane, and isobutyl phthalate. The change in volatile flavor compounds and the generation of new compounds can occur especially during the cooking and drying operations, the basic thermal processes of bulgur production (the bulguration effect). Additionally, debranning and polishing can affect the composition through the removal of some parts of the wheat. The availability of water and high temperatures during cooking and drying can easily change the chemical structure of the components by breaking weak bonds, while the removal of the bran, in which many flavor components reside, during the debranning and polishing processes causes some compounds to disappear. The high concentrations of dodecane, nonanal, and styrene in bulgur give it a distinct flavor, which can be considered a mix of woody, sweaty, floral, and rose-orange notes. These chemical compounds were not detected in fermented wheat germ extract, as pointed out by Yusuf and Bewaji [30]. However, dodecane was previously identified as a volatile component of peanut oil, Beaufort cheese, fried bacon, roasted filberts, chickpea seed, mutton, chicken, beef volatiles, fried chicken, and kiwi fruit flowers [31].
Conclusions
This is the first study to identify the volatile compounds of Durum wheat and Antep bulgur. A total of 47 and 37 volatile compounds were observed in the Durum wheat and the Antep bulgur, respectively. Among these compounds, carboxylic acids, alcohols, and aldehydes were the main types of volatile components. Dodecane predominated in Antep bulgur, while in Durum wheat the most abundant compound was 1-hexanol. As mentioned, different production methods are used around the world, for example, Antep type, Karaman (Mut) type, village type, and sun-dried type. This study provides information about the flavor of Antep-type bulgur and should also lead to new studies on different bulgur types.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Phase response analyses support a relaxation oscillator model of locomotor rhythm generation in Caenorhabditis elegans
Neural circuits coordinate with muscles and sensory feedback to generate motor behaviors appropriate to an animal’s environment. In C. elegans, the mechanisms by which the motor circuit generates undulations and modulates them based on the environment are largely unclear. We quantitatively analyzed C. elegans locomotion during free movement and during transient optogenetic muscle inhibition. Undulatory movements were highly asymmetrical with respect to the duration of bending and unbending during each cycle. Phase response curves induced by brief optogenetic inhibition of head muscles showed gradual increases and rapid decreases as a function of phase at which the perturbation was applied. A relaxation oscillator model based on proprioceptive thresholds that switch the active muscle moment was developed and is shown to quantitatively agree with data from free movement, phase responses, and previous results for gait adaptation to mechanical loadings. Our results suggest a neuromuscular mechanism underlying C. elegans motor pattern generation within a compact circuit.
Introduction
Animals display locomotor behaviors such as crawling, walking, swimming, or flying via rhythmic patterns of muscle contractions and relaxations. In many animals, motor rhythms originate from networks of central pattern generators (CPGs), neuronal circuits capable of generating rhythmic outputs without rhythmic input (Cohen and Wallen, 1980; Grillner, 2003; Kiehn, 2011; Kristan and Calabrese, 1976; Marder and Calabrese, 1996; Pearce and Friesen, 1984; Yu et al., 1999). CPGs typically generate rhythms through reciprocal inhibitory synaptic interactions between two populations. In vertebrates, motor rhythms arise from half-center oscillator modules in the spinal cord (Marder and Calabrese, 1996).
Although isolated CPGs can produce outputs in the absence of sensory input, in the intact animal sensory feedback plays a critical role in coordinating motor rhythms across the body and modulating their characteristics (Friesen, 2009; Grillner and Wallén, 2002; Mullins et al., 2011; Pearson, 2004; Wen et al., 2012). Sensory feedback allows animals to adapt locomotor patterns to their surroundings (Andersson et al., 1981; Brodfuehrer and Friesen, 1986) and to adapt to unexpected perturbations (Ekeberg and Grillner, 1999). In leeches (Cang et al., 2001; Cang and Friesen, 2000) and Drosophila (Akitake et al., 2015; Mendes et al., 2013), specialized proprioceptive neurons and sensory receptors in body muscles detect sensory inputs to regulate and coordinate the centrally generated motor patterns. In limbed vertebrates, proprioceptors located in muscles, joints, and/or skin provide feedback to pattern-generating circuits in the spinal cord. However, the mechanisms that give rise to these oscillators are still poorly understood. Proprioceptive feedback is crucial for C. elegans motor behavior, and studies have identified several neuron classes that have proprioceptive roles. The B-type motor neurons mediate proprioceptive coupling of anterior to posterior bending during forward locomotion (Wen et al., 2012). The SMDD motor neurons, localized in the head, have been identified as proprioceptive regulators of head steering during locomotion (Yeon et al., 2018). Both the B-type motor neurons and the SMDD head motor neurons have long asynaptic processes hypothesized to have proprioceptive function (White et al., 1986) and have been suggested as candidate locomotor CPG elements (Kaplan et al., 2020). In addition, two types of neurons, the DVA and PVD interneurons, have also been described as having proprioceptive roles in the regulation of the worm's body-bending movements. DVA has been shown to exhibit proprioceptive properties dependent on a mechanosensitive channel, TRP-4, which acts as a stretch receptor to regulate body-bend amplitude during locomotion (Li et al., 2006). In another study, body bending was shown to induce local dendritic calcium transients in PVD and dendritic release of a neuropeptide encoded by nlp-12, which appears to regulate the amplitude of body movements (Tao et al., 2019).
To experimentally probe mechanisms of rhythmic motor generation, including the role of proprioceptive feedback, we measured the phase response curve (PRC) upon transient optogenetic inhibition of the head muscles. We found that the worms displayed a biphasic, sawtooth-shaped PRC with sharp transitions from phase delay to advance.
We used these findings to develop a computational model of rhythm generation in the C. elegans motor circuit in which a relaxation-oscillation process, with switching based on proprioceptive feedback, underlies the worm's rhythmic dorsal-ventral alternation. Computational models for C. elegans motor behavior have long been an important complement to experimental approaches, since an integrative understanding of locomotion requires consideration of neural, muscular, and mechanical degrees of freedom, which are often tractable only through modeling (Boyle et al., 2012; Bryden and Cohen, 2008; Denham et al., 2018; Izquierdo and Beer, 2018; Johnson et al., 2021; Karbowski et al., 2008; Kunert et al., 2017; Olivares et al., 2021). We sought to develop a phenomenological model to describe an overall mechanism of rhythm generation but not the detailed dynamics of specific circuit elements. We aimed to incorporate biomechanical constraints of the worm's body and its environment (Fang-Yen et al., 2010; Gray and Lissmann, 1964; Wallace, 1968), as well as account for how sensory feedback is incorporated. To improve predictive power, we aimed to minimize the number of free parameters used in the model. Finally, we sought to optimize and test this model with new experiments as well as with published findings.
Our model reproduces the observed PRC and describes the locomotory dynamics around optogenetic inhibitions in a manner that closely fits our experimental observations. Our model also agrees with results on gait adaptation to external load and the asymmetry in time-dependent curvature patterns of undulating worms. Our experimental findings and computational model together yield insights into how C. elegans generates rhythmic locomotion and modulates them depending on the environment.
C. elegans forward locomotion exhibits a stable and nonsinusoidal limit cycle
To gain insight into wave generation, we first sought to examine the quantitative behavioral characteristics of worms during forward locomotion. First, we measured the undulatory dynamics of body bending by computing the time-varying curvature along the centerline of the body (Fang-Yen et al., 2010; Leifer et al., 2011; Pierce-Shimomura et al., 2008; Wen et al., 2012) from analysis of dark-field image sequences of worms exhibiting forward locomotion. In order to quantitatively treat the drag between the body and its environment, we examined locomotion of worms in dextran solutions of known viscosity (see Appendix; Fang-Yen et al., 2010). The normalized body coordinate is defined as the distance along the body centerline divided by the body length (Figure 2A). The curvature k at each point along the centerline of the body is the reciprocal of the local radius of curvature (Figure 2A), with a positive (negative) curvature representing ventral (dorsal) bending. We further define the dimensionless or scaled curvature K = k · L, where L is the length of the worm. Using this metric, we quantified the worm's forward movement by calculating the scaled curvature as a function of body coordinate and time (Figure 2B).
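A minimal sketch of this computation, assuming the centerline has already been extracted as sampled (x, y) coordinates, is shown below; the sign convention for ventral versus dorsal bending depends on the orientation of the extracted centerline.

```python
import numpy as np

def scaled_curvature(x, y):
    """Signed curvature k along a sampled centerline (x, y), scaled by
    the body length L to give the dimensionless curvature K = k * L."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    k = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
    L = np.hypot(dx, dy).sum()  # approximate body length (arc length)
    return k * L
```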
We used these behavioral data to generate phase portraits, geometric representations of a dynamical system's trajectories over time (Izhikevich, 2007), in which the time derivative of the curvature is plotted against the curvature. If the curvature were sinusoidal over time, as it is often modeled in slender swimmers (Fang-Yen et al., 2010; Gray, 1933; Guo and Mahadevan, 2008; Niebur and Erdös, 1991), the time derivative of curvature would also be sinusoidal, with a phase shift of π/2 radians relative to the curvature, and the resulting phase portrait would be symmetric about both the K and dK/dt axes. Instead, we found that the phase portrait of the bending movement in the worm's head region (0.1-0.3 body coordinate) during forward locomotion is non-ellipsoidal and strongly asymmetric with respect to reflection across the K or dK/dt axes (Figure 2D). Plots of both the phase portrait (Figure 2D) and the time dependence (Figure 2C) show that K and dK/dt are strongly non-sinusoidal.
In addition to the head, other parts of the worm's body also display nonsinusoidal bending movements ( Figure 2-figure supplement 1). In this paper, we focus on curvature dynamics of the worm's head region (0.1-0.3 body coordinate) where the bending amplitude is largest and the nonsinusoidal features are most prominent (Figure 2-figure supplement 1).
We asked whether the phase portrait represents a stable cycle, that is, whether the system tends to return to the cycle after fluctuations or perturbations away from it. To this end, we analyzed the recovery after brief optogenetic muscle inhibition. We used a closed-loop system for optically targeting specific parts of the worm (Leifer et al., 2011) to apply brief pulses of laser illumination (0.1 s duration, 532 nm wavelength) to the heads of worms expressing the inhibitory opsin NpHR in body wall muscles (via the transgene Pmyo-3::NpHR). Simultaneous muscle inhibition on both sides causes C. elegans to straighten due to internal elastic forces (Fang-Yen et al., 2010). Brief inhibition of the head muscles during forward locomotion was followed by a maximum degree of paralysis approximately 0.3 s after the end of the pulse, then a resumption of undulation (Figure 3A,B; Video 1).
To quantify the recovery dynamics, we defined a normalized deviation d describing the state of the system relative to the phase portrait of normal oscillation (see Appendix), such that d = −1 at the origin, d = 0 on the limit cycle, and d > 0 outside the limit cycle. We found that the deviation following optogenetic perturbation (Figure 3-figure supplement 1) decays toward zero regardless of the initial deviation from the normal cycle, indicating that the worm tends to return to its normal oscillation after a perturbation. These results show that C. elegans head oscillation during forward locomotion is stable under optogenetic perturbation. The dynamics of these perturbed worms also allow us to reconstruct the phase isochrons and vector flow fields (Figure 3-figure supplement 2) of the worm's head oscillation, two other important aspects of an oscillator (see Appendix).
Taken together, these results show that during forward locomotion, head oscillation of a worm constitutes a stable oscillator containing a nonsinusoidal limit cycle.
Transient optogenetic inhibition of head muscles yields a slowly rising, rapidly falling phase response curve
The phase response curve (PRC) describes the change in phase of an oscillation induced by a perturbation, as a function of the phase at which the perturbation is applied, and is often used to characterize biological and nonbiological oscillators (Izhikevich, 2007; Pietras and Daffertshofer, 2019; Schultheiss et al., 2011). We performed a phase response analysis of the worm's locomotion upon transient optogenetic inhibition.
Using data from 991 illuminations (each 0.1 s in duration) in 337 worms, we analyzed the animals' recovery from transient paralysis as a function of the phase at which the illumination occurred. We define the phase such that it equals zero at the point of maximum ventral bending (Figure 3D). When inhibition occurred with phase in the interval [0, π/6], the head typically straightened briefly and then continued the previous bend, resulting in a phase delay for the oscillation (Figure 3C-E). When inhibition occurred with phase in the interval [π/3, π/2], the head usually appeared to discontinue the previous bending movement, which resulted in a small phase advance (Figure 3F-H). When inhibition occurred with phase in the interval [2π/3, 5π/6], the head response was similar to that within the interval [0, π/6] and also resulted in a phase delay (Figure 3I-K). Combining the data from all phases of inhibition yielded a sawtooth-shaped PRC with two sharp transitions from phase delay to advance, as well as two relatively slow ascending transitions from phase advance to delay (Figure 3L,M). In control worms, which do not express NpHR in the body wall muscles (see Materials and methods), the resulting PRC shows no significant phase shift at any phase of illumination. In addition to phase response analyses with perturbations to the worm's anterior region, we conducted similar analyses for the dynamics across the body by optogenetically inhibiting body wall muscles in other regions (Figure 3-figure supplement 5). We found that the sawtooth feature of the PRC tends to decrease monotonically as the perturbation occurs further away from the head (Figure 3-figure supplement 5A,E,I).
Next, we asked whether the sharp downward transitions in the PRC represent a continuous decrease or instead result from averaging data from a bimodal distribution. When we plotted the distribution of the same data in a 2-D representation, we found that the phase shifts display a piecewise-linear, increasing dependence on the phase of inhibition, with two abrupt jumps occurring at φ ≈ π/3 and 4π/3, respectively (Figure 3M). This result shows that the sharp decreasing transitions in the PRC reflect bimodality in the data rather than continuous transitions.
In addition to examining PRCs induced by muscle inhibition, we also calculated PRCs with respect to inhibition of cholinergic motor neurons. We performed similar experiments on transgenic worms in which the inhibitory opsin NpHR is expressed in either all cholinergic neurons (Punc-17::NpHR::ECFP) or B-type motor neurons (Pacr-5::Arch-mCherry). In both strains, we again observed sawtooth-shaped PRCs (Figure 3-figure supplements 6 and 7), with variations only in the magnitudes of the phase shifts. These experiments show that the sawtooth-shaped feature of the PRC is maintained under motor neuron inhibition, suggesting that transient muscle and neuron inhibition interrupt the motor circuit dynamics in a similar manner.
(Figure 3 legend, continued: (C-E) curvature dynamics, mean curvature around inhibitions aligned at t = 0 (ATR+ group: 11 trials from 4 worms; no-light ATR+ controls: 8 trials from 3 worms; individual ATR+ trials in gray), and mean phase portraits around the inhibitions (controls: 3998 trials from 337 worms), for the phase range [0, π/6]; (F-H) and (I-K), as (C-E) for phase ranges [π/3, π/2] and [2π/3, 5π/6]; (L) PRC from optogenetic inhibition experiments (ATR+ group, 991 trials from 337 worms; moving average with 0.16π bin width, filled area 95% confidence interval); (M) 2-D histogram of the same data, 25 bins per dimension. Video 1 (https://elifesciences.org/articles/69905#video1): transient illumination of the anterior region of a freely moving Pmyo-3::NpHR worm; the green-shaded region indicates the timing and location of illumination.)
The GABAergic D-type motor neurons provide dorsoventral reciprocal inhibition of opposing muscles during locomotion. We asked whether the D-type motor neurons are required for the observed sawtooth shape of the PRC. We examined transgenic worms that express NpHR in the body wall muscles but carry unc-49(e407), a loss-of-function mutation in a GABA_A receptor required for D-type motor neuron function (Bamber et al., 1999). After performing optogenetic inhibition experiments, we found that the PRC again displays a sawtooth feature (Figure 3-figure supplement 8). This result shows that D-type motor neurons are not necessary for the motor rhythm generator to produce the sawtooth-shaped PRC.
Sawtooth-shaped PRCs are observed in a number of systems with oscillatory dynamics, including the van der Pol oscillator (Cestnik and Rosenblum, 2018), and may reflect a phase resetting property of an oscillator with respect to a perturbation (Izhikevich, 2007;Schultheiss et al., 2011). Further interpretation of the PRC results is given below.
Worm muscles display a rapid switch-like alternation during locomotion
As a first step in interpreting and modeling our findings, we estimated the patterns of muscle activity in freely moving worms, in part by drawing on previous biomechanical analyses of nematode movement (Fang-Yen et al., 2010; Gray and Lissmann, 1964; Wallace, 1968).
In mechanics, a moment is a measure of the ability of forces to produce bending about an axis. Body wall muscles create local dorsal or ventral bending by generating active moments across the body. In addition to the active moments from muscles, there are also passive moments generated by the worm's internal viscoelasticity and by the forces due to the interaction of the worm with its external environment.
We estimated the output patterns of the active muscle moment that drives the head oscillations of freely moving worms immersed in viscous solutions. Following previous analyses of C. elegans locomotor biomechanics under similar external conditions (Fang-Yen et al., 2010), the scaled active muscle moment can be described as a linear combination of the curvature and the time derivative of the curvature (Equation 1; also see Methods and Appendix). We observed that in the phase portrait graph ( Figure 2D), there are two nearly linear portions of the curve. We hypothesized that these linear portions correspond to two bouts during which the active muscle moment is nearly constant.
Using fits to the phase plot trajectory (see Materials and methods and Appendix), we estimated the waveform of the active muscle moment as a function of time (Figure 2D Inset). We found that the net active muscle moment alternates between two plateau regions during forward locomotion. From the slope of the steep portions of this curve, we estimated the time constant for transitions between active moments to be τ_m ≈ 100 ms. This time constant is much smaller than the duration of each muscle moment plateau period (≈0.5 s), suggesting that the system undergoes rapid switches of muscle contraction between two saturation states.
A relaxation oscillator model explains nonsinusoidal dynamics
We reasoned that the rapid transitions of the active muscle moment might reflect a switching mechanism in the locomotory rhythm generation system. We hypothesized that the motor system generates locomotory rhythms by switching the active moment of the muscles based on proprioceptive thresholds.
To expand further upon these ideas, we developed a quantitative model of locomotory rhythm generation. We consider the worm as a viscoelastic rod whose scaled curvature K(t) varies according to:

K + τ_u dK/dt = M_a(t), (1)

where τ_u describes the time scale of bending relaxation and M_a(t) is the time-varying active muscle moment scaled by the bending modulus and the body length (see detailed derivations in Appendix). We note that in a stationary state (dK/dt = 0), the curvature would be equal to the scaled active muscle moment. That is, the scaled active moment represents the static curvature that would result from a constant muscle moment.
We define a proprioceptive feedback variable P as a linear combination of the current curvature value and the rate of change of curvature. In our model, once this variable reaches either of two thresholds, P_th and −P_th (Figure 4D), the active muscle moment undergoes a change of sign (Figure 4E), causing the head to bend in the opposite direction (Figure 4B).
Our model has five parameters: (1) τ_u, the bending relaxation time scale; (2) τ_m, the muscle switching time scale; (3) M₀, the amplitude of the scaled active muscle moment; and (4-5) β and P_th, which determine the switch threshold. The first three parameters were directly estimated from our experimental results on freely moving worms (see Appendix). Parameters β and P_th were obtained using a two-round fitting procedure, fitting the model first to the freely moving dynamics (first round) and then to the experimental phase response curve (second round) (see Appendix).
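As a concrete illustration, a minimal version of this threshold-switch loop can be written as follows (a Python sketch of ours, not the published code; it uses Euler integration and a first-order approach to the target moment in place of the logistic transition detailed in the Appendix, with the parameter values quoted in the text):

```python
import numpy as np

# Parameter values quoted in the text
tau_u, tau_m = 0.26, 0.10        # bending relaxation / muscle switching time scales (s)
M0, beta, P_th = 8.45, 0.046, 2.33

dt, T = 0.001, 10.0              # Euler step and total simulated time (s)
n = int(T / dt)

K = 0.0                          # scaled curvature (ventral positive)
Ma = M0                          # active muscle moment, starting ventral
target = M0                      # saturation state the moment is heading toward
Ks = np.empty(n)

for i in range(n):
    dK = (Ma - K) / tau_u        # Equation 1: curvature relaxes toward Ma
    P = K + beta * dK            # proprioceptive feedback variable
    # Threshold switch: reverse the target moment when P crosses +/-P_th
    if target > 0 and P >= P_th:
        target = -M0
    elif target < 0 and P <= -P_th:
        target = M0
    Ma += dt * (target - Ma) / tau_m   # finite-speed muscle transition
    K += dt * dK
    Ks[i] = K

# Ks should settle onto a stable, nonsinusoidal limit cycle (compare Figure 4B,C).
```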
With this set of parameters, we calculated the model dynamics as represented by the phase portrait ( Figure 4C) as well as curvature waveform in one cycle period ( Figure 4F). We found that in both cases the model result agreed with our experimental observations. Our model captures the asymmetric phase portrait trajectory shape found from our experiments ( Figure 2D). It also describes the asymmetry of head bending during locomotion: bending toward the ventral or dorsal directions occurs slower than straightening toward a straight posture during the locomotory cycle ( Figure 4F Inset).
Considering the hypothesized mechanism under the biomechanical background (Equation 1), our model provides a simple explanation for the observed bending asymmetry during locomotion. According to the model, the active muscle moment is nearly constant during each period between transitions of the muscle moment. Biomechanical analysis under this condition predicts an approximately exponential decay in curvature, which gives rise to an asymmetric feature during each half period ( Figure 4F).
Relaxation oscillator model reproduces responses to transient optogenetic inhibition
We performed simulations of optogenetic inhibitions in our model. To model the transient muscle paralysis, the muscle moment is modulated by a bell-shaped function of time ( Figure 4-figure supplement 1; also see Appendix) such that, upon inhibition, it decays toward zero and then recovers to its normal value, consistent with our behavioral observations ( Figure 3B).
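A sketch of one such modulation is given below (a Gaussian bell is assumed here for concreteness; `depth` and `width` are illustrative stand-ins for the fitted inhibition parameters):

```python
import numpy as np

def inhibition_factor(t, t_on, depth=1.0, width=0.15):
    """Bell-shaped modulation of the active muscle moment around a light
    pulse at time t_on: ~1 far from the pulse, ~(1 - depth) at its center."""
    return 1.0 - depth * np.exp(-((t - t_on) ** 2) / (2.0 * width ** 2))

# Inside the simulation loop above, the muscle moment would be attenuated as
#   Ma_effective = Ma * inhibition_factor(i * dt, t_on)
# Sweeping t_on across one undulatory cycle and measuring the shift of the
# subsequent curvature maxima then yields a model PRC.
```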
From simulations with different sets of model parameters, we found that the model PRCs consistently exhibited the sawtooth shape found in experiments, though differing in the height and timing of the downward transitions. In addition to the model parameters τ_u, M₀, and τ_m that had been explicitly estimated from free-moving experiments, we performed a two-round fitting procedure (see Appendix) to determine the other parameters (including β, P_th, and parameters describing the optogenetically induced muscle inhibitions; see Figure 4-figure supplement 1) so as to best fit the freely moving dynamics and the experimental PRC, respectively, with minimum mean squared error (MSE) (Figures 4F and 5A; also see Appendix). For the parameters β and P_th, the optimization estimated their values to be β = 0.046 s and P_th = 2.33, as shown on the phase portraits (gray dashed lines in Figures 4C, 5B and D).
The threshold-switch mechanism model provides an explanation for the observed sawtooth-shaped PRC. By comparing model phase portrait graphs around inhibitions occurring at different phases (Figure 5B-E), we found that the phase shift depends on the relative position of the inhibition with respect to the switch points on the phase plane. (1) If the effect of the inhibition occurs before the system reaches its switch point (Figure 5B), the system will recover by continuing the previous bend and the next switch in the muscle moment will be postponed, thereby leading to a phase delay (Figure 5C). (2) As the inhibition progressively approaches the switch point, one would expect that the next switch in the muscle moment will also be progressively postponed; this explains the increasing portions of the PRC. (3) If the inhibition coincides with the switch point (Figure 5D), the muscle moment will be switched at this point and the system will recover by aborting the previous bend tendency, resulting in a small phase advance (Figure 5E). This switching behavior explains the two sharp downward transitions in the PRC.
Relaxation oscillator model predicts phase response curves for single-side muscle inhibition
As a further test of the model, we asked what PRCs would be produced with only the ventral or dorsal head muscles being transiently inhibited. In the model, the muscle activity is represented using the scaled active moment of muscles. We conducted model simulations (see Appendix) to predict the PRCs for transient inhibitions of muscles on the dorsal side ( Figure 6A, Upper) and ventral side ( Figure 6B, Upper), respectively. To experimentally perform phase response analysis of single-side muscle inhibitions, we visually distinguished each worm's dorsoventral orientation (via vulval location) and targeted light to either the ventral or dorsal side of the animal. Transiently illuminating (0.1 s duration) dorsal or ventral muscles in the head region of the transgenic worms (Pmyo-3::NpHR) induced a brief paralyzing effect when the segment was bending toward the illuminated side but did not induce a significant paralyzing effect when the segment was bending away from the illuminated side ( Figure 6-figure supplement 1).
Combining the experimental data from all phases of dorsal-side or ventral-side inhibition yielded the corresponding PRCs (Figure 6A,B, respectively), from which we found that both PRCs show a peak in the phase range during which the bending side is illuminated but show no significant phase shift in the other phase range. The experimental observations are qualitatively consistent with the model predictions.
We found that the PRC of dorsal-side illumination shows a smaller paralytic response than that of ventral-side illumination. This discrepancy may be due to different degrees of paralysis achieved during ventral vs. dorsal illumination (Figure 6-figure supplement 1), possibly reflecting differences in levels of opsin expression and/or membrane localization. We therefore adjusted the parameter describing the degree of paralysis when simulating the PRC of dorsal-side illumination to qualitatively account for this discrepancy (see Appendix).
Our model is consistent with the dependence of wave amplitude and frequency on external load
C. elegans can swim in water and crawl on moist surfaces, exhibiting different undulatory gaits characterized by different frequencies, amplitudes, and wavelengths (Figure 7A). Previous studies (Berri et al., 2009; Fang-Yen et al., 2010) have shown that increasing the viscosity of the medium induces a continuous transition from a swimming gait to a crawling gait, characterized by a decreasing undulatory frequency (Figure 7C) and an increasing curvature amplitude (Figure 7D). We asked whether our model is consistent with this load-dependent gait adaptation. We incorporated the effect of external viscosity into our model through the bending relaxation time constant τ_u (see Appendix) and ran the model with varying viscosity η to determine the dependence of the model output on viscosity. We found that the model results for the dependence of frequency and amplitude on the viscosity of the external medium are in quantitative agreement with previous experimental results (Fang-Yen et al., 2010; Figure 7C,D).
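The following self-contained sketch illustrates this trend by sweeping τ_u, the parameter through which external viscosity enters the model (the η → τ_u mapping implied by Appendix Equation A6 is not reproduced here, so the sweep values are purely illustrative):

```python
import numpy as np

def simulate(tau_u, tau_m=0.10, M0=8.45, beta=0.046, P_th=2.33,
             dt=0.001, T=30.0):
    """Minimal threshold-switch model; returns undulation frequency and amplitude."""
    n = int(T / dt)
    K, Ma, target = 0.0, M0, M0
    Ks = np.empty(n)
    for i in range(n):
        dK = (Ma - K) / tau_u
        P = K + beta * dK
        if target > 0 and P >= P_th:
            target = -M0
        elif target < 0 and P <= -P_th:
            target = M0
        Ma += dt * (target - Ma) / tau_m
        K += dt * dK
        Ks[i] = K
    tail = Ks[n // 2:]                              # discard the transient
    up = np.where(np.diff(np.sign(tail)) > 0)[0]    # upward zero crossings
    freq = (len(up) - 1) / ((up[-1] - up[0]) * dt)
    return freq, tail.max()

# A larger tau_u (higher viscosity) gives slower, larger-amplitude undulation:
for tau_u in (0.05, 0.26, 1.0):
    f, amp = simulate(tau_u)
    print(f"tau_u = {tau_u:4.2f} s -> freq = {f:4.2f} Hz, amplitude = {amp:4.2f}")
```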
We sought to develop an intuitive understanding of how the model output changes with increasing viscosity. We recall that the model generates a proprioceptive feedback variable of the form P = K + βK̇ (Figure 4A), and that the active muscle moment in our model undergoes a change of sign when the proprioceptive feedback reaches either of two thresholds, P_th and −P_th. As the viscosity increases, one expects the worm to undulate more slowly due to the increase in external load; that is, the term βK̇ becomes smaller. To compensate for this effect, the worm needs to undulate with a larger curvature amplitude to maintain the same level of proprioceptive feedback.
Next, we asked how the PRC depends on external viscosity. Model simulations with three different viscosities produced PRCs with similar sawtooth shape but with sharp transitions delayed in phase as the external viscosity increases ( Figure 7F). We also measured PRCs from optogenetic inhibition experiments in solutions of three different viscosities ( Figure 7G). Comparing the relative locations of the transitions in PRCs between the model and the data, our prediction also quantitatively agrees with the experimental results.
These results further support the model's description of how undulatory dynamics are modulated by the external environment.
Evaluation of alternative oscillator models
Although our computational model agrees well with our experimental results, we asked whether other models could also explain our findings. We examined three alternative models based on well-known mathematical descriptions of oscillators (the van der Pol, Rayleigh, and Stuart-Landau oscillators) and compared them with our original threshold-switch model and with our experimental data.
First, we tested the van der Pol oscillator, the first relaxation oscillator model (Van der Pol, 1926), which has long been applied in modeling neuronal dynamics (Fitzhugh, 1961; Nagumo et al., 1962). It is based on a second-order differential equation for a harmonic oscillator with a nonlinear, displacement-dependent damping term (see Appendix). By choosing a set of appropriate parameters, we found that the free-running waveform and phase plot of the van der Pol oscillator are highly asymmetric, but in an inverted manner (Figure 5-figure supplement 1B,F) compared with the experimental observations (Figure 2C,D). Transiently perturbing the system with the bell-shaped modulatory function at all phases within a cycle produced a sawtooth-shaped PRC similar to that observed experimentally (Figure 5-figure supplement 1N). However, the perturbed system was found to recover toward its limit cycle at a much slower rate than in the experiments (Figure 5-figure supplement 1J). Simulations of single-side muscle inhibitions in this system produced single-sawtooth-shaped PRCs similar to those found experimentally (Figure 6-figure supplement 2B,F).
Next, we examined the Rayleigh oscillator, another relaxation oscillator model, which was originally proposed to describe self-sustained acoustic vibrations such as those of clarinet reeds (Rayleigh, 1896). It is based on a second-order differential equation with a nonlinear, velocity-dependent damping term and can be obtained from the van der Pol oscillator via differentiation and substitution of variables (see Appendix). From its free-running dynamics, we observed that the system exhibits a highly asymmetric waveform and phase plot similar to the experimental observations (Figure 5-figure supplement 1C,G). Additionally, the Rayleigh oscillator also produces sawtooth-shaped PRCs with respect to transient muscle inhibitions of both sides (Figure 5-figure supplement 1O), the dorsal side (Figure 6-figure supplement 2C), and the ventral side (Figure 6-figure supplement 2G), respectively, and the system's recovery rate after the perturbation was similar to that of the experiments (Figure 5-figure supplement 1K).
Finally, we considered the Stuart-Landau oscillator, a commonly used model for the analysis of neuronal synchrony (Acebrón et al., 2005). Its nonlinearity is based on a negative damping term that depends on the magnitude of the state variable, defined in the complex domain (see Appendix). The negative damping constantly neutralizes the positive damping on the limit cycle, making the free-running dynamics a harmonic oscillation with a sinusoidal waveform (Figure 5-figure supplement 1D,H). Moreover, its PRCs with respect to transient muscle inhibitions are constant over phase (Figure 5-figure supplement 1P), contrary to the experiments.
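For reference, minimal free-running versions of these alternative oscillators can be written as follows (Euler integration; the nonlinearity parameter μ is chosen large to emphasize relaxation-type behavior and is not fitted to data):

```python
import numpy as np

def integrate(accel, x0=0.1, v0=0.0, dt=0.001, T=30.0):
    """Euler integration of a second-order oscillator x'' = accel(x, x')."""
    n = int(T / dt)
    xs = np.empty(n)
    x, v = x0, v0
    for i in range(n):
        a = accel(x, v)
        x += dt * v
        v += dt * a
        xs[i] = x
    return xs

mu = 5.0

# van der Pol: nonlinear damping depends on displacement x
vdp = integrate(lambda x, v: mu * (1.0 - x**2) * v - x)

# Rayleigh: nonlinear damping depends on velocity x' (obtained from the
# van der Pol equation by differentiation and substitution)
ray = integrate(lambda x, v: mu * (1.0 - v**2) * v - x)

# The Stuart-Landau oscillator, z' = (1 + 1j*omega - abs(z)**2) * z with
# complex z, settles instead onto a circular (sinusoidal) limit cycle.
```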
We then compared the results of all these models with the experimental results. In the van der Pol oscillator, the free-running waveform displays a different asymmetry (Figure 5-figure supplement 1B,F) compared with the experimental observations, and the perturbed system recovers toward its limit cycle at a much slower rate than in the experiments (Figure 5-figure supplement 1J). The Rayleigh oscillator reproduces a free-running waveform similar to the experimental ones (Figure 5-figure supplement 1C,G), and its recovery rate toward the limit cycle upon perturbation was close to that of the experiments (Figure 5-figure supplement 1K). However, its PRC (Figure 5-figure supplement 1O) showed weaker agreement with the experimental PRC than the threshold-switch model (Figure 5-figure supplement 1M) or the van der Pol model (Figure 5-figure supplement 1N). Of all the models tested, the threshold-switch model showed the smallest mean-square error with respect to the PRC data (Figure 5-figure supplement 1M-P). We conclude that, of these models, our threshold-switch model produced the best overall agreement with experiments.
We also found that two important experimental findings, the nonsinusoidal free-moving dynamics and the sawtooth-shaped PRCs, are reproduced by our original model and by the van der Pol and Rayleigh oscillators, which are all relaxation oscillators, but not by the Stuart-Landau oscillator, which is not a relaxation oscillator. Taken together, these results are consistent with the idea that a relaxation oscillation mechanism may underlie C. elegans motor rhythm generation.
Discussion
In this study, we used a combination of experimental and modeling approaches to probe the mechanisms underlying C. elegans motor rhythm generation.
Our model can be compared to those previously described for C. elegans locomotion. An early model (Niebur and Erdős, 1991) assumes that a CPG located in the head initiates dorsoventral bends and that a combination of neuronal and sensory feedback mechanisms propagates the waves in the posteriorward direction. In this model, sensory feedback plays a modulatory role in producing smoother curvature waves but is not explicitly required for rhythm generation itself. Other computational models have aimed to describe how the motor circuit generates rhythmicity. Several neural models of the forward-moving circuit (Karbowski et al., 2008; Olivares et al., 2021) incorporating all major neural components and connectivity have been developed. These models included a CPG in the head based on effective cross-inhibition between ventral and dorsal groups of interneurons. In contrast, Bryden and Cohen (2008) developed a neural model in which each segment along the body is capable of generating oscillations. In this model, a circuit of AVB interneurons and B-type motor neurons suffices to generate robust locomotory rhythms without cross-inhibition.
Other models have examined how C. elegans adapts its undulatory wavelength, frequency, and amplitude to external load (Boyle et al., 2012; Denham et al., 2018; Izquierdo and Beer, 2018; Johnson et al., 2021). To account for these changes, these models combined a motor circuit model with additional assumptions of stretch sensitivity in motor neurons and biomechanical constraints of the worm body, reproducing the changes in undulatory wave patterns under a range of external conditions. Previous detailed models of C. elegans locomotion have employed a relatively large number of free parameters (up to 40; Boyle et al., 2012; Karbowski et al., 2008). In our work, we sought to develop a compact phenomenological model that describes an overall mechanism of rhythm generation rather than the detailed dynamics of specific circuit elements. To improve predictive power, we aimed to minimize the number of free parameters used in the model. Our model has only five free parameters, yet accurately describes a wide range of experimental findings, including the nonsinusoidal dynamics of free locomotion, phase response curves under transient paralysis, and the dependence of frequency and amplitude on external viscosity.
Our phase portrait analysis of the worm's free locomotory dynamics provides a previously undescribed method for measuring the bending relaxation time scale τ_u and the muscle moment transition time scale τ_m (see Appendix for details), which may be compared with previous studies of worm biomechanics (Fang-Yen et al., 2010; Berri et al., 2009) and neurophysiology (Milligan et al., 1997). Fang-Yen et al., 2010 measured a linear relationship between the bending relaxation time scale and the external viscosity by deforming the worm body in Newtonian fluids with viscosities in the range 1-25 mPa·s. Through an extrapolation based on that linear relationship, the relaxation time scale in 17% dextran NGM fluid (approximately 120 mPa·s in viscosity) is estimated to be ≈282 ms, which is quite close to our measured result, τ_u ≈ 260 ms. Furthermore, our measurement of the muscle moment transition time scale (τ_m ≈ 100 ms) is consistent with a previously measured muscle time scale (Milligan et al., 1997) that has been widely adopted in other detailed models of nematode locomotion (Boyle et al., 2012; Bryden and Cohen, 2008; Butler et al., 2015; Chen et al., 2011; Denham et al., 2018; Izquierdo and Beer, 2018; Johnson et al., 2021; Karbowski et al., 2008; Olivares et al., 2021; Wen et al., 2012).
In our model, the mechanism for generating rhythmic patterns can be characterized by a 'relaxation oscillation' process which contains two alternating sub-processes on different time scales: a long relaxation process during which the motor system varies toward an intended state due to its biomechanics under a constant active muscle moment, alternating with a rapid period during which the active muscle moment switches to an opposite state due to a proprioceptive thresholding mechanism.
The term 'relaxation oscillation', as first employed by van der Pol, describes a general form of self-sustained oscillatory system with intrinsic periodic relaxation/decay features (Van der Pol, 1926). The Fitzhugh-Nagumo model (Fitzhugh, 1961; Nagumo et al., 1962), a prototypical model of excitable neural systems, was originally derived by modifying the van der Pol relaxation oscillator equations. These and similar relaxation oscillators have been characterized in various dynamical systems in biology and neuroscience (Izhikevich, 2007). For example, the action potentials of barnacle muscles in their oscillatory mode were found to exhibit 'push-pull' relaxation oscillation characteristics (Morris and Lecar, 1981). The beating human heart was found to behave as a relaxation oscillator (van der Pol, 1940). Several studies of walking behavior in stick insects (Bässler, 1977; Cruse, 1976; Graham, 1985; Wendler, 1968) proposed that the control system for rhythmic step movements constitutes a relaxation oscillator in which the transitions between leg movements are determined by proprioceptive thresholds.
Key properties shared by these relaxation oscillators are that their oscillations differ greatly from sinusoidal oscillations and that they all contain a feedback loop with a 'discharging property': a switch component charges an integrating component until it reaches a threshold, then discharges it, and the process repeats (Nave, 2007). Many relaxation oscillators, including the van der Pol and Rayleigh models, exhibit sawtooth-shaped phase response curves (van der Pol, 1940; also see Figure 5-figure supplement 1). As shown in our experimental and model results, all of the above properties are present in the dynamics of C. elegans locomotive behavior, consistent with the idea that the worm's rhythmic locomotion also results from a type of relaxation oscillator.
In our computational model, a proprioceptive component sensing the organism's changes in posture is required to generate adaptive locomotory rhythms. What elements in the motor system could be providing this feedback? Previous studies have suggested that head and body motor neurons, including the SMDD head motor neurons and the B-type motor neurons, have proprioceptive capabilities (Wen et al., 2012; Yeon et al., 2018) and may also be involved in locomotory rhythm generation (Gao et al., 2018; Kaplan et al., 2020; Xu et al., 2018). This possibility is consistent with the earlier hypothesis that the long undifferentiated processes of these cholinergic neurons may function as proprioceptive sensors (White et al., 1986). In particular, recent findings (Yeon et al., 2018) have revealed that SMDD neurons directly sense head muscle stretch and regulate muscle contractions during oscillatory head bending movements.
In our model, the proprioceptive feedback variable depends on both the curvature and the rate of change of curvature. Many mechanoreceptors are sensitive primarily to time derivatives of mechanical strain rather than strain itself; for example, the C. elegans touch receptor cells exhibit such a dependence (Eastwood et al., 2015; O'Hagan et al., 2005). The ability of mechanosensors to sense the rate of change in C. elegans curvature has been proposed in an earlier study (Butler et al., 2015) in which it was hypothesized that the B-type motor neurons might function as a proprioceptive component in this manner. Mechanosensors encoding a simultaneous combination of deformation and velocity have been observed in mammalian systems, including rapidly-adapting (RA) and intermediate-adapting (IA) sensors in the rat dorsal root ganglia (Rugiero et al., 2010). Proprioceptive feedback that involves a linear combination of muscle length and velocity was also suggested by a study of C. elegans muscle dynamics during swimming, crawling, and intermediate forms of locomotion (Butler et al., 2015). In our phenomenological model, the motor neuron constituent may represent a collection of neurons involved in motor rhythm generation. Therefore, the proprioceptive function posited by our model might also arise as a collective behavior of curvature-sensing and curvature-rate-sensing neurons.
Further identification of the neuronal substrates for proprioceptive feedback may be possible through physiological studies of neuron and muscle activity using calcium or voltage indicators. Studies of the effect of targeted lesions and genetic mutations on the phase response curves will also help elucidate roles of specific neuromuscular components within locomotor rhythm generation.
In summary, our work describes the dynamics of the C. elegans locomotor system in terms of a relaxation oscillation mechanism. Our model of the rhythm generation mechanism followed from a quantitative characterization of free behavior and of responses to external disturbance, information closely linked to the structure of the animal's motor system (Gutkin et al., 2005; Nadim et al., 2012; Schultheiss et al., 2011; Smeal et al., 2010). Our findings represent an important step toward an integrative understanding of how neural and muscle activity, sensory feedback control, and biomechanical constraints generate locomotion.
Worm strains and cultivation
C. elegans were cultivated on NGM plates with Escherichia coli strain OP50 at 20°C using standard methods (Sulston and Hodgkin, 1988). Strains used and the procedures for optogenetic experiments are described in the Key resources table and Appendix. Preparation of OP50 and OP50-ATR plates was as previously described. All experiments were performed with young adult (<1 day) hermaphrodites synchronized by hypochlorite bleaching.
Locomotion and phase response analyses
To perform quantitative recordings of worm behavior, we used a custom-built optogenetic targeting system as previously described (Leifer et al., 2011). Analysis of images of the worm's body posture was performed using previously developed custom software. The anterior curvature is defined as the average of the curvature over body coordinates 0.1-0.3; excluding the range from 0 to 0.1 avoided measurement of high-frequency movements of the worm's anterior tip. Descriptions of the apparatus and image analyses are available in Appendix.
For phase response experiments, opsin-expressing worms were illuminated with a brief laser pulse (532 nm wavelength, 0.1 or 0.055 s duration, irradiance 16 mW/mm²) in the head region (0-0.25 body coordinate). A total of 10 trials with 6 s intervals between successive pulses were performed for each animal. Trials in which the worms did not maintain forward locomotion were censored. To generate the phase response curve (PRC), we calculated the phase of inhibition for each trial and the resulting phase shift. Details of the calculation of the averaged PRC are given in Appendix.
All the data and image analysis codes used in the manuscript are available at Dryad (archived at https://doi.org/10.5061/dryad.wwpzgmsk2).
Computational modeling
Our primary model incorporates a novel neural control mechanism into a previously described biomechanical framework (Fang-Yen et al., 2010; Gray and Lissmann, 1964; Wallace, 1968). A proprioceptive signal is defined by a linear combination of bending curvature and the rate of change of curvature. When the signal reaches a threshold, a switching command is initiated to reverse the direction of the muscle moment. The worm's curvature then relaxes toward the opposite direction, and the process repeats, creating a dorsoventral alternation. Detailed descriptions of the implementation and fitting procedure of this model and the alternative models are available in Appendix. All codes for modeling analyses are available at Dryad (https://doi.org/10.5061/dryad.wwpzgmsk2). The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Optogenetics
For optogenetic experiments, worms were cultivated in darkness on plates with OP50 containing the cofactor all-trans retinal (ATR). For control experiments and free-moving experiments, worms were cultivated on regular OP50 NGM plates without ATR. To make OP50-ATR plates, we added 2 mL of a 100 mM solution of ATR in ethanol to an overnight culture of 250 mL OP50 in LB medium and used this mixture to seed 6 cm NGM plates.
Locomotion analysis
To analyze worm locomotion in viscous fluids, we placed worms in dextran solutions in chambers formed by a glass slide and a coverslip separated by 125-μm-thick polyester shims (McMaster-Carr 9513K42). For viscosity-dependence experiments, we used 5%, 17%, and 35% (by mass) solutions of dextran (Sigma-Aldrich D5376, average molecular weight 1,500,000-2,800,000) in NGMB. These solutions were measured to have viscosities of 10, 120, and 5400 mPa·s, respectively (Fang-Yen et al., 2010). We used a 17% dextran solution for all other experiments. NGMB consists of the same components as NGM medium (Stiernagle, 2006), but without agar, peptone, or cholesterol.
We recorded image sequences using a custom-built optogenetic targeting system based on a Leica DMI4000B microscope under 10X magnification, with dark field illumination provided by red LEDs. Worm images were recorded at 40 Hz with an sCMOS camera (Photometrics optiMOS). We used custom-written C++ software to perform real-time segmentation of the worm during image acquisition. The worm was identified in each image by its boundary and centerline, calculated from a binary image. Anterior-posterior orientation was noted visually during the recording. Segmentation information, including coordinates of the worm boundary and centerline, was saved to disk along with the corresponding image sequences.
Post-acquisition image analysis was performed using custom MATLAB (MathWorks) scripts similar to those in previous reports. The worm centerline in each image was smoothed using a cubic spline fit. We calculated the curvature k as the dot product between the unit normal vector to the centerline and the derivative of the unit tangent vector to the centerline with respect to the body coordinate. The dimensionless curvature K was calculated as the product of k and the worm body length L, represented by the length of the centerline. Since the segmentation was relatively noisy at the tips of the worm, we excluded curvature in the anterior and posterior 5% of the body length. The worm's direction of motion was identified by calculating the gradients in curvature over time and body coordinate, and image sequences in which the worm performed consistent forward movement (lasting at least 4 s) were selected for analysis. The anterior curvature K(t) was defined as the average of the dimensionless curvature over body coordinates 0.1-0.3; this range avoided high-frequency movements of the anterior tip of the animal.
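A stripped-down version of this curvature computation might look as follows (using the standard plane-curve formula, which is equivalent to the normal/tangent-vector definition above; the spline smoothing, ventral-sign convention, and tip exclusion of the actual pipeline are omitted):

```python
import numpy as np

def scaled_curvature(x, y):
    """Dimensionless curvature K = k * L along an ordered 2-D centerline."""
    xp, yp = np.gradient(x), np.gradient(y)      # derivatives w.r.t. point index
    xpp, ypp = np.gradient(xp), np.gradient(yp)
    # Parameterization-invariant curvature of a plane curve
    k = (xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5
    L = np.hypot(np.diff(x), np.diff(y)).sum()   # body length from arc length
    return k * L

# Check on a quarter circle of radius 2 (curvature 1/2, arc length pi):
theta = np.linspace(0.0, np.pi / 2.0, 200)
K = scaled_curvature(2.0 * np.cos(theta), 2.0 * np.sin(theta))
# K should be approximately 0.5 * pi = 1.57 away from the endpoints.
```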
To quantify oscillatory dynamics during forward locomotion, we identified undulatory cycles from the time sequence of anterior curvature in each worm. Local extrema along each sequence were identified and portions between consecutive local maxima were defined as individual cycles. To minimize the effects of changes in the worm's frequency, we excluded cycles whose period deviated by more than 20% from the average period of all worms' undulations in each experimental session.
For ease of computing average dynamics, we converted individual cycles from a time-dependent to a phase-dependent curvature by uniformly rescaling each cycle to a phase range of 2π. The averaged curvature within one cycle was then computed by averaging all individual cycles at each phase. Similarly, the averaged phase derivative of curvature within one cycle was calculated as the average of dK/dφ across all individual cycles.
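In code, this rescaling and averaging step can be sketched as follows (names are ours; the published analysis may differ in detail):

```python
import numpy as np

def average_cycle(cycles, n_phase=100):
    """Rescale each cycle to a common phase axis [0, 2*pi) and average.

    cycles : list of 1-D curvature arrays, one per undulatory cycle
             (peak-to-peak segments of K(t), generally of unequal length).
    """
    phase = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
    rescaled = [np.interp(phase,
                          np.linspace(0.0, 2.0 * np.pi, len(c), endpoint=False),
                          c)
                for c in cycles]
    return phase, np.mean(rescaled, axis=0)
```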
Stability of the worm's head oscillation
To examine the stability of the worm's head oscillation during forward locomotion, we analyzed head oscillations of worms that were optogenetically perturbed with 0.1 s muscle inhibitions and estimated their recovery dynamics after deviating from the normal oscillation due to the perturbation.
To illustrate the oscillation dynamics, we use a two-dimensional variable, x = (K, λK̇), in units of curvature, where λ = 0.135 s is a scaling factor. In Figure 3-figure supplement 2, we depicted the closed trajectory (black) in the plane spanned by the variables K and λK̇ for the head oscillation of unperturbed moving worms (this coordinate plane is in fact a linearly scaled version of the phase plane spanned by the variables K and K̇), which we call the normal cycle of the worm's head oscillation.
Next, we defined an amplitude variable d that represents the normalized deviation from the normal cycle. If the oscillator is stable, the closed orbit of the unperturbed dynamics is usually called the stable limit cycle; here, we keep to the notion of a normal cycle instead of 'limit cycle' to avoid presupposing the stability of the worm's head oscillation. For any phase state of an individual oscillation, the normalized deviation from the normal cycle is defined as d = D(φ)/D_C(φ) − 1. Here, D(φ) is the distance of the current state to the center of oscillation on the phase plane, which is set to the origin, and φ denotes the phase value of the current state, estimated by the four-quadrant inverse tangent of the variable pair (K, λK̇). In this expression, D_C(φ) denotes the distance to the center of oscillation evaluated exactly on the normal cycle at phase φ.
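In code, the deviation measure amounts to the following (D_C is assumed to be available as an interpolant of the normal-cycle radius; names are ours):

```python
import numpy as np

def deviation(K, lam_dK, D_C):
    """Normalized deviation d = D(phi)/D_C(phi) - 1 from the normal cycle.

    K, lam_dK : coordinates (K, lambda * dK/dt) of the current state
    D_C       : callable returning the normal-cycle radius at a given phase
    """
    phi = np.arctan2(lam_dK, K)   # four-quadrant phase estimate
    D = np.hypot(K, lam_dK)       # distance to the oscillation center (origin)
    return D / D_C(phi) - 1.0

# Sanity checks against a circular normal cycle of radius 5:
D_C = lambda phi: 5.0
print(deviation(0.0, 0.0, D_C))   # -1.0 at the origin
print(deviation(5.0, 0.0, D_C))   #  0.0 on the normal cycle
print(deviation(0.0, 7.5, D_C))   #  0.5 outside it
```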
Using the deviation from the normal cycle to describe the amplitude of the worm's head oscillation, we collected the amplitude dynamics over time for all periods of the worm's head oscillations during which no illumination pulse occurred, that is, all periods of locomotion between two consecutive illumination pulses. We grouped the amplitude dynamics into bins according to their initial amplitudes and then calculated the collective amplitude dynamics for each bin. As shown in Figure 3-figure supplement 1, the collective amplitude variable d converges to zero after roughly 0.5 s regardless of the initial amplitude. This result indicates that the worm's head oscillation returns to its normal oscillation after being perturbed and that the normal cycle may represent a stable limit cycle for the oscillation.
Phase isochron map and vector field for the worm's head oscillation
On the normal cycle we define the phase of the oscillation as φ_C(t) = ω₀·(t mod T₀), where ω₀ = 2π/T₀ is the angular frequency of the normal oscillation (the calculation of T₀ is described in the next subsection). We set the initial phase (φ_C = 0) to be the point at which K reaches its local maximum (that is, x = (K_max, 0)), and hence φ_C = π to be the point at which K reaches its local minimum (x = (K_min, 0)). In this way, we parameterized the normal cycle by defining a bijective map between phases and state points on the cycle. The map Φ(x) = φ is thus well defined for all state points on the normal cycle C. We next estimate the phases for points off the normal cycle. By definition (Izhikevich, 2007), if x₀ is a point on the normal cycle and y₀ is a point off the normal cycle, then y₀ has the same phase as x₀ if the trajectory starting at y₀ converges to the trajectory starting at x₀ as time goes to infinity. Here, we define the set of all state points off the normal cycle having the same phase as a point x₀ on the normal cycle as the isochron (Winfree, 2001) for phase φ₀ = Φ(x₀). In our analysis, it was not possible to define an isochron according to this theoretical definition, since data were always recorded over a finite time period. We therefore used an alternative way to estimate the isochrons on the phase plane of the worm's head oscillation. For each individual trial of illumination, we observed that, owing to the optogenetic inhibition, the variable K̇ quickly decayed toward zero immediately after the illumination and then recovered after approximately 0.3 s as the oscillation converged to a normal oscillation. Therefore, by finding the local minimum of |K̇| immediately after each illumination pulse, we located the point at which the paralyzing effect is just removed and after which the oscillation freely resumes toward the normal oscillation. We call this point the 'notch point' x_N, as it can be clearly seen in the phase plots (Figure 3E, H and K). After the notch point x_N, the oscillation proceeds to its next phase states x(φ = 0) and x(φ = π) (or vice versa), both of which can be easily identified through peak finding on the curvature dynamics K. Hence, we obtained two sub-trajectories from each perturbed oscillation, one ending at x(φ = 0) and the other at x(φ = π). After determining the timing t(x_N) of the notch point, we determined its phase in the following steps: (1) we computed the phase φ_C(t(x_N)) that the state on the normal cycle would have had at time t(x_N) had the perturbation not occurred; this phase was computed twice, using the phase states x(φ = 0) [subscripted with u] and x(φ = π) [subscripted with l] as references, respectively; (2) we calculated the induced phase shift PRC(t_illum), so that the phase of the notch point is φ(x_N) = φ_C(t(x_N)) + PRC(t_illum). Having determined the timing and phase of x_N, we then estimated the phase values for all the points within each of the two sub-trajectories through linear interpolation.
Following the above steps, we calculated the phase values for all the state points on the phase plane that were recorded from the optogenetic experiments. We then applied a 2-D moving average (using the angular statistics method) to the obtained phase values over the phase plane to smooth the isochron map. Finally, we used linear 2-D interpolation to obtain a phase isochron map with a finer resolution, as shown in Figure 3-figure supplement 2.
To compute the vector field of the worm's head oscillation, we collected all the sub-trajectories defined above and took the derivative of each trajectory with respect to time. Thus, by collecting all the phase states (K, λK̇) and their corresponding time derivatives (dK/dt, λ d²K/dt²), which describe the tangent vectors of the trajectories, we generated the raw form of the vector field for the worm's head oscillation. Again, we applied a 2-D moving average to the raw outcome over the phase plane to smooth the vector field. We used linear 2-D interpolation to obtain a vector field with an appropriate number of quivers for display (Figure 3-figure supplement 2).
Phase response analysis
To generate phase response curves (PRCs) from optogenetic inhibition experiments, each trial's illumination phase φ, as well as the induced phase shift Φ, were calculated. To calculate these two variables, the animal's phase of oscillation was estimated from the timings of local extrema identified in the time-varying curvature profiles via a peak finding method. Specifically, (i) the occurrence of illumination in the trial was set to t = T_illum; t = 0 was set at the beginning of each experiment. (ii) Around the illumination, the timings of the two local maxima of curvature immediately before and after it were identified as the two zero-phase points of the oscillation before and after the illumination, respectively; these timings are denoted TZ₋₂, TZ₋₁, TZ₊₁, and TZ₊₂, in ascending order of time. (iii) Similarly, the timings of the two local minima of curvature immediately before and after the illumination were identified as the two half-cycle-phase points before and after the illumination, respectively; these timings are denoted TH₋₂, TH₋₁, TH₊₁, and TH₊₂, in ascending order of time. (iv) With these measurements, the cycle period T₀ was computed as T₀ = (TZ₊₂ − TZ₊₁ + TZ₋₁ − TZ₋₂ + TH₊₂ − TH₊₁ + TH₋₁ − TH₋₂)/4, giving the angular frequency of undulation ω₀ = 2π/T₀ (T₀ was computed as the average of differences of adjacent local maxima/minima before and after illumination; multiple cycles were used here to reduce noise). In addition, the illumination phase of each individual trial was computed as φ_u = ω₀(T_illum − TZ₋₁), and the corresponding phase shift as Φ_u = 2π − ω₀(TZ₊₁ − TZ₋₁). Here, the phase of illumination and the corresponding phase shift were computed twice, using the zero-phase [subscripted with u] and half-cycle [subscripted with l] phase points as references, respectively. We generated 2-D scatter plots for all trials with illumination phase as the x coordinate and the corresponding phase shift as the y coordinate. To visualize the distribution of the scatter points, we generated bivariate histogram plots by grouping the data points into 2-D bins, with 25 bins in each dimension covering the range [0, 2π] for x and [−π, π] for y. To indicate the average tendency of the phase shift as a function of the phase of illumination, we calculated a mean-curve representation of the PRCs via a moving average. In this process, each mean was calculated over a sliding window of width 0.16π along the direction of φ from 0 to 2π. The 95% confidence interval relative to each window of data points was also computed and displayed as a filled area around the PRC. Throughout this computation, all statistical calculations followed the rules of directional statistics (Fisher et al., 1993), since φ and Φ are circular variables defined in radians.
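The per-trial computation for the zero-phase-referenced ('u') branch can be sketched as follows (the sign and wrapping conventions here are our assumptions; the published analysis code may differ):

```python
import numpy as np

def phase_and_shift(T_illum, TZ_m1, TZ_p1, omega0):
    """Illumination phase and induced phase shift from zero-phase timings.

    TZ_m1, TZ_p1 : curvature-maximum times immediately before/after the pulse
    omega0       : unperturbed angular frequency 2*pi/T0
    """
    phi = (omega0 * (T_illum - TZ_m1)) % (2.0 * np.pi)   # phase of illumination
    shift = 2.0 * np.pi - omega0 * (TZ_p1 - TZ_m1)       # deviation of the cycle
    shift = (shift + np.pi) % (2.0 * np.pi) - np.pi      # wrap into [-pi, pi)
    return phi, shift
```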
Phase response curves from perturbations of other body regions
We asked how phase responses for other regions of the body compare to that of the anterior region. We conducted optogenetic experiments that inhibited Pmyo-3::NpHR transgenic worms by transiently illuminating 0.1-0.3 (anterior), 0.4-0.6 (middle), and 0.6-0.8 (posterior) of the body length, respectively. We found that the amplitude of the sawtooth feature of the PRC tends to decrease as the perturbation occurs further from the head (Figure 3-figure supplement 5A,E,I).
We also noticed that, for the same perturbed region, the PRC shape remains unaffected regardless of the body region at which the dynamics were analyzed (Figure 3-figure supplement 5A-C, D-F, and G-I, respectively), suggesting that posterior regions of a freely moving worm follow their anterior neighbors with a constant phase offset. Taken together, these results suggest that a main rhythm generator may operate near the head of the worm to produce the primary oscillations during forward locomotion, and that the sawtooth-shaped feature of the PRC becomes stronger as the perturbation occurs closer to the rhythm generator.
The relaxation oscillator model for locomotor wave generation
We first developed a relaxation oscillator model to simulate rhythm generation during C. elegans forward locomotion. In this model, we incorporated a novel neuromuscular mechanism into a previously described biomechanical framework (Fang-Yen et al., 2010). Here, we simulated only the bending rhythms generated in the head region; wave propagation dynamics are beyond the scope of this study. Our phenomenological model does not describe the detailed activities of individual neurons but focuses on key neuromuscular mechanisms and their contributions to rhythm generation.
To produce model variables that can be directly compared with experimental observations of moving animals, a biomechanical framework was first developed to describe the worm's behavioral dynamics in its external environment. Following previous derivations for C. elegans biomechanics (Fang-Yen et al., 2010), the relationship between the animal's behavioral outputs and the active muscle moment can be described as follows:

C_N ∂y/∂t + α ∂⁴y/∂s⁴ + α_v ∂⁴(∂y/∂t)/∂s⁴ = ∂²m_a/∂s². (A1)

In Equation A1, the first term represents the external viscous force transverse to the body segment, where C_N is the coefficient of viscous drag for transverse movement and y denotes the lateral displacement of the body segment; the second term represents the internal elastic force, where α is the bending modulus of the worm body; the third term represents the internal viscous force, where α_v is the coefficient of the body's internal viscosity. The right side of Equation A1 is the transverse forcing arising from the active muscle moment m_a.
Taking the second partial derivative with respect to the body coordinate s on both sides of Equation A1 and using the linear relation under the small-amplitude approximation, k ≈ ∂²y/∂s², we arrive at:

C_N ∂k/∂t + α ∂⁴k/∂s⁴ + α_v ∂⁴(∂k/∂t)/∂s⁴ = ∂⁴m_a/∂s⁴. (A2)

Under the assumptions of small-amplitude undulations and a fixed wavelength λ along the worm body, k can be considered a travelling sinusoidal wave with a small deviation, k(s, t) = k₀ sin(2πs/λ − ωt) + δ, which leads to the approximation ∂⁴k/∂s⁴ ≈ (2π/λ)⁴ k (and similarly for ∂k/∂t and m_a). Plugging these approximations into Equation A2 while keeping s fixed, after some rearrangement one gets:

k + [(C_N (λ/2π)⁴ + α_v)/α] ∂k/∂t = m_a/α. (A3)

In terms of the dimensionless curvature K = k·L and the dimensionless muscle moment

M_a = m_a·L/α, (A4)

we can rewrite Equation A3 as:

K + τ_u dK/dt = M_a(t), (A5)

where

τ_u = (C_N (λ/2π)⁴ + α_v)/α, (A6)

and we note that Equations A5 and A6 yield Equation 1. In Equation A6, both the wavelength λ and the normal viscous drag coefficient C_N vary with the fluid viscosity η (Berri et al., 2009; Fang-Yen et al., 2010).
The above biomechanical framework in our model treats the worm's body segment as a viscoelastic rod and describes how the body segment bends under the forces provided by the active muscle moment. However, the simulated oscillation in K comes from the rhythmicity of the active muscle moment, which originates from the hypothesized neuromuscular mechanism described by the following relaxation-oscillation process:
i. Proprioceptive feedback is sensed as a linear combination of the current curvature value and the current rate of change of curvature, P = K + βK̇ (black curve in Figure 4D).
ii. During the bending movement, this proprioceptive feedback is constantly compared with two threshold values, P_th and −P_th (gray dashed bars in Figure 4D).
iii. Once the feedback reaches either of the thresholds (the switch points indicated by red circles in Figure 4D), a switch command is initiated (blue square wave in Figure 4E).
iv. The switch command triggers the active muscle moment to change toward the opposite saturation value (black curve in Figure 4E).
To simulate the switch-triggered muscle transition, we used a modified logistic function:

M_a(t) = ±M₀ [2/(1 + e^(−2t/τ_m)) − 1], (A7)

with t measured relative to the time of the switch command. Here, the plus sign indicates the dorsal-to-ventral muscle moment transition, while the minus sign indicates the opposite direction.
To initiate the oscillation in our model, we set the system to bend toward the ventral side by setting M_a|_(t=0) = M₀ and K|_(t=0) = 0. During forward locomotion, the active muscle moment oscillates by undergoing a relaxation oscillation process: a relaxation subperiod during which M_a stays at a saturated bending state (M₀ for ventral bending, −M₀ for dorsal bending) alternates with a shorter subperiod during which M_a quickly transitions toward the opposite state due to the effects described in iii and iv. The bending curvature K(t), which is driven by M_a in an exponentially decaying manner (Equation A5), follows the rhythmic activity of M_a, thereby also exhibiting oscillatory dynamics (Figure 4B). This relaxation oscillator model reproduces two key features of free locomotion that we observed experimentally. First, freely moving worms exhibit a nonsinusoidal curvature waveform with an intrinsic asymmetry: bending toward the ventral or dorsal direction occurs more slowly than straightening toward a straight posture during each locomotory cycle (Figure 4F). Second, the dynamics of the active muscle moment show a trapezoidal waveform during forward locomotion (Figure 2D Inset and Figure 4E). These results are independent of external conditions and reflect intrinsic properties of the neuromuscular mechanisms underlying locomotion rhythm generation.
Note that parameters M₀, τ_u, and τ_m were estimated from free-locomotion data using the phase portrait techniques described in the following subsection. Parameters b and P_th remained degenerate in this model of free locomotion. Here, we temporarily set b = 0 and then set P_th such that the oscillatory period predicted by the model matched the average period measured from experiments with a minimum squared error (Equation A7). The degeneracy of b and P_th was resolved by fitting the model to the experimental PRC, as described in a later subsection, so that all model parameters are given by M₀ = 8.45, τ_u = 260 ms, τ_m = 100 ms, b = 46 ms, and P_th = 2.33.
Measuring bending relaxation time scale and amplitude of active muscle moment
To estimate these two parameters, we applied a heuristic method that uses the shape properties of the C. elegans free-running phase plot (Figure 2D). From the curve in the figure, we noticed two 'flat' portions symmetrically distributed in quadrants I and III of the phase plane. Recalling Equation 1 (or Equation A5), K + τ_u·K̇ = M_a(t), the two flat regions indicate that the scaled active muscle moment M_a(t) is nearly constant during the corresponding time bouts. We then computed the linear correlation between the variables K and K̇ to identify the two 'flat' regions and, through linear fits, obtained two linear relations, from which the bending relaxation time scale τ_u and the amplitude M₀ of the scaled active muscle moment were estimated. The above method used the phase plot measured from locomotion of worms swimming in a 17% dextran solution (120 mPa·s viscosity) as an example; it is equally valid for estimating parameters of locomotion at other viscosities.
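Because M_a is approximately constant (±M₀) on the flat portions, Equation A5 there reduces to the straight line K̇ = −K/τ_u ± M₀/τ_u in the (K, K̇) plane, so a linear fit recovers both parameters. A minimal sketch of this heuristic (the function and variable names are ours, and the window length and correlation cutoff are illustrative, not the study's settings):

```python
import numpy as np

def estimate_tau_u_M0(K, dK, win=50, r_min=0.98):
    """Estimate tau_u and M0 from the 'flat' (locally linear) portions of the
    measured phase trajectory, where dK = -K/tau_u +/- M0/tau_u."""
    slopes, intercepts = [], []
    for i in range(0, len(K) - win, win // 2):
        k, dk = K[i:i + win], dK[i:i + win]
        if abs(np.corrcoef(k, dk)[0, 1]) > r_min:    # high linear correlation = flat region
            s, c = np.polyfit(k, dk, 1)              # fit dK = s*K + c
            slopes.append(s)
            intercepts.append(c)
    tau_u = -1.0 / np.mean(slopes)                   # slope = -1/tau_u
    M0 = float(np.mean(np.abs(intercepts)) * tau_u)  # |intercept| = M0/tau_u
    return tau_u, M0
```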
Measuring active moment transition time scale
With τ_u (estimated by the above method) and the measured ⟨K⟩ and ⟨K̇⟩ plugged into the left side of Equation 1, we were able to compute the waveform of the scaled active muscle moment M_a(t) on the right side of Equation 1. As expected, and as shown in the Figure 2D inset, the curve of M_a(t) is roughly centrally symmetric about the point (T₀/2, 0) in the plane, with two plateau portions indicating the two saturated states for dorsal and ventral muscle contraction, respectively.
The region between the two plateau portions represents a period during which the active muscle moment undergoes a ventral-to-dorsal (or vice versa) transition. We used a modified logistic function (Equation A8) to model the ventral-to-dorsal muscle moment transition (substituting t with −t for the transition in the other direction). To estimate τ_m, the exponential time constant for the transition of the active muscle moment, we took the time derivative of Equation A8 and then the absolute value of the result. When t = 0, |dM_a/dt| attains its maximum, with value M₀/τ_m. On the other hand, the maximum of |dM_a/dt| can be obtained from the experimental observations by simply finding the peak of the |dM_a/dt| curve, where M_a = ⟨K(t)⟩ + τ_u·⟨dK(t)/dt⟩. Thus, τ_m can be estimated as τ_m = M₀ / max|dM_a/dt|.
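In code, the estimate amounts to reconstructing M_a(t) from Equation 1 and reading off the peak slope; a minimal sketch (K_avg and dK_avg stand for the cycle-averaged traces ⟨K⟩ and ⟨dK/dt⟩, and the names are ours):

```python
import numpy as np

def estimate_tau_m(K_avg, dK_avg, t, tau_u, M0):
    """tau_m = M0 / max|dM_a/dt|, with M_a reconstructed via Equation 1."""
    Ma = K_avg + tau_u * dK_avg          # M_a = <K> + tau_u * <dK/dt>
    dMa_dt = np.gradient(Ma, t)          # numerical time derivative
    return M0 / np.max(np.abs(dMa_dt))
```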
Parameter estimation
For our original threshold-switch model, the parameters τ_u, τ_m, and M₀ were estimated from free-locomotion experiments as described above. These three parameters nearly fully determine the biomechanical framework of C. elegans bending movements (governed by Equations A5 and A8). The parameters b and P_th, on the other hand, describe the proprioceptive feedback and the threshold-switch features of our model. Specifically, they characterize two threshold lines, K + b·K̇ = ±P_th (as shown in Figure 4C). The two switch points, defined by the intersections between the phase trajectory and the threshold lines on the phase plane, determine the timing of the switches of the active muscle moment (see Figure 4C-E). We noted that the model's behavioral output during free locomotion is degenerate with respect to these two parameters: the same outcome is produced by any pair of threshold lines that crosses the same pair of switch points. To first determine the free-moving dynamics as well as the switch points, we temporarily set b = 0 and then set P_th such that the oscillatory period predicted by the model matched the average period measured from the experiments.
To break the degeneracy of P_th and b, we fit our model to the experimental phase response curve using a global optimization procedure. The full procedure for determining b and P_th is given below.
Modeling worm oscillations in varied environments
Differences between environments change only those parameters that are related to contact with external forces, whereas parameters related to the oscillator's internal properties are unaffected. For the internal parameters of our model, we used the previously determined values τ_m = 100 ms, M₀ = 8.45, b = 46 ms, and P_th = 2.33. Among the exogenous parameters, only the time constant of undulation, τ_u, varies with external conditions. According to Equation A6, τ_u is explicitly determined by other physical parameters, including biomechanical parameters measured in previous work (Fang-Yen et al., 2010): the internal viscosity of the worm body, a_v = 5·10⁻¹⁶ N·m³·s; the bending modulus of the worm body, a = 9.5·10⁻¹⁴ N·m³; and the coefficient of viscous drag for movement normal to the body, C_N = 3.1η (Katz et al., 1975), where η is the fluid viscosity. Based on previous measurements of undulatory wavelengths in solutions of different viscosity (Fang-Yen et al., 2010), we applied a logarithmic fit to the data points, yielding λ/L = −0.158·log₁₀(η/η₀) + 1.5 for a continuous model realization of undulatory frequency and amplitude; here λ is the wavelength and η₀ = 1 mPa·s. In the experiments, optogenetic muscle inhibition began at the onset of illumination (t = 0 for Figure 3B) and reached its maximal effect at approximately t = 0.3 s. We therefore modeled the process of muscle inhibition by multiplying the scaled active muscle moment M_a by a factor 1 − Q(Δt), a bell-shaped function of the time interval Δt (Figure 4-figure supplement 1, Equation A14).
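The only environment-dependent quantities are thus the wavelength and the drag coefficient; a small sketch of this dependence chain follows (Equation A6 itself, which converts λ and C_N into τ_u, is not reproduced in this appendix, so only the two fitted relations quoted above are coded):

```python
import numpy as np

eta_0 = 1.0   # reference viscosity (mPa*s)

def wavelength_ratio(eta):
    """Fitted undulatory wavelength lambda/L as a function of fluid viscosity eta (mPa*s)."""
    return -0.158 * np.log10(eta / eta_0) + 1.5

def C_N(eta):
    """Normal viscous drag coefficient, proportional to fluid viscosity (Katz et al., 1975)."""
    return 3.1 * eta

# Example: a 120 mPa*s dextran solution shortens the wavelength to ~1.17 L.
print(wavelength_ratio(120.0))
```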
As described in our model, the dorsoventrally alternating pattern of the active muscle moment during locomotion is captured by the dynamics of M_a(t). Specifically, M_a(t) is positive when the ventral muscles contract and the dorsal muscles relax, and negative during the other half of the cycle. Therefore, in our threshold-switch model, specifically inhibiting the dorsal-side, ventral-side, or both-side muscles was computationally equivalent to conditionally modulating M_a(t) with the bell-shaped modulating function, depending on the sign of M_a(t). To simulate the inhibition process in the three alternative models, we factored out a specific term from each model's equations as a generalized active muscle moment and applied the bell-shaped modulating function to this term conditionally for each model. Detailed descriptions of the implementation of modeled inhibition in the alternative models are given below.
To gain a deeper understanding of how phase response curves relate to system dynamics during wave generation, we systematically simulated transient muscle inhibition in the individual model oscillators at different times within a cycle period to generate model PRCs. To do so, we simulated the process of muscle inhibition by multiplying the model's active muscle moment by a modulatory factor 1 − Q(Δt), which has a bell-shaped profile (Figure 4-figure supplement 1; Equation A14), where r = 0.3 s is the timing of maximal paralysis according to our experimental observations of the effect of muscle inhibition (Figure 3A,B), H indicates the maximal degree of paralysis, and p and q measure the paralyzing rate and duration, respectively. To ensure sufficient smoothness during computation, we let p = 0.3·10^(−1/q) so that Q|_{Δt=0} > 0.99. Note that when modeling dorsal-side-only muscle inhibition, the parameter H describing the maximal degree of optogenetic muscle inhibition was reduced to H = 0.5·H_optimal to agree qualitatively with experimental observations (Figure 6). This factor accounts for the unequal degrees of paralysis during ventral versus dorsal illumination (Figure 6-figure supplement 1), which causes the PRC for dorsal-side illumination to show a relatively moderate response compared with ventral-side illumination.
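Since the exact functional form of Equation A14 is not reproduced above, the sketch below uses a generic bell curve peaked at r = 0.3 s as a stand-in; H and q are placeholders for the fitted values, and the closing comment indicates how the factor plugs into the simulation loop shown earlier.

```python
import numpy as np

H, r, q = 0.9, 0.3, 0.1   # max paralysis, time of peak effect (s), duration (s); H, q are fitted

def Q(dt_since_onset):
    """Bell-shaped paralysis profile peaked at r (a generic stand-in for Equation A14)."""
    dt_since_onset = np.asarray(dt_since_onset, dtype=float)
    return np.where(dt_since_onset < 0.0, 0.0,
                    H * np.exp(-((dt_since_onset - r) ** 2) / (2.0 * q ** 2)))

# Within the integration loop, a perturbation delivered at time t_p scales the moment:
#     Ma[i] *= 1.0 - Q(t - t_p)
# Repeating this for t_p spanning one cycle, and measuring the shift of subsequent
# switch times relative to the unperturbed run, yields the model PRC.
```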
To simulate muscle inhibition in our threshold-switch model, we multiplied M_a by (1 − Q) whenever the model was to be inhibited during its oscillatory period. To apply this operation to the alternative models, we factored out a term as a generalized active muscle moment for each model and multiplied it by the bell-shaped function described above. The generalized forms of the active muscle moment for the alternative models are obtained by modifying their original forms as follows: (a) for the van der Pol oscillator (Equation A15); (b) for the Rayleigh oscillator (Equation A16); (c) for the Stuart-Landau oscillator (Equation A17). For each model, M̃_i (with subscript i representing V, R, and S, respectively) is the generalized muscle moment to be multiplied by the bell-shaped factor (1 − Q) upon perturbation, and P_i is the additional damping coefficient. Note that the minus sign preceding M̃_i in the first equation of each set indicates that M̃_i is a negative damping term that provides power to the system, while P_i is set positive to model the effect of bending toward the straight posture due to internal and external viscosity. Note also that Equations A15-A17 are equivalent to their original forms (Equations A11-A13) when inhibition is absent (in which case M̃_i = M_i).
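As one concrete instance, a van der Pol oscillator with its negative damping term factored out as the generalized moment might be sketched as follows, reusing the Q profile sketched above. Equations A11 and A15 are not reproduced in this appendix, so the standard van der Pol form is used here, with illustrative μ and P_V rather than the paper's fitted values:

```python
def vdp_step(x, v, t, dt, t_p=None, mu=1.0, P_V=0.1):
    """One Euler step of x'' = M_V - P_V*x' - x, with the negative-damping term
    factored out as a generalized muscle moment M_V (cf. the text above)."""
    M_V = mu * (1.0 - x ** 2) * v        # negative damping: powers the oscillation
    if t_p is not None:
        M_V *= 1.0 - Q(t - t_p)          # transient inhibition applied to M_V only
    a = M_V - P_V * v - x
    return x + dt * v, v + dt * a
```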
By modeling the muscle inhibition process during locomotion, we were able to perform simulations of phase response experiments on the individual models and produce their perturbed system dynamics (Figure ...).
Optimization of models
For each model we developed, the parameters were determined via a two-round fitting process. First, a subset of parameters was determined by fitting the model to observations of free-moving dynamics; at this point the model could generate free-moving dynamics close to the observations. Second, the remaining parameters were settled by fitting the model to the experimental phase response curves, at which point the model was fully determined. Detailed descriptions of the two-step optimization procedure for the individual models are provided as follows. For the original threshold-switch model, the parameters τ_u, M₀, and τ_m were explicitly estimated from the free-locomotion experiments using the phase portrait techniques described above. To simulate free locomotion, we further determined the position of the switch points in the model (red circles in Figure 4C), using the method described by Equation A7. Next, we plugged the determined parameters into the model and conducted the second round of optimization by fitting the model's undetermined parameters P_th and b, as well as the parameters for simulating muscle inhibition, H and q. We generated the model PRC by perturbing the model oscillator at different times within a cycle period and settled the parameters such that the model PRC matched the experimental one with minimum mean squared error (MSE); during the computation of the MSE, the values of both the model and experimental PRCs were sampled across the entire range of φ with 100 evenly distributed samples, i.e., Δφ = 2π/100. To find the parameters that minimize the difference, a global minimum search was performed using the MATLAB function 'GlobalSearch' (Ugray et al., 2007), which repeatedly runs a local minimum solver over different batches of the parameter range and attempts to locate the solution with the lowest MSE.
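A Python analogue of this second fitting round might look as follows; `simulate_prc_point` and `experimental_prc` are hypothetical placeholders for the model run and the measured curve, and `differential_evolution` stands in for MATLAB's GlobalSearch as a generic global optimizer (the bounds are illustrative):

```python
import numpy as np
from scipy.optimize import differential_evolution

phis = 2 * np.pi * np.arange(100) / 100        # 100 evenly spaced phases, dphi = 2*pi/100

def prc_mse(params):
    P_th, b, H, q = params
    # simulate_prc_point and experimental_prc are hypothetical placeholders.
    model_prc = np.array([simulate_prc_point(phi, P_th, b, H, q) for phi in phis])
    return np.mean((model_prc - experimental_prc) ** 2)   # MSE against the measured PRC

result = differential_evolution(
    prc_mse, bounds=[(0.5, 5.0), (0.0, 0.2), (0.0, 1.0), (0.01, 0.5)])
P_th, b, H, q = result.x
```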
Similarly, the two-step optimization procedures for the individual alternative models are summarized in Appendix 1 (Two-step optimization procedure for the van der Pol, Rayleigh, and Stuart-Landau oscillators). The first-step optimization determines a subset of the parameters such that each model generates free-locomotion dynamics; the second-step optimization completes each model, producing its perturbed dynamics and phase response curves.
Nutritional, environmental and economic implications of children plate waste at school: a comparison between two Italian case studies
Objective: This study aims at comparing two Italian case studies in relation to schoolchildren's plate waste and its implications, in terms of nutritional loss, economic cost and carbon footprint. Design: Plate waste was collected through an aggregate selective weighing method for 39 d. Setting: Children from the first to the fifth grade from four primary schools, two in each case study (Parma and Lucca), were involved. Results: With respect to the served food, in Parma, the plate waste percentage was lower than in Lucca (P < 0·001). Fruit and side dishes were highly wasted, mostly in Lucca (>50 %). The energy loss of the lunch meals accounted for 26 % (Parma) and 36 % (Lucca). Among nutrients, dietary fibre, folate and vitamin C, Ca and K were lost at most (26-45 %). Overall, after adjusting for plate waste data, most of the lunch menus fell below the national recommendations for energy (50 %, Parma; 79 %, Lucca) and nutrients, particularly for fat (85 %, Parma; 89 %, Lucca). Plate waste was responsible for 19 % (Parma) and 28 % (Lucca) of the carbon footprint associated with the food supplied by the catering service, with starchy food being the most important contributor (52 %, Parma; 47 %, Lucca). Overall, the average cost of plate waste was 1·8 €/kg (Parma) and 2·7 €/kg (Lucca), accounting respectively for 4 % and 10 % of the meal full price. Conclusion: A re-planning of the school meals service organisation and priorities is needed to decrease the inefficiency of the current system and reduce food waste and its negative consequences.
Based on the analysis of the contract tenders stipulated between the catering services and the public authorities (e.g. municipalities), different food procurement models can be defined. For example, a food procurement model can be defined as local and/or organic if it is primarily based on local and/or organic food products, while a model can be defined as 'low cost' if no quality requirements are specified in the contract, which relies only on the most economically advantageous offer.
In Italy, the public administration promotes the improvement of school catering service sustainability: by designing healthy and balanced meals compliant with the national dietary guidelines (4); by promoting seasonal and locally sourced food and organic products (with a provision of organic fruit, vegetables, legumes, cereals and bovine meat set to at least 50 % by weight); by encouraging the consumption of low-cost protein sources, such as legumes, alternative fish species and meat cuts; and by including recipes prepared with edible parts of fruit and vegetables usually discarded (5). Recommendations to minimise food waste are also included. Among these, relevant measures are the monitoring of food surplus and waste with a standard procedure, the identification of the main related critical issues, and the development of educational and awareness-raising programmes on food waste involving children and their families (6). Food waste quantification and reduction strategies are therefore crucial in the public procurement sector, which is directly implicated in the promotion of sustainable practices (7), consistently with national and international policies and priorities.
It is worth noting that, globally, in 2019, food waste was estimated at approximately 931 million tonnes, an amount mainly ascribable to households (61 %), followed by food service (26 %) and retail (13 %), corresponding to about one-sixth (17 %) of the food produced globally (8).
In 2015, the UN launched an international call to halve global per-capita food waste at both the retail and the consumer level, and recommended targeting the food losses originating during the production phase and along the supply chain by directing substantial efforts toward prevention, reduction, recycling and reuse activities (9). Food waste is responsible for negative externalities in multiple dimensions. It causes additional use of natural resources such as land, water, chemicals and energy that could be mitigated by enhancing virtuous management practices and strategies to prevent it (10). According to the estimates, the food wasted at the retail and consumption level accounts for 9 % of the greenhouse gas emissions (GHGe) generated by food systems (11), contributing to climate change.
Among the environmental indicators, the Global Warming Potential, also referred to as GHGe (kg CO₂ eq), is the one most considered in studies of meals served in schools (12-15). A sustainability assessment tool allowing the evaluation of food impact on biodiversity has been proposed for catering companies by considering concrete targets defined per meal (16). Some studies (17-20) considered the impact of observed or supposed food waste scenarios at the consumption phase. However, evidence for the school sector is limited (18,20).
In parallel to the environmental dimension, the global economic loss caused by food loss and waste is estimated to amount to $940 billion annually, $218 billion of which is ascribable to the USA (21). In Europe, the annual generation of about 88 million tonnes of food waste (i.e. 174 kg per capita) is associated with an estimated cost of 143 billion euros (22). In the context of meals served in schools, the economic loss of plate waste has been reported by a previous study involving middle schools in Boston, where about 26 % of the total food budget was discarded annually by students at lunch (23).
In the school setting, owing to the difference between the amount of food planned to be consumed by children and their actual intake, food waste can entail considerable nutritional losses. For this reason, to minimise food waste in school canteens, its quantification, analysis and monitoring are paramount (24). Plate waste, defined as the quantity or proportion of food served to people but then discarded, can be used to estimate food intake and the efficacy of interventions developed to strengthen healthy eating behaviours at school (25,26).
This study, conducted within the framework of the Strength2Food European Project, funded by the Horizon 2020 research and innovation programme (grant agreement no 678024), is aimed at analysing and comparing two case studies in Italy in relation to children's plate waste and its nutritional, environmental and economic impact. The two case studies are represented by a sample of primary schools located in the municipality of Parma (Emilia-Romagna region) and the municipality of Lucca (Tuscany region). The investigation follows a previous work in which the two municipalities were presented and evaluated together with eight other case studies across Europe to assess the sustainability impact of different models of public procurement and to discuss the actions and strategies that are more likely to address multiple sustainability outcomes (27). As previously described (27,28) and summarised in the online supplementary material, Supplemental Table S1, Parma and Lucca are characterised by two different food procurement models, defined respectively as local-organic (LOC-ORG) and organic (ORG).
By selecting two procurement models with a different share of local/traditional products in the meal offer, we expected to find different plate waste percentages, in consideration of the role of food neophobia and picky eating as crucial determinants of food rejection in children (29). The LOC-ORG model is therefore expected to be linked to lower plate waste by children, because it offers less opportunity for neophobia.
Methods
Case study description
The LOC-ORG and ORG cases are comparable in terms of territory and of food culture and traditions. For the study, a total of four primary schools, two in each municipality, were selected by applying the following criteria: the presence of at least 100 children attending the school and signed up for the school catering service, and the model followed to prepare and distribute meals (i.e. from a central or an internal kitchen). The distance between the schools and the cooking centre was additionally considered in the ORG case, where school menu preparation was only external, contrary to the LOC-ORG case, where meals were prepared in an on-site or off-site kitchen depending on the school facilities. The profile of the selected primary schools is provided in the Supplementary File (see online supplementary material, Supplemental Table S2).
In both case studies, lunch meals are designed and approved by municipal dieticians. The schools offer a daily single-option meal, represented by the standard menu or by a different one designed for special diets in case of allergy, coeliac disease, religious reasons or specific requests. Parents are responsible for selecting the menu type at the beginning of the school year, or during it in case of contextual illnesses that may call for a bland ('in bianco') meal. Students are personally served by the catering staff, which distributes meals typically composed of a starchy-based first course (i.e. cereal or cereal-derived products such as pasta), a protein-based second course (i.e. eggs, meat, fish, legumes or cheese), vegetables as a side dish, bread and fruit. Children are supposed to eat all the food offered to them; they can ask for a slight modification of the standard portion based on their requests. Dessert is present only on special occasions (e.g. Christmas) in the LOC-ORG case, while it is served once a week as a substitute for fruit in the ORG case.
Owing to the number of children, school lunch was offered in both case studies in two waves of 30 min each, in which students of mixed grades were served. In the LOC-ORG case, the school menu follows a four-week cycle differentiated across the four seasons, while in the ORG case it runs on a seven/eight-week cycle and differs between autumn-winter and spring-summer. This means that, within each seasonal period, the menus are repeated identically after four weeks and after seven/eight weeks, respectively.
Data collection
Seasonal school lunch menus and normative provisions were obtained from the City Council and from the local manager of the central school catering services, respectively. Two weeks (one in winter 2017 and one in spring 2018) were selected in each school. Plate waste, referring to the edible fraction of served food discarded by children, was collected from all children (from the first to the fifth grade) in the school canteens, excluding those served with menus for special diets. An aggregate selective plate waste method (30) was applied, collecting waste separately for seven food categories: starchy food; bread; protein-based dishes; vegetables; fruit; desserts; and 'other'. The latter included dishes characterised by a comparable content of starchy and protein-based food (e.g. pizza). For each dish, the average weight of the edible served food was calculated from three servings offered at the beginning of each wave. The weights of the average servings and of the collected food waste were assessed using electronic weighing scales (e.g. Parcel Digital Weighing Scale, 30 kg, division: 1 g, 9901, Eva Collection).
Data analysis
For each dish, the served food amount (g) was calculated as the average serving of edible food (g) multiplied by the number of children served. The total plate waste (kg) and served food (kg) were obtained, respectively, as the sum of the food waste (kg) and the sum of the served food for every food category across the two schools in each case study and across both data collection weeks. The percentage of food waste for every food category was computed as the ratio between the total edible plate waste (kg) of that food category and the total amount of that category (kg) served to children. Finally, the plate waste, in total and by food category (kg), was divided by the number of children served to estimate the waste per child (g). By subtracting this quantity from the average serving of edible food, the food intake per child was estimated.
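The bookkeeping described here is simple enough to express as code; a minimal sketch for one food category follows (the function name and the example numbers are ours, not study data):

```python
def plate_waste_summary(avg_serving_g, n_children, waste_kg):
    """Per-category plate waste indicators, as defined in the Data analysis section."""
    served_kg = avg_serving_g * n_children / 1000.0      # total edible food served (kg)
    waste_pct = 100.0 * waste_kg / served_kg             # waste as % of served food
    waste_per_child_g = 1000.0 * waste_kg / n_children   # waste per child (g)
    intake_per_child_g = avg_serving_g - waste_per_child_g
    return served_kg, waste_pct, waste_per_child_g, intake_per_child_g

# Example: a 150 g fruit serving offered to 200 children, with 15 kg discarded.
print(plate_waste_summary(150.0, 200, 15.0))   # -> (30.0, 50.0, 75.0, 75.0)
```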
Energy and nutritive values per dish as planned to be served were calculated using the national food composition database for epidemiological studies in Italy (31). The energy and nutrient contents of each food item were summed to obtain the energy and nutritional profile of the menus. By subtracting the energy and nutrient content of plate waste from those calculated for the dishes as planned to be served, an estimate of the actual energy and nutrient intakes was obtained. The energy and nutritional composition of the meals as planned to be served and as consumed was evaluated against the national guidelines for school lunch (depicted in the online supplementary material, Supplemental Table S3).
The environmental impact of plate waste was estimated in terms of the GHGe associated with food production and food waste management by the school meal services in the two case studies. The emission factors applied were retrieved from a multitude of sources (32-35). Specifically, the emission factors applied to food waste follow the approach proposed by Moult and colleagues (36). By multiplying the average emission factor by the total volume of waste collected for each food category, the total production- and transport-related embodied carbon emissions for the single food categories were estimated for both cases. To estimate the total GHGe of the plate waste collected in the two case studies, the contributions of waste transportation and of the disposal method were also considered and summed to obtain the total and category-specific embodied carbon emissions due to production, transportation and waste disposal activities.
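Schematically, the carbon accounting reduces to multiplying category waste volumes by emission factors and adding the transport and disposal contributions; the factors, volumes and shares below are placeholders, not the values used in the study:

```python
# Placeholder emission factors (kg CO2 eq per kg of wasted food) and waste volumes (kg).
emission_factors = {"starchy": 1.1, "protein": 3.5, "vegetables": 0.6, "fruit": 0.4}
waste_kg = {"starchy": 210.0, "protein": 60.0, "vegetables": 120.0, "fruit": 150.0}

production = sum(emission_factors[c] * waste_kg[c] for c in waste_kg)  # embodied emissions
transport = 0.02 * production   # illustrative minor share for waste transportation
disposal = 0.03 * production    # illustrative minor share for composting
total_ghge = production + transport + disposal
```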
To estimate the economic loss linked to the collected plate waste, an average cost per kg of waste per food category was computed by dividing the total supply budget associated with the sampled menus by the volumes of the specific items procured within each category, in proportion to each other. Specifically, the average cost per kg for the single food waste categories was estimated from the average annual market price of every food item, retrieved from the statistics provided by the national Institute of Agrifood Market Services (ISMEA). The costs of the wasted food categories were then summed to derive an estimate of the total cost of plate waste for the two cases.
Statistical analysis
The normality of the data distribution was explored using the Kolmogorov-Smirnov test. According to the data distribution, comparisons between the two groups (LOC-ORG v. ORG) were tested using Student's t-test or the Mann-Whitney U test. Data are described as median (interquartile range) or as mean and standard deviation for non-normally and normally distributed data, respectively. The statistical analysis was performed using SPSS 28.0 software (SPSS Inc.), with significance set at P < 0·05.
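In terms of code, the decision rule corresponds to a normality check followed by the appropriate two-sample test. A simplified sketch with scipy (the study used SPSS; standardising before the Kolmogorov-Smirnov test is one common shortcut, not the study's exact procedure):

```python
from scipy import stats

def compare_groups(loc_org, org, alpha=0.05):
    """Normality-dependent two-group comparison (LOC-ORG v. ORG), as described above."""
    normal = all(stats.kstest(stats.zscore(g), "norm").pvalue > alpha
                 for g in (loc_org, org))
    if normal:
        return stats.ttest_ind(loc_org, org)    # Student's t-test
    return stats.mannwhitneyu(loc_org, org)     # Mann-Whitney U test
```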
Plate waste
Although in the LOC-ORG case 6196 more dishes were collected and 667 more kg of food were served, the total amount of plate waste reported here across schools and seasons weighed 11 kg less than in the ORG case (Table 1). Accordingly, the median daily plate waste corresponded to 27·3 kg and 28·3 kg in the LOC-ORG and ORG models, respectively.
Despite the small difference in absolute values, the share of total plate waste differed between the two case studies (P < 0·001), corresponding to a median of 23·7 % for the LOC-ORG model and 41·5 % for the ORG model (Table 2). The same trend can be observed for the single food categories: the proportions of waste in four of them were higher in the ORG case than in the counterpart (P < 0·01). Similarly, significant differences were obtained in the waste per child, in total (P < 0·001) and for the single food categories (P < 0·01), except for bread and vegetables. In the ORG case, the waste of fruit and vegetables exceeded 50 % of the serving, while a low proportion of the 'other' dishes was wasted (11·8 %). Hence the median pupils' intake of plant-based food (i.e. fruit and vegetables) was less than half of the average serving size offered.
In the LOC-ORG case, the total waste per child accounted for a median of 138·6 g, with the highest contribution from the 'other' category (median 59·4 g), while in the ORG case it accounted for 207·9 g, with fruit contributing the most (median 75·4 g).
Beyond these findings, different waste trends were observed among dishes belonging to the same category (see online supplementary material, Supplemental Tables S4-S7). For example, simple recipes among the starchy-based dishes (e.g. pasta or rice dressed with olive oil) were wasted less than more elaborate recipes (e.g. gnocchi with tomato sauce). Within the protein-based category, higher waste proportions were observed in the LOC-ORG case when legumes or fish fillet were served, while in the ORG case both fish and cheese products were highly wasted (see online supplementary material, Supplemental Tables S4-S7).
As displayed in Table 4, the LOC-ORG model provided a higher content of some micronutrients in the served lunch. Accordingly, these differences, together with the different rates of plate waste, translated into a discrepancy in the actual intake of vitamin C, K, P and Fe, and in the waste of vitamins B₁, B₂, B₃ and B₆ and of Zn.
After adjusting for the energy and nutrient amounts of plate waste, children in the LOC-ORG schools consumed a mean of 74 % of the total energy of the offered lunch, 10 % more than children in the ORG schools. Both energy and nutritional losses were significantly higher in the ORG case than in the counterpart (Fig. 1(a)).
In terms of macronutrients, the mean losses ranged from 16 % to 31 % in the LOC-ORG schools and from 31 % to 45 % in the ORG schools, with cholesterol and soluble sugars at the two extremes. In accordance with the high plate waste of vegetables, dietary fibre was on average highly wasted in both cases (30 %, LOC-ORG and 43 %, ORG).
The micronutrient losses for vitamins ranged from a median of 15 % (vitamin D) to 30 % (vitamin A) in the LOC-ORG case and from 22 % (vitamin D) to 45 % (vitamins B₉ and C) in the ORG model (Fig. 1(b)). For vitamin B₁₂, a relatively high intake was registered, resulting in lower losses, with medians of 16 % and 30 % in the LOC-ORG and ORG cases, respectively. Among minerals, the median losses ranged from 16 % (Cu) to 29 % (Na) in the LOC-ORG schools and from 33 % (Cu and Zn) to 41 % (Ca) in the ORG schools (Fig. 1(c)).
When compared with the national reference, the energy and nutrient distribution of the school menus changed considerably after subtraction of the plate waste (see online supplementary material, Supplemental Figures S1 and S2). Although the proportion of energy provided by total proteins, fats and carbohydrates did not substantially differ, only 50 % of the LOC-ORG lunches and 21 % of the ORG menus reached the minimum energy threshold. The loss of fat content was the most severe, with 5 % of LOC-ORG lunches and 11 % of ORG lunches having adequate values, while the protein content had the best outcomes, with all of the LOC-ORG menus and 84 % of the ORG menus compliant with the recommendations.
Carbon impact of plate waste
In the LOC-ORG model, food production accounted for 95 % of the total GHGe linked to plate waste (Table 5) and corresponded to 19 % of the total carbon footprint of all the food supplied by the school catering service during the data collection days, estimated at 3991 kg CO₂ eq. Similarly, in the ORG case, the plate waste GHGe related to food production were 92 % of the total plate waste carbon footprint (Table 5) and represented 28 % of the total GHGe of the food supplied by the catering service during the data collection days, estimated at 2790 kg CO₂ eq. The food waste impact for the average lunch meal served to children was 0·2 kg CO₂ eq for the LOC-ORG and 0·3 kg CO₂ eq for the ORG model, corresponding to shares of 20 % and 31 %, respectively.
Considering the contribution of the single wasted food categories to the total carbon footprint, starchy food was responsible for 52 % (LOC-ORG case) and 47 % (ORG case), followed by protein-based dishes, with a share of 17 % (LOC-ORG) and 31 % (ORG).
In particular, the food items within the protein-based plates that contributed most to the GHGe were meat and fish (50 %) in the ORG case, and soft and hard cheese (39 %) (see online supplementary material, Supplemental Table S8). Conversely, fruit and vegetables together represented about 19 % (LOC-ORG) and 10 % (ORG) of the total carbon burden, although they accounted for more than 45 % of the total food waste; the food included within the categories less represented in the school menus had an impact ranging from < 1 % (ORG) to 6 % (LOC-ORG) for 'other' food, and of 4 % for dessert.
In both models, the transportation of food destined to be wasted had a marginal carbon impact (2 % and 4·5 % of the total food waste emissions in LOC-ORG and ORG, respectively). Similarly, the impact of food waste management, which was based on composting, was very low (3 % in both models).
In both cases, the wasted protein dishes showed relatively high emission factors (3·54 kg CO₂ eq/kg, LOC-ORG; 4·41 kg CO₂ eq/kg, ORG), followed by the 'other' category for the LOC-ORG case (2·28 kg CO₂ eq/kg) and by dessert for the ORG case (2·61 kg CO₂ eq/kg). The GHGe associated with a kg of wasted fruit and vegetables were instead the lowest (0·58 kg CO₂ eq/kg and 0·62 kg CO₂ eq/kg, LOC-ORG; 0·35 kg CO₂ eq/kg and 0·34 kg CO₂ eq/kg, ORG). When the GHGe are considered per serving, the most impactful categories were instead 'other' (133·3 g CO₂ eq/serving) followed by starchy food (110·3 g CO₂ eq/serving) for the LOC-ORG case, and starchy food (133·9 g CO₂ eq/serving) followed by protein dishes (105·2 g CO₂ eq/serving) for the ORG case. Overall, the average meal served in the LOC-ORG case had a lower carbon footprint than its counterpart (324·4 g CO₂ eq/meal v. 344·5 g CO₂ eq/meal).
Economic impact of plate waste
The plate waste collected in the LOC-ORG and ORG cases corresponded to total costs of €978 and €1462, equivalent to €1·81 and €2·65 per kg of waste, respectively (Table 6). The cost associated with the total daily plate waste collected in the two case studies is therefore €48·9 and €77·0 in the LOC-ORG and ORG models, respectively. Starchy food contributed the most to the total economic cost of plate waste both in the LOC-ORG case (44 %) and in the ORG case (45 %). Within this category, bread was a consistent contributor, accounting for 24 % and 17 % of the food waste cost in the LOC-ORG and ORG cases, respectively. Altogether, fruit and vegetables accounted for 30 % (LOC-ORG) and 22 % (ORG) of the total plate waste cost while contributing about 50 % of the total waste. Protein dishes, together with the 'other' category, showed a share of 27 % of the total economic loss in both case studies, while desserts accounted for the remaining 6 % in the ORG case.
Considered individually, protein-based plates in the LOC-ORG case accounted for more than 20 % of the total food waste cost, although they entailed less than 11 % of the total waste amount. Within the protein-based food category, the most expensive food item was codfish for the LOC-ORG case (48 %); for the ORG model it was fresh cheese, accounting for 35 % of the total cost of that food category, followed by turkey meat (13 %) and cured meat (11 %) (see online supplementary material, Supplemental Table S9).
The highest average costs per kg of wasted food in the ORG case were those of dessert and of the 'other' category (both 6·57 €/kg), followed by protein dishes (6·15 €/kg); the latter were the most expensive in the LOC-ORG case (6·00 €/kg). Similar to the environmental results, fruit and vegetables showed a relatively low impact, considering both their contribution to the total plate waste cost and the average cost per kg of the wasted food category. However, the estimated cost of plate waste per meal was double in the ORG model (0·48 €/meal v. 0·24 €/meal). In relative terms, the estimated cost of plate waste represents 3·9 % and 9·6 % of the full price paid by parents in the LOC-ORG and ORG models, respectively. Among food categories, when the average cost per serving is considered, the most expensive was starchy food in both case studies. According to the estimates, in the ORG model the total economic loss associated with plate waste amounted to 32 % of the total food procurement cost, while in the LOC-ORG model the plate waste cost accounted for 21 % of the meal service budget for the 2017-2018 school year.
Discussion
In this study, primary schoolchildren's plate waste was quantified, and its nutritional, environmental and economic implications were estimated. Two food procurement models (local-organic and organic) were considered and compared, with the LOC-ORG model showing lower waste. Across food categories, vegetables and fruit were highly wasted, followed by bread. The waste of vegetables and fruit reached relatively high proportions, mainly in the ORG case, where pupils' median intake was less than half of the serving size offered to them. Vegetables were, however, highly discarded in all schools. Conversely, the protein-based dishes in the LOC-ORG model and pizza in the ORG case registered the lowest waste percentages. Among starchy dishes, simple recipes showed relatively lower plate waste than more complex recipes; nevertheless, owing to the limited number of observations, a definitive picture cannot be derived from these results. Starchy food (including bread) accounted for the highest proportion of the total food waste collected in both case studies (about 40 %), similarly to what a Chinese study found for staple food (43 %) (37). Those authors, however, reported a higher share for vegetables (42 %) than found in this study (18 %, LOC-ORG and 12 %, ORG). The plate waste percentage found for vegetables in the LOC-ORG case equals that reported by Boschini and colleagues (38) for side dishes in a study focusing on the Italian primary school context; those authors instead reported a significantly lower plate waste for bread (8 %) compared with either the LOC-ORG or the ORG case.
The analysis of the nutritional consequences of plate waste showed a higher detrimental impact for the ORG model than for the LOC-ORG one, in which, on average and relative to the meal offered at school, the loss of energy and macronutrients was approximately 10 % lower than in the ORG model. In accordance with the plate waste data, soluble sugars and dietary fibre presented the highest shares of losses. Among micronutrients, the limited loss of vitamin B₁₂ converges with children's preferential consumption of protein-based foods over other food categories. On the contrary, because of the consistent waste of fruit and vegetables, vitamin C and folate were lost to a high degree (from 28 % to 45 %) across the case studies.
When adjusted for plate waste data, at least half of the sampled lunch meals fell below the national energy recommendations, and a wide range of the school lunches did not reach the national standards. In terms of compliance with the national reference values, Dinis and colleagues (39) found a lower share of adequate lunches than in either the LOC-ORG or the ORG case. Their study was carried out in Portugal, where primary schools have narrower national energy and nutritional standards than the Italian ones. Their plate waste analysis showed relatively higher percentages for vegetables (> 60 %), while fruit was discarded at a lower rate (24 %) than in Italy. On the other hand, comparable results can be observed for starchy-based dishes, for which the waste was similar (44 %, males; 47 %, females) to the ORG case (39). Among protein-based dishes, meat dishes were wasted in lower proportions than fish dishes. This pattern was observed both in Portuguese children (31 % and 32 % v. 55 % and 58 %, respectively, in males and females) (39) and in the two Italian case studies (on average 11 %, LOC-ORG and 25 %, ORG v. 17 %, LOC-ORG and 32 %, ORG).
With regard to the environmental impact of plate waste, starchy food had a large impact in both case studies in similar proportions, while for protein-based dishes a different pattern emerged, with a higher share in the ORG case. The different composition of the food waste explains why the associated carbon footprint was 5 % higher in the ORG case than in the counterpart. In a similar study evaluating the carbon footprint of food waste generated in nursery and primary public schools of Cento (Italy), the Global Warming Potential of food waste was estimated at 15-18 % of the total meal impact (20), below the percentages of the carbon emissions embedded in the meal waste reported in the present study (20-31 %). The composition of the food waste can explain this discrepancy: in the present study, starchy food contributed the most to the total plate waste in the two case studies, while in the Cento study vegetables had a relatively higher contribution.
From the economic perspective, the cost per kg of food waste was 43 % higher in the ORG case than in the counterpart. For families, the plate waste cost weighs on the budget spent for the service, with a share ranging from 4 % (LOC-ORG model) to 10 % (ORG model) of the full price paid per lunch meal. The economic value of plate waste also represents a significant share of the total school meals service budget, with about one-fifth (LOC-ORG case) and one-third (ORG case) of the food procurement budget spent on food that is then discarded by children. These findings are consistent with the study carried out in Cento, in which the economic impact of food waste was estimated in a range of 6-26 % of the total meal cost (20).
Hypothetical plate waste determinants
The higher occurrence of more familiar local/traditional quality products (i.e. mainly PDO cheese and cured meat products) in the LOC-ORG model could have contributed to the different plate waste scenarios. However, a multitude of individual, social and environmental factors can exert an influence, including meal recipes (40), food texture (40,41), food preference (42,43), the canteen environment (44) and teacher engagement (45,46). Surprisingly, cooking in an on-site kitchen has been found to result in higher plate waste than cooking in an off-site kitchen (46). Indeed, one might expect transport to negatively modify the sensory characteristics of cooked food (e.g. food texture and the temperature at which food is served) at the time of consumption; however, our study provides no supporting data to substantiate this hypothesis, and further investigations in this direction are warranted. Furthermore, only in the Parma school canteens (LOC-ORG) was plate waste quantified by the caterer every month to optimise meal planning, preparation and distribution; on these occasions, children had to separate their leftovers into dedicated bins. A different scenario was observed in Lucca (ORG model), where only older students, after having lunch in one of the two schools, used to help clear the tables. These findings suggest the importance of school catering management and organisation, and of schoolteachers' commitment, in driving children towards more sustainable eating behaviours and food-related habits. Indeed, dealing with the food waste issue in classrooms has shown a positive influence on children's attitudes, knowledge and behaviour (47).
Strengths and limitations
To quantify plate waste, the gold standard technique (i.e. the weighing method) was applied. Moreover, a wide range of LCA emission factors was adopted to mitigate the uncertainty in the environmental results and to capture the specificity of the production processes (local raw materials and methods of production) of the local/quality foods served in the Italian school canteens, whereas for market food prices a short-run perspective was applied, using average yearly prices. However, important limitations should be recognised. First, the data collection was performed in a few schools and the sampled menus cover only a proportion of the total offer; therefore, the representativeness of our findings for other organic and local-organic procurement models in Italy is not guaranteed. Second, the nutritional analysis could not rely on validated national nutritional databases focusing on organic products; consequently, the nutritional evaluation of the school menus did not consider possible discrepancies in the nutritional content of organic products compared with conventionally grown food. Concerning the environmental impact, besides the carbon footprint, a wider set of environmental indicators could have been considered, for example human toxicity, eco-toxicity, biodiversity loss and animal welfare (48). Last, owing to the confidential content of the food procurement contracts between caterers and food suppliers, the present study estimates the economic impact of plate waste using the food prices available in national agri-food market survey datasets rather than the actual price of each food item.
Conclusion
The present study highlights relatively high percentages of plate waste in primary schools located in two Italian municipalities, with the highest proportions for vegetables and fruit, responsible for major losses of soluble sugars, dietary fibre, vitamin C and folate. The environmental and economic implications of waste were instead particularly relevant for starchy food and protein-based dishes, although these were less discarded. To minimise the food discarded by children at school, both the municipalities and the caterers need to identify the contextual determinants and develop effective strategies accounting for school governance and catering management. Furthermore, the technical specifications of the school meals service procurement contracts could be directed to strengthen the commitment of the school meals service supply chain to developing new methods and techniques of meal design and preparation. In addition, nutritional and environmental education should be integrated into primary school programmes to increase awareness of food waste impacts among children and teachers. More specifically, among the virtuous actions addressing the need to simultaneously minimise the nutritional, economic and environmental implications of plate waste (49), catering managers should recognise the contribution of the meals service and its staff to children's education by rewarding the ability of the catering staff to increase meal uptake through high-quality interaction and supervision, facilitate engagement with pupils and parents in menu design and planning, find strategies to increase fruit and vegetable consumption through menu development, support an improved canteen design with a fun layout, and ensure that proper time is allowed for eating lunch and that portion sizes are adequate for age and appetite. With the aim of reducing plate waste, a virtuous implementation of these strategies, together with waste monitoring practices, could compensate for a procurement model with a lower share of local/traditional products, whose setting is rather static, as defined by the procurement contract.
Table 1
Number of dishes, quantity of served food and waste, including the waste per d, reported as total values and by food categories per food procurement model
Table 2
Serving size, waste percentage with respect to the served food and waste per child, expressed as total daily values and by food category, for the LOC-ORG (n 20) and ORG (n 19) models. LOC-ORG, local-organic; ORG, organic. P values refer to between-group comparisons (LOC-ORG v. ORG), Mann-Whitney non-parametric test. Statistical analysis was not performed on the categories 'Other' and 'Dessert' owing to the limited number of data.
Table 3
Macronutrient composition and fibre content of served lunch menus, of plate waste and of actual intake in the LOC-ORG (n 20) and ORG (n 19) models
Table 4
Micronutrient composition of served lunches, plate waste and actual intake in the LOC-ORG (n 20) and ORG model (n 19)
Table 5
Greenhouse gas emissions (kg CO₂ eq) and average emission factors (kg CO₂ eq/kg) estimated for plate waste and servings in the LOC-ORG and ORG cases, considering the contribution of each food category and the contributions of food production, transportation and waste handling
Table 6
Economic impact estimated for plate waste and servings in the LOC-ORG and ORG cases. To compute the average cost per serving of starchy food, the average number of servings calculated between the starchy food and bread categories for the two case studies (i.e. n 3757, LOC-ORG and n 3004, ORG) was applied. † The economic cost refers to the average meal.
Food for Thought ... on Alternative Methods for Nanoparticle Safety Testing
Thomas Hartung
CAAT, Johns Hopkins University, Bloomberg School of Public Health, Baltimore, USA, and CAAT-EU, University of Konstanz, Germany

Nano is a big thing in toxicology. Articles, journals, and conferences are mushrooming, paralleling the rise of nanotechnologies but also showing the hunger of toxicology for new objects to study. Perhaps it would be better to focus on new approaches first? Nanomedicine promises new solutions for old problems, but what about the old problems of toxicology? It is a fallacy to assume that one can gain beneficial effects in the human organism without unwanted collateral effects. First, any biologically active agent perturbs physiology, hopefully as a corrective for the patient but at least requiring compensatory reactions of the healthy. Second, few agents are specific enough to have only one effect, and it is rare that we want all the effects, or in the given mix of strengths. Third, many agents show excess toxicity - even desired effects often become negative if excessively stimulated. This increase in negative effects is directly linked to more sensitive subpopulations (children, the elderly, the diseased, those with genetic polymorphisms, etc.). Thus, the promise of nanoparticles (NP) may be paid for in possible side-effects, i.e., toxicities (Garnett et al., 2006). For most manufactured NP, toxicity data are unavailable, with some exceptions for carbon black, titanium dioxide, iron oxides, and amorphous silica (Di Giacchino et al., 2009).
So far, nanoparticles and other nanomaterials (I will use the abbreviation NP for both, but primarily thinking of particles or fibers and not, e.g., nano-thick films) are, for the most part, treated by regulatory toxicology as chemicals (see the last issue of this series, Hartung, 2010b). The most common definitions for NP include materials with dimensions from 1 nm (the size of a sugar molecule) to 100 nm (the size of a virus). Regulatory frameworks are on the way, opening up possibilities for alternative approaches (Sauer, 2009). Whether the differences between NP and their parent compounds pose small or big problems remains to be seen. But to quote Albert Einstein: "Anyone who doesn't take truth seriously in small matters cannot be trusted in large ones either." When discussing alternative methods for nanoparticle toxicology, we might first look at some phrase permutations - leaving out one word in each iteration:
- Alternative methods for toxicology
- Methods for nanoparticle toxicology
- Alternative nanoparticle toxicology
- Alternative nano-methods

Consideration 1: Alternative or advanced methods for toxicology?
The term "alternative methods" is most commonly understood as "alternative to animal experiments," or at least as using refinement and reduction alternatives to traditional animal experiments. I have been struggling with the term over the last few years.
-"Alternative" has anti-establishment connotations for many, so we might talk instead about "advanced" methods.A lot of support the area receives, however, comes exactly from this "rage against the machine" aspect of the animal welfare movement -an honest, well warranted, ethical disagreement with the way science treats animals -which needs to be accommodated to find societal compromise.-"Methods" is not very clear, since work is mostly about testing and increasingly about in silico approaches or integrated testing strategies, so the phrase "alternative approaches" is used more frequently.-Most work is not alternative to animal experiments but to animal testing, as much experimentation describes research and testing the routine application of certain methods, especially in the regulatory context.So the discussion is very much about toxicology, though vaccine testing, efficacy testing for agent discovery, or basic research all utilize far more animals.Recently, the phrase "toxicology for the 21 st century" (tox-21c) generated tremendous buzz, more on the west side of the Atlantic, emphasizing the technological needs and opportunities for change.CAAt follows a dual strategy, stressing both the "alternative" (3Rs) and "advanced" (tox-21c) aspects for the different stakeholder groups.Fortunately, the two paths normally converge, and we can see them as two sides of the same coin -the most humane science is also the best science.
There is a broad base of literature, to which this series of articles contributes, highlighting the ethical concerns, costs (Bottini et al., 2008; Bottini and Hartung, 2009), limited predictivity (Hartung, 2008b; Hartung and Daston, 2009; Hoffmann and Hartung, 2005, 2006; Hartung, 2009), and limited throughput (Rovida and Hartung, 2009; Hartung and Rovida, 2009) of current approaches that, for the most part, were developed some decades ago for drug safety testing and subsequently were adapted to pesticides, chemicals, cosmetics, and foodstuffs. These limitations serve as the driving forces for change on both sides of the Atlantic (Hartung, 2010a). While there is progress, particularly in some areas of topical and acute toxicity (Hartung, 2008c), progress for the more demanding systemic and chronic toxicities has been limited.
There might well be hazards not present for a parent compound due to kinetics, as the adsorption, distribution, metabolism, and excretion of NP can differ greatly from those of larger particles or soluble substances. Changes in kinetics (bioavailability) alone (Holl, 2009) are sufficient to create additional hazards not seen with the parent compound, since whenever higher plasma or tissue levels of the substance are obtained, thresholds of toxicity might be exceeded (Fig. 1). We know very well from formulations of drugs that solubility after oral administration depends on particle size, influencing peak plasma levels - a crucial determinant for toxicity. Similarly, higher concentrations can be achieved if transport through barriers is accelerated. However, faster elimination - for example, by cellular uptake or chemical reactivity - acts against this (Fig. 2).
For toxic effects, size matters, as a number of studies show size-dependent effects (Gornati et al., 2009). Good examples are gold and silver, which normally are minimally reactive but become much more so at NP sizes. Silver NP, for this reason, are used as bactericidal coatings, for example for clothes ("wash your socks less often, thanks to silver NP coating"). At least for the antimicrobial properties of silver NP, shape dependence has been shown (Pal et al., 2007). How this translates to toxic effects on human cells is not known. Cellular uptake of gold NP has been found to be shape-dependent (Chen et al., 2006), while others reported no differences in a number of cell systems for silica NP (Cha et al., 2007).
Is there any reason to assume that nanotoxicology would not benefit from alternative methods, e.g., that they are less applicable to particles than to dissolved substances? Indeed, some theoretical considerations apply: the in vitro kinetics of particles might differ, i.e., their behavior in cell culture. This might include particle clumping (aggregation), binding to plastic, or floating on the cell culture media surface, all of which would alter cellular exposure and, thus, the concentration-response curve. Similarly, exposure to air and non-physiological culture conditions might affect the experiments. Also, specific artifacts have interfered with cytotoxicity measures (MTT) as typically applied in alternative methods (Worle-Knirsch et al., 2006). Later, we will discuss some general problems of using alternative methods for NP. Altogether, however, nanotoxicology is likely a driving force and not a stumbling block toward the use of modern approaches in toxicology (Hartung and Leist, 2008; Hartung, 2008a; Nyland and Silbergeld, 2009).
Consideration 2: Special methods for nanotoxicology?
The first major question for nanotoxicology is: does it even exist? Is it any different from the current risk paradigm, i.e., hazard, kinetics, exposure measurement, and overall risk assessment? First, completely new modes of action for NP have been found - if we think of asbestos as a natural nanofiber, a key mechanism is macrophage activation after ingestion of asbestos needles, and this applies to nanofibers in general. However, the hazards are still classical ones, i.e., fibrosis and cancer. We can argue that this is only an additional mode of action, which can either be anticipated by size and shape of particles, or simply added to the assessment and found by traditional approaches. From this point of view, it is rather unlikely that a really new hazard that could not be seen in repeated-dose studies or cancer bioassays would be attributed to particles.
However, lung toxicology, for example, lags far behind other areas of concern, while NP are especially likely to reach the alveoli of the lung and exert toxicity there (Donaldson et al., 2004; Kagan et al., 2005). Airborne exposure testing is experimentally cumbersome and is avoided when possible, due not only to the effort (costs) involved but also to the poor reflection rodents give of human exposure. Testing for respiratory irritation and sensitization is not standard for industrial chemicals, and guideline tests have yet to be developed. We also should be clear that the particular health effects of industrial chemicals (endocrine disruption, immunotoxicity, and developmental neurotoxicity) are among the more recent additions and are not yet reflected in testing programs. It would take just one scandal, however, to change this.
It may be unlikely that a completely new adverse health effect is induced by NP ("the ears fall off"), but there are many human diseases where we do not suspect any chemical involvement and where, in fact, one might exist. In particular, the chronic effects of chemicals are so poorly understood that we have no idea whether we would get any relevant alert from routine animal tests, which are inadequate even with regard to well-known hazards. Possible examples of continuously increasing health problems include atherosclerosis, male infertility, autism, and diabetes. It is worth noting that air pollution involving natural NP led mainly to excess deaths associated with cardiovascular illness (Seaton and Donaldson, 2005), a hazard not generally addressed in toxicology. Arteriosclerosis, in fact, is very difficult to induce in animals. Determination of the pulmonary and systemic inflammatory hazards typically seen with NP (Kipen et al., 2005) is not among the strengths of the toxicological toolbox.

There might be opportunities for reduction alternatives, too. Though it is rather unlikely that lower variability of responses to NP would allow a reduction in animal group numbers, such options should still be considered, as should designs with one control group for multiple tests, or longitudinal studies following the same group of animals rather than sacrificing a group per time point. Here, the imaging opportunities offered by NP might make a difference for kinetics experiments. Opportunities to combine studies, e.g., mutagenicity and repeat-dose studies, enhanced chronic studies including carcinogenicity, or the inclusion of developmental neurotoxicity in reproductive toxicity studies, should be considered as reduction alternatives. NP do not really differ from other test materials in this regard.
Consideration 3: Do we need a traditional or an alternative toxicology for NP?
The number of different NP we might need to address is potentially extremely high, with various shapes, size distributions, and coatings for each material. This alone suggests the importance of using alternative methods, which often allow higher throughput, replicates, and parallel tests. Thus, the limit of test throughput is more relevant for NP. First, the main health concerns in particle toxicity hint at risks for the most complex endpoints (cancer, lung toxicity), which require the most test capacity. Rodent inhalation models are especially prohibitive in terms of time and expense (Hillegass et al., 2010). Second, since any given substance, at least theoretically, can be formulated to particles of very different sizes, size distributions, shapes, and modifications, an almost unlimited testing demand can be envisaged. Choi et al. (2009) calculated the costs for traditional testing of NP already on the market at between $250 million and $1.2 billion, and the time required at 34-53 years.
In addition, we should be aware that current regulatory toxicology was established for drugs under development. A very precautionary approach was taken to avoid putting volunteers and patients at risk (Hartung, 2009). While this might be appropriate for nanomedicine products, we have to ask ourselves what development opportunities we sacrifice if we apply the same precautionary methods to products with lower human exposure or to those that are not intended to be biologically active. The problem becomes most pronounced when precautionary methods (many false positives) are used for substance groups with rare side-effects (Hoffmann and Hartung, 2005; Hartung, 2009).
This reasoning makes evident the need to explore the toxic profiles of a broad variety of NP to learn what we must control for. Furthermore, novel approaches need to be developed, since traditional approaches might have even greater limitations for NP than for other industrial chemicals and may not offer the throughput and velocity to cope with the dynamic developments in nanotechnologies.
What are the specific opportunities to use alternative methods? First, one major concern in cosmetics is dermal penetration of NP. Rodent and rabbit skin have little to do with human skin, and both artificial human skin and explants offer opportunities with available, accepted methods. Some of these also allow testing for mechanical stress or inflammation, as well as penetration via hair follicles, as specific concerns for NP. Other barrier models for gut uptake, the blood-brain barrier, or the placental barrier are prevalidated but not yet validated. Still, they might be useful to characterize NP and to identify or rule out specific concerns.
Similarly, reactive chemistry, a key feature of many toxicants, is strongly influenced by the particle surface. One milliliter of 10 nm-sized NP has the surface area of a soccer field. Thus, we might see hazards with NP at lower concentrations than the maximally tested or testable doses currently applied for the parent compound. As one consequence, exposure might require different measurements: instead of dose measures in mg/kg, particle numbers or particle surface area might be more meaningful, both in vivo and in vitro.
We might also see some effects only in vitro. We should not underestimate the hazards that are masked in current tests because the animals defend themselves successfully. More than 90% of substances that can exert genotoxicity in cells are not mutagenic in animal tests. This is not to say that the cell result was wrong; rather, it usually means we did not achieve in vivo the concentrations that we can apply in vitro, or that some defenses were not reflected in vitro. The substance may still present a hazard, which might become relevant for humans or subpopulations. We do not know whether the defense mechanisms against some hazards are as effective when the substance is administered as NP. The novel properties of NP also can lead to new biological interactions (Walker et al., 2009), which could result in toxicities not shown by the parent compound. This extends to refinement methods, where, for instance, the exposure to airborne particles of the obligatory nose-breathing rat and mouse requires attention to avoid overloading the respiratory tract.

Agglomeration. Not everything called nano is actually nano. Aggregation or agglomeration of NP is very common and difficult to prevent. NP can have complex aggregation behaviors in aqueous solutions (Holl, 2009), with substantial impact on their toxicity. Many of the studies published so far did not exclude aggregation, but even as non-mono-dispersed particles, the smaller particles are more potent in many respects (Oberdörster et al., 2007). Aggregation effects also have been recognized in the ecotoxicity of silica, titanium dioxide, and zinc oxide NP (Adams et al., 2006). Some systematic approaches to dispersed nanoparticles have been proposed (Sager et al., 2007), but the problem still needs to be addressed on a case-by-case basis.
Stability. The stability of NP is not often discussed, but the sheer surface area represents a problem, as it not only attracts substances offering binding sites (for pyrogens, for example) but also lends itself to chemical reactions such as oxidation. Many NP might actually be coated. We know as little about the modification and degradation occurring over time as we do about the metabolic fate of NP.
Dosimetry. In toxicology, we have seen a move from primarily weight-based doses (mg/kg or ppm) to (molar) concentrations, especially when kinetic measures (plasma concentrations, for example) could be assessed or when experiments were done in vitro. For NP, weight, particle number, and surface area are typical dose measures, but shape, coating, electrophysical properties, etc. can have a further impact. Chemical characterization (Powers et al., 2007) and dosimetry clearly require closer attention than for traditional chemicals (Walker and Bucher, 2009). In some cases, toxicity correlated best with NP surface area (Unfried et al., 2007), but it remains to be established whether this is a more general rule. It makes sense for reactive chemistry, which is a leading mechanism of toxicological damage, and for oxidative processes; in fact, generation of reactive oxygen species has been a key mode of action associated with NP toxicity.
In vitro biokinetics. This term has been coined to indicate that test substances in vitro also exhibit kinetics: they are adsorbed (e.g., by plastic or the albumin of fetal calf serum), stay soluble or precipitate, are taken up by cells, are oxidized by air or metabolized by the cells, and we interfere with their presence when changing cell culture media. The situation is not as complex as in vivo kinetics, but certainly the actual effective concentration reaching the cells is not the one we added. Just as in vivo work has been augmented by introducing kinetics, we might likewise give consideration to these factors as we move the field of in vitro toxicology forward (Bouvier d'Yvoire et al., 2007). The situation is no less complex for NP, where aggregation and particle coating must be considered. Cell membranes, mitochondria, and nuclei are considered major compartments for NP toxicity (Unfried et al., 2007). Thus, uptake and intracellular trafficking must also be addressed.
Cell contact of NP. Actual exposure of cells to NP needs to be assured, as NP might swim on the culture media. Also, NP are known to pass from cell to cell. Cell monolayers resemble pan-fried eggs lying next to each other, giving only minimal cell-to-cell contact areas. Furthermore, cell density in a typical culture is only 1% of that of normal tissue (Hartung, 2007), which changes dosimetry and the likelihood of NP-to-cell contact.
Kinetics will be affected, which can shift concentration-response curves, limits, and threshold concentrations, as well as no-effect levels, thus altering critical components of the risk assessment paradigm. We will need to explore whether this can be handled with, for example, safety or assessment factors. This would mean requiring, for example, an additional factor as a safety margin for the use of NP in humans. For current estimations of safe doses from animal no-effect levels, we typically require a factor of 10 for interspecies differences and another factor of 10 for sensitive subpopulations. These numbers appear to correspond more with the decimal system than with sound science: if we had twelve fingers instead of ten, we would likely be 1.44 times better protected by regulation. Completely new ports of entry for substances have already been described when NP sizes fall below barrier cut-offs (Hillyer et al., 2001). Note that the ease of cellular uptake may also result in bioaccumulation of NP (Oberdörster et al., 2007). Furthermore, the larger surface area per unit weight often makes NP more reactive. Since many forms of toxicity are mediated by chemical reactivity, such as mutagenicity by formation of DNA adducts or sensitization by hapten binding, this raises the possibility of increased toxic potentials.
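The arithmetic behind the twelve-fingers remark is simply the ratio of the two combined default assessment factors; a minimal sketch (Python, illustrative numbers only):

```python
# Combined assessment factor as used in practice: 10 (interspecies) x 10 (intraspecies)
decimal_factor = 10 * 10        # = 100
duodecimal_factor = 12 * 12     # = 144, had we counted in base twelve

print(duodecimal_factor / decimal_factor)  # 1.44 "times better protected"
```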
A number of alternative methods validated for chemicals and drugs might be useful for NP (Tab. 1), but none has been validated for this purpose. The modular approach to validation (Hartung et al., 2004), it should be noted, allows expanding the applicability domains of validated methods, a possible fast track to obtain regulatory acceptance for NP evaluation. The potential carcinogenicity of NP is of concern to toxicologists because of several specific properties: the potential to activate inflammatory mechanisms as promoters of cancer, the ability to reach alveolar compartments when inhaled, altered cellular uptake allowing NP to reach DNA, and the fact that mutagenicity is linked with reactive chemistry, which is often amplified by the large surfaces of NP. A number of in vitro models for mutagenicity are available, but they also are known to produce many false-positive results (Kirkland et al., 2005, 2007). Many of them also can be integrated into enhanced animal studies, allowing a reduction in animal use. A specific opportunity is offered by the cell transformation assays for cancer, which currently are being peer-reviewed after validation.
Inflammation can be studied in monocyte activation tests, such as the validated alternative pyrogen tests (Hoffmann et al., 2005; Schindler et al., 2006). Human whole blood assays offer specific opportunities, as a cell suspension is used (Schindler et al., 2009).
Experimental set-ups for airborne exposure of particles to air/liquid interface cultures of cells are available but have not yet been validated. Notably, methods 18-23 were not developed for the purpose of chemicals testing, but current validation activities explore their use for acute toxicity testing, which might lead to an extension of the applicability domain. Validity statements not listed are not relevant for chemicals/NP.
It is disturbing that nanotoxicology is reinventing alternative approaches, often without referring back to the two decades of development and validation already accomplished for chemicals and cosmetic ingredients. Cytotoxicity and mutagenicity assays are broadly used (Kroll et al., 2009; Holl, 2009) without necessarily bridging to the validated methodologies. Others have highlighted the need to optimize validated toxicity and ecotoxicity tests for NP (Oberdörster et al., 2007; Behra and Krug, 2008). A variety of approaches lend themselves to adaptation for NP, but none has been formally validated for NP.
Conclusions
The toxicology of NP is a rapidly emerging concern. It is driven by the dramatic increase in industrial uses of NP and by public debate. Increasing funding and studies inevitably will result in reports of toxicological effects of NP; both publication bias for positive findings and the multiple-testing fallacy (if 20 experiments or endpoints are analyzed with p = 0.05 for significance, one should be false-positive on average) will come to bear here. They will spur further research, and it will require decades to sort out what is true and what is relevant. We will need validated tools that offer the throughput, reliability, and relevance to address key features of NP risks. The additional testing demand for NP adds to the urgency of developing new approaches in toxicology. Whether this will only add some tools for NP to the traditional approach or help to create an entirely new paradigm for toxicology awaits an answer. So far, it appears that the problem of nanotoxicology is mainly a kinetic one; some safety factors could help to account for differences in ADME, but we need to keep in mind that the enhanced bioavailability of NP at the body, organ, and cell level might result in thresholds of toxicity being overstepped, in which case a change in hazards suddenly does become relevant. The fact that higher exposure levels in target cells can be more easily modeled in cell systems than in vivo, and such hazards thus identified, argues again for the use of alternative methods.
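To make the multiple-testing point concrete: the expected number of false positives among 20 independent tests at p = 0.05 is exactly one, and the chance of at least one is well above one half. A minimal sketch (Python, illustrative only):

```python
n_tests, alpha = 20, 0.05

expected_false_positives = n_tests * alpha        # = 1.0
p_at_least_one = 1 - (1 - alpha) ** n_tests       # ~ 0.64

print(expected_false_positives, round(p_at_least_one, 2))
```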
Taken together, it appears that nanotoxicology, to a large extent, is dependent on the use and further development of alternative approaches. The more we know what we are looking for, the better we can target our testing. If we have no hypothesis, screening in many models and black-box types of animal tests might be the only way forward. NP are different, but they are not so different that we should expect completely new hazards. Hazards not necessarily shown by the parent compound may be seen, however, due to the higher concentrations achieved at target tissues.
Special artifacts of NP in vitro. Single-walled carbon nanotubes (SWCNTs) appear to interact with some tetrazolium salts such as MTT but not with others (such as WST-1, INT, XTT) (Worle-Knirsch et al., 2006). More such artifacts are likely to be discovered, prompted perhaps by large surface area, electrostatic properties, or increased reactivity.
Consideration 5: Alternative nano-methods
The question of whether nanotechnologies offer specific opportunities to create new alternative methods deserves some consideration. Nanostructuring the surfaces of cell culture dishes to induce or maintain the differentiation of cells is one example. Coating techniques also are often required for approaches such as "cells on chips." Imaging technologies using quantum dots might also enhance (non-)invasive imaging technologies for laboratory animals as they are developed in humans. NP already are used to deliver genes or other materials into cells, enhancing and broadening opportunities for in vitro approaches. The opportunities offered by nanotechnologies, however, are only starting to be exploited.
Consideration 6: Opportunities for in silico alternatives in nanotoxicology
Computational approaches to nanotoxicology so far are rather limited. With increasing datasets, however, modeling some aspects of interest might become feasible. Data mining of large datasets and the interspecies extrapolation of kinetics are most promising. Size and shape variations add dimensions of complexity to the correlative approaches, however, which will require enormous datasets. The tremendous opportunities and challenges of in silico toxicity approaches have been discussed recently (Hartung and Hoffmann, 2009). In the meantime, modeling of kinetics, starting with airway disposition, might hold the most promise. However, nanotoxicology could be very stringent from the beginning, making the best use of biometry and avoiding the many pitfalls repeatedly discussed in this Food for Thought series (multiple testing, lack of power analysis, significance vs. relevance, lack of meta-analysis, etc.).

Consideration 7: Are there reasons that make current alternative tests less applicable to NP?

The answer to the above question is yes, unfortunately, since the biokinetics of NP will affect in vivo and in vitro results very differently. Consequently, many prediction models developed for general chemicals will not work. NP aggregation and the difficulty of application to cells and animals affect the execution of routine tests. Databases allowing computational approaches are so rare that, for the immediate future, no major contribution can be expected. There might also be reasons to question the extrapolation from the NP parent compound to humans, but it is difficult to say whether animal results for NP are better or worse when extrapolated to humans. NP differ in exposure/kinetics, and, with regard to hazards, even higher potentials for interspecies differences exist. What is required is the systematic evaluation of validated alternatives for their applicability to NP and, if necessary, a modification of the prediction model.
In vitro approaches represent a reasonable compromise between effort and information gain, allowing direct comparison of various NP and their parent compounds. A broad, animal-based screening approach is not feasible with regard to laboratory capacities and costs, and it certainly is not desirable from an animal welfare viewpoint.
A number of alternative approaches have undergone the optimization and validation process to make them suitable for regulatory purposes. It appears most promising to adapt these to NP in order to have a testing platform for broader characterization. When combined with a somewhat more extensive physicochemical characterization than is normally applied to industrial chemicals, this will help us derive some more general rules about the hazards of NP. The field of alternative approaches has paid for its lessons on the importance of good practices and standardization for the success of validation and regulatory acceptance of methods. It is strongly advised that the respective guidance on Good Laboratory Practice for in vitro toxicity (OECD, 2004) and Good Cell Culture Practice (Coecke et al., 2005) be followed from the beginning. It is promising that some good practices for how to test NP have emerged from expert workshops (Maynard et al., 2006; Balbus et al., 2007; Warheit et al., 2007; Hoet and Boczkowski, 2008). In the near future, the respective quality assurance for the execution of such tests will be integrated.
Due to the high number and heterogeneity of particle samples and experimental systems, it is still difficult to find common principles of NP toxicity (Hoet and Boczkowski, 2008). We have been rightly warned (Fadeel et al., 2007), however, that we are witnessing only the first generation of NP; more sophisticated NP (active nanostructures, coated NP, integrated nanosystems, etc.) will make this even more complicated. Thus, it might well be that each and every NP formulation of a substance will have to be considered an individual entity requiring at least some risk assessment. K. C. Elliott (Elliott, 2007) characterized nanotoxicology as a pre-normal science, in which researchers have no widely accepted paradigm to guide their investigations.
We must not forget that not only NP themselves, but also contaminations, may have adverse effects. Carbon nanotubes, for instance, were shown to include metals, amorphous carbon, and other compounds (Pulskamp et al., 2007; Fadeel et al., 2007). A special case of high relevance is contamination with pyrogens, due to the large surface area and the high lipophilicity of these compounds (Ashwood et al., 2007). It remains to be seen whether current pyrogenicity tests can detect such contamination before nanomedicines are applied by injection.
The major problem for NP risk assessment is kinetics. Though we expect differences from the parent compound due to size and shape, we do not really know how to test for them. Species differences are not really well established. A key problem is that we still do not know how NP are metabolically processed (Fisher and Chan, 2007). The field of alternatives mainly has some barrier models to offer, which certainly represent a key priority. Last but not least, toxicity is not always bad news, since sometimes it can be exploited for therapeutic purposes (Oberdörster et al., 2007). The main difference between toxicology and pharmacology is whether an effect is desired. NP offer fascinating opportunities to interfere with the organism in new ways. We must take care to find the right balance between opportunities and safety concerns. In vitro approaches promise to provide an affordable database on biological activities to help understand the risks and opportunities.
Fig. 1: Relation of biokinetics with the threshold of toxicity
Fig. 2: Relation between plasma levels of parent compound and respective NP
From microscopy data to in silico environments for in vivo-oriented simulations
In our previous study, we introduced a combination methodology of Fluorescence Correlation Spectroscopy (FCS) and Transmission Electron Microscopy (TEM), which is a powerful approach to investigate the effect of the intracellular environment on biochemical reaction processes. Here, we developed a method to reconstruct realistic simulation spaces based on our TEM images. Interactive raytracing visualization of this space allows the perception of the overall 3D structure, which is not directly accessible from 2D TEM images. Simulation results show that the diffusion in such generated structures strongly depends on image post-processing. Frayed structures corresponding to noisy images hinder the diffusion much more strongly than smooth surfaces from denoised images. This means that the correct identification of noise or structure is significant for reconstructing an appropriate reaction environment in silico in order to estimate realistic behaviors of reactants in vivo. Static structures lead to anomalous diffusion due to the partial confinement. In contrast, mobile crowding agents do not lead to anomalous diffusion at moderate crowding levels. By varying the mobility of these non-reactive obstacles (NRO), we estimated the relationship between the NRO diffusion coefficient (D_nro) and the anomaly in the tracer diffusion (α). For D_nro = 21.96 to 44.49 μm²/s, the simulation results match the anomaly obtained from FCS measurements. This range of the diffusion coefficient from simulations is compatible with the range of the diffusion coefficient of structural proteins in the cytoplasm. In addition, we investigated the relationship between the radius of the NRO and the anomalous diffusion coefficient of tracers by comparing different simulations. The radius of the NRO has to be 58 nm when the polymer moves with the same diffusion speed as a reactant, which is close to the radius of functional protein complexes in a cell.
Introduction
The complex physical structure of the cytoplasm has been a long-standing topic of interest [1,2]. The physiological environment of intracellular biochemical reactants is not a well-diluted, homogeneous space. This fact contradicts the basic assumption underlying the standard theories of reaction kinetics [3]. The difference may cause actual in vivo reaction processes to deviate from those in vitro or in silico. Recently, we presented the results of a combined investigation of Fluorescence Correlation Spectroscopy (FCS) and Transmission Electron Microscopy (TEM) [4,5]. We examined the effects of intracellular crowding and inhomogeneity on the mode of reactions in vivo by calculating the spectral dimension (d_s), which can be translated into the reaction rate function. We compared estimates of the anomaly parameter, obtained from FCS data, with the fractal dimension from an analysis of transmission electron microscopy images. Therefrom we estimated a value of d_s = 1.34 ± 0.27. This result suggests that in vivo reactions run faster at initial times when compared to reactions in a homogeneous space. The result is compatible with the result of our Monte Carlo simulation. Also, in our further investigation, we confirmed by simulation that the above-mentioned in vivo-like properties differ from those of homogeneously concentrated environments. Other simulation results also indicated that the crowding level of an environment affects the diffusion and reaction rate of reactants [6-9]. Such knowledge of the spatial conditions enables us to construct realistic models for in vivo diffusion and reaction systems.
The novel points of this study are the following three: (i) we investigated the influence of the mobility of non-reactive obstacles (NRO) on the anomaly coefficient, (ii) we investigated the influence of the size of the NROs, and (iii) we reconstructed a static simulation space based on TEM images and ran diffusion tests in these virtual volumes in order to make the in silico simulation environment more realistic. The in vivo NROs have a wide size distribution and complex shapes. Based on our simulations, we can suggest simpler systems with just one class of NROs which reproduce the effective tracer diffusion observed both in the complex environment and in the experimental results. While several projects investigated diffusion and reaction within compartments like the ER [10,11], this study aims at resolving the diffusion and reaction of cytosolic proteins outside of these structures, for instance signaling molecules that have to travel from the plasma membrane to the nucleus [12,13]. Cryo-electron tomography can be used to obtain a 3D reconstruction of only the scanned cell section [14,15]. Statistical methods, in contrast, can be used to learn the properties of the 3D space and to generate many samples from it [16,17]. In order to generate reaction volumes with the same properties as the TEM images, we therefore learned the image statistics. This enables us to test the influence of structures such as mitochondria and membrane-enclosed compartments on the diffusion and reaction of molecules in the cytosol. By using state-of-the-art volume visualization techniques we can also show the shape of the generated volumes.
The generated structures are used for a volumetric 3D pixel (voxel)-driven graphical representation, which was further filtered into a smooth analytic surface using the software package BioInspire [18,19]. This analytic conversion for the visualization was done to better understand the properties of the 3D structure, which is not obvious from single 2D slices. The analytic surface is also the natural description of large intracellular objects such as membrane-enclosed compartments or mitochondria [11,16] and avoids the discreteness of pixel/voxel-based approaches [20]. The 3D ray-tracing visualization package BioInspire is used to interactively sample the analytical surface to create the final image, thereby never losing any detail to an intermediate representation such as a triangle mesh, as is common in the literature [21,22].
Generally, TEM images capture the scattering/absorption or transmission of electron rays through a sample slice of the cell. The electron rays are detected by charge-coupled devices and converted to grey-scale images. The parts of a sample section where electrons have been scattered or absorbed appear darker on the image, while the parts transmitting electron rays appear white. Many imaging studies have investigated intracellular structures by electron microscopy. In those images, organelles such as the nucleus, mitochondria, rough endoplasmic reticulum, zymogen granules, Golgi complex, etc. appear as clear shadows, resulting from scattered or absorbed electron rays.
For the above reasons, we assumed that the black segments in the TEM images consist of solid structures constituting the non-reactive obstacles. At the same time, the non-reactive surface can provide anchorage for small mobile molecules. The faint segment areas in the TEM images are presumed to be made up of sol proteins, which form the main reaction chamber for the intracellular reactants.
Besides the (at least temporarily) static structures, the cytoplasm is known to be filled with all kinds of mobile crowding molecules [2]. Therefore, we added the mobility of the NRO and their size to the parameters investigated in this study.
In our former simulations, we used just one size of NRO, which could, e.g., represent single molecular obstacles [4,5]. But in a cell, many of the molecules representing the NRO exist as complexes or polymers, for instance cytoskeletal proteins. In order to include this information, we analyzed whether the overall radius of the obstacles affects the diffusion and reaction processes. In particular, we checked the results obtained in such simulations for anomalous diffusion, which is a sensitive probe for crowding conditions [9].
Anomalous diffusion is a common phenomenon in cell biology [23] but was originally defined for a random walker on percolation clusters [24]. Percolation theory deals with the number and properties of clusters which are formed as follows [25]: each site of a very large lattice is occupied randomly with probability p, independently of its neighbors. The resulting network structure is the subject of percolation theory [26]. When the probability p exceeds the critical value (p_c), a cluster reaches from one side of the lattice to the opposite side. This p_c is the threshold for a phase transition, like the gelation of a polymer sol. Anomalous diffusion is observed when the reaction space is occupied inhomogeneously with obstacles until the relative volume of obstacles approaches this threshold. The value of p_c for the 3D cubic lattice is 0.312 [27].
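The site-percolation construction just described is easy to reproduce numerically; a minimal sketch (Python with NumPy; the spanning test via connected-component labeling with scipy.ndimage is an assumption of this illustration, not the authors' code):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
L, p = 50, 0.32                     # lattice side; occupation probability (p_c ~ 0.312 in 3D)

lattice = rng.random((L, L, L)) < p            # each site occupied with probability p
labels, n_clusters = ndimage.label(lattice)    # 6-connected clusters

# A cluster percolates if it touches both the bottom and the top z-plane.
spanning = (set(labels[:, :, 0].ravel()) & set(labels[:, :, -1].ravel())) - {0}
print("spanning cluster present:", bool(spanning))
```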
In several numerical simulations, including our model, a percolation lattice is used as a simple example of a disordered medium [7,28,29], and we found that it is similar to the in vivo reaction space. Likewise, the structured in vivo reaction space is similar to porous media [6,30]. Such structures, which are often self-similar, can readily be seen under the TEM and are easily generated, for instance, by self-organizing molecules such as titanium dioxide and sol-gel powders.
When p = 1, the cluster becomes a regular lattice without disorder. If the non-obstructed space in the cell formed such a regular lattice, the mean squared displacement (MSD) of a random walker on the lattice would grow linearly with time. On the other hand, if the random walker is confined to a specific volume, the MSD converges to a constant [31]. The case between these two extremes was named anomalous diffusion by Gefen et al. [24]. The exponent α represents the anomaly of the MSD [23]:

MSD(t) = ⟨(x(t) − x(t₀))²⟩ = Γ t^α   (1)

We estimated diffusion constants of the NRO based on simulation results in different environments. Our in silico models enable us to verify the consistency of the hypothesis that the intracellular environment is built by self-organization and that the structure provides a percolation-cluster-like environment for soluble molecules. We computed α from the Monte Carlo simulations in these virtual environments, as well as D(t), and compared them with the experimental results from FCS measurements to find the parameters of the in silico models which match the in vivo results.
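Fitting Equation (1) to a measured MSD curve amounts to a linear regression in log-log space; a minimal sketch (Python), with synthetic data standing in for the simulation output:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.logspace(-4, -1, 50)                            # time points [s]
msd = 3.5 * t**0.94 * rng.lognormal(0, 0.02, t.size)   # noisy synthetic MSD

# log(MSD) = log(Gamma) + alpha * log(t): a straight line in log-log space
alpha, log_gamma = np.polyfit(np.log(t), np.log(msd), 1)
print(f"alpha = {alpha:.3f}, Gamma = {np.exp(log_gamma):.3f}")
```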
Reconstruction of reaction space based on TEM image data
Based on TEM images (Figure 1), the intracellular environment was reconstructed (Figures 2 and 3) as described in Methods, "Generation of virtual cellular structures". The 3D visualization of the static NRO structure helps to grasp the properties of the volume, which cannot be seen from single 2D images. A video showing the complete volume and sweeping through it is available as Supporting material (see Additional file 1).
The 1D statistics about neighboring pixels/voxels is sufficient to generate similar structures in two and three dimensions under an isotropy assumption. The structures show a wide size distribution in the 2D images and a tubular network in the 3D volume. Only completely spherical structures are not generated by the present approach. The filters applied in the volume generation process have a tendency to increase the size of structures (eroding) or to reduce it (dilation). By controlling the NRO volume fraction in the process, we could create volumes which have the same NRO volume fraction as the TEM images. Note that the smoothing of the surface for visualization can likewise increase the volume occupied by NROs (cf. Figure 3).
With respect to the diffusion of molecules through such structures, the identification of the true fine-grained structure becomes very important. The diffusion test simulations in these 3D structures were performed with the continuous-space discrete-time Brownian dynamics simulation [6,32,33] (see Methods, "Diffusion simulations in the virtual environment"). In the rather noisy structure corresponding to the thresholded TEM images, the diffusion is hindered much more strongly than in a smoothed structure. We fitted the observed MSD to Equation (1), yielding Γ = 3.37 ± 0.14 in the noisy volume and Γ = 3.79 ± 0.15 in the smooth volume, i.e., the MSD grows faster in the smooth volume. The anomaly is α = 0.940 ± 0.004 and α = 0.948 ± 0.005, respectively. All simulations stopped when the first of the 10,000 molecules starting from the center had reached the surface of our test volume, which restricts a further increase of the MSD. This time span/distance is not sufficient to leave the anomalous regime. The effective diffusion coefficient is, on average, reduced to 63% of the input value in the noisy volume and to 70% in the smooth volume at this point in time. In particular, the larger surface of the noisy volume leads to an increase in the excluded volume for finite particle radii, which is consistent with the stronger reduction of the diffusion. Therefore, the more fragmented space leads to a stronger reduction in the diffusion [6].
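The core update of such a continuous-space discrete-time Brownian dynamics scheme can be sketched as follows (Python; the brute-force overlap check, box size, and rejection-on-overlap rule are simplifying assumptions of this illustration, not the authors' exact implementation, which would use a spatial grid for neighbor lookup):

```python
import numpy as np

rng = np.random.default_rng(2)
D0, dt = 1.0, 1.27e-7                  # free diffusion coefficient [um^2/s], time step [s]
r_i, r_s = 2.6e-3, 10.92e-3            # tracer and obstacle radii [um], cf. Methods
sigma = np.sqrt(2 * D0 * dt)           # per-axis Gaussian displacement scale

obstacles = rng.uniform(0, 1, (5000, 3))   # toy obstacle centers in a 1-um box (assumption)

def step(pos):
    """One Brownian step; reject moves that would overlap an obstacle sphere."""
    trial = pos + rng.normal(0.0, sigma, 3)
    d2 = np.sum((obstacles - trial) ** 2, axis=1)
    return trial if np.all(d2 > (r_i + r_s) ** 2) else pos

pos = np.array([0.5, 0.5, 0.5])
track = np.empty((1000, 3))            # trajectory, later used for the MSD analysis
for k in range(1000):
    pos = step(pos)
    track[k] = pos
```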
The effective diffusion also varies depending on the local structure. As indicated in Figure 4, the structures can (locally) vary in their isotropy, leading to anisotropic diffusion. It is especially important that the reaction-space reconstruction process leads to isotropic structures, because even slight deviations are sensitively picked up by the diffusion process. Likewise, the original microscope data, where each voxel is 17.6 × 17.6 × 60 nm, are non-isotropic. The comparison of the diffusion properties in the reconstructed reaction space with FCS measurements shows that the static (or at least temporarily static) structures are not sufficient to explain in vivo diffusion. The anomaly coefficient α = 0.94 does not match the values observed in in vivo FCS measurements (α = 0.768 ± 0.14) [4,5]. In particular, the molecular crowding by mobile NROs seems to have an important effect [9,34]. The computational complexity of the multitude of interactions between all particles and the dimension of the simulation-parameter space, however, render this analysis impossible within such a detailed 3D volume structure. Therefore, we investigated the influence of mobile NROs within a scalable, discrete lattice-based simulation framework.
Dynamics of NRO change the diffusion and reaction speed
We performed Monte Carlo simulations with mobile NRO in our lattice-based simulation space, described in Methods, "Lattice-based Monte Carlo simulation" (the lattice-based simulator is also included as Additional file 2 and available from [35]). The motivation to move the NRO, despite the increased computational complexity, is to make the simulation environment compatible with realistic intracellular conditions, and to investigate whether we can find a simulation-parameter regime matching our former FCS results [4,5].
First, if the jump probability describing the mobility of the NROs (P_f) equals the jump probability of the reactants (i.e., P_f = 1), the diffusion of the reactants was independent of the crowding level of their environment. They show normal diffusion instead of anomalous diffusion (Figure 5A). By FCS analyses, we observed anomalous diffusion of green fluorescent protein (GFP) in cytoplasm. The simulation results with the NRO jump probability P_f = 1 thus were not compatible with the experimental results. In particular, when the relative volume of NROs is lower than 50%, the diffusion of the reactants shows no anomalous subdiffusive behavior.
Starting from this incompatibility with the experimental results, we varied the following two parameters: (i) the probability which determines the mobility of the NRO in the simulation space, and (ii) the radius of the NROs, to analyze the effect of NRO size on the diffusion of the reactants.
NRO mobility which leads to matching diffusion with experimental results
We varied the jump probability P_f, which determines the mobility of the NRO in the simulation space (Figure 5B). In this analysis, we fixed the size of the NRO to occupy only one lattice site (i.e., single or small crowding molecules). The frequency of NRO movement was set in the range from 1/40 to 1/10 of the frequency of the reactants, which move in every simulation step. This means that the NRO move once per 10 steps (P_f = 1/10), once per 20 steps (P_f = 1/20), once per 30 steps (P_f = 1/30), once per 40 steps (P_f = 1/40), or never (P_f = 0), respectively.
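A minimal sketch of such a lattice Monte Carlo step rule (Python; the lattice size, occupancy encoding, periodic boundary, and neighbor choice are illustrative assumptions, not the authors' exact code):

```python
import numpy as np

rng = np.random.default_rng(3)
L, P_f = 50, 1 / 20                       # lattice side; NRO jump probability
EMPTY, TRACER, NRO = 0, 1, 2
lattice = np.zeros((L, L, L), dtype=np.int8)
# ... tracers and NROs would be placed on random empty sites here ...

moves = np.array([[1,0,0], [-1,0,0], [0,1,0], [0,-1,0], [0,0,1], [0,0,-1]])

def try_move(site, kind):
    """Move a particle to a random nearest neighbor if that site is empty."""
    if kind == NRO and rng.random() >= P_f:
        return site                                        # NRO stays put this step
    target = tuple((np.array(site) + moves[rng.integers(6)]) % L)  # periodic boundary
    if lattice[target] == EMPTY:
        lattice[target], lattice[site] = kind, EMPTY       # excluded-volume move
        return target
    return site
```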
The results in Figure 5A show that if P_f is less than 1/10, diffusing reactants show anomalous subdiffusive behavior for all tested NRO levels from 10 to 70%. This result is in agreement with previous works, which indicated that more static NROs result in a stronger confinement of the reactants [6,31], and hence a more anomalous behavior (smaller α).
For all P_f < 1, we can obtain an anomaly parameter compatible with our experimental results (α = 0.768 ± 0.14) with about 20% relative volume of NRO in the reaction space. The estimated P_f value to reproduce the compatible α is 0.2383 to 0.3689. This means that the reactants move 2 to 5 times faster than the NROs in the reaction volume. However, the estimated relative volume is less than the occupied volume of 37% found in the TEM images. Previous studies showed that the NRO effect on the diffusion strongly depends on the size of the NROs [6,36]. Therefore, the size also has to be taken into account.
NRO size which leads to matching diffusion with experimental results
We also varied the aggregation level of the NRO in the simulation space (Figure 5B). In this analysis, we fixed the mobility of the NROs to the same rate as the mobility of the reactants (P_f = 1).
The radius of the NRO was varied from 1 to 5 pixels. The original size (r_nro = 1) means that the object occupies 8 pixels. We assumed the reactants diffuse in cytoplasm. Because the reactants affect the moves of the NROs in the same way as the NROs block the way of the reactants, the concentrations of both NROs and reactants have to be set in the right proportion. In order to adapt our simulation environment to the case of a cytoplasmic enzyme, we chose 1.0 μM as the approximate concentration of the reactant. Our simulation environment for varying NRO radius is 1000 reactants in a lattice with 50 × 50 × 50 total sites. To reconstruct a realistic intracellular environment in our simulation space, we assume that the size of 1 pixel equals 77.8 nm. This is about 15 times larger than the diameter of GFP, which is the molecule for which we analyzed the diffusion in a cytoplasmic region. Also, the approximate compartment size is 64 μm³ = 64 fl. This volume is acceptable as a part of the cytoplasm; the expected whole volume of the cytoplasm of a cell is 2.8 pl [37]. Varying the radius of the NRO from 1 to 5 pixels thus means the diameter of the NRO is 155.6 to 778 nm.

By changing the size of the NRO, we find that the relative NRO volume needed to produce an anomalous diffusion coefficient compatible with the experimental results differs for each NRO size. When the NRO size is small (155.6 nm, i.e., 30 times larger than a reactant), a cell can involve only 15 to less than 20% relative volume of NRO to produce a compatible anomalous diffusion coefficient. If the NRO size is large (778 nm, i.e., 150 times larger than a reactant), a cell can involve over 30% relative volume of NRO to produce a compatible anomalous diffusion coefficient. This result is also consistent with previous studies which showed that smaller objects have a much bigger influence on the diffusion of test molecules [6,36].
Empiric relationship between α, D_nro, and r_nro
We fitted the empiric functions given in Table 1 to the results of our Monte Carlo simulations under various conditions in order to find parameter ranges which are consistent with the results from FCS measurements. Note that these empiric functions need not have a physical meaning; they show, for instance, that the Stokes-Einstein relation D ∝ 1/r is not valid in the cytoplasm, because, due to the microscopic structure, molecules of different radii experience different effective viscosities. For instance, large molecules sense a bigger hindrance in their mobility and can even be trapped by the meshes of the cytoskeleton [2,6].
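For orientation, the dilute-solution baseline given by the Stokes-Einstein relation, D = k_B T / (6πηr), can be evaluated directly; a minimal sketch (Python), assuming the viscosity of water and using the tracer radius of 2.6 nm from the Methods:

```python
import math

kB, T = 1.380649e-23, 298.0        # Boltzmann constant [J/K], temperature [K]
eta = 1.0e-3                       # viscosity of water [Pa s] (assumption)
r = 2.6e-9                         # tracer radius [m], cf. Methods

D = kB * T / (6 * math.pi * eta * r)   # Stokes-Einstein diffusion coefficient [m^2/s]
print(D * 1e12, "um^2/s")              # ~84 um^2/s, close to D_GFP = 82 +/- 2 in solution
```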
The relation between D_nro and r_nro (Table 1, third equation) is calculated from the first two equations in Table 1 for the condition P_f = 1. Based on the appropriate size of the NRO from the previous section and the relationship with r_nro, we conclude that D_nro = 21.96 to 44.49 μm²/s in order to obtain the desired α in the simulation at the target NRO fraction of 37%.
This diffusion coefficient is still in the same range as the diffusion coefficient of GFP in cytoplasm. On the one hand, it is rather fast for large molecules, but on the other hand, our model in silico cytoplasm is constructed out of just one class of NROs, compared to the complex size distribution in vivo [9,34]. The diffusion coefficient is not more than 10 times faster than the diffusion coefficient of large macromolecules (e.g., microtubules) in cytoplasm, thus supporting that our results are in a realistic physiological regime.
Table 1: Empiric relations between α, D_nro, and r_nro

Relationship between α and D_nro:     α = 0.0093 D_nro + 0.4606
Relationship between α and r_nro:     α = 0.1302 ln(r_nro) + 0.0976
Relationship between D_nro and r_nro: D_nro = 14.0 ln(r_nro) − 39.0

The empiric relations are fitted to the simulation results. We used the value D_GFP = 82 ± 2 μm²/s for GFP and its mutant protein in solution [38]. The last relation is then deduced from the first two for P_f = 1.0 and a NRO volume fraction of 37%.
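As a consistency check, the first relation in Table 1 can be inverted for the FCS value α = 0.768; a minimal sketch (Python):

```python
alpha_fcs = 0.768                         # anomaly from the FCS measurements

# Invert alpha = 0.0093 * D_nro + 0.4606 (Table 1, first relation)
D_nro = (alpha_fcs - 0.4606) / 0.0093
print(D_nro)                              # ~33 um^2/s, inside the 21.96-44.49 um^2/s range
```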
On the other hand, if the diffusion of the NRO occurs at the level of physiological macromolecules (e.g., tubulin in cytoplasm is measured at 4-10 μm²/s [39]), the diameter of the NRO must be about 33-43 nm. This is smaller than the single NRO in our simulation. That means that if the reaction space is crowded only with obstacles of this size, the anomalous diffusion constant will be smaller than the physiological value at the relative NRO volume fraction of 37%, which we found in our TEM image data. This value of relative NRO volume should be independent of the mobility state of the NROs.
Conclusions
We can conclude from the simulation results in the reconstructed reaction space that the correct identification of noise or concrete structures in TEM images is very important, because the diffusion strongly depends on it. The reconstructed tubular structures are consistent with, e.g., ER structures [11]. The structures are static in simulations of that reconstructed space (at least on the short timescales of the simulation), but future work aims at modeling the spatial dynamics of such membrane-enclosed compartments [40]. The presently generated structures could serve as a starting point for the size distribution of the compartments. Finally, a detailed and multi-scale simulation should include both the quasi-static cellular structures and the mobile NROs responsible for the majority of the molecular crowding effects. At the same time, investigation of the mixing ratio of differently sized NROs is also necessary in order to find a functional size distribution.
As the microscope data discretize the cell's internal structures, one could argue that the simulation should also use the 3D analytical surface representation reconstructed inside the BioInspire visualization software. At the moment, the simulation is not using this surface, as the interfaces between the simulation and the visualization are currently being defined. For an investigation of transient anomalous diffusion in such structures [23], much longer time spans need to be covered, which means that particles will diffuse much further away. Therefore, periodic boundary conditions for the volume are necessary. The reaction space might also be reconstructed based on the Fourier transform of the TEM images, which would lead to smooth boundaries under periodic boundary conditions.
The TEM image reconstruction of a realistic simulation space gave us (i) an impression of how the microscopic intracellular environment is structured in 3D and (ii) the means to compare the results with those of lattice-based and more scalable simulations, which also include mobile NROs. By searching for a compatible condition between the results of the TEM-reconstructed space and the artificial space, we could estimate the parameters for in silico simulation environments.
Due to computational limitations, these environments have to be tremendously simplified compared to the complexity of the in vivo system. Thus, our efforts match, for instance, the approach of Hou et al. [41], who try to create a simplified yet realistic in vitro model of the cytoplasm.
We confirmed that the diffusion characteristics of inert test molecules in a crowded space are preserved in the characteristics of molecules which take part in a Michaelis-Menten reaction, by using a discrete reaction space [42]. The reaction proceeds quickly at the beginning, but later on the reactants are exhausted slowly in our simulations. This result may mean that the intracellular environment transforms reaction processes in a cell away from the in vitro reaction in a fractal manner [8]. It is comparable to the classic mass-action system with a time-dependent rate constant. Also, the observable effective reaction rate constant depends on the level of crowding and on the effective diffusion, and might react sensitively in the case of anomalous diffusion [32]. These results underline the importance of confirming the detailed structures of the reaction space, because the reaction environment affects the reaction process.
Therefore, the next challenge for in vivo-oriented simulations will be performing simulations of bimolecular enzymatic reaction processes in the reconstructed reaction volume based on the true cell environment, also by estimating the concrete value of the environmental dynamics, and possibly by mixing static structures and mobile NROs.
Cell culture
Cell culture reagents for 3Y1 cells were obtained from Wako Pure Chemical Industries, Ltd. (Japan). The cell line was routinely cultured in Dulbecco's Minimal Essential Medium supplemented with 10% fetal bovine serum in a 5% CO2 incubator. We obtained the 3Y1 cell line from the Japanese Collection of Research Bioresources (JCRB) Cell Bank for use at Keio University.
Transmission electron microscopy
We obtained 101 images of rat fibroblast 3Y1 cells. We selected those images from the cytoplasmic regions, mainly at a magnification of 1000×.
The cells were collected on the day when they reached confluence in order to obtain a population homogeneous in its cell cycle (G1 to G0 cells).
In preparation for TEM, the cells were fixed with 4% formaldehyde and 2% glutaraldehyde in 0.1-M phosphate buffer (pH 7.4) for 16 h at 4°C, and subsequently with 1% osmium tetroxide in 0.1-M phosphate buffer (pH 7.4).
The cells were dehydrated in graded ethanol and embedded in epoxy resin. Ultrathin sections (approximately 60 nm thick) were prepared with a diamond knife, electron-stained with uranyl acetate and lead citrate, and examined using an electron microscope (H-7650; Hitachi Ltd.).
First, the TEM images were binarized into objects and background using the auto-thresholding function of ImageJ (http://rsbweb.nih.gov/ij/; see Figure 1). Briefly, this algorithm computes the average intensity of the pixels below, and of those above, a particular threshold. It then computes the average of these two values, increments the threshold, and iterates the process until the threshold is larger than the composite average. That is,

threshold = (average background + average objects) / 2.
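A minimal sketch of this iterative (intermeans-style) thresholding rule as described above (Python with NumPy; an illustration of the description, not the ImageJ source, and assuming an 8-bit grayscale image):

```python
import numpy as np

def auto_threshold(img):
    """Iterative intermeans threshold for an 8-bit image, per the description above."""
    t = 0
    while True:
        below, above = img[img <= t], img[img > t]
        if below.size == 0 or above.size == 0:
            t += 1                                  # skip degenerate splits
            continue
        composite = (below.mean() + above.mean()) / 2.0
        if t > composite:                           # stop once threshold passes composite
            return t
        t += 1                                      # increment and iterate
```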
Subsequently, the binary images were translated into a 1-0 matrix in Matlab to reconstruct the simulation space. The simulation space for Figures 2, 3, and 4 was reconstructed based on the TEM images as indicated below.
Generation of virtual cellular structures
In order to reconstruct the intracellular environment, we learned the following statistics from the thresholded binary TEM images (cf. Figure 1B): P_b(I(px_i) = 1 | I(px_{i-1}), I(px_{i-2}), I(px_{i-3})), the probability that pixel px_i is black (I(px_i) = 1) given the sequence of the three neighboring pixels, averaged over all directions (cf. Figure 2C). Likewise, we learned the probability of a pixel being black which lies between two other pixels (separated by a distance j), and the average blackness (0.3755).
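Estimating such a conditional statistic from a binary image is a simple counting exercise; a minimal sketch (Python with NumPy) for one scan direction, which would then be averaged over all directions:

```python
import numpy as np

def conditional_pb(img):
    """P(pixel black | previous three pixels), counted along image rows."""
    counts = np.zeros((2, 2, 2))   # occurrences of each 3-pixel context
    blacks = np.zeros((2, 2, 2))   # how often the next pixel is black per context
    for row in img.astype(int):
        for i in range(3, len(row)):
            ctx = tuple(row[i - 3:i])
            counts[ctx] += 1
            blacks[ctx] += row[i]
    return blacks / np.maximum(counts, 1)   # P_b for each of the 8 contexts
```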
The 300 × 300 × 300 px in silico volume is generated by drawing lines from P_b, each separated by 16 px in all directions. Next, we interpolated the pixels in between the lines (distances 8, 4, 2, and 1 px) to generate the complete volume. The generated volume is then iteratively processed by filtering it (erosion and dilation) until its P_b in all directions equals the empirical P_b of the images (cf. Figure 2A,C). In order to preserve not only large structures but also finer objects in the processed volume, the raw volume was fed back into the processed volume repeatedly by averaging over both images, while the weight of the raw image was reduced in each iteration. In order to produce a smoother surface, the volume was also low-pass filtered (cf. Figure 2A-D). The necessary 3D filters were created based on ordfilt3 by Olivier Salvado from the Matlab Central File Exchange (File ID: #5722). The present Matlab code to generate the volumes is available as Additional file 3.
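Although the original pipeline is implemented in Matlab, the erosion/dilation step can be sketched in Python as below. This toy version only matches the average blackness (a scalar stand-in for the directional P_b statistics the text actually matches) and omits the raw-volume feedback and low-pass filtering.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def match_blackness(volume, target=0.3755, max_iter=50):
    """Alternately erode or dilate a binary 3-D volume until its
    average blackness is close to the empirical target."""
    vol = volume.astype(bool)
    for _ in range(max_iter):
        blackness = vol.mean()
        if abs(blackness - target) < 1e-3:
            break
        if blackness > target:
            vol = binary_erosion(vol)    # too black: shrink objects
        else:
            vol = binary_dilation(vol)   # too white: grow objects
    return vol
```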
In order to avoid boundary effects, only the pixels 10-290 are used subsequently in the simulations; accordingly, a sphere with a diameter of 4.928 μm is created at the scale of 1 px = 17.6 nm.
Visualization
The 3D NRO structure described in the previous section, even if filtered twice, once in 2D with ImageJ (section "Transmission electron microscopy") and once in 3D in Matlab (cf. Figure 2A,B), still contains high-frequency components from image noise and from the discretization of data into voxels. Image stacks acquired from TEM are discretizations of the actual natural analytic (or at least very highly detailed) environment of the cell's internal structures, which is why a direct visualization of the voxel space itself only reveals the coarse-grained, cubic 3D environment. As input to the BioInspire raytracing engine, a total of 12.5 million voxels (4.5 million of which are occupied by NROs) were given, corresponding to the spherical subvolume of the simulation space. As touched upon in the introduction, a 3D filter of the software package BioInspire was used to create a smooth surface by averaging over the 3D structure. The difference between nonprocessed and filtered data can be seen in Figure 3, where the number of control points and parameters is adjusted. Clearly, the filtered version with a smoother surface is preferable for a clear visualization of the 3D structure. A section of the volume is shown in Figure 2 for comparison with the 2D 300 × 300 pixel images of single slices.
Diffusion simulations in the virtual environment
The continuous-space, discrete-time diffusion simulator described in [32] is used to simulate the diffusion of inert tracer molecules through a cell containing the generated structures. The structures are represented by a binary 3D grid of spheres at the positions of black voxels of the generated volume. The static spheres had a radius of r_s = 10.92 nm, such that their volume matches the volume of each pixel of (17.6 nm)^3. We performed the simulations in 20 different structures to average over the different realizations. The diffusion of tracer molecules with molecular radii of r_i = 2.6 nm was simulated with 10 sets of 1000 molecules each. All original diffusion coefficients are arbitrarily set to D_0 = 1 μm^2/s, and the time step Δt is chosen such that max Δx/(r_i + r_s) = 0.08, i.e., Δt = 1.27 × 10^-7 s. The effective diffusion D_eff = <(x(t) - x(t_0))^2>/(2d(t - t_0)) was obtained in 3 dimensions (d = 3) as well as in each dimension separately (d = 1). The test volume was a cell with a diameter of 4.928 μm and was accordingly filled with approximately 4.5 million obstacles. The simulations were performed on the Brutus computing cluster at ETH Zurich; they needed 10 h for 0.15 s of physical time and at most 400 MB of memory (non-parallelized, but the different sets were run in parallel). With an Intel Core i7 2600K at 3.5 GHz and 8 GB RAM, 1 × 10^6 steps (i.e., 0.127 s) of all 10,000 particles of one set needed 3 h. The simulation is available from [33]. We used this virtual environment for the calculation of the effective diffusion constant and for the investigation of the local anisotropy of the volume.
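The D_eff estimator in the formula above can be written compactly; the sketch below (ours, in Python rather than the simulator's original code) computes the mean squared displacement over an ensemble of trajectories and divides by 2d(t - t_0).

```python
import numpy as np

def effective_diffusion(positions, dt, d=3):
    """positions: array of shape (n_molecules, n_steps, 3), in um.
    Returns D_eff(t) = <(x(t) - x(t0))^2> / (2 d (t - t0)) for t > t0;
    a plateau in this curve is read off as the effective diffusion."""
    disp2 = np.sum((positions - positions[:, :1, :]) ** 2, axis=2)
    msd = disp2.mean(axis=0)     # ensemble-averaged squared displacement
    t = np.arange(positions.shape[1]) * dt
    return msd[1:] / (2 * d * t[1:])
```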
Lattice-based Monte Carlo simulation
We also performed a scalable lattice-based Monte Carlo simulation and compared it with the results from the simulations in our virtual environment as well as experimental results from [4,5] by changing the size and mobility of NRO, in order to clarify the characteristics of such a crowded environment. This simulation is available from [35] or Additional file 2.
Diffusion simulation with immobile NRO
The simulation space is a 50 × 50 × 50 cubic lattice with periodic boundary conditions, randomly interspersed with NROs. In each iteration, a random walker representing a diffusing reactant attempts to jump to a randomly selected neighboring lattice site. If the chosen lattice site was previously empty, the reactant fills the site; if the site is occupied by an NRO, a new position is randomly allocated for the reactant. The simulator is implemented in the C++ programming language.
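Although the simulator itself is written in C++, the update rule can be sketched in a few lines of Python. The crowding fraction `phi` and the retry cap are our illustrative choices, and we read "a new position is randomly allocated" as redrawing the neighboring site.

```python
import numpy as np

L = 50                                   # 50 x 50 x 50 lattice, periodic boundaries
rng = np.random.default_rng(0)
NEIGHBOURS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                       [0, -1, 0], [0, 0, 1], [0, 0, -1]])

phi = 0.3                                # hypothetical NRO volume fraction
obstacle = rng.random((L, L, L)) < phi   # immobile NROs randomly interspersed

def step(pos):
    """One diffusion attempt for a walker at lattice site `pos`."""
    for _ in range(10):                  # cap retries in case all neighbours are blocked
        trial = (pos + NEIGHBOURS[rng.integers(6)]) % L
        if not obstacle[tuple(trial)]:
            return trial
    return pos
```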
Reaction simulation with immobile NRO
The reaction simulated in our model is A + A → A. If the chosen lattice site of reactant A1 in a diffusion step is occupied by another reactant A2, A2 is obliterated and only A1 remains at the new lattice site.
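Building on the walk sketched above, the A + A → A rule only changes what happens when the target site holds another reactant; a schematic version (with a boolean occupancy grid `reactant` of our own naming) follows.

```python
# Coalescence rule A + A -> A on top of the obstacle-aware walk above:
# if the target site holds another reactant (A2), it is absorbed and
# only the moving reactant (A1) remains at the new site.
reactant = np.zeros((L, L, L), dtype=bool)

def react_step(pos):
    trial = (pos + NEIGHBOURS[rng.integers(6)]) % L
    if obstacle[tuple(trial)]:
        return pos                       # blocked by an NRO; stay put
    reactant[tuple(pos)] = False         # vacate the old site
    reactant[tuple(trial)] = True        # occupy the target; any A2 there is absorbed
    return trial
```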
Pseudo-mono reaction process simulation with mobile NRO
We changed the characteristics of the NROs such that they can also move randomly. Their probability to move, P_f, was varied from the same as the reactants (P_f = 1) to 40 times smaller, i.e., slower (P_f = 1/40), to investigate the effect of NRO mobility on the behavior of the reactants.
All NROs move as single, independent molecules. The other conditions for this simulation remain unchanged.
Pseudo-mono reaction process simulation with aggregated NRO
We also varied the diameter of the NROs to test the effect of NRO size on reactant behavior. With this analysis, we investigated conditions corresponding to different NRO aggregation levels; the aggregates move with P_f = 1, i.e., with the same probability as the reactants. The other conditions for this simulation remain unchanged.
Additional files
Additional file 1: Video of the 3D volume. Dynamic exploration of the generated 3D virtual cytoplasm.
Multidisciplinary management in chronic myeloid leukemia improves cardiovascular risk measured by SCORE
Introduction: Cardiovascular events are one of the main long-term complications in patients with chronic myeloid leukemia (CML) receiving treatment with tyrosine kinase inhibitors (TKIs). The proper choice of TKI and adequate management of risk factors may reduce cardiovascular comorbidity in this population. Methods: This study evaluated the cardiovascular risk of a cohort of patients with CML at diagnosis and after follow-up in a specialized cardiovascular risk consultation. To do this, we analyzed data from 35 patients who received TKIs and were referred to the aforementioned consultation between 2015 and 2018 at our center. Cardiovascular risk factors were analyzed separately, as well as integrated into the cardiovascular SCORE, both at diagnosis and at the last visit to the specialized consultation. Results: At the time of diagnosis, 60% of patients had some type of risk factor; 20% had a high or very high risk SCORE, 40% an intermediate risk, and 40% belonged to the low risk category. During follow-up, the main cardiovascular adverse event observed was hypertension (diagnosed in 8 patients, 23%). 66% of patients quit smoking, and control was achieved for blood pressure in 95%, diabetes in 50%, weight in 76%, and dyslipidemia in 92%. 5.7% of patients suffered a thrombotic event, and a significant percentage of patients showed a reduction in their SCORE. Conclusion: Our study shows the benefit of controlling cardiovascular risk factors through follow-up in a specialized consultation for patients with CML treated with TKIs.
Introduction
The introduction of tyrosine kinase inhibitors (TKIs) in the treatment of chronic myeloid leukemia (CML) marked a significant change in the management and prognosis of this disease (Berman, 2022). This family of drugs raised the survival rates of these patients to a level similar to that of the general population (Hochhaus et al., 2020). Moreover, TKIs helped achieve symptom control and total clearance of the tumor clone, and significantly reduced the rate of acute transformation (Cortes et al., 2021).
However, TKI treatment poses new challenges in the management of CML, like those associated with the numerous interactions of these drugs and the adverse effects derived from their use (Haouala et al., 2011). Among the latter, the most frequent and concerning are cardiovascular side effects (Douxfils et al., 2016) (Dahlén et al., 2016), which raise the need for strict control of cardiovascular risk factors at the time of diagnosis or those emerging over the follow-up (Barber et al., 2017).
Currently, five TKIs with similar efficacy rates and different toxicity profiles are approved for the treatment of CML (García-Gutiérrez & Hernández-Boluda, 2019). Generally, patients experience some type of (mostly mild) adverse effect that may sometimes prompt a change in TKI (Cortes & Kantarjian, 2016).
The mechanism by which TKIs cause cardiovascular damage is not fully characterized, although it appears to be related to endothelial damage through non-specific inhibition of tyrosine kinases ("off-target" effect), alteration of glycemic metabolism, direct hypertensive effect or glomerular impairment (Chaar et al., 2018).
No studies directly compare the second-generation TKIs (dasatinib, nilotinib, bosutinib), but studies comparing them with imatinib show a higher rate of cardiovascular events with the second generation, so imatinib may be a preferable option in patients with a high risk of cardiovascular disease (Cortes, 2020).
Furthermore, no clear consensus exists on when to refer a patient with CML from the hematology consultation to another specialist for the evaluation and management of cardiovascular risk. Guidelines on this matter recommend doing so in the case of a history of cardiovascular disease (Seguro et al., 2021), high risk of cardiovascular disease (Zamorano et al., 2016) or presence of risk factors when starting high risk TKI such as nilotinib (NCCN Clinical Practice Guidelines in Oncology, 2023). There are no specific recommendations to this effect from the European Leukemia Net.
However, at the time of diagnosis, patients with CML present a high prevalence of cardiovascular risk factors, which seems to be higher than that of the general population (Seguro et al., 2021). One study showed, at the time of CML diagnosis, a prevalence of 30% for hypertension, 11% for diabetes, and 18% for dyslipidemia (Coutinho et al., 2017).
Most of our knowledge about the efficacy and adverse effects of TKIs comes from clinical trials. Nevertheless, their results could underestimate the development of cardiovascular comorbidity, considering the exclusion of patients with insufficient control of cardiovascular risk factors, or the younger average age of patients included in the main first-line trials with dasatinib (Kantarjian et al., 2010) or nilotinib (Saglio et al., 2010). Therefore, real world evidence studies are essential, as they are able to show the prevalence of complications arising from the use of TKIs in a routine clinical practice scenario. One of the largest studies to date (Coutinho et al., 2017), showed a prevalence of almost 80% of cardiovascular risk factors at 5 years after the diagnosis of CML.
In this study, we report our experience in the management of cardiovascular risk factors at our center, where patients are referred to a specialized internal medicine consultation at diagnosis or during follow-up. The purpose of this strategy is to optimize the control of cardiovascular risk factors. Only symptomatic patients are referred to other specialized consultation (cardiology or angiology).
To analyze the impact of this intervention, we have used the SCORE (Systematic Coronary Risk Evaluation) model, which estimates the risk of death from cardiovascular causes in 10 years. It has the advantage of being adjusted to different European countries, and estimates mortality associated with all atherothrombotic manifestations and not just coronary mortality, unlike the Framingham score. Moreover, SCORE is straightforward to calculate because it includes few parameters: age, sex, systolic blood pressure, total cholesterol and smoking (Visseren et al., 2021).
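For concreteness, a minimal Python sketch of a SCORE-style calculation is given below. The coefficients and baseline risk are illustrative placeholders, not the published SCORE values; only the structure (few inputs, a 10-year risk output, category cut-offs) mirrors the model described above.

```python
import math

def score_like_risk(age, sex, sbp, total_chol, smoker,
                    beta=(0.05, 0.3, 0.02, 0.25, 0.7)):
    """Illustrative 10-year cardiovascular mortality estimate in the
    spirit of SCORE. `beta` holds placeholder coefficients for age,
    male sex, systolic blood pressure (mmHg), total cholesterol
    (mmol/L) and smoking; they are NOT the published SCORE weights."""
    b_age, b_male, b_sbp, b_chol, b_smoke = beta
    lp = (b_age * (age - 60) + b_male * (sex == "male")
          + b_sbp * (sbp - 120) + b_chol * (total_chol - 5.0)
          + b_smoke * int(smoker))
    baseline = 0.02                      # hypothetical baseline 10-year risk
    return 1 - (1 - baseline) ** math.exp(lp)

def score_category(risk):
    """Conventional SCORE risk strata: low <1%, intermediate 1-<5%,
    high 5-<10%, very high >=10%."""
    if risk < 0.01: return "low"
    if risk < 0.05: return "intermediate"
    if risk < 0.10: return "high"
    return "very high"
```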
This model has already been used by other researchers to evaluate the risk of developing cardiovascular events in patients with CML treated with different TKIs, demonstrating its predictive value at diagnosis (Breccia et al., 2015;Caocci et al., 2019).
Study design
This is a retrospective, single-center observational study that analyzed a total of 35 patients diagnosed with CML at the 12 de Octubre University Hospital, referred to the cardiovascular disease consultation between 2015 and 2018, who received treatment with one of the approved TKIs for this indication (imatinib, dasatinib, nilotinib, bosutinib and ponatinib). The patients received outpatient follow-up in hematology consultation and by an internal medicine specialist in the aforementioned cardiovascular control consultation.
The diagnosis of CML was made following the criteria established by the latest classification of hematological neoplasms published by the WHO (Swerdlow et al., 2017). The following prognostic scores for CML were applied at diagnosis: Sokal, Hasford, EUTOS and ELTS. The criteria used to define cardiovascular risk factors are explained below.
Cardiovascular variables
Arterial hypertension: defined as systolic blood pressure ≥140 mmHg and/or diastolic blood pressure ≥90 mmHg, following the criteria used by the ESC/ESH Guidelines for the management of arterial hypertension (Williams et al., 2018). Arterial hypertension was considered to be controlled according to the target for general and specific subgroups of hypertensive patients, following the mentioned guidelines.
Dyslipidemia: defined as hypertriglyceridemia (triglycerides level >200 mg/dL) and/or hypercholesterolemia (cholesterol level >200 mg/dL), following the criteria from the ESC/EAS Guidelines for the management of dyslipidaemias (Mach et al., 2020). Dyslipidemia was considered to be controlled following the criteria defined by these guidelines.
Diabetes mellitus: defined as an A1C ≥ 6.5%; fasting blood glucose ≥126 mg/dL; blood glucose ≥200 mg/dL 2 hours after a 75 g intake of glucose; or a casual blood glucose ≥200 mg/dL, according to the ESC Guidelines on diabetes (Cosentino et al., 2020).
Control of diabetes was defined according to the targets specified by these guidelines.
Statistics
Frequencies were calculated as percentages for qualitative variables and as means and standard deviations for quantitative variables. Comparison of variables was carried out using the McNemar-Bowker test. A p < 0.05 was considered statistically significant. Statistical analysis was conducted using SPSS version 25.0 (IBM, Chicago, IL).
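As a reference for this analysis, the McNemar-Bowker test of symmetry for a square table of paired categories (here, SCORE category at baseline versus last visit) can be computed directly; the sketch below is generic and the example table is hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def bowker_symmetry_test(table):
    """McNemar-Bowker test of symmetry for a k x k contingency table of
    paired categorical ratings (rows: before, columns: after)."""
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    stat, df = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            n = table[i, j] + table[j, i]
            if n > 0:
                stat += (table[i, j] - table[j, i]) ** 2 / n
                df += 1
    return stat, df, chi2.sf(stat, df)

# Hypothetical 4x4 table over (low, intermediate, high, very high):
# stat, df, p = bowker_symmetry_test([[12, 2, 0, 0], [7, 6, 1, 0],
#                                     [0, 2, 1, 0], [0, 0, 1, 0]])
```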
Results

Table 1 summarizes the main characteristics of the 35 patients included in the study. The mean age at the time of referral to the cardiovascular control consultation was 50 years (standard deviation, 13.5), and 45.7% of patients were women. Over half of the patients (51.4%) were classified in the low risk category according to the Sokal index, 60% according to the Hasford score, and 60% according to the ELTS score. However, most patients belonged to the high risk group according to the EUTOS score (77.1%).
Regarding the prescribed TKIs (Table 2), all except 2 patients received imatinib (median time of exposure, 20.3 months), 45.7% received dasatinib (median time of exposure, 24 months), 42.9% received nilotinib (median time of exposure, 15.5 months), 3 patients received bosutinib (median time of exposure, 4.1 months), and 1 received ponatinib (7.3 months of exposure). 60.6% of the patients treated with imatinib had to stop it (40% of these cases due to lack of optimal response, 35% as a result of adverse effects, and 25% because of clinical trial protocol). Patients who stopped dasatinib (57.7% of those who received this drug) did so for reasons related to adverse effects. The discontinuation rate with nilotinib was 60% (one patient because of lack of efficacy and the remaining 88.9% as a consequence of toxicity). Of the three patients treated with bosutinib, one stopped it due to toxicity, and the only patient treated with ponatinib stopped it because of clinical trial protocol.
[Table fragment: EUTOS Score: Low, 6 (17.1%); High, 12 (34.3%); Unknown, 2 (5.7%).]

Table 3 shows the proportion of patients who had some type of cardiovascular risk factor either at the time of referral or at the last visit to the cardiovascular control consultation. The time elapsed between diagnosis and consultation was approximately 54 months on average. At the time of the consultation, 17.1% had an active tobacco habit and 28.6% had stopped smoking. 11.4% had alcohol consumption in the range of abuse according to the previously stated criteria. 34.3% had hypertension, 8.6% had DM, and 40% had dyslipidemia. Seven patients had already developed cardiovascular disease at the time of the consultation (2 in the form of coronary disease, 2 stroke, and 3 peripheral arterial obstructive disease). One patient had a diagnosis of chronic obstructive pulmonary disease (COPD) and 2 had chronic kidney disease.
During a mean follow-up of 31.25 months, 3 patients were diagnosed with diabetes, 8 developed hypertension (13.3% of patients with nilotinib, 12.5% with dasatinib and 12.1% with imatinib), 10 dyslipidemia and 3 peripheral arterial obstructive disease (PAOD). Strict control of hypertension was achieved in all but one patient, control of dyslipidemia in all but 2 and only 3 patients did not reach adequate diabetes control. However, there was an improvement of blood pressure, glucose level and lipids in all patients. In 12 patients, it was necessary to change either the type or dosage of TKI because of interactions with concomitant medication, with statins being the main reason in 75% of these cases.
The most frequent cardiovascular disease in our cohort was PAOD (6 patients developed PAOD after CML diagnosis, 3 of them before referral to Internal Medicine Department and 3 of them afterwards). The median age at the time of PAOD diagnosis was 63.5 years, with a median time of 13.15 years from the introduction of TKI treatment. Regarding the former 3 patients, 2 were receiving nilotinib and 1 dasatinib. Two of them belonged to the intermediate risk and 1 to the very high risk SCORE category. An improvement of SCORE was reached in 2 of them.
The latter 3 cases with PAOD were diagnosed with a median of 2.5 years after the first consultation. All of them were receiving imatinib (with a median time of exposition of 13.7 years). One of them belonged to the high risk group and 2 to the intermediate risk group. All of them remained in the same SCORE category, despite the adequate control of cardiovascular risk factors.
Figure 1 shows the distribution of patients according to the cardiovascular SCORE at the time of the first consultation and at the last one. We observed an increased number of patients belonging to the low risk group at the expense of a decrease in those assigned to the intermediate, high, and very high risk groups, a difference that reaches statistical significance.
The distribution of the patients among the different groups before and after follow-up is low risk (21 versus 14 patients), intermediate risk (10 versus 14) and high risk (3 versus 2). Only one out of three patients remained in the very high risk category.
Regarding arterial thrombosis, one patient receiving treatment with dasatinib presented an episode of acute coronary syndrome. He had a history of hypertension, dyslipidemia and coronary disease prior to TKI initiation, belonging to the high risk SCORE category when starting follow-up.
With respect to the data on thromboembolic disease, only one patient (receiving imatinib as TKI) presented a venous thrombotic event in the form of deep vein thrombosis in the lower limb, arising in the postoperative context of a major abdominal surgery. A patient with a history of antiphospholipid syndrome and deep vein thrombosis prior to the diagnosis of CML received imatinib without new thrombotic events after the start of this drug. No patient treated with second generation TKI or ponatinib developed venous thrombotic events.
At the end of follow-up, 8 patients (22.9%) had been referred to the vascular surgery and angiology consultation. Out of the 8 patients, 7 were referred because of intermittent claudication and 1 for multidisciplinary assessment due to very high cardiovascular risk.
These patients were evaluated with lower limb and carotid Doppler. Half of them showed carotid atherosclerosis, but only one presented with significant stenosis (more than 50% reduction of the arterial diameter).
Ten patients underwent lower limb Doppler in order to rule out significant arterial obstruction. Three patients showed findings of arterial obstruction (those diagnosed with PAOD), four showed atherosclerotic plaques, and three revealed no pathologic findings.
Discussion
In this paper we present the results of cardiovascular control in patients with CML under treatment with TKI in a specific consultation. A reduction in cardiovascular risk factors was achieved with at least a 20% improvement in cardiovascular score.
The baseline characteristics of our cohort are similar to those reported previously in patients with CML: an average age of 57 years and a slight predominance in males (Dahlén et al., 2016). As for the cardiovascular risk factors in our series, the data are comparable to those reported by other authors. The study by Coutinho et al. (Coutinho et al., 2017) showed a rate of hypertension of approximately 30%, like that of our population, and 11% of diabetes (in our study 8.6%). The high proportion of patients with dyslipidemia (40% compared to 18% in the aforementioned study) in our cohort is striking, a difference that may be due to heterogeneity of criteria used to define this condition.
The presence of cardiovascular risk factors or comorbidities is important, on the one hand, for the choice of TKI, given the different toxicity profile of each one, and on the other hand, for the management of such comorbidity (Latagliata et al., 2021). Thus, given that most of our patients received treatment with imatinib and we have a small proportion of patients who received new generation TKIs, it is difficult to make inferences about the relative risk for the development of cardiovascular comorbidity regarding the TKI.
However, according to previous studies, nilotinib seems to be more associated with the development or worsening of arterial hypertension (Roa-Chamorro et al., 2021), and, together with dasatinib, with coronary disease (Barber et al., 2017). Nilotinib is especially associated with stroke and, together with dasatinib, with peripheral arterial disease (Chen et al., 2021). Treatment with ponatinib, however, has shown the strongest association with hypertension (17% vs. 10%) among all new-generation TKIs in a pooled analysis of hypertension incidence (Mulas et al., 2021), as well as with thrombotic risk (10% of patients developed cerebrovascular or vaso-occlusive disease) (Jain et al., 2015).
For this reason, patient-based therapy has become increasingly important in the treatment of CML (Ciftciler & Haznedaroglu, 2021). The availability of several TKIs has made it possible to choose the most appropriate drug for each patient based on individual factors such as age, comorbidities and availability in each center (Rabian et al., 2019). It is important to consider factors such as the patient's overall health status, potential side effects, and the risk of developing resistance to the TKI when selecting the best option. In summary, a personalized approach to CML treatment can improve outcomes by maximizing the benefits of therapy while minimizing side effects and reducing the risk of treatment resistance (Ciftciler & Haznedaroglu, 2021).
As noted above, since most of our patients received imatinib in first line and only a small proportion received new-generation TKIs, it is difficult to make inferences about the relative risk of developing cardiovascular comorbidity according to the TKI in our CML cohort. Nevertheless, with a median follow-up of 27.8 months, none of the patients who received a second-generation TKI and who had previous arterial hypertension showed a worsening of this condition (only one patient, on imatinib, had deficient control of hypertension during follow-up). 23% of patients were diagnosed with hypertension at some point after TKI initiation, a percentage slightly lower than that shown by the large cohort of Jain et al. (2019). The only patient who received ponatinib was under antihypertensive treatment before the diagnosis of CML and showed adequate control of hypertension during TKI therapy.
Although the associated thrombotic risk is assessed as a class effect of TKIs, the difference in targets of each TKI may explain the differences observed. The Swedish registry showed that patients with CML have an overall risk of venous and arterial thromboembolic events 1.5 and 2 times higher than the general population, respectively (Dahlén et al., 2016). Moreover, second-generation TKIs and ponatinib seem to confer greater risk than imatinib (Douxfils et al., 2016). In our cohort, the rate of thromboembolic events was low, and these only occurred in patients with strong risk factors. Comparison with other studies is difficult due to differences in median follow-up (Jain et al., 2019). However, these data suggest that follow-up in the specialized consultation may have been effective in preventing thrombotic events.
The PAOD rate was surprisingly high in comparison to other cardiovascular events. Other studies show a greater percentage of coronary or cerebrovascular events, with an incidence of PAOD lower than 1% among patients treated with imatinib (Chen et al., 2021). Half of our patients were receiving imatinib at the time of PAOD diagnosis, although nilotinib seems to be more associated with PAOD than other TKIs (Douxfils et al., 2016). Yet our patients had a long history of exposure and many cardiovascular risk factors. Our high rate of PAOD could be a consequence of the high degree of suspicion maintained in the specific consultation. Many comorbidities cause lower limb pain and, unlike cerebrovascular or coronary disease, PAOD is often mildly symptomatic and thus misdiagnosed (Nordanstig et al., 2023). For this reason, nearly one-third of patients underwent a Doppler study and were referred to the vascular surgery and angiology consultation.
Another important aspect to consider when controlling cardiovascular risk factors through pharmacological measures is the potential interactions of the TKIs. In our cohort, this had a fundamental impact on the use of statins, as previously seen (Haouala et al., 2011), and for which rosuvastatin or pravastatin are usually recommended, as they are not substrates of CYP3A4 (Osorio et al., 2018).
An appropriate approach to estimating the risk of developing cardiovascular events is the use of prognostic scores, such as the Framingham score, the Pooled Cohort Equations or the SCORE. Among them, the SCORE model has many advantages: there are many country-specific versions derived from local data, it is easy to calculate, and it is capable of predicting mortality derived from myocardial infarction, stroke or heart failure over the next 10 years (Caocci et al., 2019).
As the results show, a significant percentage of patients changed risk stratum according to the SCORE, in all cases moving to a better prognostic category than before follow-up, thanks to the control of blood pressure and dyslipidemia and to smoking cessation.
Two studies have shown a correlation between the SCORE and the occurrence of cardiovascular events in patients with CML and TKI treatment (although both only included patients with ponatinib) (Breccia et al., 2015;Caocci et al., 2019). Both showed a higher incidence of cardiovascular events in the high and very high risk groups, with a significant difference. In the study by Breccia et al., none of the patients with a low risk SCORE developed cardiovascular disease.
The importance of preventing cardiovascular disease lies in the fact that it is the second leading cause of mortality in cancer patients (Sulpher et al., 2015). For this reason, the importance of multidisciplinary management of patients with malignant hematological disorders is increasingly being recognized, although we did not find studies in the literature on multidisciplinary management of cardiovascular risk in patients with CML, even though various groups have called attention to this need (García-Gutiérrez et al., 2016; Basile et al., 2022).
Our study shows that this approach to CML patients, in coordination with specialists, is feasible and results in improved control of cardiovascular risk factors. The main limitations of our study are its retrospective nature and the limited number of patients analyzed. In addition, there has been no prolonged follow-up of patients that could demonstrate a reduction in cardiovascular events in patients with a better prognostic SCORE. Among the strengths of the study are the inclusion of patients treated with different TKIs and the use of a standardized, population-targeted cardiovascular risk model.
Conclusion
The adverse effects of tyrosine kinase inhibitors are one of the main concerns when treating patients with chronic myeloid leukemia. They are usually related to off-target effects, and each TKI has a different toxicity profile. Cardiovascular events are among the most frequent and life-threatening complications, and their occurrence can influence the choice or switch of TKI. The development of these events can be prevented by controlling risk factors, which often requires interdisciplinary management. Our study shows that follow-up in a specialized consultation is a feasible approach that can reduce the cardiovascular risk of these patients.
Data availability statement
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Comité de Ética de la Investigación (CEI) Hospital universitario 12 de Octubre. The patients/participants provided their written informed consent to participate in this study.
Role of multiresolution vulnerability indices in COVID-19 spread in India: a Bayesian model-based analysis
Objectives COVID-19 has differentially affected countries, with health infrastructure and other related vulnerability indicators playing a role in determining the extent of its spread. Vulnerability of a geographical region to COVID-19 has been a topic of interest, particularly in low-income and middle-income countries like India to assess its multifactorial impact on incidence, prevalence or mortality. This study aims to construct a statistical analysis pipeline to compute such vulnerability indices and investigate their association with metrics of the pandemic growth. Design Using publicly reported observational socioeconomic, demographic, health-based and epidemiological data from Indian national surveys, we compute contextual COVID-19 Vulnerability Indices (cVIs) across multiple thematic resolutions for different geographical and spatial administrative regions. These cVIs are then used in Bayesian regression models to assess their impact on indicators of the spread of COVID-19. Setting This study uses district-level indicators and case counts data for the state of Odisha, India. Primary outcome measure We use instantaneous R (temporal average of estimated time-varying reproduction number for COVID-19) as the primary outcome variable in our models. Results Our observational study, focussing on 30 districts of Odisha, identified housing and hygiene conditions, COVID-19 preparedness and epidemiological factors as important indicators associated with COVID-19 vulnerability. Conclusion Having succeeded in containing COVID-19 to a reasonable level during the first wave, the second wave of COVID-19 made greater inroads into the hinterlands and peripheral districts of Odisha, burdening the already deficient public health system in these areas, as identified by the cVIs. Improved understanding of the factors driving COVID-19 vulnerability will help policy makers prioritise resources and regions, leading to more effective mitigation strategies for the present and future.
On page 10 the authors use Bayesian procedures to fit the model, but there is no analysis of these methodologies or comparison with different methods. It is not very clear why the authors use only the R factor; for example, spatial-statistics connectivity factors could be used, like Moran's I, or GWR models under spatial regression analysis. There are no numerical tables or results, and no graphs of the regression modelling.
3.2) what preprocessing and/or modelling was going to be applied and whether the intermediary data from this step is made available, 3.3) what analytical methods were selected for evaluation, why they were deemed appropriate, and what validation components are included in these models to facilitate model selection.
3) In the "Computation of relative COVID-19 vulnerability indices" subsection, the process for aggregating the variables and transforming them into common scale values needs to be made transparent to facilitate reproducibility.
4) To facilitate reproducibility and transparency, in the "Regression analysis using summaries of time-varying R profiles" section, the Bayesian procedure should be described for model fitting and the process for selecting the model of best fit should be described. Currently too much emphasis is being placed on the supplemental material equation 6 and reference 32. I recommend leaving the equations in the supplemental but providing a clear textual explanation for the process within the main text. This section should clearly communicate the process being taken, why this is expected to be beneficial, and what metrics are being used for model selection of the regression models in order to convey confidence that the models described in the results are selected appropriately.
5) The "Results" section requires the greatest attention with the following points: 5.1) The first paragraph of this section spends a large amount of effort describing Figure 3. However, this does not appear to provide any greater insight than visual inspection of the plots already provides. If there is a primary point that is coming from this paragraph it is currently being lost. Suggest reworking this paragraph and focusing on a key take-away.
5.2) The results are not properly supported with quantitative outcomes to compare differences between themes or clusters. Tests for significance should be applied and reported. Figure 4d provides support for the instantaneous R through the use of 95% CI values for each district but is still lacking in providing support for statistically significant differences between the districts.
6) The "Discussion" section should be updated to reflect the quantitative updates to the "Results" section.
7) The "Discussion" section should discuss the practical significance of the regression findings. For instance, how is the finding on page 14 of 51, lines 41-46, "a unit increment in the cVI … increases the mean of the iR by an estimated quantity of 0.33," of practical use in this context? These results should be discussed further.
8) Page 18 of 51, lines 29-36 use the phrase "not significantly associated". Has a test of significance been applied? If yes, then appropriate indicators (i.e., p-value or CI) should be presented. If no, then the statement should be re-assessed.
R1.1. I have read your manuscript carefully and I do agree that the research question you are attempting to study is timely and useful. However, I have the following concerns/comments.

Response: Thank you for taking the time to review our work and raise relevant questions. Please find below a point-by-point response to your concerns.
R1.2. After reading 53 pages (Including supplementary material) what is the take-home message to the readers?
Response: Thank you for your question. In India, multiple dimensions influence vulnerability in the context of COVID-19. The development of a composite cVI acknowledges the varying needs and priorities of different districts for data-driven informed planning that can be reused in case of any future outbreaks. Using Odisha as an example, we identify the associations of these cVIs with the reproduction number 'R' and estimate the extent to which changes in the cVIs impact it. This study provides a basis to continue to monitor and update the analyses as the pandemic evolves and data accumulate. This research suggests that building effective multidimensional healthcare capacity is the most promising means to mitigate future case fatalities. We have now added a short paragraph with this information to the conclusion section of the manuscript (page 19), as below.
Policy implications
• The factors or processes generating vulnerability and their measurement may differ in LMICs and the cVI can help in capacity building and informing responses to outbreaks in the future.
• A granular view of vulnerability can help policy makers develop and implement their response at the district level, which is the unit for planning of all public health activities for health and development agencies.
• The cVI framework provides policy makers a more nuanced understanding of vulnerability for pandemics.
• These quantitative indices will help identify indicators that need strengthening in specific geographic areas that can guide investment to overcome outbreaks, beyond the current epidemic.
Public health delivery implications
• Vulnerable districts identified by the cVI could inform strategic actions that can better prepare the state or district in case of a viral epidemic or other outbreak.
• The findings of this study can enable national authorities and partners including academia, international organizations and donors to better align health emergency planning with broader population health needs and consider strengthening health systems components for delivery of both emergency and non-emergency health services in tandem.
R1.3. The conclusion in the abstract states:
"Odisha has demonstrated success in containing the [...]"

Response: Thank you for raising this important point. Indeed, there is some evidence that Odisha has done well in comparison to other states in terms of handling the pandemic. To begin with, Odisha took many proactive measures, such as a decentralized community-based approach, at the very beginning of the pandemic. 2 At a later stage of the pandemic, Odisha was recognized for its handling of the COVID-19 pandemic by the World Health Organization (WHO). 3 By October 2020, i.e., after the first wave of the pandemic had subsided, the state had a very low fatality rate (0.42% against the national average of 1.51%). 4 Such metrics put the performance of Odisha at a better standing relative to other Indian states. We have now added this information and these references in a short paragraph in the discussion section (page 18).
R1.4. If I understood correctly, the outcome of this study is the vulnerability index. Please let the readers know how your study is different from "A vulnerability index for the management of and response to the COVID-19 epidemic in India: an ecological study" By Rajib Acharya (https://www.thelancet.com/journals/langlo/article/PIIS2214-109X(20)30300-4/fulltext).
Response: Thank you for highlighting this relevant point. We have responded to the editor's comment in E8 above regarding the same query and have highlighted how our framework goes beyond both the theoretical computations involved in and applied insights resulting from the tool in the relevant paper. Further, we have included a sample description of our computational procedure in Supplementary Text Section 1.3 and an algorithm for the same in the Supplementary Materials, to clarify the specifics of our procedure, as indicated in E5 above. We have also added the following discussion as a section in the Supplementary Text and referred to it in the main manuscript (page 10) to ensure clarity in part of the readers.
Uniqueness and utility of our pipeline
One of the goals of our paper is to compute COVID-19 Vulnerability Indices (cVIs) across multiple thematic resolutions for different geographical and spatial administrative regions based on publicly reported socio-economic, demographic, health-based and epidemiological data from national surveys in India. While our algorithm has similarities with the algorithm proposed by Acharya and Porwal (2020) for computing the relative vulnerability indices for each district, there are some differences. 2 One of the key differences in our approach is that we have used additional variables and indicators which are relevant for infectious disease outbreaks and COVID preparedness (such as COVID hospital testing centers) across the themes to come up with these indices.
More importantly, while computation of vulnerability indices (indicator specific relative cVI, themespecific cVI and overall vulnerability indices) constitutes the first goal, we have taken a step further and used these multi-resolution cVIs in regression models to assess their impact on indicators of the spread of COVID-19 such as the average time-varying instantaneous reproduction number. This multi-layer aggregation approach examines the potential heterogeneities in the themed vulnerabilities across districts via exploring their association with pandemic growth metrics. Relative ranking for overall and theme specific vulnerabilities are also performed using regression models under a Bayesian paradigm using standard metrics of variable importance like the posterior inclusion probabilities. Furthermore, our paper demonstrates novelty via identification of specific target areas (for example, setting up more rural temporary healthcare facilities) for policy implementation in order to mitigate the crisis. R1.5. Figure 1 As mentioned in the response to your previous comment (R1.4), we have outlined how our framework goes beyond that of the Acharya study. We have also referred to the work in our manuscript in the methods section when we describe the vulnerability index computation scheme and have ensured that we clearly state how much of the computational pipeline is based on that work. We originally planned on including a few more variables such as wealth index, mortality due to chronic diseases and so on to develop the cVI. However, we chose only those indicators for which complete data was available from the latest survey results on the official dashboard -such as literacy rate, workforce participation, prevalence of chronic diseases. Response: Thank you for raising this concern. Regarding your first point about use of R factor, we intended to use a standard summary of the pandemic growth as the foundation for our outcome variables. The time-varying reproduction number has been used extensively in the COVID-19 context for this exact purpose. 9-13 It is easily interpretable as the average number of susceptible persons infected by a single infectious individual at the corresponding time point, and the time-varying nature of the quantity allows the users to look at the growth pattern of the pandemic across the different phases, peaks and troughs. Further, summaries computed across a specified period can provide us with a useful scalar metric of the status of the pandemic during that time window. All these features make the time-varying R an appropriate choice for our purpose.
Regarding your second point about spatial metrics, we agree it is an important consideration. Due to the small number of administrative regions (namely, districts) and the use of just a single Indian state for the illustration of our pipeline, we keep the association models simple and do not account for spatial correlation between the regions, the primary reason being that it would involve the estimation of additional correlation parameters, which could potentially impact the estimation of the regression parameters. With the inclusion of more states (and hence more spatial units in the form of districts), spatial models can indeed be useful to borrow strength across regions to identify true associations, a task we leave for future work.
R2.3. There are no numerical tables or results. Also, there are no graphs of the regression modeling.
Response:
We apologize for not having made this clear in our paper. In the revised paper, Figures 3-5 provide the reader with graphical representations and summaries of our results. We decided to use these for easier visualization and interpretation for the readers. The exact numerical tables corresponding to all the models have now been included as Supplementary Tables 3-4. These are also available in the repository located at https://github.com/bayesrx/COVID_vulnerability_India and are also reproduced below to ensure ease of access. We now also mention the exact coefficient estimates and corresponding standard errors in the main manuscript wherever applicable.
R3.2. The authors have not described the grading of the different vulnerability indices (cVIs) in the study. How does one have more impact than another?
Response: To clarify, in our computational pipeline that builds the covariate-specific, theme-specific and overall vulnerability indices, there is no grading or ordering involved among the different cVIs: all the covariates and themes are assigned uniform weights in the computations. The ranking comes from the Bayesian linear regression models, where the variable importance metric of choice is the posterior inclusion probability, which quantifies the association of the cVIs with the R value. The different cVIs in the models are ordered in decreasing order of this quantity, and the ones with larger posterior inclusion probabilities are interpreted to be more important as explanatory variables in the models.
R3.3. How was the cleanliness of the data taken care of?
Response: Thank you for raising this important concern. We have updated the Supplementary Table 1 with the links to the exact database(s) from which each variable was collected. The exact variables in each group are also listed in Figure 1. To ensure easier access, we are attaching the updated version of the same below.
• Households using clean fuel for cooking: percentage of households reporting clean fuel for cooking.
• Availability of public hospitals (at district level): number of public hospitals (primary health centre, subdivisional and above) per 10,000 population. Source: Directorate of Health Services, Department of Health and Family Welfare, Government of Odisha website.
• General number of beds (at district level).
• Total beds capacity (at district level): capacity of beds per 10,000 population. Source: COVID Dashboard, Govt. of Odisha.
• Total ICU beds (at district level): ICU beds per 10,000 population.
• Temporary medical camps (at district level): temporary medical camps per 10,000 population.
• COVID hospital testing centres (at district level): testing centres per 10,000 population.
• Total HIV positive: percentage of total HIV positive to total tested (male + female). Source: Health Management Information System (2019-20).
• Plasmodium vivax test positive: percentage of Plasmodium vivax test positives to total blood smears examined.
• Plasmodium falciparum test positive: percentage of Plasmodium falciparum test positives to total blood smears examined.
• Infant deaths due to pneumonia: percentage of deaths due to pneumonia to total reported infant deaths.
Data collation and cleanliness
Survey-based data are available at a nationwide scale and quality checks are performed before they are shared publicly. After downloading the data from the sources mentioned in the Supplementary [...] report.14 The data processing involved office editing, data entry using CSPro software, verification of data entry, secondary editing, and final cleaning of data at the International Institute for Population Sciences (IIPS).
R3.4. The authors raise the question of under-reporting; however, no such report has been issued by any agency. The authors should cite such a report before making comments like this.
Response: Thank you for your comment. Underreporting of COVID-19 cases and deaths across the world has been an extensively discussed issue in the recent literature. 15 16 For India, the scenario has been even more complicated, with certain studies indicating 5-10x underreporting for deaths and 30-40x underreporting for cases. [17][18][19][20] We have now added these references to the paper and elaborated the comment on underreporting in the discussion section, as reproduced below.
Additionally, the known issue of underreporting of COVID-19 cases due to the limited availability of [...]. Therefore, a similar analysis performed on a bigger and possibly more representative sample obtained via large-scale population-level testing may have yielded better explanations.

R4.1. This paper presents an observational study of COVID-19 vulnerability indices based on [...]

Response: Thank you for your comments, questions, and useful suggestions. Based on the comments from the editor and all the reviewers, we have now made some significant changes and incorporated some new elements in the paper that we believe will be helpful towards the cause of transparency of data and analysis procedure, clearer elaboration of the methods, as well as the interpretation of the quantitative results in the context of real-time policy making and administrative decisions. We briefly summarize such key changes below.

• We have also added links to the specific data sources in Supplementary Table 1. (Supplement Page 14)
• We have added Supplementary Tables 3-4 to summarize the numeric outcomes of the regression models from R. (Supplement Pages 16-17)

We sincerely hope that these changes will contribute positively towards making our manuscript an improved read. Please go through the rest of this response for point-by-point responses to your questions and specific descriptions of how the changes above relate to them. The revised abstract reads as follows.

Objectives: COVID-19 has differentially affected countries, with health infrastructure and other related vulnerability indicators playing a role in determining the extent of its spread. Vulnerability of a geographical region to COVID-19 has been a topic of interest, particularly in low- and middle-income countries like India, to assess its multi-factorial impact on incidence, prevalence or mortality. This study aims to construct a statistical analysis pipeline to compute such vulnerability indices and investigate the association between them and metrics of the pandemic growth.

Design: Using publicly reported observational socio-economic, demographic, health-based and epidemiological data from Indian national surveys, we compute contextual COVID-19 Vulnerability Indices (cVIs) across multiple thematic resolutions for different geographical and spatial administrative regions. These cVIs are then used in Bayesian regression models to assess their impact on indicators of the spread of COVID-19.

Setting: This study uses district-level indicator and case counts data for the state of Odisha, India. [...] and estimated coefficients all being >0 (positive association of vulnerability with pandemic growth).

Conclusions: Having succeeded in containing COVID-19 to a reasonable level during the first wave, the second wave of COVID made greater inroads into the hinterlands and peripheral districts of Odisha, burdening the already deficient public health system in these areas, as identified by the cVIs. Improved understanding of the factors driving COVID-19 vulnerability will help policy makers prioritize resources and regions, leading to more effective mitigation strategies for the present and future.

R4.2. The "Dataset and [...]
R4.3. In the "Data description" section, the process taken to handle data collection, data [...]

Response: The choice of variables was mostly driven by their relevance in a COVID-19 context and the availability of such data from a reliable source in India. We originally planned on including a few more variables, such as wealth index and mortality due to chronic diseases, to develop the cVI. However, we chose only those indicators for which complete data was available from the latest survey results on the official dashboard, such as literacy rate, workforce participation, prevalence of chronic diseases, etc.
Further, we have now added Supplementary Text Section 1.1 to describe the data collation and cleanliness, as reproduced below.
Data collation and cleanliness

Survey-based data are available at a nationwide scale and quality checks are performed before they are shared publicly. After downloading the data from the sources mentioned in the Supplementary [...] report.14 The data processing involved office editing, data entry using CSPro software, verification of data entry, secondary editing, and final cleaning of data at the International Institute for Population Sciences (IIPS).
Example of computing cVIs for a given region
We exemplify our steps for the computations of the vulnerability indices at the covariate and theme levels for the Mayurbhanj district of Odisha. Similar steps have been adopted for the other districts.
First, we focus on the covariate named "General no. of beds per 10k." As per our data (available at the repository https://github.com/bayesrx/COVID_vulnerability_India), the two districts having the least number of general beds per 10k are Debagarh (0.0104 per 10k) and Boudh (0.0169 per 10k), and the two districts having the highest number are Ganjam (0.0943 per 10k) and Mayurbhanj (0.0826 per 10k). We rank the districts such that a higher rank puts a particular district in a more vulnerable position than a district with a lower rank. Since a smaller number of general beds per unit of population indicates more risk or vulnerability, the ranking for this particular variable is in decreasing order, i.e., the lower the number of beds per 10k for a district, the lower it features in the rank list and the higher its numeric rank. For example, as Mayurbhanj has the second-highest number of beds per 10k, it will get a rank of 2. As per the covariate-specific vulnerability index formula, the VI for Mayurbhanj corresponding to the above covariate will be (2 - 1)/(30 - 1) = 0.0345. In the same manner, we compute the VIs for other covariates for Mayurbhanj. If, on the other hand, we were focusing on a variable for which a higher value indicates higher risk or vulnerability, the rank assigned to the district of Mayurbhanj, with everything else unchanged, would have been (30 - 2 + 1) = 29.
After computing the covariate-specific VIs in this way for all the covariates and all the districts, we compute one theme-specific VI. Let us consider the theme "Preparedness of COVID" for the purpose of illustration here. This theme comprises the [...]. In the ranking in the case of theme-specific VI computation, the rank assigned to Mayurbhanj will be (30 - 4 + 1) = 27. Hence, the overall VI for Mayurbhanj will be (27 - 1)/(30 - 1) = 0.8966.
Further, we have now included an algorithm as Supplementary Figure 1 to summarize the numeric computational pipeline of the cVIs while ensuring reproducibility. The algorithm is appended below for ease of reference.
Supplementary Figure 1. Algorithm for computing variable-specific, theme-level and overall vulnerability indices for a given set of districts and covariates.
We sincerely hope that these efforts will make the calculations more understandable and reproducible for the readership.
R4.5. To facilitate reproducibility and transparency, in the "Regression analysis using summaries of time-varying R profiles" section, the model and estimation procedure should be described in greater detail.
Response: Thank you for your valuable suggestion. Based on your recommendation, we have now added a paragraph in the main text (pages 9-10) describing the model features and the estimation procedure in greater detail. For ease of reference, the added paragraph is appended below.
To fit this multiple linear regression model, we use a Bayesian model averaging procedure implemented via the BMS package in R, which provides posterior inclusion probabilities (PIPs) for each covariate. Thus, instead of 'selecting' a final model, we use these PIPs to rank the variables included in a model in terms of their relative importance. The procedure also provides point estimates and standard errors for each coefficient, which allow us to interpret the directionality of the association of each variable with the outcome of interest.
R4.6. The "Results" section requires the greatest attention with the following points: R4.6.1. The first paragraph of this section spends a large amount of effort describing Figure 3.
However, this does not appear to provide any greater insight than visual inspection of the plots already provides. If there is a primary point that is coming from this paragraph it is currently being lost.
Suggest reworking this paragraph and focusing on a key take-away.
Response: Thank you for your comment. Our primary intention in terms of usage of Figure 3 was, in fact, the initial visual inspection of the pandemic growth across different districts, followed by more nuanced and quantified association analyses via the regression models. We have now made this clearer in the corresponding paragraph and have reworded the paragraph to have a sharper focus on the takeaways we want to highlight in our paper.
We first summarize the case incidence data across the 30 districts of Odisha at both state and district levels between May 1, 2020 and Apr 15, 2021 in Figure 3. Some key takeaways can be obtained from a visual inspection of Figure 3 that allow us to understand the pattern of the pandemic growth in Odisha during the timeline of interest and to interpret some of the results obtained via further, more nuanced analyses.
• Figures 3A-B summarize the case incidence at the state and district levels.
• All districts showed controlled values of R during Unlock 5.0 to 7.0, but R tended to increase in the initial months of 2021. Figure 3D indicates that, as of the first fortnight of April 2021, all the districts experienced R > 1, i.e., further growth of the pandemic.
R4.6.2 The results are not properly supported with quantitative outcomes to compare differences between themes or clusters. Tests for significance should be applied and reported. Figure 4d provides support for the instantaneous R through the use of 95% CI values for each district but is still lacking in providing support for statistically significant differences between the districts.
Response: Thank you for your comment. We believe the figure intended to be referred to in the question is Figure 3D and not Figure 4D, since the latter does not cover the content of interest as presented in the comment. As we have already mentioned in the methods section, the primary intended goal of the study is to look at the associations of the vulnerability indices corresponding to the different themes and variables with the outcome of interest, iR, and to assess their relative importance in the context of the pandemic growth. We achieve this via the ordering of the covariates in each model in terms of the PIPs, as presented in Figure 5 and Supplementary Figure 2. To ensure exact numeric quantities are reported from these model fits, we have now included Supplementary Table 3 summarizing the PIPs, the point estimates, and the standard errors provided by the Bayesian procedure for each coefficient in each model. For ease of reference, we are reproducing it here. The intended use of Figure 3D and Figure 4G, on the other hand, is to summarize the differential patterns across the districts in terms of iR. The cut-offs and ranges used for this purpose to define the high, moderate, and low categories have been used in other studies to discuss the relative growth of the pandemic.
GENERAL COMMENTS
This paper presents a revised version of an observational study of COVID-19 vulnerability indices based on 30 districts of the eastern Indian state of Odisha. The objective of the study is clear, the topic is in scope for the journal and its readership, and the content is well written. The paper has undergone a large volume of updates, which have greatly improved its presentation and quality! The methods, statistics, and metrics are now much easier to follow. The supplemental material conveys needed and relevant information, and the denoted repository is accessible and provides access to the data.
Most of my prior comments have been sufficiently resolved. I have only a few additional comments for the revised version.
1) The "Results" section requires some additional modifications: 1.1) The "Regression analyses using summaries of time-varying R profiles" section describes the interpretation of β as "quantify the amount of change in iR due to one unit change in the corresponding vulnerability index". In this context, what is the proper way to interpret the "RESULTS" section statements, such as "(PPI = 0.45, β>0)"? 1.
2) It appears that most β values are described as either > or < 0; however, some are given an actual quantitative value. How and why this is done should be described in the "Methods" section when the discussion of the β's is provided. 1.
3) The presentation of results is inconsistent. Some statements provide (a) the PPI and β values while others provide (b) the β and sd values. It seems that sd values should be added to each statement, where possible.
2) There are outstanding discussions of validity that need to be addressed: 2.1) A general discussion of the validity of the developed iR values is missing. How is the reader to gauge how well these values represent the data? Also, I recommend adding references with respect to assessing the validity of iR. 2.
2) The iR values are used to compare PPIs across districts. A discussion of the (a) validity and/or (b) limitations of this approach is needed. The 30 districts of Odisha, India are not uniform in geographical structure. Mobility is described as a limitation of the analysis, but environmental factors can also impact these scores. This should be elaborated on as to your views on how this impacts the results. Also, I recommend adding references that support the comparative evaluation of iR values across geographic scales or in areas with potentially significant differences in geographic features.
3) There is an inconsistency in the number of factors presented. The final paragraph of the "Introduction" mentions "five factors", but the "Data description" section and Supplemental Table 4 list six as they also include "overall vulnerability". I recommend adding the sixth factor to the presentation in the Introduction.
4) The "data source" column of Supplemental Table 1 appears to provide hyperlinks to the data sources. These links do not work in the version of the supplemental material that was accessible for review. I suggest including the URLs alongside the hyperlinks to ensure their accessibility.
REVIEWER 2
R2.1. Page 12: We use a Bayesian model averaging procedure implemented via the BMS package in R.
The authors need to rewrite this. The authors use R as a program to implement their work, but first they need to present their methodology; they need to show how their analysis fits inside their paper.
Page 12: We used posterior inclusion probabilities (PIPs) provided by the BMS package as estimates of variable importance in the fitted model.
Same reasons.
Response: Thank you for your valuable input. As described in the methods section of the manuscript, the model that we use to assess the associations between the instantaneous R (iR) and the COVID-19 vulnerability indices is a multiple linear regression fitted via Bayesian model averaging. Similar to the posterior distributions of the parameters, we can compute the posterior inclusion probability (PIP) for a variable by summing up the posterior model probabilities (PMPs) for all models, out of the $2^p$ candidate models, in which that variable was included. For example, the PIP for the first themed cVI can be computed in the following way:
$$ \mathrm{PIP}(\mathrm{cVI}_1) \;=\; \sum_{k\,:\,\mathrm{cVI}_1 \in M_k} P(M_k \mid \mathrm{data}). $$
Similar calculations can then be extended to all our theme-specific models and to models using the variability in R (vR) as the outcome instead of iR. For the model priors $P(M_k)$, we use the default choice of setting $P(M_k) \propto 1$, i.e., uniform priors, due to the lack of additional prior knowledge.
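To make the enumeration concrete, below is a minimal, self-contained Python sketch of how PIPs arise from averaging over all 2^p candidate models. It approximates each model's marginal likelihood by exp(-BIC/2) under a uniform model prior; the actual analysis uses the BMS package in R with Zellner-type parameter priors, so this is an illustrative simplification, and all variable names are hypothetical.

import itertools
import numpy as np

def bma_pips(X, y):
    """Posterior inclusion probabilities by exhaustive enumeration of the
    2^p linear models. Marginal likelihoods are approximated by
    exp(-BIC/2); the model prior is uniform, p(M) proportional to 1."""
    n, p = X.shape
    log_w, masks = [], []
    for mask in itertools.product([0, 1], repeat=p):
        cols = [j for j in range(p) if mask[j]]
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = float(np.sum((y - Z @ beta) ** 2))
        bic = n * np.log(rss / n) + Z.shape[1] * np.log(n)
        log_w.append(-0.5 * bic)
        masks.append(mask)
    log_w = np.array(log_w) - max(log_w)
    pmp = np.exp(log_w) / np.exp(log_w).sum()   # posterior model probabilities
    # PIP_j = sum of PMPs over the models that include covariate j.
    return np.array(masks, dtype=float).T @ pmp

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 5))   # 30 districts, 5 themed cVIs in [0, 1]
y = 1.5 * X[:, 0] + rng.normal(0, 0.3, size=30)
print(bma_pips(X, y))                 # PIP of the first covariate should be near 1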
Simulation study to assess the selection performance of posterior inclusion probabilities computed using the BMS package
We perform a set of ground-truth simulations to mimic scenarios similar to the data structure used in our analyses and to assess the performance of the Bayesian model averaging procedure described above. Fixing a sample size n and a number of covariates p, we first generate the design matrix $X_{n \times p}$, where each element independently follows a Uniform(0, 1) distribution. We choose this distribution since the cVIs, which serve as covariates in our COVID-19 data, fall in the range [0, 1]. Then, we set 100a% of these p covariates (ap many) to have a true (non-zero) effect on the outcome and the rest ((1 - a)p many) to have no effect on the outcome. In essence, the tuning parameter a controls the sparsity of the true signals. The non-zero coefficients (the β's) are generated independently from a Uniform(b, b + 1) distribution, to cover a range of low, medium, and high effect sizes. These β's are then each multiplied by independent random variables taking values ±1 with probability 1/2 each, and noise with standard deviation s is added to the outcome.
(In the panels of the accompanying simulation figures, a: proportion of true (non-zero) signals among 10 covariates; b: minimum absolute value for the non-zero coefficients; s: standard deviation of the noise distribution.)
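The following short Python sketch reproduces this data-generating mechanism as we read it from the description above; where the extracted text is garbled, the choices (the Uniform(0,1) design, the Uniform(b, b+1) effect sizes, and a fair ±1 sign flip) are assumptions made explicit in the comments.

import numpy as np

def simulate_cvi_data(n=30, p=10, a=0.3, b=0.5, s=1.0, seed=0):
    """Ground-truth simulation mimicking the cVI data structure.
    Assumptions where the text is unclear: design entries are Uniform(0, 1)
    (the cVIs lie in [0, 1]); non-zero effects are Uniform(b, b + 1) in
    absolute value; signs are +/-1 with probability 1/2 each."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, p))
    k = int(round(a * p))                        # number of true signals
    beta = np.zeros(p)
    beta[:k] = rng.uniform(b, b + 1.0, size=k)   # effect sizes in [b, b+1]
    beta[:k] *= rng.choice([-1.0, 1.0], size=k)  # random sign (assumed fair)
    y = X @ beta + rng.normal(0.0, s, size=n)    # noise sd = s
    return X, y, beta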
Response: Thank you for these extremely helpful comments. Several India-specific and international COVID-19 research works in the past two years have considered time-varying R profiles as a dynamic summary of pandemic growth, 5-8 and smoothed window-specific averages akin to the iR quantities in our work have also been considered in the context of India. 9 We have now added all of these references in the main text. Further, to analyze and validate the Bayesian model averaging based regression procedure that associates the cVIs with the iR and computes PIPs to infer covariate importance, we have now added a section in the supplementary notes on a simulation study using BMA and PIPs across scenarios closely imitating the real data settings of our applications, as reproduced above.
Response: Thank you for this valuable comment. Indeed, geographical and environmental variability can contribute to the variability in pandemic growth and hence the iR values. Due to the low sample size setting of our real data analyses, we did not include such information in our models. With additional and more granular data, it is straightforward to extend our setting to include such covariates alongside the themed vulnerability metrics. To make this clearer, we have added discussion points on the potential limitations arising from the geographical diversity as well as the environmental factors. We hope these changes will ensure clarity in terms of the reliability of the results and the utility of the framework as applied in the Indian context.
3) There is an inconsistency in the number of factors presented. The final paragraph of the "Introduction" mentions "five factors", but the "Data description" section and Supplementary Table 4 list six as they also include "overall vulnerability". I recommend adding the sixth factor to the presentation in the Introduction.
Response: Thank you for your comment. We are, in fact, using a total of five themes for our analyses. The overall vulnerability refers to a set of analyses based on the theme-specific vulnerability indices taken together, to assess the relative importance of each theme in pandemic growth. This is exhibited clearly in Figure 2 and described accordingly in the methods section. We have now made this clear in the introduction section as well.
The chain rule for functionals with applications to functions of moments
The chain rule for derivatives of a function of a function is extended to a function of a statistical functional, and applied to obtain approximations to the cumulants, distribution and quantiles of functions of sample moments, and so to obtain third order confidence intervals and estimates of reduced bias for functions of moments. As an example we give the distribution of the standardized skewness for a normal sample to magnitude $O(n^{-2})$, where $n$ is the sample size.
Introduction
The derivatives introduced by von Mises (1947) and their subsequent versions have wide-ranging applications in statistics. Two prominent application areas are the construction of nonparametric confidence intervals and analytic bias reduction.
Suppose we want to construct a nonparametric confidence interval of level $\alpha + O(n^{-3/2})$ for a smooth functional $T(F)$ based on $\hat F$, say, the sample or empirical distribution for a sample of size $n$ from $F$. Withers (1983) showed that the limit can be given in terms of integrals of products of von Mises derivatives evaluated at $F$. First one Studentizes using the asymptotic variance of $n^{1/2}\{T(\hat F) - T(F)\}$, $a_{21}(F) = [1^2]_T$. Other application areas include machine learning, cusum statistics, methods of sieves and penalization, change point estimation, Hadamard differentiability, the change-of-variance function, measuring and testing dependence by correlation of distances, empirical finite-time ruin probabilities (Loisel et al., 2009), nonparametric maximum likelihood estimators (Nickl, 2007), estimating the mean dimensionality of analysis of variance decompositions, monotonicity of information in the Central Limit Theorem, generalizations of the Anderson-Darling statistic, M-estimation, U-statistics (Volodko, 2011), information criteria in model selection, goodness-of-fit tests for kernel regression, empirical Bayes estimation, and estimation of Kendall's tau.
The aim of this paper is to develop tools for extending the use of von Mises derivatives. In Section 2, we extend Faa di Bruno's chain rule for the derivative of a function of a univariate function to functions of a multivariate function and show how it can be applied to a function of a function of F , say T (F ) = g(S(F )), where g : R a → R is a smooth function and S(F ) a smooth functional.
In Section 3, we apply it to obtain derivatives and bracket functions for powers, products, quotients, standardized and Studentized functionals. Section 4 gives the general derivative for a moment and applies previous results to obtain expansions up to O(n −2 ) for the distribution and quantiles of functions of sample moments. As an example we give the distribution of the standardized skewness for a normal sample to magnitude O(n −2 ), where n is the sample size. Also we give confidence intervals and bias reduction methods for functions of moments.
Some of the results in the paper follow easily from Withers (1983, 1987); see Theorems 3.1 to 3.3. But these results are not the main contributions of this paper. The main contributions are: 1) the tools developed to compute von Mises type derivatives, see Theorems 2.1 and 2.2; 2) their applications to obtain bracket functions for general functionals. Fisher and Wishart gave unbiased estimates only for cumulants and their products: see, for example, Stuart and Ord (1987). Our two methods for bias reduction apply to any smooth functional, and our second estimate reduces to their results for the cases they consider. Also, our method does not need to use unbiased estimates of cumulants to reduce the bias of functions of cumulants.
Analogous to Fisher's tables for his k-statistics and their cumulants, Appendix B gives the terms needed for bias reduction of any smooth function of one or more moments.
Faa di Bruno's chain rule expresses the $r$th derivative of $g(s(x))$, for $r = 1, 2, \ldots$, in the form
$$ \frac{d^r}{dx^r}\, g(s(x)) \;=\; \sum_{h=1}^{r} g^{(h)}(s(x))\, B_{rh}(s_1, \ldots, s_r), \qquad s_i = s^{(i)}(x), \tag{2.1} $$
where $B_{rh}$ is the partial exponential Bell polynomial defined by the coefficients in the formal expansion in powers of real $\varepsilon$,
$$ \exp\Bigl( u \sum_{i \ge 1} s_i\, \varepsilon^i / i! \Bigr) \;=\; \sum_{r, h} B_{rh}(s_1, \ldots, s_r)\, u^h\, \varepsilon^r / r!. \tag{2.2} $$
Comtet (1974) shows they are given by
$$ B_{rh}(s_1, \ldots, s_r) \;=\; \sum \Bigl\{ r!\, \frac{(s_1/1!)^{n_1} \cdots (s_r/r!)^{n_r}}{n_1! \cdots n_r!} \;:\; n_1 + \cdots + n_r = h,\ 1 \cdot n_1 + \cdots + r \cdot n_r = r \Bigr\}. \tag{2.3} $$
Theorem 2.1 provides an extension of (2.1) to the case $s : \mathbb{R}^a \to \mathbb{R}^b$ and $g : \mathbb{R}^b \to \mathbb{R}$.
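For instance, with $r = 3$ the reconstruction above gives the familiar third-order expansion (a standard identity, stated here as a check):
$$ \frac{d^3}{dx^3}\, g(s(x)) \;=\; g^{(1)}(s)\, s_3 \;+\; 3\, g^{(2)}(s)\, s_1 s_2 \;+\; g^{(3)}(s)\, s_1^3, $$
corresponding to $B_{31} = s_3$, $B_{32} = 3 s_1 s_2$, and $B_{33} = s_1^3$.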
The extension of (2.1) is, for $t(x) = g(s(x))$,
$$ t_{\cdot j_1 \cdots j_r} \;=\; \sum_{(\Pi_1 \cdots \Pi_k)} g_{i_1 \cdots i_k}(s)\, s_{i_1 \cdot \Pi_1} \cdots s_{i_k \cdot \Pi_k}, \tag{2.4} $$
where the sum is over all partitions $(\Pi_1 \cdots \Pi_k)$ of $(j_1 \cdots j_r)$, $g_{i_1 \cdots i_k} = \partial^k g / \partial s_{i_1} \cdots \partial s_{i_k}$, and $s_{i \cdot \Pi}$ denotes the partial derivative of $s_i$ with respect to the $x_j$, $j \in \Pi$. In (2.4) and throughout, we use the tensor sum convention that repeated indices $i_1, i_2, \ldots$ are implicitly summed over their range ($1, \ldots, b$ in the case of (2.4)).
A form of the multivariate chain rule (2.4) was given in Withers (1984).
Let $\mathcal{F}$ be a convex set of probability measures on a measurable space $(\Omega, \mathcal{A})$. Suppose for $x \in \Omega$ that $\delta_x$ lies in $\mathcal{F}$, where $\delta_x$ is the measure putting mass 1 at $x$ and 0 elsewhere. Let $x$, $\{x_i\}$ be points in $\Omega$. Let $F$ lie in $\mathcal{F}$, and $T : \mathcal{F} \to \mathbb{R}$ be some functional. Define the $r$th derivative of $T(F)$ at $(x_1, \ldots, x_r)$, $T_{\cdot x_1 \cdots x_r}(F)$, as in Withers (1983). The only derivative we need give here is the first, also known as the influence function:
$$ T_{\cdot x}(F) \;=\; \lim_{\varepsilon \to 0^+} \varepsilon^{-1} \bigl\{ T\bigl((1-\varepsilon)F + \varepsilon\, \delta_x\bigr) - T(F) \bigr\}. $$
The results stated in Withers (1983) for $\Omega = \mathbb{R}^s$ generalize immediately to general $\Omega$. In particular, the rule (2.5) for the derivative of the $r$th derivative may be stated as
$$ T_{\cdot x_1 \cdots x_{r+1}} \;=\; \bigl(T_{\cdot x_1 \cdots x_r}\bigr)_{\cdot x_{r+1}} \;+\; \sum_{i=1}^{r} T_{\cdot x_1 \cdots x_{i-1}\, x_{r+1}\, x_{i+1} \cdots x_r}. \tag{2.5} $$
In this way higher derivatives may be calculated from successive first derivatives. For example, the second derivative of $\int_{-\infty}^{\infty} g(x)\, dF(x)$ is zero. Now suppose for some function $g : \mathbb{R}^b \to \mathbb{R}$,
$$ T(F) = g(S(F)), \qquad S(F) = \bigl(S_1(F), \ldots, S_b(F)\bigr). \tag{2.6} $$
Applying (2.5) gives
$$ T_{\cdot 1} = g_i\, S_{i \cdot 1}, \tag{2.7} $$
$$ T_{\cdot 12} = g_i\, S_{i \cdot 12} + g_{ij}\, S_{i \cdot 1} S_{j \cdot 2}, \tag{2.8} $$
$$ T_{\cdot 123} = g_i\, S_{i \cdot 123} + g_{ij}\bigl(S_{i \cdot 1} S_{j \cdot 23} + S_{i \cdot 2} S_{j \cdot 13} + S_{i \cdot 3} S_{j \cdot 12}\bigr) + g_{ijk}\, S_{i \cdot 1} S_{j \cdot 2} S_{k \cdot 3}, \tag{2.9} $$
and so on, where $T_{\cdot 1 \cdots r}$ abbreviates $T_{\cdot x_1 \cdots x_r}$ and $S_{a \cdot 12 \cdots}$ is the $r$th derivative of $S_a(F)$. Despite the fact that by (2.5) the derivative of a derivative is not a second derivative, the expressions (2.7)-(2.9) are precisely those for the derivatives of a function of a vector function of a vector given in (2.4). That is,
$$ T_{\cdot 1 \cdots r} \;=\; \sum_{(\Pi_1 \cdots \Pi_k)} g_{i_1 \cdots i_k}(S)\, S_{i_1 \cdot \Pi_1} \cdots S_{i_k \cdot \Pi_k}, \tag{2.10} $$
where $S = (S_1, S_2, \ldots)$, $S_i = S^{(i)}(F)$, and the sum is over all partitions $(\Pi_1 \cdots \Pi_k)$ of $(1 \cdots r)$. A proof that (2.10) holds for general $r$ follows using (2.5) and induction. The result can be formally stated as follows.
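As a check on the rule (2.5) as reconstructed above, take the linear functional $T(F) = \int g\, dF$. Since $T((1-\varepsilon)F + \varepsilon\delta_x) = (1-\varepsilon)T(F) + \varepsilon g(x)$,
$$ T_{\cdot x} = g(x) - T(F), \qquad (T_{\cdot x})_{\cdot y} = -T_{\cdot y}, \qquad T_{\cdot xy} = (T_{\cdot x})_{\cdot y} + T_{\cdot y} = 0, $$
which recovers the statement above that the second derivative of $\int g(x)\, dF(x)$ is zero.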
Theorem 2.2 If (2.6) holds, $T_{\cdot 1 \cdots r}$ is given by the chain rule (2.10), summing over all partitions $(\Pi_1 \cdots \Pi_k)$ of $(1 \cdots r)$ with $n_i$ $\Pi$'s of length $i$. Corollary 2.1 applies Theorem 2.2 to obtain the next two derivatives.
Some applications
Let $\hat F$ be the empirical distribution of a random sample of size $n$ from $F$. By Withers (1983), for a broad class of $T$, the cumulants of $T(\hat F)$ satisfy expansions of the form $\kappa_1(T(\hat F)) = T(F) + \sum_{i \ge 1} a_{1i}\, n^{-i}$ and $\kappa_r(T(\hat F)) = \sum_{i \ge r-1} a_{ri}\, n^{-i}$ for $r \ge 2$, where the coefficients $a_{ri}$ are integrals of products of the derivatives $T_{\cdot x_1 \cdots x_k}(F)$ with respect to $F_i = F(x_i)$; for example, $a_{21} = [1^2]_T = \int T_{\cdot x}(F)^2\, dF(x)$. We refer to the functionals $[\cdots]$ as bracket functions. They are the building blocks for the cumulant coefficients $a_{ri}$ and the cumulant coefficients of the Studentized statistics, and hence for the Edgeworth-Cornish-Fisher expansions of the standardized form of $T(\hat F)$ and its Studentized form. They are also the building blocks for obtaining nonparametric confidence intervals and estimates of low bias for $T(F)$.
As a start, we have these approximations to the bias, variance, and skewness of $T(\hat F)$: $\mathrm{bias} \approx a_{11} n^{-1}$, $\mathrm{variance} \approx a_{21} n^{-1}$, and $\mathrm{skewness} \approx a_{32}\, a_{21}^{-3/2}\, n^{-1/2}$. Theorem 3.1 lists the bracket functions needed for bias and bias reduction. Theorem 3.2 lists the bracket functions needed for Edgeworth-Cornish-Fisher expansions. Theorem 3.3 lists the bracket functions needed for nonparametric confidence intervals.
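As an informal numerical check of the leading variance term (not part of the original paper), the following Python sketch estimates $[1^2]_T$ for the variance functional by a finite-difference influence function and compares it with $n\,\mathrm{Var}\, T(\hat F)$ over Monte Carlo replications; the choice of functional, step size, and sample sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def T(sample):
    """The variance functional T(F) evaluated at an empirical distribution."""
    return np.mean((sample - sample.mean()) ** 2)

def influence(sample, x, eps=1e-4):
    """Finite-difference influence function T_{.x} at the empirical F:
    mix F_hat with a point mass at x and differentiate numerically."""
    m1 = (1 - eps) * sample.mean() + eps * x            # first moment of F_eps
    m2 = (1 - eps) * np.mean(sample ** 2) + eps * x**2  # second moment of F_eps
    return ((m2 - m1 ** 2) - T(sample)) / eps

n = 2000
base = rng.normal(0.0, 1.0, n)
a21 = np.mean([influence(base, x) ** 2 for x in base])   # [1^2]_T estimate

reps = np.array([T(rng.normal(0.0, 1.0, n)) for _ in range(2000)])
print(a21, n * reps.var())   # both approach mu_4 - mu_2^2 = 2 for N(0, 1)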
The regularity conditions needed for Theorems 3.1 and 3.2 are the same as those given in Withers (1983, 1987), so they are not stated here. Proof: Follows by Theorem 5.1 in Withers (1983).
By Withers (1989), the bracket functions in Theorem 3.3 are also the terms needed for the distribution and quantiles of the Studentized form of $T(\hat F)$. For convenience, set $T = T(F)$. Example 3.2 This example gives bracket functions for a product. Suppose that $T(F) = S_1(F) S_2(F)$. Then $T_{\cdot 1} = S_{1 \cdot 1} S_2 + S_1 S_{2 \cdot 1}$, and higher derivatives involve terms such as $S_{1 \cdot 1} S_{2 \cdot 23}$. The bracket functions $[\cdots]_{T_0}$ of the Studentized form, with $V(F) = a_{21}$ (and so also its cumulant coefficients), may be expressed in terms of the bracket functions $[\cdots]_T$. For details, see Appendix A of Withers (1989).
If one makes other assumptions, such as symmetry of $F$ or a parametric form for $F$, then $V(F) = a_{21}$ will generally take a simpler form. Similarly, in some circumstances one is interested in standardizing a functional in a different way, for example, replacing $\mu_r$ by $\mu_r / \mu_2^{r/2}$. The next example covers this situation for the special case of $T(F)$ a function of a univariate functional.
So, the cumulant coefficients $a_{21}$, $a_{11}$, $a_{32}$ needed for third order inference are given by (3.1)-(3.3) in terms of the bracket functions. Similarly, the cumulant coefficients $a_{22}$, $a_{43}$ needed for third order inference are given by (3.8) in terms of the bracket functions given in Appendix A. The bracket functions needed for (3.6) and (3.7), for estimates of $T(F)$ of bias $O(n^{-3})$, and further terms, are given in Appendix A.
If $g(s) = s^r$ then $g_i = (r)_i\, s^{r-i}$, where $(r)_i = r(r-1)\cdots(r-i+1)$. Putting $r = -1$ gives the derivatives of a quotient, since $(-1)_i = (-1)^i\, i!$.
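For example, combining $g(s) = s^{-1}$, so that $g_1 = -s^{-2}$ and $g_2 = 2 s^{-3}$, with (2.7)-(2.8) as reconstructed above gives the first two derivatives of the reciprocal functional $T(F) = 1/S(F)$:
$$ T_{\cdot 1} = -S^{-2}\, S_{\cdot 1}, \qquad T_{\cdot 12} = -S^{-2}\, S_{\cdot 12} + 2\, S^{-3}\, S_{\cdot 1}\, S_{\cdot 2}. $$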
Applications to moments
Let $\{\mu_r, \kappa_r\}$ be the central moments and cumulants of $F$. Set $\mu(F) = \mu$, and so on. Let $\hat F$ be the empirical distribution of a random sample of size $n$ from $F$. Many authors have studied problems of moments and cumulants: see, for example, Stuart and Ord (1987). Fisher's k-statistic $k_r$, the unbiased estimate of $\kappa_r$, is given there in Section 12.9 for $r \le 8$ in terms of $\{s_i = n \mu'_i(\hat F) = \sum_{j=1}^{n} X_j^i\}$. Fisher's expressions for unbiased estimates of the joint cumulants of k-statistics are given there in Section 12.16. Wishart's unbiased estimates of products of cumulants are given there in Section 12.16 in terms of symmetric functions, which can be converted to $\{s_i\}$ using Appendix Table 10.
Generally, one only wants approximations. (Indeed, without making parametric assumptions on $F$, only approximations are possible, except for estimating polynomials in moments.) One problem with these "traditional" approaches is that it is not easy to separate out the terms beyond the first in decreasing order of importance in order to make such approximations. As noted in Section 3, the present approach does not suffer from this disadvantage.
For S(F ) a polynomial in F of degree r (for example, µ ′ r , µ r or κ r ), derivatives of order beyond r vanish.
Some particular cases of the theorem are given by the following corollaries.
Example 4.3
This example is about standardized central moments. Suppose $T(F) = \nu_r$, where $\nu_r = \mu_r / \mu_2^{r/2}$. Then the $[\cdot]_T$ needed for third order inference and bias reduction are given by Example 3.4 with $S = \mu_2$ and $U = \mu_r$, with $[11]_S$, $[1^3]_S, \ldots$ given by Example 4.2. For example, suppose that $r = 3$ and $F$ is symmetric. Then $a_{ri} = 0$ for $r$ odd, and
$$ a_{21} = \nu_6 - 6\nu_4 + 9, $$
$$ a_{22} = -3(\nu_8 - 5\nu_6 + 7\nu_4 - 3) + 12\nu_6(2\nu_4 - 1)/4 + 2\nu_4(107\nu_4 - 489)/4 + 9(4\nu_4 - 11). $$
The biases of the estimators are computed by simulating ten thousand replications of samples of size $n$ from the following distributions: standard normal, Student's t with two degrees of freedom, Student's t with five degrees of freedom, Student's t with ten degrees of freedom, standard logistic, and standard Laplace. As expected, the bias-reduced estimators give substantially smaller biases for each $n$ and for each of the six distributions. The biases appear largest for the Student's t distribution with two degrees of freedom. The biases appear smallest for the normal distribution, the Student's t distribution with ten degrees of freedom, and the logistic distribution.
For example, the asymptotic variance of $n^{1/2}(T(\hat F) - T(F))$ follows from these bracket functions. For $r = 2$, it reduces to $T(F)^2(\mu_4 \mu_2^{-2} - 1)$, as given by Section 10.6 of Stuart and Ord (1987).
ASYMPTOMATIC COVID-19 CARRIERS: A QUANDARY FOR FAMILY PHYSICIAN
The current COVID-19 pandemic, which started in Wuhan, China, in December 2019, has infected millions globally so far. The disease is caused by an RNA virus called severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2). The common presenting symptoms include dry cough, fever, sore throat, and malaise. A proportion of patients remain asymptomatic throughout the infection. Likewise, pre-symptomatic and asymptomatic patients act as carriers that can infect others. These patients pose a management challenge for family physicians working at the primary care level. Herein we report an asymptomatic carrier of COVID-19 who presented to the primary care center for routine follow-up and was confirmed by RT-PCR testing.
Introduction:-
The ongoing COVID-19 pandemic caused by SARS-CoV-2 affects millions globally. As of December 1, 2020, worldwide confirmed infections are over 63 million in 218 countries, territories, and two international conveyances 1. The disease has brought unprecedented challenges to the healthcare sector. The clinical presentation of COVID-19 varies among individuals, ranging from pre-symptomatic to symptomatic, or asymptomatic altogether. Pre-symptomatic and asymptomatic patients are considered a main source of COVID-19 transmission. Not only are these individuals a silent source of disease transmission, but they are also a great challenge for healthcare workers in day-to-day clinical practice. Due to the asymptomatic nature of their disease, these individuals either do not seek healthcare services or, if they seek medical advice for another health-related issue, unintentionally fail to bring it to the notice of the treating physician unless asked. Herein, we report a case of COVID-19 that remained asymptomatic during the course of illness and consequently infected family members.
Case Presentation:
A 45-year-old male, a known patient of hypertension, presented to the primary care clinic for routine follow-up. There was no history of any other illness. An in-depth history of the patient revealed a friend's visit a week earlier; the friend later tested positive for COVID-19 by RT-PCR during screening. Clinical evaluation under the COVID-19 protocol was insignificant. The patient's vitals were: temperature 37.2 °C, pulse 81/min, BP 148/90 mmHg. Laboratory workup was within normal limits except for the lipid profile: total cholesterol 245 mg/dL, LDL 169 mg/dL, HDL 65 mg/dL. The requested chest x-ray showed no abnormal findings (Fig. 1). The patient was advised to provide a nasopharyngeal swab sample for RT-PCR and to isolate at home. Upon a positive RT-PCR result for the patient, his wife and children were asked to take the COVID-19 test. His wife's nasopharyngeal swab result confirmed SARS-CoV-2 infection, and the children's were negative at this point. Tele-advice was given to the patient to quarantine his family and to call if any symptoms arose. The patient was followed frequently via teleconsultation for the next ten days. Fortunately, the patient remained asymptomatic throughout the infection.
The publication of this case highlights the notion that asymptomatic and pre-symptomatic COVID-19 patients may pose a management quandary for primary care physicians. Therefore, physicians should focus on an in-depth interview of each visiting patient, specifically focusing on any history of contact, and follow the standard operating guidelines. One study found asymptomatic patients in .3% of the studied population, which is less than on the Diamond Princess cruise ship in Japan, where 50% of the patients were asymptomatic in a study by Kenji and colleagues. 6 Another study in a hemodialysis unit in Germany by Albalate et al. 7 found 40.5% of their patients were asymptomatic and stressed the need for early detection of asymptomatic patients. According to a recent review by Oran & Topol 8, the prevalence of asymptomatic carriers accounts for 40-45%, and they could transmit infection beyond 14 days. In light of the COVID-19 pandemic and the constraints on hospitals, most healthcare services are directed to primary care, which in turn encounters more over-the-desk consultations and an increased risk of exposure to pre- or asymptomatic patients.
Conclusion:-
In the current COVID-19 pandemic, family physicians should be vigilant and focus on taking an in-depth history, including contact history, from each patient, irrespective of the presenting symptoms, to identify and screen patients with asymptomatic COVID-19.
Ethical approval
Not applicable
The Role of Immune Cells and Cytokines in Intestinal Wound Healing
Intestinal wound healing is a complicated process that not only involves epithelial cells but also immune cells. In this brief review, we will focus on discussing the contribution and regulation of four major immune cell types (neutrophils, macrophages, regulatory T cells, and innate lymphoid cells) and four cytokines (interleukin-10, tumor necrosis factor alpha, interleukin-6, and interleukin-22) to the wound repair process in the gut. Better understanding of these immune factors will be important for developing novel targeted therapy.
Introduction
Wound healing in the intestine is a critical process affecting the prognosis of inflammatory bowel disease (IBD) [1]. Failure of healing can result in prolonged hospitalization, critical illness, and even death. Intestinal wound healing consists of three cellular events: restitution, proliferation, and differentiation of epithelial cells adjacent to the wounded area [2]. After intestinal tissue damage, the initial response is dominated by a proinflammatory type 1 immune response, whereas during the wound repair process a more anti-inflammatory type 2 immune response dominates to promote tissue regeneration and maintain tissue homeostasis [3]. A diverse array of evolutionarily ancient hematopoietic immune cell types, including lymphocytes, dendritic cells (DCs), monocytes, macrophages, and granulocytes, participates in this process. These immune cells secrete large amounts of cytokines and growth factors to signal to local tissue progenitors and stromal cells and promote wound repair. Here, we will discuss the contribution of four major immune cell types (neutrophils, macrophages, regulatory T cells (Tregs), and innate lymphoid cells (ILCs)) and four cytokines (interleukin-10 (IL-10), tumor necrosis factor alpha (TNF-α), IL-6, and IL-22) to the wound healing process in the intestine (Figure 1).
Figure 1.
Immune cells and cytokines contributing to intestinal wound repair. Four major immune cell types (neutrophils, macrophages, Tregs, and ILCs), four cytokines (IL-10, TNF-α, IL-6, and IL-22), and their corresponding receptors are involved in stem cell renewal and the wound healing process in the intestine.
Immune Cells
The innate immunity is our first line of nonspecific and rapid defense against pathogens, whereas adaptive immunity confers specific long-lasting memory. Innate immune cells include neutrophils, macrophages, and DCs. The roles of neutrophils and macrophages in wound repair are discussed in detail below. DCs are antigen presenting cells mediating T cell activation and adaptive immunity, thus playing key roles in the crosstalk between innate and adaptive immunity [4].
Neutrophils
Neutrophils are the first leukocytes to respond to sites of inflammation when the intestinal epithelial barrier is breached and the gut microbiota invade [5]. Mouse neutrophil migration to wounded tissues begins 4 h after injury and reaches peak numbers at 18 to 24 h [6]. Neutrophils are short-lived cells with a half-life in the circulation of approximately 1.5 h in mice and 19 h in humans [7,8]. However, proinflammatory cytokines such as TNF-α, IL-1β, and IL-6 increase the lifespan of neutrophils [9], which may contribute to the resolution of inflammation [7].
The Function of Neutrophils
Neutrophils can exert both destructive and protective effects in wound healing (Figure 2) [10]. Excess neutrophils in injured tissues impair healing and correlate with crypt destruction and ulceration [11,12]. During intestinal inflammation, neutrophils undergo transepithelial migration and secrete large amounts of matrix metalloproteinase-9 (MMP-9) to disrupt epithelial intercellular adhesions, which leads to enhanced epithelial injury [13]. Neutrophil-derived miR-23a- and miR-155-containing microparticles also promote the accumulation of double-strand breaks, which leads to impaired colonic healing [14].
Figure 2. Neutrophils are a double-edged sword in intestinal wound repair. Neutrophils damage the intestinal mucosa by secreting MMP-9 and miRNA-containing microparticles in the acute phase of injury, but they can also promote wound repair by killing bacteria, modulating HIF-1α/ITF signaling, and secreting pro-repair cytokines, chemokines, and growth factors.
As neutrophils have a key role in controlling microbial contamination and attracting monocytes and/or macrophages [15], individuals with too few neutrophils display not only higher risk for developing wound infections, but also delayed wound healing [16]. However, blocking neutrophil invasion or neutrophil depletion led to aggravated experimental colitis in animals, indicating a protective role of neutrophils in mucosal repair process [17].
Neutrophils kill bacteria through phagocytosis, neutrophil extracellular traps [18], antimicrobial peptides (including cathelicidins and β-defensins), microbicidal reactive oxygen species, and cytotoxic enzymes such as elastases, myeloperoxidase, and MMPs [19]. Infiltrating neutrophils deplete local oxygen to stabilize the transcription factor hypoxia-inducible factor (HIF)-1α in wounded human and murine intestinal mucosa and promote the resolution of inflammation. HIF-1α stabilization also protects barrier function through induction of intestinal trefoil factor (ITF) [20,21]. It has been shown that the probiotic Lactobacillus rhamnosus GG restored alcohol-reduced ITF in a HIF-dependent manner [22].
In addition to eliminating bacteria and adjusting the wound microenvironment through oxygen metabolism, neutrophils promote wound repair by secreting pro-repair cytokines, chemokines, and growth factors. After dextran sodium sulfate (DSS)-induced mucosal injury, neutrophil-derived transforming growth factor-beta (TGF-β) activates MEK1/2 signaling and induces the production of the EGF-like molecule amphiregulin (AREG) in intestinal epithelial cells, which protects intestinal epithelial barrier function and ameliorates DSS-induced colitis [23].
The Regulation of Neutrophils
Antibiotic treatment of dams reduced circulating and bone marrow neutrophils via reducing IL-17-producing cells in the intestine and their production of granulocyte colony-stimulating factor (G-CSF) [24]. In contrast to the mucosal protective effects of acute HIF-1α activation described above, we have previously shown that chronic activation of epithelial HIF-2α increased the proinflammatory response [25] and cancer development [26,27]. Among various mechanisms, HIF-2α can directly regulate the expression of the neutrophil chemokine CXCL1, which facilitates the recruitment of neutrophils in colitis-associated colon tumors [28]. Similarly, during intestinal inflammation, the intestinal epithelial production of the neutrophil chemotactic cytokine IL-8 (chemokine C-X-C motif ligand 8, CXCL8) is increased by the proinflammatory cytokines IL-1β, TNF-α, or interferon-γ (IFN-γ) [29]. A recent report also showed that IFN-γ induced expression of a neutrophil ligand, intercellular adhesion molecule-1 (ICAM-1), on the intestinal epithelium apical membrane, which led to enhanced epithelial permeability and facilitated neutrophil transepithelial migration [30]. Interestingly, the enhanced ICAM-1 and neutrophil binding results in decreased neutrophil apoptosis, activation of Akt and β-catenin signaling, increased epithelial cell proliferation, and wound repair [31]. IL-23 signaling is also required for maximal neutrophil recruitment after DSS treatment [32].
Macrophages
The intestine contains the largest pool of macrophages in the body [33]. It was long considered that, unlike in other tissues, embryonic-derived macrophages populate the colon only during the neonatal stage, and that Ly6C(hi) circulating monocytes, recruited and differentiated locally into anti-inflammatory macrophages, gradually replace embryonic macrophages at the time of weaning. However, a recent study found that there are three subpopulations of macrophages in the mouse gut: Tim-4+CD4+ macrophages are locally maintained, whereas Tim4-CD4+ and Tim4-CD4− macrophages are replenished from blood monocytes [34]. Another study showed that a population of self-maintaining macrophages arising from embryonic precursors and bone marrow-derived monocytes persists in the intestine throughout adulthood; deficiency of this population leads to vascular leakage and reduced intestinal secretion and motility [35]. In mice, colonic macrophages are identified by a characteristic marker expression profile [36,37]. The lifespan of macrophages is at least 1-2 weeks [36,38].
The Function of Macrophages
Defects in macrophage differentiation may contribute to increased susceptibility to IBD [39]. Compared with blood monocytes, human intestinal macrophages display downregulated cytokine production upon stimulation with bacterial products but preserve phagocytic and bactericidal activity [40]. Thus, intestinal macrophages (CX3CR1hi) normally possess an anti-inflammatory phenotype during homeostasis via constitutive production of IL-10 [41], whereas Toll-like receptor-responsive proinflammatory macrophages accumulate in the colon and may contribute to disease severity and progression in IBD [37]. However, colonic anti-inflammatory macrophages are still present after injury and promote tissue repair [42]. Studies in mice lacking macrophages suggested that macrophages are necessary for proper epithelial regeneration after DSS injury [43]. Furthermore, Trem2-expressing macrophages are required for efficient mucosal regeneration after colonic biopsy injury [44]. In addition, macrophage-secreted WNT ligands enhance the intestinal regeneration response to radiation [45]. Transfer of anti-inflammatory macrophages accelerates mucosal repair in 2,4,6-trinitrobenzenesulfonic acid (TNBS)-treated mice through activation of the Wnt signaling pathway [46].
The Regulation of Macrophages
Macrophage-dependent wound repair in response to DSS-induced colonic injury is markedly diminished in germ-free mice, indicating an essential role of the microbiota in macrophage-mediated wound healing [43]. Commensal microbiota-derived local signals in the intestine are essential for recruiting macrophages from circulating monocytes [33]. Breeding mice in germ-free conditions had a detrimental effect on the number of mature macrophages populating the adult colon compared to mice housed in conventional conditions.
However, the small intestine macrophages are regulated by dietary amino acids but not microbiota [47]. Mice fed a protein-free diet had significantly lower levels of IL-10-producing macrophages but not IL-10-producing CD4+ T cells in their small intestine, compared with control-diet fed mice [47]. Depletion of commensal bacteria did not affect numbers of mature macrophages in the small intestine, spleen, or bone marrow, indicating that the recruitment of macrophages to the small intestine is regulated independently of the microbiota [47]. Depletion of microbiota also has no effect on the repair of small intestinal injury [48].
Regulatory T Cells (Treg)
Treg cells are a subset of CD4+ T cells that can inhibit T helper (Th) cells through the release of anti-inflammatory cytokines, such as IL-10 and TGF-β, or by direct contact with Th cells [49]. Th1 cells are induced by IL-12 and secrete IFN-γ, whereas Th2 cells are induced by IL-4 and release IL-5 and IL-13 [50]. Crohn's disease (CD) has long been considered to be driven by a Th1 response, whereas the notion that UC is mediated by a Th2 response is still controversial [50]. The two best-characterized subsets of Treg cells that suppress the immune response are forkhead box P3-positive (Foxp3+) Treg cells and Foxp3-negative type 1 Treg (Tr1) cells [51]. Foxp3+ Tregs are mainly derived from the thymus, and some travel to the intestine, where they inhibit inappropriate immune reactions. Tregs are significantly reduced in the peripheral blood and colonic mucosa of IBD patients [52].
The Function of Treg
Foxp3+ Tregs promote the healing of UC through endogenous vascular endothelial growth factor receptor 1 tyrosine kinase (VEGFR1-TK) signaling, as mucosal repair of DSS-induced colitis is delayed in VEGFR1-TK knockout mice [53]. Tr1 cells, in addition to secreting the immunosuppressive cytokines IL-10 and TGF-β [54], secrete IL-22 to regulate repair of the epithelium and protect the barrier function of human intestinal epithelial cells [55]. It has been shown recently that ovalbumin-specific Tr1-based therapy was well tolerated in patients with refractory CD and had dose-related efficacy [56].
The Regulation of Treg
The microbiota affects the frequency and function of mucosal Tregs. The frequency of Tregs increased in the colon and the lamina propria of the small intestine after weaning, suggesting a role of the microbiota [57]. Post-weaning accumulation of Tregs was impaired in germ-free or antibiotic-treated mice compared with conventionally housed mice. In addition, germ-free mice fed fecal suspensions from conventionally housed mice showed a substantial increase in Treg levels. Indigenous Clostridium species were reported to play a central role in the induction of IL-10-producing Foxp3+ Tregs in the colon and small intestine in mice [57]. Additionally, it appears that Clostridia have a direct role in modulating immune cell populations in the gut, as many Clostridium-colonized mice were observed to have Tregs negative for Helios, a transcription factor reported to be expressed in thymus-derived "natural" Tregs. Therefore, the absence of Helios suggests that the increasing levels of Tregs in the colon may be induced Tregs (iTregs). Indeed, the culture of splenic CD4+ cells in the presence of supernatant of intestinal epithelial cells from Clostridium-colonized mice induced the differentiation of FoxP3-expressing cells. Furthermore, this effect was diminished by a neutralizing antibody against TGF-β. Interestingly, it appears that iTregs also play a role in maintaining gut homeostasis, as demonstrated in a DSS-treatment model of colitis. Symptoms of colitis, such as weight loss, rectal bleeding, colon shortening, edema, mucosal erosion, crypt loss, and cellular infiltration, were all reduced in Clostridium-colonized mice treated with DSS compared to controls.
Different from microbiota-induced Treg cells, dietary antigens from solid food induce the main part of the short-lived small intestinal peripheral Treg cells [58].
Innate Lymphoid Cells (ILCs)
ILCs are mainly tissue-resident lymphocytes that lack the adaptive antigen receptors expressed on T cells and B cells (Figure 3) [59]. They are generally classified into three subgroups according to their cytokine and transcription factor expression, which parallel the adaptive CD4+ Th cell subsets: group 1 (ILC1), group 2 (ILC2), and group 3 (ILC3) [59-61]. ILC1s are dependent on the T-box transcription factor T-bet for their development and function, and they produce IFN-γ and TNF-α [62]. ILC2s are dependent on GATA binding protein 3 (GATA3) and RAR-related orphan receptor alpha (RORα) [63], and produce type 2 cytokines, including IL-4, IL-5, IL-9, and IL-13 [64]. ILC3s are dependent on the transcription factor RAR-related orphan receptor gamma (RORγt) and can produce IL-17 and/or IL-22 [59,65]. ILC1s react to intracellular pathogens, such as viruses, and to tumors; ILC2s respond to large extracellular parasites and allergens; and ILC3s combat extracellular microbes, such as bacteria and fungi [59]. In addition, a recent report identified a regulatory subpopulation of ILCs (called ILCregs) that exists in the mouse and human gut, with Id3 as a fate-decision marker for their development [66].
Compared with these ILC subsets, conventional natural killer (NK) cells have a similar developmental process and quick effector functions; thus, NK cells are defined as cytotoxic ILCs, which parallel adaptive CD8+ cytotoxic T lymphocytes [61]. Mature NK cells are dependent on the transcription factor eomesodermin (Eomes) and produce perforins, IFN-γ, and granzymes [67]. NK cells control certain viruses, such as herpesviruses and cytomegalovirus, and tumors [68].
The Function of ILCs
ILCs maintain tissue homeostasis but also contribute to inflammatory diseases including IBD [69]. ILCs promote the resolution of inflammation and tissue repair [70].
ILC1s have a crucial role in promoting innate immunity to intracellular pathogens, such as T. gondii, by secreting TNF-α and IFN-γ to recruit inflammatory myeloid cells [70]. Intraepithelial ILC1s expand in CD patients and depletion of intraepithelial ILC1s reduced proximal colon inflammation in the anti-CD40-induced colitis model in mice [71].
ILC2s rapidly respond to helminth parasite infection [70]. ILC2s are increased in patients with ulcerative colitis (UC) and play an important role in the tissue reparative response [72]. ILC2s secreted IL-13 binds with its receptor IL-13Rα1 and activates transcription factor Foxp1 to promote β-catenin pathway-dependent intestinal stem cell renewal [73]. In addition, IL-33 can stimulate ILC2s to produce AREG in the colon and promote intestinal epithelial cell regeneration in a model of DSS-induced colitis [74].
ILC3s promote innate immunity to extracellular bacteria and fungi, such as Citrobacter rodentium and Candida albicans [70]. ILC3s are decreased in inflamed tissue in both CD and UC patients [72] and are required for tissue repair and regeneration in the inflamed intestine [75]. Adherent CD-associated microbiota induces the CX3CR1+ mononuclear phagocyte-derived TNF-like ligand 1A (TL1A) [76], which stimulates the production of ILC3-derived IL-22 and increases mucosal healing in human IBD [77]. ILC3s are the main source of intestinal IL-22, and the symbiotic commensal microbiota represses this IL-22 production by inducing epithelial expression of IL-25 [75]. In graft-versus-host disease, radioresistant ILC3-produced IL-22 protects intestinal stem cells from immune-mediated tissue damage [78]. Mechanistically, IL-22 activates signal transducer and activator of transcription 3 (STAT3) signaling to increase the antiapoptotic proliferative response in Lgr5+ stem cells, promoting epithelial regeneration and reducing intestinal pathology and mortality from graft-versus-host disease [79]. Moreover, dietary aryl hydrocarbon receptor (Ahr) ligands, such as glucosinolates, promote IL-22 production from ILC3s and protect intestinal stem cells against genotoxic stress [80]. In addition, ILC3-produced IL-22 also protects against intestinal damage induced by infection and chemotherapy [81,82]. Apart from IL-22, ILC3-secreted IL-17 and IFN-γ depend on IL-23 stimulation and are required in Helicobacter hepaticus-mediated innate colitis [83].
ILCregs suppress the activation of ILC1s and ILC3s via secretion of IL-10 and promote the resolution of innate intestinal inflammation induced by several inflammatory stimuli, including DSS, anti-CD40 antibody, Salmonella typhimurium, and Citrobacter rodentium, in Rag1−/− mice [59].
NK cells with cytolytic potential accumulate in the colonic lamina propria of individuals with active IBD [84], and thiopurines can normalize NK cell numbers by inhibiting Rac1 activity to induce apoptosis [85]. Activated NK cells produce proinflammatory cytokines such as IFN-γ and TNF-α to augment CD4+ T cell proliferation and Th17 differentiation, which contributes to an exacerbated inflammatory response [86].
The Regulation of ILCs
Commensal microbiota regulates transcriptional gene expression and epigenetic regulation in ILCs [87]. Integration of RNA-seq and ATAC-seq data identified c-MAF and BCL6 as regulators of the plasticity between ILC1s and ILC3s in the intestine [69]. Moreover, Ahr signaling is critical in regulating the intestinal ILC2-ILC3 balance: Ahr knockout mice show altered gut ILC2 transcription with increased expression of anti-helminth cytokines such as IL-5 and IL-13, whereas Ahr activation increases gut ILC3s to better control Citrobacter rodentium infection [88]. Furthermore, ILC1s and ILC3s undergo retinoic acid-dependent upregulation of the gut homing receptors CCR9 and α4β7, while ILC2s acquire these receptors during development in the bone marrow [89]. These gut homing receptors are also critical for optimal control of Citrobacter rodentium infection. For ILCregs, autocrine TGF-β1 is critical for their expansion during inflammation [66]. NK cells are regulated by various cytokines, such as type I IFN, IL-12, IL-18, IL-15, IL-2, and TGF-β1 [90].
IL-10
3.1.1. The Source of IL-10
IL-10 production in the colon comes mainly from lamina propria macrophages and regulatory T cells [91]. Macrophage-specific knockout of IL-10 had a detrimental effect on intestinal wound healing in a colon biopsy-induced injury model in vivo, indicating that macrophages are an important source of IL-10 [92]. In addition, intestinal epithelial cells and Th1 cells are also able to produce IL-10 [93,94].
The Function of IL-10
Analysis of biopsy-induced murine colonic wounds revealed an increase in IL-10 as early as 24 h post-injury, suggesting an upregulation during intestinal wound repair [92]. Exposure of intestinal epithelial cells to recombinant IL-10 was demonstrated to enhance wound repair in vitro, whereas knockdown of the IL-10 receptor abolished this effect. IL-10 promotes epithelial activation of cAMP response element-binding protein (CREB) and secretion of the pro-repair WNT1-inducible signaling protein 1.
In a mouse model of small intestine epithelial injury induced by Indomethacin, MHC-II + CD64 + Ly6C + macrophage-derived IL-10 produced during the acute phase of injury was demonstrated to be critical for wound recovery [48].
The Regulation of IL-10
Macrophage- and regulatory T cell-derived IL-10 production was demonstrated to be microbiota-dependent in the colon, as germ-free mice responded to LPS stimulation by producing more TNF-α and IL-6 but less IL-10 [91]. In Th1 cells, microbiota-derived short-chain fatty acids promote IL-10 production via G-protein coupled receptor 43/B lymphocyte-induced maturation protein 1 signaling [94].
TNF-α
TNF-α, also known as TNF, was first identified in 1975 as a tumoricidal protein that mediates endotoxin-induced hemorrhagic necrosis in sarcomas and other transplanted tumors [95]. Human TNF was cloned later, in 1984 [96].
The Source of TNF-α
TNF is produced predominantly by activated macrophages and T lymphocytes as a plasma membrane-bound 26 kDa precursor glycoprotein. TNF-α converting enzyme (TACE; also known as ADAM-17) cleaves the extracellular domain of the TNF-α precursor and releases a soluble 17 kDa form [97]. In addition to the macrophage and T lymphocyte lineages, a wide range of cells can produce TNF-α, including mast cells, B lymphocytes, natural killer (NK) cells, neutrophils, endothelial cells, intestinal epithelial cells (IECs), smooth and cardiac muscle cells, fibroblasts, and osteoclasts [98,99]. TNF-α is not usually detectable in healthy individuals, but elevated serum and tissue levels are found in inflammatory conditions [100], and serum levels correlate with the severity of infections [101,102].
The Function of TNF-α
TNF-α is a key regulator of inflammation and has been implicated in many human diseases, including psoriasis, rheumatoid arthritis, and IBD [103]. Anti-TNF-α therapy is the best available therapeutic option to induce mucosal repair and clinical remission in IBD patients [104]. However, a recent report showed that TNF-α blockade may cause dysbiosis and an increased Th17 cell population in the colon of healthy mice [104]. Another report demonstrated that TNF-α promotes colonic mucosal repair through induction of the platelet-activating factor receptor (PAFR) via NF-κB signaling in the intestinal epithelium. Increased PAFR expression leads to activation of the epidermal growth factor receptor and Src, as well as increased Rac1 and FAK signaling, to promote cellular migration and wound closure. Consistently, TNF-α neutralization ablates PAFR upregulation and impairs intestinal wound repair [105]. In addition, bone marrow-derived TNF-α binds to epithelial TNF receptors (TNFRs) and activates epithelial β-catenin signaling, promotes intestinal stem cell proliferation and IEC expansion, and supports mucosal healing in chronic colitis patients [98]. This was shown by enhanced apoptosis, reduced IEC proliferation, and decreased Wnt signaling upon stimulation with anti-CD3 mAb in TNF-deficient (Tnf−/−) mice [76]. TNFR2 is increased in epithelial cells from IBD patients, and disruption of TNFR2 in naïve CD8+ T cells increased the severity of colitis in Rag2−/− mice [106,107]. TNF-induced intestinal NF-κB activation is also crucial for the prevention of local intestinal injury following ischemia-reperfusion [108].
The Regulation of TNF-α
At the transcriptional level, the TNF gene is induced in response to a diversity of specific stimuli, including inflammation, infection, and stress [109]. Bacterial endotoxin specifically activates TNF-α gene expression [110]. Analysis of the human TNF-α promoter indicated that transcription factors such as Ets and c-Jun are involved in the transcriptional regulation of TNF-α [111]. Previously, we have also shown that HIF-2α is a positive regulator of TNF-α production in the intestinal epithelium [25].
IL-6
3.3.1. The Source of IL-6
IL-6 is mainly produced by lymphocytes, myeloid cells, fibroblasts, and epithelial cells [114]. Enterocyte IL-6 production is increased during inflammatory conditions such as sepsis and endotoxemia [115].
The Function of IL-6
IL-6 and its soluble receptor s-IL6R are highly elevated in the colonic mucosa of IBD [116]. The single nucleotide polymorphism rs2228145 in IL-6R is associated with increased levels of s-IL6R, as well as with reduced IL-6R signaling and reduced risk of IBD [117]. A randomized clinical trial in 36 patients with active CD showed that 80% of the patients given a human anti-IL-6R monoclonal antibody biweekly at a dose of 8 mg/kg had a clinical response, compared with only 31% of placebo-injected patients, indicating that targeting IL-6 signaling may serve as a promising strategy for CD [118].
IL-6 promotes IEC proliferation and regeneration, and IL-6-deficient mice exhibit elevated IEC apoptosis following exposure to DSS [119]. The proliferative and antiapoptotic effects of IL-6 are mainly mediated by the transcription factor STAT3, whose IEC-specific ablation leads to more severe DSS-induced colitis compared with wild-type mice [98]. In addition, the IL-6 co-receptor gp130 stimulates intestinal epithelial cell proliferation through Yes-associated protein (YAP) and Notch signaling, which leads to aberrant differentiation and promotion of mucosal regeneration [120]. Activation of YAP [121] and Notch [122] is required for mucosal regeneration after DSS challenge.
IL-22
IL-22, a cytokine of the IL-10 superfamily, was originally identified as an IL-9-induced gene in mouse T cells and was named IL-10-related T cell-derived inducible factor, as it shares 22% amino acid identity with IL-10 [127]. IL-22 binds to a functional receptor complex composed of two chains: IL-22 receptor 1 (IL-22R1) and IL-10R2 [128].
The Source of IL-22
IL-22 is produced by many different cell types, such as activated T cells, NK cells, and CD11c+ cells [129][130][131]. As mentioned above, ILC3s are the main source of IL-22 in the intestine [75].
The Function of IL-22
IL-22 is increased in the intestine in patients with IBD as well as in murine DSS colitis [132][133][134][135]. Although IL-22 increases the gene expression of proinflammatory cytokines such as IL-8 and TNF-α in intestinal epithelial cells, IL-22 promotes wound healing of the intestinal epithelium in vitro through stimulation of cell migration via phosphatidylinositol 3-kinase signaling and beta-defensin-2 expression [135]. In addition, as mentioned above, IL-22 protects intestinal stem cells in graft-versus-host disease via activation of STAT3 signaling and protects against genotoxic stress [78][79][80]. IL-22 knockout mice showed delayed recovery from biopsy forceps- and DSS-induced mucosal injury [129,130]. Owing to decreased production of antimicrobial proteins such as RegIIIβ and RegIIIγ, IL-22 knockout mice have increased susceptibility to Citrobacter rodentium infection [134]. A recent study showed that IL-22 induces expression of the H19 long noncoding RNA in epithelial cells to promote epithelial proliferation and mucosal regeneration [136]. Exogenous IL-22 also mitigates Citrobacter rodentium infection-mediated colitis in mice depleted of CX3CR1+ mononuclear phagocytes [77]. Local gene delivery of IL-22 into the colon promotes recovery from acute intestinal injury via STAT3-mediated mucus production [137].
The Regulation of IL-22
Human intestinal ILC3 production of IL-22 is regulated by microbially stimulated IL-23 and IL-1β from CX3CR1+ mononuclear phagocytes [77]. IL-22 can be neutralized by its soluble receptor IL-22 binding protein (IL-22BP; also known as IL-22RA2), which specifically binds IL-22 and prevents its binding to membrane-bound IL-22R1 [138]. IL-22 is most highly expressed at the peak of DSS- and biopsy-induced intestinal tissue damage, whereas IL-22BP has its lowest expression at this time [139]. Ahr activation also increases IL-22 production to protect against trinitrobenzene sulfonic acid-induced colitis [140]. A recent report showed that receptor-interacting protein kinase 3 promotes intestinal tissue repair after DSS colitis via induction of IL-22 expression in an IL-23- and IL-1β-dependent manner [141].
Concluding Remarks and Perspectives
In conclusion, inflammatory cells and cytokines play critical roles in intestinal tissue repair. The introduction of anti-TNF-α antibodies has already been a great advance in targeted IBD therapy, and targeting the cells and cytokines discussed above may yield novel therapies for IBD. A recent phase II clinical trial showed that a human blocking antibody against the T cell and NK cell receptor natural killer group 2D induced significant clinical remission in active CD patients after 12 weeks [142].
This review covered only some of the most important immune cell types and cytokines; others may also play an important role in wound healing. For example, IL-36γ is induced during experimental colitis and human IBD in a microbiota-dependent manner [143], and IL-36R-deficient mice showed delayed recovery after DSS-induced intestinal injury with a profound IL-22 reduction and impaired neutrophil accumulation. In addition, we did not provide much detail about the interactions between different cell types; for example, inflammatory monocytes may inhibit neutrophil activation in a prostaglandin E2-dependent manner [144]. The bidirectional interactions between macrophages and lymphocytes were also previously reviewed [145].
As discussed above, the microbiota is essential in regulating neutrophil recruitment, colonic macrophage development, Treg function, and gene expression in ILCs (Figure 4). Thus, it is also critical to investigate the microbiota and other emerging factors, such as nutrients, to develop novel targeted therapies that promote intestinal repair.
Conflicts of Interest:
The authors declare no conflicts of interest.
Data Driven Estimation of Stochastic Switched Linear Systems of Unknown Order
We address the problem of learning the parameters of a mean square stable switched linear system (SLS) with unknown latent space dimension, or "order", from its noisy input-output data. In particular, we focus on learning a good lower order approximation of the underlying model allowed by finite data. Motivated by subspace-based algorithms in system theory, we construct a Hankel-like matrix from finite noisy data using ordinary least squares. Such a formulation circumvents the non-convexities that arise in system identification and allows for accurate estimation of the underlying SLS as the data size increases. Since the model order is unknown, the key idea of our approach is model order selection based on purely data dependent quantities. We construct Hankel-like matrices from data, of dimension obtained from the order selection procedure. By exploiting tools from the theory of model reduction for SLS, we obtain suitable approximations via singular value decomposition (SVD) and show that the system parameter estimates are close to a balanced truncated realization of the underlying system with high probability.
I. INTRODUCTION
Finite time system identification is an important problem in the context of control theory, time series analysis and robotics, among many others. In this work, we focus on parameter estimation and model approximation of switched linear systems (SLS), which are described by
$$x_{k+1} = A_{\theta_k} x_k + B u_k + \eta_k, \qquad y_k = C x_k + w_k. \tag{1}$$
Here, at time $k$, $x_k \in \mathbb{R}^n$, $y_k \in \mathbb{R}^p$, $u_k \in \mathbb{R}^m$ are the latent state, output and input respectively; $\theta_k \in \{1, 2, \ldots, s\}$ is the discrete state, mode or switch, with $\eta_k, w_k$ being the process and output noise respectively. We assume that $\{\theta_k\}_{k=1}^{\infty}$ is an i.i.d. process with $P(\theta_k = i) = p_i$. The goal is to learn $(C, \{p_i, A_i\}_{i=1}^{s}, B)$ from observed data $\{y_k, u_k, \theta_k\}_{k=1}^{N}$ when the latent space dimension $n$ is unknown. In many cases $n > p, m$, and it becomes difficult to find suitable parametrizations that allow for provably efficient learning. For the special case of LTI systems, i.e., $s = 1$, these issues were discussed in detail in [1], where it was suggested that one can learn lower order approximations of the original system from finite noisy data. To motivate the study of such approximations, consider the following example: assume that $na \ll 1$. This SLS is of order $n$, which may be large. However, it can be suitably modeled by a lower dimensional SLS (the "effective" order is $\le 2$, which can be checked by a simple computation of $\{C A_i A_j B\}_{i,j=1}^{2}$). This example suggests that in many cases the true order is not important; rather, a lower order model exists that approximates the true system well. Furthermore, finite noisy data limits the complexity of models that can be effectively learned (see the discussion in [2]). The existence of an "effective" lower order and finite data length motivate the question of finding "good" lower dimensional approximations of the underlying model from finite noisy data.
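To make the data-generating process in Eq. (1) concrete, the following minimal Python sketch simulates one rollout of a small SLS under our reconstruction of Eq. (1); the mode matrices, switch probabilities, and noise scales are arbitrary illustrative values, not the example above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-mode SLS with n = 3, m = p = 1 (placeholder values).
A = [0.5 * np.eye(3), np.diag([0.4, 0.3, 0.2])]   # mode matrices A_1, A_2
B = np.array([[1.0], [0.5], [0.25]])
C = np.array([[1.0, 0.0, 0.0]])
p_modes = [0.7, 0.3]                              # P(theta_k = i), i.i.d. switches

def rollout(N, sigma_eta=0.01, sigma_w=0.01):
    """Simulate y_k = C x_k + w_k with x_{k+1} = A_{theta_k} x_k + B u_k + eta_k."""
    x = np.zeros(3)
    thetas, ys, us = [], [], []
    for _ in range(N):
        theta = rng.choice(2, p=p_modes)          # observed discrete switch
        u = rng.standard_normal(1)                # white noise input
        y = C @ x + sigma_w * rng.standard_normal(1)
        x = A[theta] @ x + B @ u + sigma_eta * rng.standard_normal(3)
        thetas.append(theta); ys.append(y[0]); us.append(u[0])
    return np.array(thetas), np.array(ys), np.array(us)

thetas, ys, us = rollout(50)                      # one sample trajectory
```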
A. Related Work
The study of switched linear systems has attracted a lot of attention; see [3], [4], [5] to name a few. These systems have been used in neuroscience to model neuron firing [6], in modeling stock indices [7], and more generally to approximate nonlinear processes [8] with reasonable accuracy. The problem of realization, i.e., whether there exists an SLS that satisfies the given data (in the noiseless case), has been studied in [9], [10], [11] and references therein. Specifically, [9] provides a purely algebraic view of realization where the switching is a function of discrete input symbols. The authors in [10] consider the case when discrete events are external inputs and there are linear reset maps that reset the state after switching. Finally, the theory of realization for generalized bilinear systems is studied in [11] and typically relies on the finite rank property of a certain Hankel-like matrix. Identification of a special class of SLS known as switched ARX systems has been widely studied [8], [12], [13], [14], [15], [16]. Under the assumption that an upper bound on the model order is known, an algebro-geometric approach to system identification has been proposed for the case where $\{\theta_k\}_{k=1}^{\infty}$ is not observed. The algorithms there typically involve clustering and as a result suffer sample complexity exponential in the order [17]. From a system theory perspective, model approximation of SLS has been very well studied [18], [19], [20]. These methods mimic balanced truncation-like methods for model reduction and provide error guarantees between the original and reduced system. Despite substantial work on realization theory, identification and model reduction of SLS, there is little work on purely data driven approaches to model approximation. More recently, [1], [21] study data driven approaches to learning reduced order approximations of the original model; however, [21] does not assume any noise in the data generating process. This work is an extension of the work in [1] to the case of SLS.
B. Contributions
In our work we study the case when $\{y_k, u_k, \theta_k\}_{k=1}^{N}$ is observed and we would like to learn $(C, \{A_i, p_i\}_{i=1}^{s}, B)$ from the observed data. Such a case is relevant when the switches are exogenous but not a control input; for example, traffic congestion (continuous state) as a function of weather conditions (discrete switches: snow, heavy rain, etc.). The contributions of this paper can be summarized as follows:
• We extend the techniques introduced in [1] to SLS identification. Specifically, central to our approach is finding a system Hankel-like matrix for the SLS. We show that, similar to LTI systems, an appropriate SVD of the doubly infinite system Hankel matrix gives the individual system parameters (up to similarity transformation).
• Due to the presence of noisy finite data, we provide a $p\big(\frac{s^{N}-1}{s-1}\big) \times m\big(\frac{s^{N}-1}{s-1}\big)$ dimensional estimate of the doubly infinite system Hankel matrix. We show that if we let $N$ grow carefully with the number of samples, we can obtain an accurate (with PAC guarantees) estimate of the system Hankel matrix.
• By leveraging tools from the theory of model order reduction of SLS, we provide an algorithm to obtain "good" lower order approximations of the original system directly from data. To this end, we also provide a model order selection rule to choose the best approximation of the underlying SLS that can be learned from data with high probability. The model selection rule essentially involves a hard singular value thresholding and can be shown to be minimax optimal.
It is clear that for any sequence of observed switches $l_1^N$, the corresponding output $y_N$ can be expressed in terms of the system parameters. Finally, a measure of distance between two switched linear systems with probabilistic switches is the stochastic $L_2$ gain. The first question we pose is whether there exists a Hankel matrix based representation for SLS, as in the case of LTI systems, that captures important properties of the system — in particular, whether it is possible to find the system parameters from input-output data in the ideal case of infinite noiseless data. We will now construct a system Hankel-like matrix that indeed answers this question positively. First, we order all switch sequences lexicographically; this can be done, for example, as in [23]. To summarize, every switch sequence is mapped to a unique row (and column) block index of $H^{(N)}$. Note that if $s \to 1$, i.e., an LTI system, then $H^{(N)}$ becomes a $p(N+1) \times m(N+1)$ matrix — the standard Hankel matrix for LTI systems. Let $H^{(\infty)} = \lim_{N \to \infty} H^{(N)}$, i.e., its doubly infinite extension.
Proof of Proposition 1. Note that each of the submatrices in $\tilde{O}$ ends in $A_k$. Since the occurrence of a switch is independent, we get the desiderata by noting that $\tilde{O} = \sqrt{p_k}\, O A_k$.
Proposition 1 indicates that $H^{(\infty)}$ plays, for SLS, the role that the traditional Hankel matrix plays in LTI systems theory. Similar subspace based methods for system identification have been discovered in mildly different forms for HMM parameter recovery in [23], [24] and for weighted automaton parameter identification in [25].
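To see how such a Hankel-like matrix can be assembled, the sketch below builds $H^{(N)}$ from known parameters for a small system, enumerating switch sequences of length $0$ through $N$ in lexicographic order; the weighting of each block by the square root of the concatenated-sequence probability is our reading of the construction above, so treat both the helper names and that convention as assumptions.

```python
import itertools
import numpy as np

def sequences(s, N):
    """All switch sequences of length 0..N over modes {0,...,s-1}, lexicographic.
    There are (s**(N+1) - 1) / (s - 1) of them, matching the block dimension."""
    seqs = []
    for k in range(N + 1):
        seqs += [tuple(q) for q in itertools.product(range(s), repeat=k)]
    return seqs

def hankel_like(C, A, B, p, N):
    """Block (l, m) = sqrt(p_{m:l}) * C * A_{l...} * A_{m...} * B (assumed convention)."""
    idx = sequences(len(A), N)
    blocks = []
    for l in idx:                      # row index: "future" (observability) sequence
        row = []
        for m in idx:                  # column index: "past" (controllability) sequence
            seq = m + l                # the past sequence acts on B first
            prob = float(np.prod([p[i] for i in seq]))  # i.i.d. switches
            M = B
            for i in seq:
                M = A[i] @ M
            row.append(np.sqrt(prob) * (C @ M))
        blocks.append(np.hstack(row))
    return np.vstack(blocks)
```

For $s = 1$ this reduces to the standard $p(N+1) \times m(N+1)$ LTI Hankel matrix, as noted above.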
Unfortunately, we do not have access to $H^{(\infty)}$; rather, we only possess finite noisy data and consequently need to obtain an accurate estimate $\hat{H}^{(N)}$ of $H^{(\infty)}$. In order to find an estimate for the system Hankel matrix, we assume that the switched linear system can be restarted multiple times. Although we believe that it is possible to relax this requirement, we enforce this assumption to ease exposition. Define the number of restarts as $N_S$, also referred to as the sample complexity. In each restart, we let the SLS run for $N$ time steps, the rollout length. Let $\theta_k^{(t)}, y_k^{(t)}, u_k^{(t)}$ denote the switch, output and input respectively at rollout time $k$ for sample $t$; clearly $t \le N_S$, $k \le N$. Now define $\mathcal{N}_{m_l}$ as the set of occurrences of the switch sequence $m_l$, with $N_{m_l} = |\mathcal{N}_{m_l}|$. Our next result bounds the error rates obtained from the regression; the proof follows standard analysis in the statistical learning literature, such as [26].
An important thing to note is that the bound above does not hold when $N_{l_1^i} < \alpha(m + \log\frac{1}{\delta})$; in that case we set $\hat{\Theta}_{l_1^i} = 0$, i.e., when we have scarce data for a certain sequence we cannot use the regression estimate, as it becomes unreliable. In such cases (and some others) we set $\hat{\Theta}_{l_1^i} = 0$; the exact details are specified below.
A. Regression Estimates
Recall Proposition 2: for any sequence $l_1^i$ of length $i$, the result holds with probability at least $1 - \delta$. The regression estimate for $l_1^i$ is unreliable when we do not have enough occurrences; in such a case we propose a simple estimate, i.e., we set the regression estimate to $0$. Assume we have a rollout length of $\bar{N}$; then we need to ensure that the regression estimates hold for all sequences of length at most $\bar{N}$. In that case, by applying a union bound, the guarantee holds for all sequences simultaneously with high probability; the factor $\frac{s^{\bar{N}+1}-1}{s-1}$ appears because we are taking a union bound over $\frac{s^{\bar{N}+1}-1}{s-1}$ sequences. One observation is that we cannot ensure the high probability bound simultaneously over all sequences up to length $N_S$, because if $\bar{N} = \Theta(N_S)$ then the regression estimate error bound becomes trivial. As a result, we define $N_{up}$, an upper bound on the rollout length up to which we can ensure the high probability bound, and a sequence-length dependent threshold $\gamma_k = \alpha\big(m + \log\frac{2(s^{k+1}-1)}{(s-1)\delta}\big)$. Intuitively, $2N_{up}$ is the least sequence length such that no sequence of that length can be reliably learned by regression, i.e., all sequences with length up to $N_{up}$ occur often enough; and since the occurrence probability decays with the length of the sequence, no longer sequence can be learned reliably either. We show in Proposition 9 that $N_{up}$ is logarithmic in $N_S$ with high probability. With this, we can construct an estimate of the system Hankel-like matrix as follows. Let $\bar{N}$ be the rollout length; then the empirical frequency $\hat{p}_{l_1^i : l_1^j}$ defined below is an unbiased estimator for $p_{l_1^i : l_1^j}$.
To see this, recall the experimental setup: we run $N_S$ identical samples of the SLS for length $\bar{N}$. For each sample $i \le N_S$, any sequence $l_1^k$ can start at position $1, 2, \ldots, \bar{N} - k + 1$. Thus, for sample $i$, the number of occurrences of $l_1^k$ is $\sum_{l} \mathbf{1}\{l_1^k \text{ starts at position } l\}$; summing over samples gives $N_{l_1^k}$, and it is clear that $\mathbb{E}[N_{l_1^k}] = p_{l_1^k} N_S (\bar{N} - k + 1)$. The estimates of $C A_{l_1^i} B$ are obtained analogously.
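A minimal sketch of the counting step behind this estimator: tally every length-$k$ switch pattern over all samples and start positions, then divide by $N_S(\bar{N}-k+1)$. Names are illustrative.

```python
from collections import Counter

def count_occurrences(switch_rollouts, k):
    """Count each length-k switch pattern over all samples and start positions."""
    counts = Counter()
    for theta in switch_rollouts:                  # one rollout of length N_bar
        for start in range(len(theta) - k + 1):
            counts[tuple(theta[start:start + k])] += 1
    return counts

def estimate_probs(switch_rollouts, k):
    """Unbiased estimate p_hat = N_l / (N_S * (N_bar - k + 1)) for each pattern l."""
    N_S, N_bar = len(switch_rollouts), len(switch_rollouts[0])
    denom = N_S * (N_bar - k + 1)
    return {seq: c / denom for seq, c in count_occurrences(switch_rollouts, k).items()}
```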
The road map for system identification can be summarized as follows.
• For a given $N_S$, we do model order selection by choosing two numbers $\bar{N}, r$, which are functions of $N_S$.
• Following that, we create a finite dimensional estimate $\hat{H}^{(\bar{N})}$ of $H^{(\infty)}$, and from $\hat{H}^{(\bar{N})}$ we obtain the system parameters of an $r$-dimensional approximation of the original SLS using a balanced truncation procedure.
• The error between the estimated $r$-dimensional approximation and the true $r$-dimensional approximation can be bounded by subspace perturbation bounds [1].
We now describe the details of the balanced truncation below.
B. Balanced Truncation
Given the parameters of the SLS in Eq. (1), define the auxiliary SLS of Eq. (14). By our assumption, the SLS in Eq. (14) is strongly stable (see [20]), so we can use the results in [19] (specifically Eqs. (25a), (25b)): there exists a linear transformation $S$ under which the Gramians coincide with a diagonal $\Sigma$ whose entries are arranged in descending order; $\Sigma$ then satisfies (by definition of $X_1, X_2$) the corresponding Lyapunov-type relations, and the truncated realization is the $r$-order balanced truncated version of the true SLS. The discussion in Section 4.2 of [19] provides error guarantees between the true model and its approximation, and our setting matches that of Section 4.2 in [19]. This observation, combined with some linear algebra similar to Section 21.6 of [27], gives us the following proposition.
Proposition 4 makes it clear that to find $r$-order balanced truncated models we only need the top $r$ singular vectors (and singular values). This observation is important because, in the presence of finite noisy data, estimating singular vectors corresponding to very small singular values typically requires a lot of data; instead, one can focus on estimating only the significant singular vectors via balanced truncation. Furthermore, the stochastic $L_2$ distance between the original SLS and its $r$-order balanced truncated version is given by Proposition 3. We summarize our algorithm below (Algorithm 1).
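Per Proposition 4, the truncation step only needs the top-$r$ singular triplets of the estimated Hankel-like matrix; the sketch below splits them symmetrically into observability- and controllability-like factors. The symmetric $\Sigma_r^{1/2}$ split is a standard balanced-realization convention that we assume here; the paper's exact factorization may differ.

```python
import numpy as np

def truncated_factors(H_hat, r):
    """Top-r SVD of the estimated Hankel-like matrix -> balanced factors."""
    U, svals, Vt = np.linalg.svd(H_hat, full_matrices=False)
    S_half = np.diag(np.sqrt(svals[:r]))
    O_r = U[:, :r] @ S_half           # observability-like factor (rows ~ sequences)
    R_r = S_half @ Vt[:r, :]          # controllability-like factor (cols ~ sequences)
    return O_r, R_r
```

Reading $C$ off the leading $p$ rows of the observability factor and $B$ off the leading $m$ columns of the controllability factor (with the $A_i$ recovered from shifted blocks) then mirrors the LTI balanced-realization recipe, up to a similarity transformation.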
III. MODEL SELECTION

Algorithm 1 has two hyperparameters $\bar{N}$ and $r$. In this section we discuss how to choose these hyperparameters as a function of $N_S$.

A. Selecting $\bar{N}$
Since the Hankel matrix is $p\big(\frac{s^{\bar{N}+1}-1}{s-1}\big) \times m\big(\frac{s^{\bar{N}+1}-1}{s-1}\big)$, $\bar{N}$ cannot be too large, as that would make any algorithm infeasible (and the estimation error would suffer); it also cannot be too small, as we would then learn only a small part of the dynamics (high truncation error). The key idea is to grow $\bar{N}$ in a controlled fashion with respect to $N_S$. Formally, let $\bar{H}^{(\bar{N})}$ be $H^{(\bar{N})}$ padded with zeros to make it doubly infinite, and define the truncation error $T_{\bar{N}}$ and the estimation error $E_{\bar{N}}$ as in Eq. (18). Observe that the Frobenius norm of the difference $\bar{H}^{(\bar{N})} - H^{(\infty)}$ decomposes into these two terms. Clearly, as $\bar{N}$ increases the truncation error decreases. For the estimation error we can use Proposition 2; intuitively, $E_{\bar{N}}$ grows with $\bar{N}$ (keeping $N_S$ fixed), as we are trying to estimate a larger matrix. As a result, for large enough $N_S$, there exists $\bar{N} < \infty$ such that $T_{\bar{N}} \le \alpha E_{\bar{N}}$ for some absolute constant $\alpha \ge 1$. The key idea will be to choose $\bar{N}$ such that $T_{\bar{N}} \le \alpha E_{\bar{N}}$; this is formalized in Proposition 10. Furthermore, for such a choice of $\bar{N}$ we have $\|H^{(\infty)} - \hat{H}^{(\bar{N})}\|_F^2 \le (1 + \alpha^2) T_{\bar{N}}^2$, implying that we can estimate the system Hankel matrix well whenever $T_{\bar{N}}$ is low.
Proposition 5. Fix $\bar{N}$, $N_S$ and $\delta$. Then, with probability at least $1 - \delta$, $E_{\bar{N}}^2$ is bounded by a purely data dependent quantity. Here $\alpha \ge 1$ is a known absolute constant and $s_0 = \alpha \log\frac{s^{N_{up}+1}-1}{(s-1)\delta}$.

Proof sketch. First we analyze the terms $E_{1, l_1^i}$; here $\bar{N} - k + 1$ is the number of times a $k$-length sequence appears in the Hankel-like matrix. Using Proposition 8, with probability at least $1 - \delta$ we can control $\sum_{0 \le i,j \le \bar{N}-1} \|E_{1, l_1^i}\|$. Recall that whenever $N_{l_1^i : l_1^j} < s_0$, i.e., data is scarce, we set $\hat{\Theta}_{l_1^i} = 0$. We then use Proposition 2 (applied with a union bound to all sequences), which holds with probability at least $1 - \delta$, together with the first part of Proposition 7, which holds with probability at least $1 - \delta$. Combining these observations gives the claim.

Proposition 5 provides an upper bound on $E_{\bar{N}}^2$ almost entirely in terms of data dependent quantities. From here on we will use $E_{\delta,\bar{N}}(N_S)$ as a proxy for $E_{\bar{N}}^2$; for shorthand, $E_{\delta,\bar{N}} = E_{\delta,\bar{N}}(N_S)$. Given this dependence of the estimation error on $\bar{N}$ and $N_S$, we set $\bar{N}$ in a data dependent fashion as in Eq. (22), where $\alpha_0$ is a known absolute constant and $N_{up}$ is given in Eq. (11).
Theorem 1. For large enough $N_S$, pick $\bar{N}$ as in Eq. (22). Then, with probability at least $1 - \delta$, the error $\|H^{(\infty)} - \hat{H}^{(\bar{N})}\|_F^2$ is controlled by $E_{\delta,\bar{N}}$ up to an absolute constant $\alpha \ge 1$, where $\hat{H}^{(\bar{N})}$ is the zero padded version of the estimate, made compatible with $H^{(\infty)}$. Here $s_k = \frac{s^{k+1}-1}{s-1}$.

Proof. We sketch the details of the proof here; we assume all matrices are made size compatible by padding with zeros. The large enough $N_S$ is required only to ensure that there exists $\bar{N} < \infty$ such that $T_{\bar{N}} \le E_{\delta,\bar{N}}$. Define $\bar{N}^* = \inf\{\bar{N} \mid T_{\bar{N}} \le E_{\delta,\bar{N}}\}$. In general $\bar{N}^*$ is unknown, as it is a complex function of the unknown system parameters (through $T_{\bar{N}}$); by Proposition 10 such an $\bar{N}^*$ exists. However, by leveraging results from [1] (specifically Propositions 12.1 and 12.2) we can relate the two with probability at least $1 - \delta$: we show $\bar{N}^* \ge \bar{N}(N_S)$ in Proposition 11, and the other inequality follows the same steps as Proposition 12.2 in [1]. Based on this observation, the bound propagates to any $l \ge \bar{N}$, and our claim follows by noting that $E_{\delta,\bar{N}^*} \le E_{\delta, \log(\alpha_0)\bar{N}}$.
The key insight of Theorem 1 is that, for the choice of $\bar{N}(N_S)$ in Eq. (22), we get a good upper bound on the error between the true system Hankel matrix $H^{(\infty)}$ and its estimate $\hat{H}^{(\bar{N})}$. Furthermore, this bound does not depend on the system order $n$, but only on data dependent quantities and some energy metrics which can be measured easily. The result in Proposition 10 (and Eq. (29)) quantifies the resulting rate, with $\delta_s = \log\frac{1}{p_{\max}} \big/ \log\frac{s}{p_{\max}}$.
Eq. (24) shows that the error between the true system Hankel-like matrix and its estimator decays roughly as $\sqrt{N_S}^{\,-\delta_s}$ (ignoring log factors), and that the error between $\bar{H}^{(\bar{N})}$ and $H^{(\infty)}$ goes to zero asymptotically as $N_S \to \infty$.
B. Selecting r
Now that we have a consistent statistical estimator for $H^{(\infty)}$, we provide a way to choose $r$ such that we can find an $r$-order balanced representation of the SLS. For shorthand, we will refer to the data dependent error $\epsilon^2 = 4 \alpha_0 E_{\delta,\bar{N}}$. This implies $\|H^{(\infty)} - \hat{H}^{(\bar{N})}\|_F \le \epsilon$, and we can use Wedin-type subspace perturbation bounds [28]. Consider the rule of Eq. (25) for selecting $r$. The existence of $\tau_+$ is not required for our results, as the discussion of Section 11.3 in [1] applies here as well. Furthermore, we can substitute $\tau_+$ with $\hat{\tau}_+ = \inf_{1 \le i \le n} \big(1 - \frac{\sigma_{i+1}(\hat{H}^{(\bar{N})})}{\sigma_i(\hat{H}^{(\bar{N})})}\big)$, which performs sufficiently well.
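The following sketch implements the generic hard singular-value thresholding idea behind the rule in Eq. (25): keep every singular value of $\hat{H}^{(\bar{N})}$ that clears the data-dependent noise level $\epsilon$. The exact threshold in Eq. (25) may differ, so treat this as an assumption.

```python
import numpy as np

def select_order(H_hat, eps):
    """Hard singular-value thresholding: keep sigma_i above the noise level eps."""
    svals = np.linalg.svd(H_hat, compute_uv=False)
    return max(int(np.sum(svals >= eps)), 1)   # keep at least a first-order model
```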
IV. DISCUSSION
In this work we provide finite sample error guarantees for learning realizations of SLS when the stability radius or order is unknown. Specifically, we construct a Hankel-like matrix of size $\bar{N}$, chosen in a data dependent fashion, and recover the system parameters from it using the data dependent threshold rule in Eq. (25). Under the stated assumptions, we obtain $O(\sqrt{N_S}^{\,-\delta_s})$ error rates, which are also the parametric estimation error rates and are known to be optimal for the case $s = 1$ (see, e.g., [1]). Furthermore, from a computational perspective our algorithm is polynomial in the number of samples $N_S$: we perform an SVD on a matrix of dimension at most $p s_{\bar{N}} \times m s_{\bar{N}}$, but $\bar{N}$ is logarithmic in $N_S$ with high probability, so the matrix size is polynomial in $N_S$.
Due to the nature of the analysis, we believe this work can easily be extended to the case where the evolution of $\{\theta_t\}_{t=1}^{\infty}$ is more complex, e.g., state dependent or a Markov chain. Furthermore, we assumed in this paper that the discrete switches are completely observable. However, in many cases the discrete state itself might be noisy or not observed; in such cases it is important to predict the switch sequence and then apply the procedure described. This appears to be an interesting avenue for future work.
V. APPENDIX
Proposition 6 (Bernstein's Inequality). Let $\{X_i\}_{i=1}^{n}$ be independent zero mean random variables. Suppose that $|X_i| \le M$ almost surely, for all $i$. Then, for all positive $t$,
$$P\Big(\sum_{i=1}^{n} X_i > t\Big) \le \exp\Big(\frac{-t^2/2}{\sum_{i=1}^{n} \mathbb{E}[X_i^2] + Mt/3}\Big).$$

Recall from Eq. (12) that $N_{l_1^k}$ is a sum of bounded, centered indicator-type terms; Bernstein's inequality then yields the following.

Proposition 7. Fix $\delta, N > 0$. For all sequences $l_1^k$ with $k \le N$, simultaneously with probability at least $1 - \delta$, the empirical frequencies $\hat{p}_{l_1^k}$ concentrate around $p_{l_1^k}$, for some known absolute constant $\alpha \ge 1$ and $s_N = \frac{s^{N+1}-1}{s-1}$.

Proposition 8. Fix $0 \le \bar{N} \le N_S$; then, with probability at least $1 - \delta$, the corresponding error sum is controlled.

Proof. Let $s_0 = \alpha \log\frac{s^{\bar{N}+1}-1}{(s-1)\delta}$. We break the sum into two parts. For part (i), combine $(\sqrt{p_{l_1^k}} - \sqrt{\hat{p}_{l_1^k}})^2 \le |p_{l_1^k} - \hat{p}_{l_1^k}|$ with Proposition 7, which bounds (i). Since the SLS is mean square stable, and by our assumptions, $\bar{N}^*$ is at most the $\bar{N}$ satisfying the displayed condition; the last inequality is satisfied for all $\bar{N}^*$ such that $l_1^{\bar{N}}$ occurs often enough. Furthermore, from the proof of Proposition 9, $\bar{N}^* < N_{up}$, since $N_{up}$ satisfies the threshold condition with $\alpha\big(m + \log\frac{s^{N_{up}+1}-1}{(s-1)\delta}\big)$. Since $\|\hat{H}^{(l)} - H^{(l)}\|_F^2 \le E_{\delta,l} \le E_{\delta,h}$, and furthermore $\|H^{(l)} - H^{(h)}\|_F^2 \le \|H^{(l)} - H^{(\infty)}\|_F^2 \le E_{\delta,l}$, combining all of this we get $\|\hat{H}^{(l)} - \hat{H}^{(h)}\|_F^2 \le 3 E_{\delta,h}$, and this means that $\bar{N} \le \bar{N}^*$.
Smart Helmet-Based Personnel Proximity Warning System for Improving Underground Mine Safety
A smart helmet-based wearable personnel proximity warning system was developed to prevent collisions between equipment and pedestrians in mines. The smart helmet worn by pedestrians receives signals transmitted by Bluetooth beacons attached to heavy equipment, light vehicles, or dangerous zones, and provides visual LED warnings to the pedestrians and operators simultaneously. A performance test of the proposed system was conducted in an underground limestone mine. It was confirmed that as the transmission power of the Bluetooth beacon increased, the Bluetooth low energy (BLE) signal detection distance of the system also increased. The average BLE signal detection distance was at least 10 m, regardless of the facing angle between the smart helmet and the Bluetooth beacon. The subjective workload for the smartphone-, smart glasses-, and smart helmet-based proximity warning system (PWS) was evaluated using the National Aeronautics and Space Administration task load index. All six workload parameters were the lowest when using the smart helmet-based PWS. The smart helmet-based PWS can provide visual proximity warning alerts to both the equipment operator and the pedestrian, and it can be expanded to provide worker health monitoring and hazard awareness functions by adding sensors to the Arduino board.
Introduction
In underground mines, worker safety accidents frequently occur owing to collisions between equipment and pedestrians or between pieces of equipment. The U.S. Bureau of Labor Statistics reported 45 fatalities due to equipment collisions in underground mines in the United States between 2011 and 2019 [1]. Accidents were mainly caused by workers being caught in running machinery, struck by powered vehicles, or compressed by equipment. According to a disaster report published by the government of Western Australia, there have been a total of 34 equipment collisions involving haulage trucks and charge-up trucks in underground mines in Western Australia since 2015 [2].
Proximity warning systems (PWSs) have been developed to prevent equipment collision accidents in underground mines. PWSs provide visual and/or audible proximity alerts to equipment operators when pedestrians or other equipment approach within a certain distance [3]. The National Institute for Occupational Safety and Health (NIOSH) in the U.S. has developed PWSs using radio-frequency identification (RFID) and electromagnetic signals. Ruff and Hession-Kunz [4] developed an RFID-based collision warning system to provide a proximity warning to equipment operators when a pedestrian approaches a front-end loader. Active tags were attached to the belt or cap of a pedestrian worker, and an RFID reader was installed on the front-end loader to recognize the unique ID of the active tag. This system sets progressive sensing distances (near, middle, and far) and provides visual and audible alerts to the equipment operator through lamps and buzzers. Schiffbauer [5] suggested the hazardous area signaling and ranging device to provide a proximity warning to the equipment operator approaching a continuous miner.

The purpose of this study is to develop a smart helmet-based PWS that can provide simultaneous proximity warnings to both equipment operators and pedestrians in underground mines. Bluetooth beacons are installed on mining equipment such as dump trucks, excavators, and loaders, and pedestrian workers wear the smart helmet-based PWS. When the equipment approaches the pedestrian worker, the smart helmet recognizes the BLE signal and emits an LED light. This study analyzed the BLE signal detection distance of the smart helmet under two conditions in the underground mine: the Tx power of the Bluetooth beacon, and the facing angle between the smart helmet and the Bluetooth beacon. The NASA task load index (NASA-TLX) [45] was used to evaluate the subjective workload felt by the equipment driver and the pedestrian worker for PWSs based on smartphones, smart glasses, and smart helmets.

Design of the Proximity Warning System (PWS) Based on Bluetooth Beacons and Smart Helmets

The design of the PWS based on Bluetooth beacons and smart helmets is summarized in Figure 1. The smart helmet worn by the worker receives a BLE signal transmitted from the Bluetooth beacon and provides a visual alert when it comes close to the beacon. The Bluetooth beacon can be attached to heavy equipment, a management vehicle, or a dangerous area at the mine site, and the attached beacon continuously transmits the BLE signal. The smart helmet can warn wearers of approaching heavy equipment or vehicles and of entry into dangerous areas, and can warn drivers that there are workers nearby. Visual proximity alerts are received through the smart helmet while working on the spot; therefore, both workers and drivers can quickly detect and respond to dangerous situations.
Figure 1. Overview of personal proximity warning system (PWS) using smart helmet.

Design of BLE Transmission Units Using Bluetooth Beacon
Bluetooth beacons periodically transmit information, including the general-purpose unique identifier of the beacon and the media access control (MAC) address, through the BLE signal. The intensity of the BLE signal transmitted by the Bluetooth beacon is expressed as Tx power, in dBm. The received intensity of the BLE signal can be quantified using the RSSI value, which is represented as a negative value between −99 dBm and −35 dBm. The propagation distance of the BLE signal may vary depending on the signal transmission intensity and the signal propagation direction of the Bluetooth beacon. An increase in the BLE signal transmission intensity increases the signal propagation distance. When the propagation is bidirectional, the signal spreads uniformly in all directions, but this limits the propagation distance; when the signal is transmitted as a directional signal, the BLE signal propagates preferentially in front of the Bluetooth beacon. The change in RSSI according to the BLE signal transmission intensity and the propagation direction of the Bluetooth beacon was previously analyzed [46].
Bluetooth beacons can communicate with peripheral devices in three ways: point-to-point, broadcast, and mesh. The point-to-point method exchanges data by pairing a master device transmitting a large amount of data with a slave device receiving data, in a 1:1 relationship. In the broadcast method, an observer receives information as the broadcaster periodically transmits its ID information to peripheral devices. Bluetooth beacons mainly operate in broadcast mode, with PCs and smartphones typically acting as observers. The mesh method connects several master and slave devices [46].
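For intuition on how a received RSSI value relates to distance, the sketch below uses the standard log-distance path-loss model; the 1 m reference RSSI and the path-loss exponent are environment-specific placeholders, not values calibrated in this study.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: d = 10 ** ((RSSI_1m - RSSI) / (10 * n)).

    rssi_at_1m and path_loss_exp are placeholder parameters; in practice
    both depend on the beacon's Tx power and the tunnel environment.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

# e.g., rssi_to_distance(-75) ~ 6.3 m under these placeholder parameters
```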
In this study, RECO beacons (Perples, Seoul, Korea) were used as BLE transmission devices. RECO beacons are certified by institutions in Korea, the United States, Europe, and Japan and meet global beacon standards (Table 1). Figure 2 shows examples of heavy equipment and vehicles at the mine site with RECO beacons. A Bluetooth beacon was installed on the back of the room mirror on the front of the truck, and a Bluetooth beacon was mounted on the front of the heavy equipment. The Bluetooth beacons were set to transmit a directional signal so that the signal could propagate farther. The signal transmission strength and period of the beacons were set to −4 dBm and 1 s, respectively.
In this study, RECO beacons (Perples, Seoul, Korea) were used as BLE transmission devices.RECO beacons are certified by institutions in Korea, the United States, Europe, and Japan and meet global beacon standards (Table 1).Figure 2 shows examples of heavy equipment and vehicles at the mine site with RECO beacons.A Bluetooth beacon was installed on the back of the room mirror on the front of the truck, and a Bluetooth beacon was provided on the front of the heavy equipment.The Bluetooth beacons set the directional signal such that the signal could be propagated further.The signal transmission strength and period of the beacons were set to −4 dBm and 1 s, respectively.
Design of BLE Receiver Units Using an Arduino Board
Arduino is an open source electronic platform based on easy-to-use hardware and software (Table 2). The Arduino board reads input data, such as sensor illumination and button presses, and converts it into output data. Because the Arduino board and software are open source, users can independently build boards to adjust the system to meet specific needs [48]. In this study, a smart helmet was developed as a wearable personal PWS for workers. The smart helmet was made by combining an Arduino Uno board, a Bluetooth BLE module (FBL780BC, Table 3), an LED strap, and two-leg LEDs with the safety helmet worn by mining workers. Figure 3a,b show the exterior of the device, divided into front and rear parts. The smart helmet provides visual warnings through the LED strap and two-leg LEDs, and receives power through a portable battery. The Bluetooth BLE module (FBL780BC) supports Bluetooth Low Energy, a low-power function based on Bluetooth 4.1. The circuit diagram in Figure 4 visualizes the connections between the Arduino board, LEDs, and Bluetooth module. The operating algorithm of the smart helmet PWS is illustrated in Figure 5. After a BLE signal is received via the Bluetooth BLE module attached to the smart helmet, it is compared to the MAC addresses of the Bluetooth beacons stored in the database. If a MAC address matches, the LED strap and two-leg LEDs are turned on for 30 s to provide a visual alert to the worker and the driver; otherwise, they are not turned on. The system is designed to operate repeatedly through an infinite loop while power to the Arduino board is on.
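The deployed logic runs on the Arduino board; the Python sketch below only mirrors the control flow of Figure 5, with receive_advertisement and set_led as hypothetical stand-ins for the BLE module read and the LED pin write.

```python
import time

KNOWN_BEACONS = {"AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"}  # placeholder MAC database
WARNING_SECONDS = 30

def warning_loop(receive_advertisement, set_led):
    """Figure 5 control flow: match the beacon MAC, then warn for 30 s."""
    while True:                          # repeats as long as the board is powered
        mac = receive_advertisement()    # blocking read of the next BLE advertisement
        if mac in KNOWN_BEACONS:         # beacon registered for this mine site?
            set_led(True)                # LED strap + two-leg LEDs on
            time.sleep(WARNING_SECONDS)
            set_led(False)
```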
In this study, a smart helmet was developed to develop a wearable personal PWS for workers.The smart helmet was made by combining an Arduino Uno board, Bluetooth BLE module (FBL780BC, Table 3), LED strap, and two-leg LEDs with the safety helmet worn by mining workers.Figure 3a,b show the exterior shape of the equipment divided into front and rear parts.The smart helmet provides visual warnings through LED straps (using two-leg LEDs), and receiving power through portable batteries.The Bluetooth BLE module (FBL780BC) supports Bluetooth Low Energy, a low-power function based on Bluetooth 4.1.The circuit diagram was used to visualize the connection method of the Arduino board, LED, and Bluetooth module as shown in Figure 4.The process of the operating algorithm of the smart-helmet PWS is illustrated in Figure 5.After the BLE signal was received via the Bluetooth BLE module attached to the smart helmet, it was compared to the MAC address of the Bluetooth beacon stored in the database.If the MAC addresses of the Bluetooth beacons are matched, the LED strap and two leg LEDs are turned on for 30 s to provide a visual alert to the worker and the driver and, if not, they are not turned on.The system is designed to operate repeatedly through infinite loops when power to the Arduino board is turned on.
Performance Evaluation of Personal PWS Based on Smart Helmets
To evaluate the performance of the developed smart helmet-based personal PWS, a field experiment was conducted at the Sungshin Minefield underground limestone mine (37°17′12″ N, 128°43′53″ E) located in Jeongseon-gun, Gangwon-do, Korea. Figure 6 shows the tested tunnels on two-dimensional and three-dimensional maps and in an actual photograph. As shown in Figure 2, a Bluetooth beacon was attached to the back of the room mirror in front of the truck and to the front of the heavy equipment, and the workers wore a smart helmet programmed with the personal PWS system. A Bluetooth module capable of receiving BLE signals was placed at the back of the helmet to recognize the proximity of equipment outside the worker's view. A field experiment was conducted to measure the detection distance of the BLE signal received by the smart helmet while adjusting the Tx power of the Bluetooth beacon; the angle between the Bluetooth beacon and the smart helmet was also adjusted to measure the detection distance. Figure 7a shows the smart helmet measuring the detection distance of the BLE signal for each Tx power. The Bluetooth module that receives the BLE signal was installed at the rear of the helmet, and the Bluetooth module and the Bluetooth beacon attached to the vehicle were arranged to face each other. The Bluetooth beacon, attached to the truck, approached a pedestrian standing on a mine transport route 100 m away at a speed of 10-20 km/h. We then measured the detection distance at which the personal PWS receiving the BLE signal began warning pedestrians. The Tx power was set at 4 dBm intervals, from −12 dBm to 4 dBm, and measured 10 times for each Tx power (50 times total).
Figure 7b shows an experiment that measures the detection distance of the smart helmet receiving a BLE signal while adjusting the angle between the Bluetooth beacon and the smart helmet. Similar to the above experiment, the truck approached at speeds of 10-20 km/h, and the detection distance at which the warning commenced was measured. The angles between the smart helmets and beacons were set at 45° intervals, from 0° to 180°, and measured 10 times for each angle (50 measurements in total).
Subjective Workload Assessment of Smart Helmet-Based Personal PWS
Workload is a quantitative measure of the amount of mental stress a person experiences while performing tasks within a particular system [51]. Workload is affected by psychological (focus on work and anxiety), physical (physical difficulties and difficulty in controlling machines), temporal (deadlines), and environmental (noise and relationships with colleagues) factors [52]. If the workload is not properly adjusted when designing a system, overload can occur and work efficiency can decrease. Therefore, it is necessary to improve work efficiency by designing and operating systems with minimal workload.
Subjective workload evaluation can be performed using a questionnaire and is frequently used in human-machine system development [53]. Representative subjective workload evaluation methods include the NASA-TLX [45], the subjective workload assessment technique [54], and the workload profile technique [55]. In this study, subjective workload was evaluated using the NASA-TLX method. The psychological, physical, and temporal effects on workers using the personal PWS while wearing smart helmets and working at the mining site were evaluated. The NASA-TLX is a multidimensional grading procedure that estimates the overall workload score based on a weighted average of six factors [56]: mental, physical, temporal, overall performance, effort, and frustration. These workload parameters are defined as follows:
• Mental demand: how many mental and cognitive skills are needed to accomplish this task?
• Physical demand: how much physical ability do you need to perform this task?
• Temporal demand: how much duress did you feel due to the rate or pace at which you performed multiple tasks?
• Overall performance: how successfully do you think you have achieved the goals of this task?
• Effort: how hard did you have to work to achieve your goals?
• Frustration level: how much discomfort have you felt while working on this task?
The responses of the workers to these six workload parameters were evaluated. All the parameters except "Overall Performance" (scored from good to bad) were graded from low to high, with values between 0 and 100 (in increments of 5). The weights of the six parameters were calculated using pairwise comparisons, and the overall workload score was calculated by averaging the products of each factor's score and weight.
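As an illustration of the scoring arithmetic just described, the short Python sketch below computes an overall NASA-TLX score. The ratings and pairwise-comparison tallies are invented example values, not data from this study; with six factors there are 15 pairwise comparisons, so the derived weights sum to one.

```python
# Illustrative NASA-TLX scoring: ratings (0-100) and pairwise-comparison
# tallies below are made-up example values, not the study's data.
ratings = {
    "mental": 40, "physical": 25, "temporal": 35,
    "performance": 20, "effort": 45, "frustration": 15,
}
tally = {  # times each factor was chosen across the 15 pairwise comparisons
    "mental": 4, "physical": 1, "temporal": 3,
    "performance": 2, "effort": 4, "frustration": 1,
}

weights = {k: v / sum(tally.values()) for k, v in tally.items()}

# Overall workload = weighted average of the six factor ratings.
overall = sum(ratings[k] * weights[k] for k in ratings)
print(f"Overall workload score: {overall:.1f}")  # -> 35.0
```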
Figure 8 is a schematic of the experiment conducted. Three equivalent experiments were performed under the same experimental conditions to compare the effects on subjective workload. The subjective workload evaluation was performed on 10 experimental subjects aged 24 to 26 years (average age 24.9 years) at the same location where the personal PWS performance was evaluated. More than half (60%) of the test subjects said they had knowledge of smart glasses, and the majority (80%) said they had no knowledge of smart helmets. The test subjects used (a) a smartphone-based personal PWS (driver's position), (b) a smart glasses-based personal PWS (worker's position), and (c) a smart helmet-based personal PWS (worker's and driver's positions). For this experiment, we used the smartphone-based personal PWS by Baek and Choi [13], the smart glasses-based personal PWS by Baek and Choi [25], and the smart helmet-based personal PWS developed in this study. In the experiments, the test subject stood at the center of the transport route and examined the condition of the transport route (worker's position) or boarded a truck or loader (driver's position) to approach the subject. The smartphone provided a proximity warning to the driver with a hazard warning image, the smart glasses provided a proximity alert to the worker with a hazard warning image, and the smart helmet turned on its LED to provide a visual warning to both the driver and the worker. In one case, the test subject boarded a loader or truck (driver's position), and when the device sensed that the worker was nearby, the vehicle was stopped temporarily; the vehicle passed only after the worker's evacuation was confirmed. In another case, the test subject examined the transport route's maintenance status (worker's position), and the work was stopped when the device sensed that a vehicle was approaching; the subject evacuated to the side of the transport route, and only after the vehicle had passed did the work resume.
Each of the 10 test subjects performed experiments (a) to (c) in random order, and after each experiment, the workload was examined according to the NASA-TLX procedure.
Results
Figure 9a shows a worker wearing the smart helmet when no BLE signal is received, and Figure 9b shows the worker when a BLE signal is received. The MAC addresses of the Bluetooth beacons attached to the mining equipment were stored in the personal PWS application program, and the smart helmet PWS was designed to provide visual alerts through its LEDs when the BLE signals were received. Through this visual alarm, both the worker and the driver can recognize danger in advance and prevent accidents.
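The paper does not publish the application code, but the alert logic it describes (stored beacon MAC addresses, LED on when a registered beacon's BLE signal is received) can be sketched as follows. The scan-result format, the MAC addresses, and the set_led helper are hypothetical stand-ins for the actual Arduino/Bluetooth-module interface.

```python
# Hedged sketch of the described alert logic; MAC addresses, the
# scan-result format, and set_led() are hypothetical placeholders.
REGISTERED_BEACONS = {"AA:BB:CC:DD:EE:01", "AA:BB:CC:DD:EE:02"}

def set_led(on: bool) -> None:
    print("LED ON - proximity warning" if on else "LED OFF")

def handle_scan(scan_results: list) -> None:
    """Turn the warning LED on iff a registered beacon is heard."""
    danger = any(r["mac"] in REGISTERED_BEACONS for r in scan_results)
    set_led(danger)

handle_scan([{"mac": "AA:BB:CC:DD:EE:01", "rssi": -65}])  # -> LED ON
```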
Table 4 shows the statistics of the detection distance measurements when a proximity alarm was provided according to changes in the Tx power of the Bluetooth beacon, and Figure 10 plots the average detection distance per Tx power. The average detection distance was 2.9 m at −12 dBm, 6.0 m at −8 dBm, 27.1 m at −4 dBm, 62.7 m at 0 dBm, and 66.9 m at 4 dBm. As the Tx power increased, the smart helmet's BLE signal detection distance also increased.

Table 5 shows the statistics of the sensing distance measurements when a proximity alarm was provided according to the facing angle between the smart helmet and the Bluetooth beacon, and Figure 11 plots the average sensing distance per angle. The average sensing distance was over 20 m for angles of 0°, 45°, and 90°, approximately 20 m for an angle of 135°, and 10 m for an angle of 180°. Therefore, it was confirmed that the average BLE signal detection distance was at least 10 m, regardless of the facing angle between the smart helmet and the Bluetooth beacon.

Figure 12 is a radial plot of the average scores of the six workload parameters evaluated in the three experiments for the four types across the 10 subjects. When the subjects used the smartphone-based personal PWS, the scores for mental demand, temporal demand, physical demand, frustration, effort, and overall performance were all higher than when using the smart helmet-based personal PWS. This may be because a subject using the smartphone-based personal PWS while driving had to repeatedly check the smartphone screen to see whether the worker was approaching the vehicle and felt apprehensive due to the increased eye movement. Conversely, when using the smart helmet-based personal PWS, the subjects could concentrate solely on driving and receive a visual alert through the LED light of the smart helmet worn by the worker; both hands also remained relatively free compared to when using the smartphone-based personal PWS. Consequently, less workload was required to perform the task. Similar to the smartphone-based personal PWS, when workers wore the smart glasses-based personal PWS, the mental, physical, and frustration scores were higher than when wearing the smart helmet-based personal PWS. In particular, the frustration scores differed the most because the workers were unfamiliar with wearing smart glasses (glasses slipping, or wearing ordinary glasses under the smart glasses), whereas the feel of wearing a smart helmet was similar to that of a general safety helmet.

Figure 13 shows the total workload scores calculated in the four experiments, divided into the driver side (a) and the pedestrian side (b). Drivers who used the smartphone-based personal PWS scored approximately 32 points, whereas drivers who used the smart helmet-based personal PWS scored approximately 6.3 points. Workers using the smart glasses-based personal PWS scored approximately 30.6 points, whereas workers using the smart helmet-based personal PWS scored approximately 5.9 points. The smart helmet helped to increase work efficiency by effectively providing proximity warnings about equipment or vehicles to the driver and worker simultaneously while keeping both their hands free. Moreover, compared to smart glasses, the smart helmet was more comfortable to wear, which helps reduce worker stress.
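For readers who want to reproduce the Table 4/Table 5 style summaries from raw trials, the sketch below computes the mean, standard deviation, maximum, and minimum detection distance per Tx power; the distance values shown are placeholders, not the study's measurements.

```python
# Summary statistics per Tx power, as in Table 4 (placeholder data only).
import statistics

trials = {  # Tx power (dBm) -> 10 measured detection distances (m)
    -12: [2.5, 3.1, 2.8, 3.0, 2.9, 2.7, 3.2, 2.8, 3.0, 2.9],
    -4:  [26.0, 27.5, 28.1, 26.9, 27.3, 26.5, 27.8, 27.0, 26.7, 27.2],
}

for tx, dists in sorted(trials.items()):
    print(f"{tx:>4} dBm: mean={statistics.mean(dists):.1f} m, "
          f"sd={statistics.stdev(dists):.1f}, "
          f"max={max(dists):.1f}, min={min(dists):.1f}")
```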
BLE Signal Propagation in Underground Mines
In underground mines, there are several structural and environmental factors that cause diffraction, reflection, and interference of BLE signal propagation. For example, there are excavations with crossings inclined at 90 degrees and curved sections; these structures make stable line-of-sight propagation impossible. The mine walls have a high roughness that causes diffraction and reflection of the signal. In addition, radio signal attenuation occurs due to the rock mass. All mine areas have high relative humidity, and dust particles are suspended in the air. Electrical installations for the power supply exist throughout underground mines; as a result, signal interference may occur due to electromagnetic fields. Therefore, it is necessary to perform BLE signal testing to understand the effects of the disturbing factors that interfere with stable BLE signal propagation in underground mines.
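The paper treats these propagation effects qualitatively. As a rough, hedged illustration, the widely used log-distance path-loss model shows why a larger path-loss exponent n (rougher walls, more obstructions, as in a mine drift) shrinks the distance at which a receiver can still detect a beacon. The reference loss, receiver sensitivity, and exponents below are assumptions, not values measured in this study.

```python
# Log-distance path-loss sketch: P_rx = P_tx - PL0 - 10*n*log10(d).
# All parameter values are illustrative assumptions.

def max_detect_distance(tx_power_dbm, rx_floor_dbm=-95, pl0_db=41, n=2.0):
    """Distance d (m) at which received power falls to rx_floor_dbm."""
    return 10 ** ((tx_power_dbm - pl0_db - rx_floor_dbm) / (10 * n))

for n in (2.0, 2.5, 3.0):  # free space vs. increasingly lossy tunnel
    print(f"n={n}: ~{max_detect_distance(0, n=n):.0f} m at 0 dBm")
```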
Advantages of Smart Helmet-Based PWS
Personal PWSs using a smart helmet have the following notable advantages at the mining site. First, this system can solve problems that occurred with existing PWSs. The driver had to repeatedly check the smartphone to receive proximity warning alerts, which reduced the operator's concentration on driving. Smart glasses caused discomfort when workers wore regular glasses or when the smart glasses slipped. In contrast, smart helmets can provide visual proximity alerts to both the operator and the pedestrian without interrupting work, enabling quick identification of dangerous situations and quick evacuation. Workers wearing regular glasses, industrial goggles, or soundproof headsets can also wear smart helmets without discomfort.
Second, the smart helmet-based PWS is relatively easy for workers to use. Operators and workers who use existing PWSs need to operate a touchpad controller to launch the PWS application, and the subjects who participated in the NASA-TLX test tended to find this operation difficult. In contrast, the smart helmet-based PWS is convenient and easy to run because only a power supply is required.
Finally, the proposed personal PWS can be implemented and used at the mining site at a relatively low cost. Since this system uses Arduino, an open-source hardware platform, the system components (i.e., the microcontroller board and sensors) can be assembled at relatively low cost. Therefore, it is possible to distribute multiple sets of smart helmets and Bluetooth beacons to the worksite, regardless of the size of the mine.
Conclusions
In this study, we developed a personal PWS that uses a smart helmet to receive BLE signals from Bluetooth beacons and provide visual proximity alerts to pedestrians and equipment operators. The smart helmet-based PWS can provide bidirectional proximity warnings to equipment operators and pedestrians in mines. A performance evaluation was conducted at an actual underground mine site to assess the personal PWS developed in this study. The BLE signal detection distance of the smart helmet was measured according to the Tx power of the Bluetooth beacon and the facing angle between the Bluetooth beacon and the smart helmet. The average BLE signal recognition distance was 2.9 m at −12 dBm, 6.0 m at −8 dBm, 27.1 m at −4 dBm, 62.7 m at 0 dBm, and 66.9 m at 4 dBm; as the Tx power of the Bluetooth beacon increased, the BLE signal recognition distance of the smart helmet increased. In addition, when considering the facing angle between the smart helmet and the Bluetooth beacon at a Tx power of −4 dBm, it was confirmed that the average BLE signal detection distance was at least 10 m regardless of the facing angle. The workload imposed by the individual PWSs on 10 subjects was quantitatively analyzed using the NASA-TLX evaluation method. Using the smart helmet to provide visual proximity alerts reduced mental effort and stress and freed the hands of workers, helping to maintain work efficiency. The overall workload score calculated when using the smart helmet was lower than when using the smartphone-based PWS or the smart glasses-based PWS. Therefore, the smart helmet is suitable for implementing personal PWSs at mining sites.
In future work, the smart helmet-based personal PWS can be extended by adding sensors to the Arduino board. For example, a heart rate sensor or an alcohol sensor can be added to check the condition of the worker. Furthermore, by adding temperature, humidity, methane gas, and carbon monoxide sensors, the environment at the mine site can be monitored, and when a high concentration of harmful gases is detected, the pedestrian worker can be warned of the danger. The worker could then follow appropriate protocols to ensure safety.
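A minimal sketch of this environmental-monitoring extension is given below: poll the added sensors and raise the helmet warning when any reading exceeds a threshold. The sensor names and limits are hypothetical, and the actual Arduino wiring and sensor drivers are outside the paper's scope.

```python
# Hypothetical threshold-based gas warning for the extended smart helmet.
THRESHOLDS = {"co_ppm": 30, "ch4_pct": 1.0}  # illustrative limits only

def check_environment(readings: dict) -> bool:
    """Return True (trigger the helmet warning) if any limit is exceeded."""
    return any(readings.get(k, 0) > v for k, v in THRESHOLDS.items())

print(check_environment({"co_ppm": 55, "ch4_pct": 0.2}))  # -> True
```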
Figure 1. Overview of personal proximity warning system (PWS) using smart helmet.

Figure 3. The components of the smart helmet: the rear part (a) and the front part (b) of the helmet.

Figure 5. The process of the operating algorithm for the smart helmet-based personal proximity warning system.

Figure 7a shows the smart helmet measuring the detection distance for receiving the BLE signal at each Tx power. The Bluetooth module that receives the BLE signal was installed at the rear of the helmet, and the Bluetooth module and the Bluetooth beacon attached to the vehicle were arranged to face each other. The Bluetooth beacon, attached to the truck, approached a pedestrian standing on a mineway transport route 100 m away at a speed of 10-20 km/h. We then measured the detection distance at which the personal PWS receiving the BLE signal began warning the pedestrian. The Tx power was set at 4 dBm intervals, from −12 dBm to 4 dBm, and measured 10 times for each Tx power (50 times in total).

Figure 7. Overview of the performance test of the smart helmet-based PWS. (a) Experimental model of BLE signal detection distance measurement performed by considering the Tx power of the Bluetooth beacon; (b) BLE signal detection distance measurement model according to the facing angle between the Bluetooth beacon and the smart helmet.

Figure 8. Overview of the subjective workload assessment. (a) Type 1: truck drivers wearing the smartphone-based PWS; (b) Type 2: pedestrian workers wearing the smart glasses-based PWS; (c) Type 3: truck drivers and pedestrian workers wearing the smart helmet-based PWS.

Figure 9. Experimental results showing the performance of the smart helmet-based PWS. (a) Worker wearing the smart helmet when no BLE signal is received; (b) worker wearing the smart helmet when a BLE signal is received.

Figure 10. Average BLE signal detection distance of the smart helmet according to the Tx power of the Bluetooth beacon (m).

Figure 11. Average BLE signal detection distance of the smart helmet according to the facing angle between the smart helmet and the Bluetooth beacon (m).

Figure 12. Average values of the evaluation scores of the six workload parameters of the NASA-TLX according to the type of experiment. (a) Type 1: workload of truck drivers when using the smartphone-based PWS; (b) Type 2: workload of truck drivers when using the smart helmet-based PWS; (c) Type 3: workload of pedestrian workers when using the smart glasses-based PWS; (d) Type 4: workload of pedestrian workers when using the smart helmet-based PWS.

Figure 13. Results of the overall workload score assessment. (a) Overall workload score according to the type of experiment on the driver side; (b) overall workload score according to the type of experiment on the pedestrian side.

Table 4. Statistical analysis results of the BLE signal detection distance (m) of the smart helmet according to the Tx power of the Bluetooth beacon. (1 Standard deviation; 2 Maximum value; 3 Minimum value.)

Table 5. Statistical analysis results of the BLE signal detection distance (m) of the smart helmet according to the facing angle between the smart helmet and the Bluetooth beacon. (1 Standard deviation; 2 Maximum value; 3 Minimum value.)
Papuloerythroderma of Ofuji (PEO) successfully treated with acitretin
Introduction
Papuloerythroderma of Ofuji (PEO) is a rare skin disorder, which affects predominantly older males of Asian or Caucasian descent with a male to female ratio of 4:1. It is characterized by pruritic flat-topped erythematous papules, which might progress into erythroderma. Exanthema typically spares the skin folds, which is known as the deck-chair sign. Common laboratory findings include lymphopenia, peripheral eosinophilia, and elevated serum IgE. Histological image shows nonspecific inflammatory reaction and needs to be correlated with clinical and laboratory findings [1,2]. PEO should remain a diagnosis of exclusion after eliminating more frequent causes of itch and eczematous rash, such as atopic dermatitis of the elderly [3].
Case 1
An 85-year-old man was referred with a 2-year history of pruritic nonconfluent erythematous papules sparing the skin folds. No underlying systemic cause had been identified. Blood work-up showed lymphopenia, eosinophilia, and elevated serum IgE. Histology showed edematous changes of the papillary dermis with a mixed inflammatory infiltrate comprising multiple mast cells, eosinophils, and isolated plasma cells. The diagnosis of PEO was made based on the typical clinical and laboratory findings and compatible histopathological features. Systemic treatment with acitretin (25 mg/day) combined with topical corticosteroids led to complete resolution of the skin manifestations after 3 months of treatment.
Case 2
A 95-year-old man was referred to our dermatology clinic with a 3-month history of pruritic papular exanthema (Fig. 1a), now progressing to erythroderma sparing the skin folds (Fig. 1b). Indolent multiple myeloma had been diagnosed 4 years prior to rash onset.
Histology showed nonspecific interstitial and perivascular dermatitis with multiple eosinophils (Fig. 2). Lymphopenia and eosinophilia further supported the diagnosis of PEO. Immunophenotypic blood analysis identified a monotypic TRBC1+CD4++CD3++CD5++CD7-CD26- population reaching 4.9% of lymphocytes with a blood T-cell receptor (TCR) clone. The patient was started on combination therapy with oral acitretin (10 mg/day) and topical corticosteroids, with a complete skin response after 3 months (Fig. 1c). The patient continues to be monitored for possible onset of cutaneous T-cell lymphoma (CTCL).
Discussion
Approximately half of the reported PEO cases have been described as idiopathic. The other half has been defined as secondary PEO, associated with various systemic diseases, including malignancies, infections, and medications. Among those, cutaneous T-cell lymphoma (CTCL) has been identified as the most frequent underlying cause, found in half of the patients with secondary PEO [2]. The relationship between PEO and CTCL remains poorly understood, but due to their well-established association, individuals with PEO should be closely monitored for possible CTCL onset [4]. Blood immunophenotyping and TCR clonality studies are necessary, while physicians should have a low threshold for performing repeated skin biopsies. We regarded patient 1 as having idiopathic PEO. In patient 2, we considered the PEO a possible paraneoplastic dermatosis due to multiple myeloma, although the possibility of a future CTCL emergence cannot be excluded. PEO pathogenesis is poorly elucidated, and the development of effective treatment guidelines thus remains challenging. As skin-homing Th2/Th22 cells seem to play a role in the pathogenesis of PEO, the use of dupilumab, a human monoclonal antibody against IL-4Rα, has recently been investigated [5]. Nevertheless, due to recent reports of worsening or progression of CTCL after dupilumab treatment [6], such treatment should be carefully considered in the PEO context, and only after exclusion of underlying CTCL. Herein, we report the successful treatment of two PEO patients using combination therapy with oral retinoids and topical corticosteroids. Complete resolution of the dermatological manifestations was reached in both patients after 3 months of treatment. Patient 1 subsequently continued acitretin as maintenance therapy at a lower dose (10 mg/day), while patient 2 decided to stop treatment due to side effects after 3 months. The complete skin response persisted in both patients at 1 year after diagnosis. Our findings correlate well with recent reports suggesting higher success rates of oral retinoids compared with psoralen plus ultraviolet-A radiation (PUVA) therapy or oral corticosteroids, yet with a more favorable safety profile and patient tolerance [2].
Funding Open access funding provided by University of Lausanne
Declarations
Conflict of interest G. Blanchard and E. Guenova declare that they have no competing interests.
Ethical standards Discussed patients gave consent for their photographs and medical information to be published in print and online and with the understanding that this information may be publicly available.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Comparative Evaluation of Accuracy of Different Apex Locators: Propex IQ, Raypex 6, Root ZX, and Apex ID with CBCT and Periapical Radiograph—In Vitro Study
Objectives This study aimed to validate the accuracy of working length (WL) measurements obtained with the newly introduced Propex IQ apex locator and to compare it with the latest generations of other electronic apex locators, CBCT, and conventional periapical radiographs, using the actual WL measurements obtained with an endodontic microscope as a reference. Materials and Methods Thirty-five extracted single-rooted human mandibular first premolars with curvatures from 10° to 20° were selected according to the inclusion and exclusion criteria and cut at the cementoenamel junction to achieve a standard reference point for WL determination. The actual WL was obtained by inserting a size-15 K-file in the root canal until the tip of the file was visible under an endodontic microscope. The definitive WL was measured using Propex IQ (Dentsply Sirona), Raypex 6 (VDW Dental), Root ZX (Morita), and Apex ID (Kerr Dental). In addition, radiographic WL was obtained using periapical radiographs and CBCT. One-way ANOVA was used for comparisons of the WL values, with significance set at p < 0.05. The percentage of success of each method for determination of the definitive WL was assessed using cross-tabulation and chi-square tests. Results CBCT radiographs and the Propex IQ apex locator yielded the most accurate WL measurements in comparison with the actual WL measurements (p < 0.05). Raypex 6, Root ZX, and Apex ID yielded more accurate WL measurements than conventional periapical radiographs (p < 0.05). Periapical radiographs yielded the least accurate WL measurements in comparison with the actual WL values (p < 0.05). Conclusions Within the limitations of this study, the Propex IQ apex locator showed higher accuracy than Raypex 6, Root ZX, and Apex ID for WL determination in the root canal. Nevertheless, CBCT radiographs yielded the maximum accuracy for WL measurements.
Introduction
The goal of endodontic treatment is to eliminate infection and inflammation in the root canal and periapical area after irreversible pulp pathosis. This is achieved by cleaning and shaping the canals to remove bacteria and debris and then filling the canal with a three-dimensional root canal filling to prevent further infection in the apical area, alleviate pain, and preserve the tooth. Extrusion and the presence of core filling material beyond the root canal are potential irritants, and they are considered a possible cause of failure by some authors, whereas other authors consider them to be an indication of canal patency up to the apical foramen [1][2][3].
In clinical endodontics, the working length (WL) is defined as the distance between the reference point coronally and the physiologic foramen apically (ending at the apical constriction). Incorrect WL determination of the root canal can result in residual bacterial infection, which can lead to an enormous defect in the root end area, causing loss of the apical seal, endodontic treatment failure, and major flare-up problems [4]. Different methods have been used to locate the apical foramen and to measure the WL of root canals. These include conventional periapical radiographs, electronic apex locators, tactile evaluations, and other methods. The most common method of WL measurement is based on periapical radiographs alone, wherein the clinician uses these radiographs to visualize the extent of a file inserted in the canal and its relationship to the radiographic apex. However, this procedure is associated with multiple limitations, including subjectivity, image magnification, distortion errors, exposure of the patient to radiation, and superposition of anatomical structures [4]. The practice of estimating the WL by measuring the length of the root from the radiographic apex to the crown and then subtracting 0.5-1 mm from the measurement has also been reported to be unreliable and inaccurate due to distortion of radiographic images [5][6][7].
Cone-beam computed tomography (CBCT) is an important technique that was introduced to dentistry in 1998 and has shown high potential for clinical applications, with greater accuracy than periapical radiography [8]. CBCT has been shown to contribute to treatment planning, diagnosis, treatment, and prognosis of different diseases, in addition to its importance in research [9,10]. CBCT images can show the root canal angles, the height of the curvature, and the location of the major foramen, which are not identifiable with sufficient precision in periapical radiography [11,12]. The development and production of electronic apex locators for locating the canal terminus are a major innovation in root canal treatment. An electronic root length measurement method was first suggested by Custer [13] in 1918, after which the idea was revisited by Suzuki in 1942 [14]. However, it was Sunada [15] who, in 1962, used these principles to build a simple device that relied on direct current to detect the WL. Subsequently, electronic apex locators have undergone substantial improvements that have greatly increased their accuracy and adaptability.
Sunada stated that the apical constriction is the most important anatomical landmark because it has a resistance of 6500 Ω, which confers unique electronic characteristics on it [15]. Apex locators generate a direct current of known voltage (V) and include an ammeter that measures the intensity (I) of the current after it passes through the file and is recaptured by the labial hook [15]. An electronic component calculates the V/I ratio and deduces the resistance at the level of the canal where the instrument is located. The screen displays 0 when the resistance is 6500 Ω, which is how the clinician estimates that the tip of the file is at the apical constriction [15].
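As a toy illustration of this single-frequency principle (modern devices actually use multi-frequency impedance ratios), the sketch below estimates the canal resistance from Ohm's law and flags a reading near 6500 Ω; the tolerance value is an assumption.

```python
# Toy apex-locator check: R = V / I, flag readings near 6500 ohms.
APICAL_CONSTRICTION_OHMS = 6500

def at_apical_constriction(voltage_v: float, current_a: float,
                           tol_ohms: float = 100) -> bool:
    resistance = voltage_v / current_a
    return abs(resistance - APICAL_CONSTRICTION_OHMS) <= tol_ohms

print(at_apical_constriction(3.25, 0.0005))  # 6500 ohms -> True
```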
Although apex locators function according to the same principle, the areas detected by different devices may differ. Whereas most manufacturers' manuals state that the devices detect the apical constriction, Morita (Dentaport ZX) suggests that their device detects the apical foramen and not the constriction. They also advise that the operator should stop advancing the file when the reading shows 0.5 on the screen in order to locate the constriction [16].
Apex locators have shown equal or higher accuracy than radiographic methods in many in vivo, ex vivo, and in vitro studies [17][18][19]. These locators are useful when the apical portion of the canal system is hidden by anatomical structures. Moreover, they help reduce the treatment time and the radiation dose, which may be higher with conventional radiographic methods. However, the main problems associated with the use of electronic apex locators are that they cannot be used in cases of perforations, in patients with cardiac pacemakers, or in cases of root fracture, and that their accuracy is questionable in cases of root resorption, immature apices, swelling, and hemorrhage [20,21]. The current study aimed to examine the accuracy of Propex IQ, a recently introduced electronic apex locator for which no accuracy data from in vitro or in vivo studies are currently available in the literature, and to compare it to the latest generation of other commercially available apex locators. Furthermore, the accuracy of these apex locators was compared to those of other commonly used methods for determining WL, namely, periapical radiography and CBCT.
Materials and Methods
The sample size (n) was calculated using an online statistics calculator; an a priori sample size calculator for Student's t-test was used to estimate the minimum sample size for the one-tailed t-test, considering a probability level of 0.05, an anticipated effect size of 0.9 based on similar studies, and a statistical power level of 0.8. The representative sample size was 35 teeth.
Thirty-five extracted human mandibular first premolars with curved, single root canals were kept in 5.2% sodium hypochlorite for 2 h and then stored in hydrogen peroxide solution until use in this study. Each tooth was marked at the cementoenamel junction (CEJ), placed inside a special acrylic mold, and stabilized with wax.
Then, the crown of each tooth was cut at the CEJ using a saw machine (IsoMet 1000 Precision Cutter; Buehler, Düsseldorf, Germany) to provide a standard reference point for all WL measurements.
Periapical radiographs were taken of all teeth preoperatively to evaluate the curvature (10°-20°) and to check for any internal defects (Figure 1). Patency was checked with a size-10 K-file (Dentsply Maillefer, Ballaigues, Switzerland). The selected teeth were cleaned using an ultrasonic dental scaler (Guilin Woodpecker Medical Instrument Co., Ltd., China) to remove any debris from the root surface. Teeth were also examined under an endodontic microscope at 20x magnification (Extaro 300; Zeiss, Germany) to determine apex maturity, assess the root surfaces, and detect possible fractures or other defects as part of the inclusion and exclusion criteria (Table 1).
Root canals were irrigated with 5 mL of 5% sodium hypochlorite (NaOCl; Werax, Izmir, Turkey). Before starting the WL measurements, each tooth was placed inside the Protrain mold (Simit, Italy), a special mold designed to simulate the oral environment for extracted teeth. This mold facilitates standardization by allowing a standard tooth position, standard X-ray imaging for all teeth, a standard SLOB technique, and a standard pathway for apex locators to complete the electrical circuit (Figure 2).
Actual WL Determination Using a Microscope.
The actual WL was measured as a control value by inserting a size-15 K-file (Dentsply Maillefer, Ballaigues, Switzerland) with a double stopper to reduce the chance of stopper movement during measurements. The file was inserted in the root canal until its tip could be observed at the apical foramen under the microscope and then withdrawn 0.5 mm, after which the length between the file tip and the reference point was measured with a digital caliper (Allendale Electronics Ltd.). Each measurement was repeated three times by three independent authors, and the mean value was recorded as the representative measurement of that sample.
Radiographic WL Determination Using Periapical Radiographs.

After placing the tooth in the Protrain mold, two conventional periapical radiographs were taken of each tooth. The first radiograph was used to evaluate the tooth on the basis of the tooth selection criteria and to determine the radiographic tooth length and the estimated WL (whole tooth length − 0.5 mm). The second radiograph was also taken using the Protrain mold after inserting a size-15 K-file up to the estimated WL to obtain the radiographic WL, which was calculated as the total file length inside the canal + the distance between the tip of the file on the radiograph and the root end (determined using the internal digital ruler of the digital radiograph software) − 0.5 mm.

Table 1. Inclusion and exclusion criteria for tooth selection.

Inclusion | Exclusion
Lower first premolars | Teeth other than lower first premolars
Curved canal with a curvature between 10° and 20° | Curved canal of less than 10° or more than 20°
Sound, noncracked, nonworn, or nonfractured tooth | Worn, carious, resorbed, cracked, fractured, filled, or malformed teeth
Initial apical file must be K-file size 10 or 15 | Initial apical file more than K-file size 15
No calcifications or internal defects of the root canal | Calcified canal or pulp stones
Single-rooted teeth | Multirooted teeth
Electronic WL Determination Using Apex Locators.
Four well-known electronic apex locators were used in this study: Propex IQ (Dentsply Maillefer, Ballaigues, Switzerland), Raypex 6 (VDW, Munich, Germany), Root ZX (J Morita Corp., Kyoto, Japan), and Apex ID (Sybron Endo). The selected and prepared teeth were placed inside the Protrain mold; the roots were embedded in the mold, leaving approximately 5 mm of the coronal root surface exposed, and the labial clip of the apex locator was attached to the mold (Figure 2).
To obtain the electronic WL measurement, a size-15 K-file with double stoppers was connected to each apex locator and used to determine the electronic WL in each root canal. The canals were irrigated with 5.0% NaOCl. Subsequently, cotton pellets and paper points were used to dry the tooth surface and to eliminate excess irrigation solution, after which Propex IQ, Raypex 6, Root ZX, and Apex ID were used. Each file was attached to the apex locator file holder and was gradually introduced into the canal while the apex locator screen was carefully monitored. The file was advanced in each canal until the apex locator screen indicated that the file was outside the root canal (beyond the WL), which was accompanied by a warning sound and red bars. The file was then withdrawn very slowly to the point where the device indicated the apical constriction, marking the WL at which the endodontic treatment should terminate. Each electronic apex locator was used according to the manufacturer's instructions. Three measurements were obtained by three different authors, and the mean of these three consecutive measurements was recorded as the representative electronic WL measurement of each canal for the corresponding device.
Radiographic WL Determination by CBCT.

Two cone-beam computed tomography (CBCT) images were acquired for each tooth (Planmeca Promax 3D, Finland). Each group of six teeth was inserted separately into a special mold made from putty to facilitate the imaging procedure in the CBCT device. The first image was used to determine the CBCT radiographic tooth length and the estimated WL (whole tooth length − 0.5 mm). The second image was obtained after inserting a size-15 K-file to the exact estimated WL of each tooth to obtain the CBCT radiographic WL: the total file length inside the canal + the distance between the tip of the file and the root end (measured with the internal digital ruler of the CBCT software) − 0.5 mm.
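The working-length arithmetic used for both the periapical and CBCT measurements reduces to a single expression; the sketch below makes it explicit with made-up example values.

```python
# Radiographic WL = file length in canal + tip-to-root-end distance - 0.5 mm.
def radiographic_wl(file_len_mm: float, tip_to_root_end_mm: float,
                    offset_mm: float = 0.5) -> float:
    return file_len_mm + tip_to_root_end_mm - offset_mm

print(radiographic_wl(14.0, 1.2))  # -> 14.7 (example values only)
```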
Statistical Analysis.
The collected data were analyzed using Statistical Package for the Social Sciences (SPSS) for Windows software, version 20 (SPSS Inc., Chicago, IL, USA). One-way analysis of variance (ANOVA) was used, with p values < 0.05 considered significant. The percentage of success of each electronic apex locator in finding the exact WL was assessed using cross-tabulation and chi-square tests.
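The same analyses can be reproduced outside SPSS; the sketch below runs a one-way ANOVA and a chi-square test with SciPy on synthetic numbers (the study's raw data are not public, so all values are placeholders).

```python
# One-way ANOVA and chi-square test on placeholder data.
from scipy.stats import f_oneway, chi2_contingency

actual  = [14.8, 15.1, 14.2, 14.9, 15.0]
propex  = [14.7, 15.0, 14.3, 14.8, 14.9]
periapi = [13.9, 14.4, 13.6, 14.1, 14.2]

f_stat, p = f_oneway(actual, propex, periapi)
print(f"ANOVA: F={f_stat:.2f}, p={p:.3f}")

# Success/failure counts per method (hypothetical cross-tabulation).
table = [[35, 0],    # e.g., Propex IQ: within +/-0.5 mm vs. not
         [22, 13]]   # e.g., periapical radiographs
chi2, p, dof, expected = chi2_contingency(table)
print(f"Chi-square: chi2={chi2:.2f}, p={p:.3f}")
```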
Results
The mean value of the actual WL measurements obtained with the endodontic microscope was 14.74 ± 1.23 mm. CBCT yielded WL measurements closest to the actual WL (mean = 14.70 mm), followed by Propex IQ (mean = 14.66 mm). The least accurate WL measurements were obtained using conventional periapical radiographs (mean = 14.01 mm) (Figure 3 and Table 2).
The WL values obtained with the four electronic apex locators were not significantly different; however, the WL measurements obtained using conventional radiographs were significantly different from the actual WL values (p < 0.010) (Table 3).
The results of this study were divided into three groups to validate the WL values obtained by each electronic apex locator, comparing the differences between each device and the actual WL values separately (Table 4). Positive values indicated measurements that were overextended relative to the actual WL, negative values indicated measurements that were underextended relative to the actual WL, and values within 0.5 mm of the actual WL were considered coinciding measurements. The WL measurements obtained using CBCT radiographs and the Propex IQ apex locator were all within ±0.5 mm of the actual WL, while Raypex 6, Root ZX, and Apex ID yielded most WLs within ±0.5 mm. However, some WLs obtained with these locators fell into the underextended or overextended groups, except for Root ZX, which had no measurements in the underextended group. Lastly, most radiographic WLs deviated by more than 0.5 mm from the actual WL.
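The three-way classification behind Table 4 is easy to make explicit; in the sketch below, each device measurement is binned as underextended, coinciding (within ±0.5 mm), or overextended relative to the actual WL, using invented example values.

```python
# Bin a device WL against the actual WL with a +/-0.5 mm tolerance.
def classify(device_wl: float, actual_wl: float, tol: float = 0.5) -> str:
    diff = device_wl - actual_wl
    if diff < -tol:
        return "underextended (< -0.5 mm)"
    if diff > tol:
        return "overextended (> +0.5 mm)"
    return "coinciding (within +/-0.5 mm)"

for device_wl in (14.6, 13.9, 15.4):        # example measurements
    print(classify(device_wl, actual_wl=14.74))
```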
Discussion
The results of this study highlight the differences between various methods of WL determination, in addition to providing comparative data for the different commercially available electronic devices for measuring the WL in curved single-rooted canals. Correct and definitive determination of the WL is the primary factor for successful endodontic treatment. The histological results after root canal treatment have been shown to be superior when instrumentation and obturation are limited to the apical foramen rather than extending beyond this anatomical landmark. Thus, accurate determination of the location of the intended apical constriction is an important factor in the success of root canal treatment [22].
In this study, we used four different well-known apex locators, together with both conventional periapical radiographs and CBCT, and compared the WLs measured using these techniques to the actual WL, which was determined with a microscope for each tooth separately. No published literature has investigated the use of the Propex IQ electronic apex locator in determining the WL, since it was only recently introduced to the market. Therefore, we examined the accuracy of these devices in curved single-rooted extracted mandibular first premolars [23]. The results of the current study demonstrated that WL measurements using CBCT radiographs and the Propex IQ apex locator were the most accurate, while conventional radiographs yielded the least accurate WL measurements. The WL measurements obtained with Raypex 6, Root ZX, and Apex ID showed acceptable accuracy in comparison with those obtained with Propex IQ and better accuracy than measurements obtained with conventional radiographs. This finding is in agreement with the study conducted by Adriano et al., who performed in vitro comparisons between apex locators and direct and radiographic techniques for determining the root canal length in primary teeth [17]. On the other hand, the findings of our study contradict those reported by Midhun Mohan and Susila Anand, who found that electronic apex locators are not superior to conventional radiographs in determining the WL [24]. Janner et al. published the first study that compared the accuracy of WL measurements using preexisting CBCT scans with those obtained using standard techniques such as electronic apex locators, and they observed a high correlation between the two methods [25]. Tchorz et al. found that CBCT is a useful tool for planning endodontic treatment, visualizing complex root canal anatomies, and estimating root canal length [26]. However, the application of CBCT exclusively for root canal length measurement is not yet recommended, since the benefits may not always outweigh the potential risks of the additional radiation [27]. In this regard, each endodontic patient should be evaluated individually, and CBCT should only be considered when conventional imaging does not yield adequate information for proper management of the case [28].
Our data also demonstrated that CBCT allowed better WL determination than the electronic apex locators, which contradicts the findings reported by González-Rodríguez et al., who showed that electronic measurements were more reliable than CBCT scans for WL determination [18]. This difference may have arisen because identification of the apical constriction requires higher magnification, and a stereomicroscope (920-25) was used in their study [18]. Jorge Paredes Vieyra et al. also reported that electronic apex locators showed higher accuracy and predictability than digital radiographs, with no significant differences in accuracy among Root ZX, Raypex 6, and Apex ID [19], whereas in another in vitro study, Root ZX exhibited higher accuracy than Apex ID in determining the WL of curved molar canals [29].
Yolagiden et al. conducted a study to compare four electronic apex locators in detecting a position 0.5 mm short of the major foramen, and their results showed that Apex ID allowed acceptable determination of the WL and its accuracy was similar to those of Raypex 5 and Raypex 6 [30]. Nevertheless, a −0.5 mm difference in the accuracy of electronic apex locators has been considered acceptable in various studies [29,31], while others considered an acceptable range of ±1.0 mm [32].
In summary, the accuracy of WL measurements and the comparisons among electronic apex locators, radiographs, and CBCT images remain topics of debate. Since the existing data are insufficient, more research with a larger variety of methods and techniques is required to emphasize the improvements achieved with these devices for better endodontic practice. However, conventional periapical radiographs appear to have lower accuracy than all the electronic apex locators evaluated in this study.
Conclusions
Within the limitations of this study, CBCT-based radiographic measurements were the most accurate method for determining the WL of the root canal. However, the WL measurements obtained by Propex IQ were more accurate than those obtained with the other electronic apex locators and very close to those obtained with the CBCT radiographs. Conventional radiographs were less accurate and cannot be used to determine the WL of the canal. Although Raypex 6, Root ZX, and Apex ID showed no significant differences in their accuracies for determination of the WL of root canals, they were not as accurate as Propex IQ.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Consent
For this type of study, formal consent was not required.
Disclosure
This research was part of the Graduation Project at the College of Dentistry, Ajman University.
Conflicts of Interest
None of the authors have any conflicts of interest to declare related to this study.

Table 4: Number of teeth (n) and frequencies of WL measurements (%) that were less than, more than, or within 0.5 mm of the actual WL. Both groups (<0.5 mm and >0.5 mm) were considered to indicate errors in determining the WL of each tooth. WL measurements obtained using CBCT radiographs and Propex IQ showed no errors, whereas the other WL determination methods showed several errors.
How socioeconomic status moderates the stunting-age relationship in low-income and middle-income countries
Introduction Reducing stunting is an important part of the global health agenda. Despite likely changes in risk factors as children age, determinants of stunting are typically analysed without taking into account age-related heterogeneity. We aim to fill this gap by providing an in-depth analysis of the role of socioeconomic status (SES) as a moderator for the stunting-age pattern. Methods Epidemiological and socioeconomic data from 72 Demographic and Health Surveys (DHS) were used to calculate stunting-age patterns by SES quartiles, derived from an index of household assets. We further investigated how differences in age-specific stunting rates between children from rich and poor households are explained by determinants that could be modified by nutrition-specific versus nutrition-sensitive interventions. Results While stunting prevalence in the pooled sample of 72 DHS is low in children up to the age of 5 months (maximum prevalence of 17.8% (95% CI 16.4;19.3)), stunting rates in older children tend to exceed those of younger ones in the age bracket of 6–20 months. This pattern is more pronounced in the poorest than in the richest quartile, with large differences in stunting prevalence at 20 months (stunting rates: 40.7% (95% CI 39.5 to 41.8) in the full sample, 50.3% (95% CI 48.2 to 52.4) in the poorest quartile and 29.2% (95% CI 26.8 to 31.5) in the richest quartile). When adjusting for determinants related to nutrition-specific interventions only, SES-related differences decrease by up to 30.1%. Much stronger effects (up to 59.2%) occur when determinants related to nutrition-sensitive interventions are additionally included. Conclusion While differences between children from rich and poor households are small during the first 5 months of life, SES is an important moderator for age-specific stunting rates in older children. Determinants related to nutrition-specific interventions are not sufficient to explain these SES-related differences, which could imply that a multifactorial approach is needed to reduce age-specific stunting rates in the poorest children.
What is already known?

► While household socioeconomic status is widely recognised as a key determinant of stunting, little is known on how it moderates the stunting-age relationship.
What are the new findings?
► Stunting rates are similar in newborn children from households of low and high socioeconomic status but diverge markedly between the sixth and 20th month of life. ► Differences between children from poor and rich households cannot simply be explained by the presence or absence of determinants that are modifiable through nutrition-specific interventions but are also strongly moderated by determinants related to nutrition-sensitive interventions.
What do the new findings imply?
► The relationship of socioeconomic status and stunting varies substantially with child age, highlighting the importance of considering age-specific analyses when research on determinants of undernutrition is conducted.
► Reducing the high age-specific stunting rates in children from poor households may require a multifactorial approach building on both nutrition-specific and nutrition-sensitive interventions.
Abstract
Introduction Reducing stunting is an important part of the global health agenda. Despite likely changes in risk factors as children age, determinants of stunting are typically analysed without taking into account age-related heterogeneity. We aim to fill this gap by providing an in-depth analysis of the role of socioeconomic status (SES) as a moderator for the stunting-age pattern.
Methods Epidemiological and socioeconomic data from 72 Demographic and Health Surveys (DHS) were used to calculate stunting-age patterns by SES quartiles, derived from an index of household assets. We further investigated how differences in age-specific stunting rates between children from rich and poor households are explained by determinants that could be modified by nutrition-specific versus nutrition-sensitive interventions.
Results While stunting prevalence in the pooled sample of 72 DHS is low in children up to the age of 5 months (maximum prevalence of 17.8% (95% CI 16.4 to 19.3)), stunting rates in older children tend to exceed those of younger ones in the age bracket of 6-20 months. This pattern is more pronounced in the poorest than in the richest quartile, with large differences in stunting prevalence at 20 months (stunting rates: 40.7% (95% CI 39.5 to 41.8) in the full sample, 50.3% (95% CI 48.2 to 52.4) in the poorest quartile and 29.2% (95% CI 26.8 to 31.5) in the richest quartile). When adjusting for determinants related to nutrition-specific interventions only, SES-related differences decrease by up to 30.1%. Much stronger effects (up to 59.2%) occur when determinants related to nutrition-sensitive interventions are additionally included.
Conclusion While differences between children from rich and poor households are small during the first 5 months of life, SES is an important moderator for age-specific stunting rates in older children. Determinants related to nutrition-specific interventions are not sufficient to explain these SES-related differences, which could imply that a multifactorial approach is needed to reduce age-specific stunting rates in the poorest children.
Introduction
The high prevalence of stunted growth (defined as height-for-age of more than two SDs below the median of a healthy reference population), with nearly one in four children worldwide affected and insufficient progress to meet internationally agreed-on targets, 1 2 is a major challenge for the global health community. Two influential Lancet series have identified a range of key interventions for the reduction of undernutrition with a particular emphasis on a 'window of opportunity' of 1000 days from conception to 2 years of age. [3][4][5] Part of the motivation to focus on this stage of development stems from the observation that height-for-age z-scores rapidly decline during the first 2 years of life and tend to remain rather stable thereafter. 6 7 While broad agreement exists on the importance of reducing stunting for child development, 8 criticism has been raised about a disproportionate focus on behavioural interventions to tackle the problem of undernutrition. 9 This is despite the observation that determinants that are likely to capture the broader socioeconomic environment faced by mothers and children are more robustly related to children's nutritional status than direct mother-level or child-level determinants. 10 11 Surprisingly, despite the importance of the socioeconomic environment, the moderating effect of income or household socioeconomic status (SES) for the age pattern of stunting or, alternatively, height-for-age z-scores (HAZ) has not been systematically described. Existing studies are largely limited to rough split-sample analyses and only a small set of countries. 12-21 While Alderman and Headey 22 make use of a much larger data set, consisting of Demographic and Health Surveys (DHS) implemented in 57 countries, to describe various determinants of HAZ in children below the age of 5 years, they limit their focus to seven age groups and only base their analysis on SES terciles rather than quartiles. Both aspects are likely to obscure important nuances in the relationship of SES and age-specific stunting prevalence. Using a similar sample, Rieger and Trommlerová 23 provide graphical representations for the age-specific relationship between HAZ and a range of relevant determinants, including household wealth quartiles, but neither examine SES-related patterns in detail nor investigate potential variability across world regions or income groups. Moreover, none of these studies discusses how different age patterns of stunting rates in children from low-SES versus high-SES households can be explained by the presence or absence of modifiable risk factors.
Given the limitations of the extant literature, our study aims to add to the scientific discourse in three major ways: first, we graphically analyse the age profile of stunting in children aged 59 months and younger and highlight important nuances in age-specific stunting rates which could be overlooked if the 'critical window of opportunity' of 1000 days is understood as a uniform stage of development. To this end, we assembled a very large data set of 72 DHS, substantially exceeding the geographic coverage of previous analyses of this type. 6 7 Second, we provide an in-depth analysis of household SES as a moderator for the observed stunting-age pattern and investigate how these profiles differ across country income groups and world regions. Third, we show how observed differences in age-specific stunting rates are attenuated once the presence or absence of important modifiable determinants of undernutrition is accounted for.
Methods

Data sources
The data for the present analysis were obtained from recent DHS conducted in a total of 72 countries. 24 Started in 1984, DHS are an ongoing project, administered by ICF International, and yield nationally representative cross-sectional data for women aged 15-49 as well as their children below the age of five. 25 We downloaded all available surveys conducted before 28 September 2017 and dropped all survey rounds that did not include a module on anthropometric measurement for children below the age of five. From the remaining set of surveys, we limited the attention to the latest survey by country to ensure that the information presented in this article is as recent as possible.
Outcome measures
The outcomes of this study were stunting and severe stunting, which indicate low or very low height-for-age and are often interpreted as measures of chronic undernutrition. DHS applied a standardised protocol for the assessment of height to ensure comparability across countries 26 : following WHO guidelines, enumerators were instructed to measure children below the age of 24 months in a lying position and children aged 24 months or more in a standing position. If, despite this rule, a child below the age of 24 months was measured standing up instead of lying down, 0.7 cm were added to measured height, while 0.7 cm were subtracted if a child aged 24 months or older was measured lying down. Measurement was conducted using a wooden measurement board and height values were recorded with a precision of 1 mm.
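A minimal sketch of this position correction, assuming hypothetical field names (the actual DHS processing pipeline is not shown in the text):

```python
def adjust_height_cm(height_cm: float, age_months: float, measured_standing: bool) -> float:
    """Apply the WHO measurement-position correction described above.

    Children below 24 months should be measured lying down and children
    aged 24 months or more standing; when the wrong position was used,
    0.7 cm is added (standing, <24 months) or subtracted (lying, >=24 months).
    """
    if age_months < 24 and measured_standing:
        return height_cm + 0.7
    if age_months >= 24 and not measured_standing:
        return height_cm - 0.7
    return height_cm
```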
To ensure data quality, enumerators were provided with at least 3 days of training on the correct measurement of height and received feedback from team supervisors during data collection in case data quality issues occurred.
DHS routinely report HAZ based on the 2006 WHO reference population 27 for surveys conducted in 2007 or later, using child age calculated from the day of interview and the day of birth. Since not all countries included in our analysis had surveys conducted after 2006, relying on z-scores reported by DHS only would have resulted in a reduction in sample size. Instead, we calculated HAZ based on the WHO reference population directly with the Stata macro 'igrowup_stata' 28 using the same input information as DHS. Children were then classified as stunted if their height-for-age was at least two SDs below the median of the WHO reference population (ie, z-score less than −2) and as severely stunted if the z-score was smaller than −3 (z-scores below −6 or above 6 were considered implausible and dropped from the analysis).
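The classification rules above reduce to a simple filter. The sketch below assumes the HAZ value has already been computed (eg, by the WHO macro); the function name is illustrative:

```python
def stunting_flags(haz: float) -> dict | None:
    """Classify a height-for-age z-score using the cut-offs in the text.

    Returns None for implausible values (z below -6 or above 6), which
    are dropped from the analysis; severe stunting is a subset of stunting.
    """
    if haz < -6 or haz > 6:
        return None  # implausible value, excluded
    return {"stunted": haz < -2, "severely_stunted": haz < -3}
```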
For data collected in Nepal (representing 0.5% of the analysis sample), exact child age could not be calculated due to inconsistencies in calendar formats, such that we made use of HAZ reported by DHS. Similarly, where DHS did not report the exact dates of interview and birth but still reported HAZ based on the WHO reference population, these were used directly (6.6% of cases in analysis sample). Finally, in 3.4% of cases, neither the exact date of birth/interview nor HAZ were available and we calculated z-scores using rounded age in months, which was reported in all surveys. We assessed the sensitivity of our core results to the exclusion of children for whom z-scores were calculated based on rounded age.
Main independent variables
To display stunting rates by age in the main analysis, we rounded the precise child age to the nearest integer (ie, full months) or made use of rounded age in months as reported by DHS if the precise age was unavailable. In additional analyses, we further used 12 age groups. With the exception of the last age group, intervals were defined such that the lower end always contained its limit while the upper end did not. The following age groups were created (in months): '0 to less than 5', '5 to less than 10', '10 to less than 15', '15 to less than 20', '20 to less than 25', '25 to less than 30', '30 to less than 35', '35 to less than 40', '40 to less than 45', '45 to less than 50', '50 to less than 55' and '55 to 59'.
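A sketch of the binning rule, following the interval convention above (lower limit included, upper limit excluded, except for the last group):

```python
def age_group(age_months: int) -> str:
    """Map rounded age in months (0-59) to the 12 groups defined above."""
    if not 0 <= age_months <= 59:
        raise ValueError("child age must be between 0 and 59 months")
    if age_months >= 55:
        return "55 to 59"  # the last group includes both limits
    lower = (age_months // 5) * 5
    return f"{lower} to less than {lower + 5}"
```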
Household SES was derived by calculating a survey-specific asset index from a principal component analysis of the following assets: electricity, radio, television, refrigerator, bicycle, motorcycle, car, phone, as well as binary measures for floor quality, wall material and roof material. Similar asset indices have been used in the past by us and others. [29][30][31] To construct the index, we made use of the first principal component only, as is standard practice. 32 Calculations were conducted using the Stata command factor (Stata V.14). As the number of missing values varied across surveys, we excluded assets on a survey-specific basis if more than 2% of observations exhibited a missing value or if the item was either present in all or none of the households. The validity of this approach was assessed in a sensitivity analysis based on an alternative 5% cutoff. Finally, households were grouped into survey-specific SES quartiles using the created asset index. Of these, the SES-specific analysis focused on the poorest and richest quartile only.
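The construction of the index might look roughly as follows in Python (the paper used Stata's factor command; scikit-learn's PCA is substituted here, and the column handling is hypothetical):

```python
import pandas as pd
from sklearn.decomposition import PCA

def ses_quartiles(assets: pd.DataFrame) -> pd.Series:
    """Survey-specific SES quartiles from the first principal component.

    `assets` holds one survey's asset indicators. Items with more than 2%
    missing values, or present in all or none of the households, are
    excluded, mirroring the rule described in the text.
    """
    keep = [c for c in assets.columns
            if assets[c].isna().mean() <= 0.02 and assets[c].nunique() > 1]
    complete = assets[keep].dropna()
    # First principal component only; note that its sign is arbitrary and
    # may need flipping so that higher scores mean richer households.
    score = PCA(n_components=1).fit_transform(complete)[:, 0]
    index = pd.Series(score, index=complete.index, name="asset_index")
    return pd.qcut(index, 4, labels=[1, 2, 3, 4])  # 1 = poorest, 4 = richest
```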
Statistical analysis and covariates
In order to illustrate the age-specific relationship of relative wealth and anthropometric failure, we pooled all surveys and graphically depicted stunting and severe stunting prevalence by age in months and SES quartile. Moreover, we repeated this exercise grouping countries by World Bank income classification at the beginning of a survey as well as six World Bank regions (using the 12 age groups rather than age in months to ensure statistical power).
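As an illustration, the descriptive figure could be produced along these lines, assuming a pooled data frame `df` with hypothetical column names:

```python
import matplotlib.pyplot as plt

# Stunting prevalence by age in months, pooled and for the extreme SES quartiles.
for quartile, label in [(None, "full sample"), (1, "poorest quartile"), (4, "richest quartile")]:
    subset = df if quartile is None else df[df["ses_quartile"] == quartile]
    prevalence = subset.groupby("age_months")["stunted"].mean() * 100
    plt.plot(prevalence.index, prevalence.values, label=label)
plt.xlabel("Age (months)")
plt.ylabel("Stunting prevalence (%)")
plt.legend()
plt.show()
```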
In addition to this descriptive exercise, we sought to explain SES-related differences in stunting patterns by estimating linear probability models. To this end, we built on the framework of the Scaling-up Nutrition Movement 33 and distinguished between determinants that are modifiable by nutrition-specific interventions and those modifiable by nutrition-sensitive interventions. Nutrition-specific interventions aim to address immediate determinants of undernutrition, such as adequate food and nutrient intake, parenting practices and the absence of infectious diseases. 34 In contrast, nutrition-sensitive interventions focus on the underlying determinants of undernutrition, including food security, caregiving resources, healthcare infrastructure and access as well as environmental aspects such as hygiene and drinking water safety. 34 In the category of determinants that are modifiable by nutrition-specific interventions, we considered whether a child was breast fed within the first hour after birth, ever received vitamin A supplements, took iron supplements in the last 7 days before the interview, was administered drugs for intestinal parasites in the last 6 months or has received the BCG vaccination as well as the first diphtheria, pertussis and tetanus vaccination. Moreover, we considered whether the stool of the last-born child is disposed of safely and if the household uses non-solid fuels for cooking, as these are likely to capture healthcare behaviour and parenting practice.

Table 1 note: No data on stunting, age and household SES is presented for the initial sample due to a high number of missing values. The share of children living in households of each SES quartile differs from 25% as birth rates tend to be higher in low-SES households and ties in household asset scores can occur. "n/a" = not applicable (statistics on stunting, child age and household SES not calculated in the initial sample due to missing values). SES, socioeconomic status.
As determinants related to nutrition-sensitive interventions, we considered whether the household has access to a high-quality water source and adequate sanitation, whether the child was delivered in a health facility (rather than at home), as well as the mother's educational level, given that education is likely to influence the income-generating capability of the household and thus improve food security. The choice of indicators was based on previous studies 10 11 assessing the relative importance of stunting determinants and on data availability. A detailed definition of each indicator is provided in the online supplementary appendix table S1.
The degree to which these determinants are able to explain SES-related differences in undernutrition patterns was assessed with the help of linear probability models with age group-SES interaction terms (and their main effects) and by predicting the difference in the age group-specific stunting profile for the poorest and the richest quartile. Similarly, all previously mentioned determinants were included both as main effects and age group interaction terms to allow for age-specific effects. Moreover, given the potential for confounding at the regional or country level, all regression models were adjusted for survey-level fixed effects. We further adjusted for the urban versus rural location of households (which may potentially overlap with household SES), the number of children ever born to the mother, the sex of the child and whether the child is a twin. As for household SES, survey fixed effects and all control variables were included as main and interaction effects to guard against age-specific confounding. Finally, we present additional data on the bivariate association of maternal education and SES quartiles, because education has previously been described as a proxy for SES. 35 Since not all surveys featured the full range of covariates, we distinguished between a main analysis sample, which was used for descriptive analyses, including the full number of countries and children, and a reduced adjusted sample, which was used for the regression analyses only. In all cases, SEs were adjusted for clustering at the primary sampling unit level. In DHS, these were typically defined as enumeration areas used in a country's population census. 25 While presented results are unweighted in this article, the online supplementary file 1 provides an additional sensitivity analysis using sampling weights. Weights were rescaled to make their sum equal to the total population in 2016 for each country, such that each country entered the average with its global population share. A comparison of key sample statistics for the initial sample, the main analysis sample and the adjusted sample is presented in table 1. While the initial sample and the main analysis sample are similar with respect to the covered years and world regions, the adjusted sample contains slightly more recent data (no observations surveyed before 2005) and no observations from the Middle East and North Africa (compared with 8.3% in the initial sample). Nevertheless, key child-level, maternal-level and household-level statistics are similar across samples.
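A simplified sketch of the estimation set-up described above (statsmodels is substituted for the original software; column names are hypothetical and the full set of age-interacted controls is abbreviated):

```python
import statsmodels.formula.api as smf

# Linear probability model: stunting on age-group x SES-quartile interactions
# (with main effects), survey fixed effects and age-interacted controls, with
# standard errors clustered at the primary sampling unit.
fit = smf.ols(
    "stunted ~ C(age_group) * C(ses_quartile) + C(survey_id)"
    " + C(age_group) * (urban + births + female + twin)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["psu"]})

# Sensitivity analysis: rescale DHS weights so each country's weights sum to
# its 2016 population, giving each country its global population share.
df["weight"] = (
    df.groupby("country")["dhs_weight"].transform(lambda w: w / w.sum())
    * df["population_2016"]
)
```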
Results

Age group: 21-59 months
In children aged 21-59 months, we do not find (severe) stunting rates to notably exceed those measured in children of 20 months of age. Instead, both for the full sample and for the quartile-specific analysis, stunting rates tend to be smaller in children aged 4 years and older (40.4% (39.2 to 41.6) in month 21 compared with 35.9% (34.2 to 37.6) in month 59 in the full sample). Despite this pattern (with the exception of the richest quartile in the case of severe stunting), undernutrition prevalence in children aged 21-59 months is always larger than that observed in children shortly after birth. Moreover, the gap in stunting prevalence between poor and rich children observed in children of 20 months of age exists to a similar extent in those aged 21-59 months. Again, this pattern is confirmed when using population figures as sampling weights (online supplementary appendix figure S1).
Sensitivity to methodological choices
We investigate the sensitivity of these results to key methodological choices in the online appendix. In the online supplementary appendix figure S2, we show how predicted stunting rates would differ (compared with figure 2) if we excluded the 3.4% of children for whom stunting rates could only be calculated based on rounded age. Moreover, in the online supplementary appendix figure S3, we investigate changes in prevalence occurring when a 5% tolerance of missing values is applied for the calculation of the asset index rather than the 2% cut-off used for the main analysis. In both sensitivity tests, the resulting predicted prevalence deviates from the pattern shown in figure 2 by less than ±1 percentage point in all age groups and subsamples.
Variability across countries
In figure 3, we show predicted stunting prevalence for the full sample as well as by SES separately for low-income countries (LICs) and middle-income countries (MICs). While stunting prevalence is overall higher in LICs compared with MICs, the two country groups exhibit patterns that are similar to what is observed in the pooled sample. Importantly, in both LICs and MICs, differences between the poorest and the richest quartile are small in the first 5 months of life but are larger in children aged 20 months. Analogous results for severe stunting are provided in the online supplementary appendix figure S4.
We further divide the sample into six World Bank regions (East Asia and Pacific, Europe and Central Asia, Latin America and the Caribbean, Middle East and North Africa, South Asia, Sub-Saharan Africa) in the online supplementary appendix figures S5 to S10. As for the pooled sample, we find in all regions that stunting rates are lower during the first 5 months of life than in children aged 20-25 months. Finally, with the exception of East Asia and Pacific, where only a relatively small sample size (N=17 882) is available, we find stunting rates to be similar for the poorest and richest quartile in the first age group, with a clear subsequent divergence in stunting rates.
Differences between rich and poor households
A potential reason for the difference in stunting rates in children from poor and rich households is the presence or absence of determinants that are modifiable by nutrition-specific and nutrition-sensitive interventions. We present in table 2 differences in predicted stunting prevalence between the poorest and richest quartile by age group based on a reference model only adjusted for survey fixed effects, household location and the maternal-level and child-level characteristics mentioned in the Methods section (all interacted with age group), a model that additionally controls for determinants modifiable by nutrition-specific interventions and a model further adjusted for determinants modifiable by nutrition-sensitive interventions. Again, we find that differences in predicted stunting levels are small in the first age group but higher for older children regardless of model choice. Accounting for determinants modifiable by nutrition-specific interventions is associated with a moderate reduction in the difference between children from poor and rich households compared with the reference model. Overall, the absence or presence of these factors explains less than one-third of the gap between children from poor and rich households in all age groups. In contrast, when additionally adjusting for determinants modifiable by nutrition-sensitive interventions, we find that attenuation effects strictly increase for all age groups, reaching up to 59.2% in the age group 45 months to less than 50 months. Across age groups, maternal education is moderately associated with SES (Cramér's V=0.25; see online supplementary appendix table S4 for the full cross-tabulation).
Discussion
This study describes age and wealth patterns of stunting among 416 181 children based on DHS conducted in 72 countries. Similar to previous studies, 6 7 we find that, during the first 2 years of life, older children tend to exhibit much larger stunting rates than younger ones. In particular, we observe strong differences in undernutrition rates between children aged six and 20 months, while no notable differences exist between children younger than 6 months. Moreover, although the richest and poorest quartile perform similarly during the first 5 months, differences in stunting prevalence between children aged 6 and 20 months are substantially more pronounced for the poorest quartile. Despite differences in the extent and the exact onset of this divergence, the pattern is surprisingly robust across World Bank income groups and regions. Finally, we build on the framework of the Scaling-up Nutrition Movement 33 and show that adjusting for determinants modifiable by nutrition-specific interventions is associated with a small to moderate attenuation of the age-specific wealth quartile difference in stunting rates. In contrast, when additionally controlling for determinants modifiable by nutrition-sensitive interventions, we observe a much larger mitigation of stunting differentials in all age groups.
There is a broad consensus that the first 1000 days after conception constitute a critical period for the prevention of undernutrition. [3][4][5] While the overall age patterns identified by the present study support this notion, it is worth stressing that, given the important nuances in the age profile of stunting in children below the age of 2 years, more attention should be devoted to the health and living conditions occurring at various developmental stages rather than treating the first 1000 days as a uniform stage of development. Moreover, SES is a key moderator for the stunting-age pattern, as the differences in prevalence between children aged up to 6 months versus 20 months or older are substantially lower for the richest quartile than for the poorest quartile. Large SES-related differences in child undernutrition rates hence do not appear to be the result of intrauterine growth retardation but tend to develop later when children are directly exposed to the household's living conditions. Unfortunately, socioeconomic inequalities in stunting have been shown to be highly persistent across time, 36 and our regression-based results suggest that determinants modifiable by nutrition-specific interventions on their own are not associated with a substantial attenuation in stunting rate differentials. Instead, differences between the poorest and richest quartile were mitigated by up to 59.2% once we additionally accounted for determinants modifiable by nutrition-sensitive interventions. While such an exercise does not represent causal evidence, this result and previous evidence on age-pooled data 10 11 suggest that stunting is a complex phenomenon and it might require a multifactorial approach to overcome the high undernutrition rates observed in the socioeconomically disadvantaged.
Several limitations apply to this study. While DHS made use of standard protocols for anthropometric measurement in order to ensure cross-country comparability, an analysis of survey data from 52 countries collected between 2005 and 2014 revealed that quality differences exist across surveys, although no systematic patterns by World Bank region or income group were identified. 26 Moreover, as DHS do not contain information on income, we used household assets to derive socioeconomic status, which may have caused us to miss out on certain dimensions of relative poverty. With that said, since the pioneering work of Filmer and Pritchett, 32 who have shown that asset-based measurement can provide a valid proxy for household wealth in the absence of income or expenditure data, the use of asset indices has become widespread practice. A further limitation is the loss of observations in the sample deduction process, which limits the geographic representativeness of this study. In particular, no observations from the Middle East and North Africa were available for the analysis of determinants related to nutrition-specific and nutrition-sensitive interventions. Nevertheless, we showed that key characteristics on the child, maternal and household level stayed very similar despite the exclusion of missing values.
Moreover, we are constrained to the analysis of cross-sectional data. Hence, in order to be able to interpret the identified patterns as trends, we would need to assume that children in older age groups represent the future state of children who are currently younger. However, given that relatively few changes in the nutrition-related environment can be expected over the course of a maximum of 5 years, we consider this assumption plausible. The use of observational data further implies that the assessment of the attenuation effect of determinants modifiable by nutrition-specific vs nutrition-sensitive interventions does not necessarily represent causal mechanisms. Lastly, the choice of these determinants is limited by data constraints. While a previous study found dietary diversity to be important, 10 we did not construct a comparable measure given the varying availability of nutritional data across different DHS. This limitation also implies that it was not possible to account for exclusive breastfeeding without imposing strong assumptions.
Conclusion
We highlight important patterns of age and SES as moderators for stunting in children younger than 5 years. Our results show that the window of opportunity during the first 1000 days since conception is not a uniform stage of development but rather contains important nuances with respect to the exact timing of growth faltering, which does not appear to start before the sixth month of life, as well as to the performance of children from relatively poor and rich backgrounds. Studies analysing SES as a determinant of stunting need to take this heterogeneity into account, rather than pooling children into large age groups. Moreover, we argue that a stronger focus of the nutrition community on a multifactorial approach, building on both nutrition-specific and nutrition-sensitive interventions, may help to reduce age-specific stunting rates in children of low SES.
Contributors CB, SV and SVS jointly conceptualised the study. CB analysed the data and drafted the manuscript. CB, SV and SVS contributed to the interpretation of results and writing.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient consent for publication Not required.
Ethics approval For this type of study ethics approval is not required as all analyses have been conducted with publicly available secondary data.
Provenance and peer review Not commissioned; externally peer reviewed.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Beyond the scope and the glue: update on evaluation and management of gastric varices
Gastric varices are encountered less frequently than esophageal varices. Nonetheless, gastric variceal bleeding is more severe and associated with worse outcomes. Conventionally, gastric varices have been described based on the location and extent and endoscopic treatments offered based on these descriptions. With improved understanding of portal hypertension and the dynamic physiology of collateral circulation, gastric variceal classification has been refined to include inflow and outflow based hemodynamic pathways. These have led to an improvement in the management of gastric variceal disease through newer modalities of treatment such as endoscopic ultrasound-guided glue-coiling combination therapy and the emergence of highly effective endovascular treatments such as shunt and variceal complex embolization with or without transjugular intrahepatic portosystemic shunt (TIPS) placement in patients who are deemed ‘difficult’ to manage the traditional way. Furthermore, the decisions regarding TIPS and additional endovascular procedures in patients with gastric variceal bleeding have changed after the emergence of ‘portal hypertension theories’ of proximity, throughput, and recruitment. The hemodynamic classification, grounded on novel theories and its cognizance, can help in identifying patients at baseline, in whom conventional treatment could fail. In this exhaustive review, we discuss the conventional and hemodynamic diagnosis of gastric varices concerning new classifications; explore and illustrate new ‘portal hypertension theories’ of gastric variceal disease and corresponding management and shed light on current evidence-based treatments through a ‘new’ algorithmic approach, established on hemodynamic physiology of gastric varices.
Type 1 gastroesophageal varices (GOV) are the most common, representing 70% of all GV, followed by Type 2 GOV in 21%. The highest risk of bleeding is associated with Type 1 IGV followed by Type 2 GOV. Acute variceal bleeding is a severe complication of cirrhosis that can lead to death in one-third of affected patients at 6 weeks. Even though only 10-30% of variceal bleeds are related to GV, GV bleeding is associated with higher transfusion requirements, uncontrolled bleeding, rebleeding, and death. GVs bleed less frequently than esophageal varices but tend to bleed more severely. The severity of PH, as measured by the hepatic venous pressure gradient (HVPG), is useful in determining the risk of bleeding, rebleeding, and uncontrolled bleeding from esophageal varices. However, in GVs, the risk of bleeding is not entirely dependent on the degree of PH, but is more related to the size of the varices, the wall tension, and the presence of red color signs over the varix [5,6]. A thorough understanding of the anatomy and pathophysiology of pertinent collateral pathways is required to decide on the best possible treatment option(s) for bleeding from GVs, beyond current recommendations.
Endoscopic evaluation-based diagnosis and classification
Stadelmann, in 1913, described the formation of GVs in association with PH. Esophago-gastro-duodenoscopy, the gold standard test for diagnosing gastroesophageal varices, classifies them according to size as small (<5 mm) or large (>5 mm) [7]. Endoscopic ultrasonography (EUS) can additionally assess collateral pathway anatomy and identify perforating veins, which improves treatment response monitoring in real time [5,7,8]. Nonetheless, EUS is not recommended as the primary tool for assessment due to limited availability and the need for expertise. Capsule endoscopy in the grading and diagnosis of esophageal varices and GV has an accuracy of 90%, with pooled sensitivity and specificity of 83% and 85%, respectively [9,10]. However, it is limited to patients unwilling to undergo conventional invasive procedures. Liver stiffness <20 kPa along with a platelet count of >150,000 per microliter is associated with a <5% chance of having high-risk varices. However, this non-invasive measurement is not validated in GVs [11]. Initially, GV were classified into F1 (mild), F2 (moderate) and F3 (severe) 'forms' (Choi classification), and thereafter into those associated with splenic vein thrombosis and those associated with cirrhosis or its absence. Hoskins and Johnson in 1988 provided the first fully descriptive classification of GV, into three types, based on the relation and extension of GV with respect to the esophagus. Hashizume based variceal descriptions on the underlying vascular anatomy and the presence of red color signs. The Sarin classification is based on location, which aids in the choice of therapy. The Sarin GV classification is the most commonly followed and is also endorsed by the Baveno consensus [3,6,7]. Multiple other systems for describing GV came into being, which are of historical importance. These include the Iwase and Arakawa classifications, the Japanese Society for Portal Hypertension (JSPH) modification of the Hashizume classification, and the Italian Endoscopic Classification. Another simple classification differentiates GV into primary and secondary, the latter occurring after band ligation and eradication of esophageal varices (Additional file 1: Table 1) [5,7,8].
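The non-invasive rule quoted above reduces to a one-line check. The sketch below is a toy encoding of that rule, not a validated clinical tool, and, as the text notes, the rule itself is not validated for GVs:

```python
def low_risk_for_high_risk_varices(liver_stiffness_kpa: float, platelets_per_ul: int) -> bool:
    """Liver stiffness < 20 kPa with a platelet count > 150,000/uL implies
    a < 5% chance of high-risk varices, per the text (esophageal varices
    only; not validated in gastric varices)."""
    return liver_stiffness_kpa < 20 and platelets_per_ul > 150_000
```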
The relevance of collateral pathway anatomy in gastric varices
The GVs are generally described, and therapeutic decisions made, based on their location and relationship with esophageal varices. Understanding the complex GV system is important in deciding on therapeutic options beyond endoscopic interventions. In general, via hepatofugal pathways, GV drain into the systemic circulation through two types of collateral systems. These are the gastroesophageal system, between the left gastric vein and the azygous vein, and the gastrophrenic system, between the gastric veins in the posterosuperior gastric wall and the left inferior phrenic vein at the gastrophrenic ligament near the bare area of the stomach. In isolated splenic vein thrombosis, the collateral circulation pathways form in a hepatopetal manner [8].
The Type 1 GOV (of the Sarin classification) drains through the esophageal and paraesophageal collateral veins; Type 2 GOV through the inferior phrenic and esophageal veins; Type 1 IGV through the left inferior phrenic vein and Type 2 IGV, in sinistral PH, through the gastric veins. The afferent vein for Type 1 GOV is the anterior left gastric vein, while for Type 2 GOV, it is the short gastric and posterior gastric veins; in both, the efferents are the esophageal and paraesophageal veins. In IGV, the afferent is a gastric or splenorenal shunt and the inferior phrenic vein, which terminate in the inferior vena cava [7,12]. The GV also drains into the splenorenal shunt through the gonadal vein, or into the gastrocaval shunt into the inferior vena cava through the inferior phrenic or pericardiophrenic vein. IGVs drain through a hypertrophied inferior phrenic vein and the left renal vein at the left adrenal vein in 85% [12,13]. These detailed collateral and portosystemic shunt descriptions paved the way for hemodynamic classifications that provide deeper anatomical insights, on which interventional radiology-based management decisions may be adeptly chosen for bleeding that is difficult to control endoscopically, a cognizance lacking in the original, standard classification systems (Fig. 1).
Cross-sectional imaging-based evaluation of the gastric variceal complex
Spontaneous portosystemic shunts (SPSS) are large collaterals that develop between the portal and systemic venous circulation and that hypertrophy and enlarge to accommodate high blood volume and flow with increasing severity of PH. These can be divided into left-sided and right-sided or central shunts. Left-sided shunts are those that are present to the left of the midline or to the left of the splenic confluence and mesenteric veins. The most common left-sided shunt is the gastrorenal shunt, which is present in 10% of patients with PH but is notable in 85% with GVs [14,15]. The gastric variceal system consists of the gastrorenal shunt, the central part that is the gastric varix proper, and the associated afferent portal venous collateral feeder vessels. The variceal complex consists of the afferent limb (portal inflow), a central portion (varix proper), and an efferent limb (systemic outflow). The portal inflow feeder vessels do not directly communicate with the gastric varix proper and take part in the formation of varices outside the gastric wall, called para-/extra-gastric or false GV. The true gastric varix is the intragastric submucosal portion that bleeds into the lumen. The intragastric and para-gastric varices together form the central portion of the gastric variceal complex. The extra-gastric and intragastric components may communicate with each other through a single or multiple perforator vein(s). The dominance anatomy of the portal inflow vessels is of great importance. In some patients, the dominant afferent vessel is the coronary or left gastric vein, while in others, it is the posterior gastric vein. In a highly complex gastric variceal system, triple dominance can be noted with multiple feeder systems (afferent limbs). When the short gastric veins become dominant afferent vessels in GV formation (usually in splenic or PV thrombosis), the variceal complex extends over the fundus, body, cardia, antrum and gastric outlets. This corresponds to the 'diffuse type' of GV as per the Iwase and Arakawa classification, which is absent from the Sarin classification [15,16]. Verma and colleagues recently reported on the twenty-year experience of diagnosis and treatment of GV at a large tertiary university center, in which the authors described Type 3 GOV (esophageal varices with gastric varices extending over the body, antrum, and pylorus) in 10.5% of patients, previously described by Iwase and Arakawa [17]. The efferent or outflow from the gastric varix proper can be as simple as a single gastrorenal shunt or may become complicated with multiple outflow channels due to the involvement of the inferior phrenic or pericardiophrenic veins. As the severity of PH increases, the shunt flow increases, and the shunt grows and travels caudally and posteriorly, reaching the retroperitoneal and other regions, sometimes undergoing duplication at the site of drainage. Understanding the complex anatomy of the GV system and the associated hemodynamic classifications is significant in planning the multitude of management options for bleeding GV, such as endoscopic cyanoacrylate therapy only, shunt occlusion with or without variceal embolization, endoscopic ultrasound-guided coiling or transjugular intrahepatic portosystemic shunt placement [7,8,[14][15][16].
Hemodynamic classification of gastric varices
Recent classifications of GVs are based on the hemodynamics of afferent and efferent flow rather than on location and extent. These classifications, related to afferent and efferent circulation, improve therapeutic options beyond conventional endoscopy-based treatments (Figs. 2, 3).
Based on afferent/inflow hemodynamics
Kiyosue classification divides GV into three types. In Type 1, a single afferent vein supplies varix; in Type 2, multiple afferent veins supply the GV and in Type 3, single or multiple afferent veins supply the GV through a shunt (indirectly). The commonest afferent vein in Type 1 is the left gastric vein or coronary vein and in Type 2, the left gastric vein and posterior gastric vein. In the Saad-Caldwell classification based on dominance, Type 1 GV are associated with a single afferent vein (left gastric vein); in Type 2, the afferent vein is the posterior gastric vein or the short gastric veins; in Type 3, equal dominance is noted between multiple afferent veins, and in Type 4, multiple afferent veins form in presence of splenic vein thrombosis (Table 1) [15,16,18].
Based on efferent/outflow hemodynamics
In the Kiyosue classification of the gastric variceal system based on the outflow, four types are described. In Type A, the GVs are associated with a single draining shunt, most commonly the gastrorenal shunt. In Type B, drainage occurs through the gastrorenal shunt and associated multiple collateral veins. Type C GVs are associated with multiple shunts without additional collaterals. In Type D, multiple collateral veins are present without large shunts.
In the Hirota-BORV classification, the descriptions are similar to Kiyosue (Type A-D) but with the addition of Type E, in which the gastrorenal shunt is too large for transvenous retrograde balloon occlusion. In such situation, an antegrade approach is more feasible for shunt and variceal embolization (Table 2) [15,16].
Based on balloon occluded retrograde transvenography
The Hirota classification is specifically based on real-time features of angiographic opacification of gastric varices (grades 1-5). In Grade 1, GVs are well opacified without evidence of collateral circulation, while in Grade 5, opacification of the varices occurs only minimally due to the presence of a large shunt and rapid volume run-off. In the Fukuda classification, Type 1 includes GVs associated with a dominant left gastric vein, while in Type 2, the left gastric vein supplies the esophageal component of the variceal complex while the posterior or short gastric veins supply the gastric component. Type 3 includes both left- and right-sided feeder vein dominant gastric variceal complexes, while Type 4 is associated with a purely right-sided dominant supply. Matsumoto and colleagues classified GVs based on the predicted aggravation of esophageal varices after embolization procedures. In Matsumoto Type 1, there is portosystemic flow in the gastrorenal shunt, while in Type 2, portosystemic shunt flow is absent. In both, subtype A is associated with hepatopetal flow, while subtype B is associated with hepatofugal flow in the left gastric vein. Worsening of esophageal varices is associated with Matsumoto Type 1B, in which, after shunt embolization, backward flow into the left gastric vein results in increasing grades of esophageal varices (Additional file 1: Table 2) [18,19]. The clinical significance of hemodynamics (inflow and outflow) based classification and the associated treatment of GVs during shunt occlusion procedures, beyond endoscopic management, is shown in Fig. 4.
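To make the correspondence between outflow type and management concrete, the Kiyosue outflow types can be encoded as a simple lookup. This is a teaching sketch of the associations stated in the text and the accompanying table, not a clinical decision tool:

```python
# Management hints per Kiyosue outflow type, as discussed in the text.
KIYOSUE_OUTFLOW_MANAGEMENT = {
    "A": "single gastrorenal shunt: shunt occlusion alone may suffice",
    "B": "shunt plus multiple collaterals: TIPS placement better obliterates all pathways",
    "C": "multiple shunts: TIPS with embolization of large shunts in ideal candidates",
    "D": "multiple collaterals, no large shunt: TIPS if endoscopic therapy fails",
}

def outflow_management_hint(outflow_type: str) -> str:
    """Return the management hint for a Kiyosue outflow type (A-D)."""
    return KIYOSUE_OUTFLOW_MANAGEMENT.get(outflow_type.upper(), "unclassified outflow type")
```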
Primary prophylaxis of gastric variceal bleeding
In patients with GVs who have not bled, similar to the prevention of acute variceal bleeding from esophageal varices, the use of nonselective beta-blockers has been suggested. The role of endoscopic cyanoacrylate glue injection and endoscopic band ligation (EBL) as options for primary prophylaxis in gastroesophageal varices remains unclear. In a study conducted at a single center in India, endoscopic glue injection was found to be associated with lower bleeding and mortality compared to nonselective beta-blockers [20]. Kang et al. demonstrated the long-term efficacy of prophylactic cyanoacrylate glue therapy in 27 patients with high-risk GVs, with a 6-month cumulative survival of 75% [21]. The Baveno VI consensus and the American Association for the Study of Liver Diseases recommend the use of non-selective beta-blockers [22]. Bhat and colleagues studied the primary prophylaxis of gastric variceal bleeding using EUS-guided glue injection and found that only 5% bled at 449 days of follow-up. Further studies on EUS-based therapy for prevention of bleeding in GV are lacking [23]. In the study by Koziel et al. on EUS-guided obliteration of GVs using vascular coils only or coils with CYA injections for primary and secondary prophylaxis of GV haemorrhage, technical success was 94% without serious complications [24]. Nonetheless, this was a small series with retrospective methodology and inherent bias. Primary TIPS is not recommended for prevention of GV bleeding. Balloon-retrograde transvenous occlusion (BRTO) and its variant techniques such as coil-assisted retrograde transvenous occlusion (CARTO), plug-assisted retrograde transvenous occlusion (PARTO), balloon antegrade transvenous occlusion (BATO) and novel techniques described by our group such as the 'direct' (D)-PARTO or direct coil-assisted antegrade transvenous occlusion (CAATO) have not been evaluated in high-quality randomized trials for prevention of first gastric variceal bleeding and hence cannot be recommended as primary prophylaxis [25].

Table 1 Hemodynamic classification of gastric varices based on portal outflow/efferent system
Classification system: Kiyosue classification. Clinical relevance: In Type A, shunt occlusion as the treatment modality would suffice to control variceal bleeding not controlled with endoscopic therapy. In Type B, the feasibility of shunt occlusion might be lower and hence transjugular intrahepatic portosystemic shunt placement is a better option to obliterate all of the collateral pathways. In Type C, transjugular intrahepatic portosystemic shunt placement along with embolization of large portosystemic shunts could be the best option in ideal candidates. In Type D, in the presence of endoscopic failure, transjugular intrahepatic portosystemic shunt placement could become the best option.
Management of acute gastric variceal bleeding and secondary prophylaxis
On diagnostic endoscopy, gastric variceal bleeding is confirmed by active bleeding from a visualized varix, the presence of an adherent clot or stigmata of recent haemorrhage over the GV, or recurrent bleeding in a patient with PH and GV in the absence of other identifiable sources of bleeding [26].
The general measures for initial optimization of clinical status to prevent further deterioration due to acute gastric variceal bleeding are similar to those followed in esophageal variceal bleeding. These include airway protection through endotracheal intubation to prevent aspiration, maintaining a minimum systolic blood pressure of 70 mm Hg for performing urgent diagnostic and therapeutic endoscopy, and the judicious use of packed red cells for target hemoglobin levels between 7 and 8 g/dL (21% haematocrit). Volume expansion and coagulation correction using fresh frozen plasma or plasma expanders lead to severe adverse clinical events in patients with cirrhosis and variceal bleeding and must be avoided. A conventional dose of two fresh frozen plasma units can only replace 10% of the clotting factors. Large-volume coagulation correction can lead to worsening PH, sepsis, sinister systemic immunomodulation, and rebleeding. In cirrhosis, a minimum platelet count of 56,000/µL corresponds to adequate thrombin generation and is the ideal target for endoscopic interventions. Similarly, maintaining a fibrinogen level >120 mg/dL also improves haemostatic effects [27][28][29]. Although evidence on the use of vasoactive agents for the reduction in portal pressure and control of rebleeding specific to gastric variceal bleeding is unavailable in the literature, the same line of supportive therapy as for esophageal variceal bleeding is currently recommended. Wang et al., in their systematic review and meta-analysis, showed that there was no difference between vasopressin/terlipressin and somatostatin/octreotide in the prevention of re-bleeding after the initial treatment of bleeding esophageal varices [30]. Antibiotic prophylaxis and lactulose for the prevention of hepatic encephalopathy, along with other supportive measures that include varying degrees of organ support depending on the severity of systemic dysfunction, are mandated in GV bleeding [31]. In a patient with active bleeding that precludes endoscopic treatment, temporizing measures such as intragastric balloon tamponade can be utilized. These devices can only be placed for a maximum of 24 h, within which definitive treatment has to be carried out. Given its large volume capacity, a Linton-Nachlas tube is considered ideal for gastric variceal bleeding [32].

Table (continued) Classification system: Hirota classification. Clinical relevance: Only endoscopic-guided or endoscopic ultrasound-guided therapy may help in obliteration of varices of Type 1 and 2; transjugular intrahepatic portosystemic shunt placement is ideal for Type 3 and 4 related bleeding; transjugular intrahepatic portosystemic shunt placement and shunt embolization is ideal in Type 5. Grade 1: gastric varices well opacified without any collateral vein evidence. Grade 2: contrast opacification in gastric varices for ≥3 min in the presence of small and few collateral veins. Grade 3: contrast opacification of gastric varices partial and disappears within 3 min, with medium to large collateral veins which are few in number. Grade 4: no contrast opacification of gastric varices and presence of many large collaterals. Grade 5: shunt cannot be occluded because of the very large size of the shunt and rapid blood flow.
Endoscopic band ligation
Endoscopic band ligation (EBL) is the initial treatment of choice in the management of acute esophageal variceal bleeding. Initially, several small patient series demonstrated that EBL was safe and effective for bleeding GV. Two randomized controlled trials comparing EBL to cyanoacrylate glue therapy showed that initial haemostasis was lower and rebleeding rates higher (63% and 72% at 2 and 3 years respectively) in the former. In the absence of cyanoacrylate glue, EBL can be considered in patients with Type 1 GOV bleeding for initial control of bleeding until further definitive management can be undertaken [33,34].

Fig. 4 Clinical significance of afferent venous inflow (a-c) and outflow (d-g) of gastric varices during shunt embolization procedures. In Type 1 gastric varices with ideal anatomy for occlusion, after sclerosant injection the varices fill fully and are completely obliterated (a1, a2); in Type 2 varices, with multiple afferents, the sclerosant tends to flow toward the low-pressure gastric collateral, increasing the risk of portal vein thrombosis (b1, b2, arrows); in Type 3, the sclerosant tends to flow in the direction of the large shunt (c1, c2, arrows); in Type A (d1, d2), sclerosant completely fills the varices without run-off; in Type B, the sclerosant flows into the systemic veins (e1, arrows) and hence the associated high-flow collateral vein needs additional gelfoam occlusion (e2, arrows) before sclerosant injection; in Type C, in the presence of both gastrocaval (f1, arrow) and gastrorenal shunts, the sclerosant tends to flow into a systemic vein through the second shunt, hence the outflow shunt is occluded first with gelfoam (f2, arrow); in Type D gastric varices (g), without draining veins, transjugular intrahepatic portosystemic shunt placement is the ideal choice for complete variceal complex obliteration; (h) classical pre- (h1, h2) and post-procedure (h3, h4) computed tomography demonstration of obliteration of gastric varices associated with a single large efferent shunt. Illustrations in this figure were created by the listed author (Sasidharan Rajesh).
Sclerotherapy
Injection sclerotherapy for GV has been demonstrated to be less effective than what is noted with esophageal varices. The agents used for sclerotherapy include ethanolamine oleate, sodium tetradecyl, glucose solutions, and acetic acid. High blood flow within the GV results in the early flush of injected sclerosants, reducing its efficacy. In such situations, larger volumes of injection can be contemplated. However, in reality, it leads to adverse events such as febrile illness, severe retrosternal discomfort, ulcerations, mediastinitis, embolization in the presence of large portosystemic shunts and perforations that can result in approximately 50% mortality. The rebleeding rates with sclerotherapy alone can be as high as 90%, of which 50% bleeds are secondary to injection site ulcerations. Sclerotherapy has greater success for control of bleeding and prevention of rebleeding in esophageal variceal disease [35,36]. Currently, EBL or cyanoacrylate glue injection is considered the treatment of choice for Type 1 GOV bleeding and cyanoacrylate glue injection for Type 2 GOV and isolated GV. Some authors have used EBL along with sclerotherapy for management of Type 1 GOV bleeding with an injection of 1 mL of sclerosant above the site intended for band ligation. The success rate for haemostasis with this approach is close to 90% with the risk of rebleeding in 33%. EBL should only be performed in patients with bleeding from small Type 1 GOV in which both the mucosal and contralateral wall of the vessel undergoes complete suction into the ligator, without which the likelihood of band detachment is high leading to ulceration of the overlying vessel and catastrophic secondary bleeding [35,36].
Endoscopic cyanoacrylate glue therapy
N-butyl-2-cyanoacrylate (NBC) is a monomeric tissue adhesive that rapidly polymerizes on contact with blood, leading to hardening of the varix, cast formation, and obturation. NBC is the most commonly employed agent for glue therapy and undergoes polymerization within 20 s of contact with blood inside the variceal lumen. Lipiodol (ethiodized oil composed of iodine combined with ethyl esters of fatty acids of poppyseed oil, primarily as ethyl monoiodostearate and ethyl di-iodostearate) or normal saline is sometimes used to avoid occlusion of the endoscope channel. A 1:1 mixture is usually recommended and can reduce the risk of embolization. It is recommended that an endoscope with a 3.7 mm working channel be utilized for ease of glue administration. Some newer glue products such as 2-octyl-cyanoacrylate and NBC mixed with methacryloyloxy-sulfolane do not require dilutional agents due to their slow polymerization time [37,38]. In a Cochrane Database Review, a meta-analysis of three randomized controlled trials comparing cyanoacrylate glue therapy versus EBL demonstrated both therapies to be effective for control of bleeding, but significantly lower rates of rebleeding were noted with the former. These studies included mostly Type 1 GOV bleeds and utilized NBC [39].
Endoscopic thrombin injection and inorganic haemostatic powder spray
Another treatment modality infrequently used in gastric variceal bleed control is thrombin injection, in which conversion of fibrinogen to fibrin, with additional platelet function augmentation, enhances clot formation within the bleeding varix. This is an attractive alternative to glue therapy where expertise is unavailable and has fewer side effects and systemic complications. Five millilitres of thrombin have the potency to coagulate one litre of blood in less than a minute. Even though thrombin treatment was found beneficial in controlling bleeding from GV, especially Type 2 GOV, in small single-center series, high-quality studies were lacking [40][41][42][43]. Recently, Lo and colleagues, in a prospective randomized trial, showed that endoscopic thrombin injection was similar to glue injection in achieving successful haemostasis of acute GV bleeding but with a higher incidence of complications associated with the latter [44]. A few reports demonstrating the use of the inorganic absorbent powder TC-325 haemostatic spray (Hemospray®, Cook Medical, IN, USA) in patients with refractory gastric variceal bleeding after the failure of glue injection therapy have been published in the literature. The issue with haemostatic powder spray is that it can act only in the presence of active bleeding during endoscopy [45].
Endoscopic ultrasound-guided therapy for gastric varices
EUS color Doppler can help distinguish GV from gastrointestinal tumors and prominent gastric folds and allows real-time confirmation of GV obliteration through precise identification of perforating feeder vessels and accurate delivery of tissue adhesive, decreasing the amount of glue injected and reducing the risk of embolization (Fig. 5). Romero-Castro et al. performed a proof-of-concept study on EUS-guided glue therapy for bleeding GV and utilized lipiodol to accurately localize feeder vessels before glue use [46]. Lee et al. showed that the late rebleeding rate beyond 48 h was significantly lower in patients with GV bleeding receiving EUS-guided glue injections every 2 weeks until eradication [47]. In another single-center study, 90% of patients experienced complete haemostasis after glue injection into the afferent vessels confirmed on color Doppler, without rebleeding events in the short term [48].
EUS-guided coiling of GV was shown to enhance haemostasis in multiple series. The metal coils, made of stainless steel with attached synthetic fibres, induce clot formation and thrombosis of the varix. Usually, a 19-G access needle is utilized, but 22-G needles for the deployment of smaller coils are also available. Levy and colleagues were the first to report on EUS-guided coiling of ectopic GV. A multicenter cohort study by Romero-Castro and colleagues demonstrated no significant differences between glue injection and coiling for haemostasis of bleeding GV at 180 days. Nevertheless, the mean endoscopic session time and the number of sessions required for variceal obturation were lower in patients receiving EUS-guided coiling [49,50]. The combined use of EUS-guided coil placement along with cyanoacrylate glue injection results in en masse 'scaffold' formation, which is associated with very efficient control of bleeding and a reduction in the rate of rebleeding. Combined EUS-guided therapy achieved gastric variceal eradication in 96% of treated patients in a single sitting, with only 16% experiencing rebleeding over a follow-up period of 6 months and no minor or major adverse events [51]. Similar findings were demonstrated by Bhat et al. in their study of 100 patients; however, adverse events in the form of pulmonary embolism and self-limited abdominal pain occurred in 5% [52]. A recently performed systematic review and meta-analysis showed that EUS combination therapy with coil embolization and glue injection was the preferred strategy for the treatment of GV over EUS-based monotherapy [53].

Endovascular therapy for bleeding gastric varices

The role of TIPS in the management of bleeding from esophageal varices is well documented. Even though TIPS can promote haemostasis in acute GV bleeding, varices can persist and bleed at lower portal pressures than esophageal varices. Previous retrospective studies have shown that in patients with GV haemorrhage undergoing TIPS placement in the absence of adjuvant variceal embolotherapy, the GV remained patent in 65%, with rebleeding in 27% and 90-day mortality of 15%. Another study also reported that 50% of patients post TIPS had persistence of GV, with 27% rebleed rates [54,55]. A meta-analysis comparing TIPS to endoscopic variceal sclerotherapy (EVS) in the management of GV bleeding in terms of rebleeding, hepatic encephalopathy, and survival demonstrated improved benefits of TIPS in the prevention of GV rebleeding, associated with an increased risk of encephalopathy and comparable survival between study groups [56].

Various theories have contemplated the ineffectiveness of TIPS alone for complete control of bleeding from GV. The 'proximity' theory states that esophageal varices are well decompressed after TIPS since the left gastric vein supplying the varices is small and close enough to benefit from decompression through shunt creation. The GV, on the other hand, are farther away, larger, and associated with multiple afferents depending on the collateral anatomy of the variceal complex. As per the 'throughput' theory, low-pressure shunts from large-calibre inflow and outflow vessels associated with GV compete with and effectively decompress the TIPS stent, leading to the persistence of varices. The 'recruitment' theory states that new afferent vessels form after treatment of a gastric variceal system post TIPS because of the complexity of afferent and efferent flow pathways, not all of which undergo decompression, or which undergo only partial embolization. In such situations, shunt occlusion and TIPS may be more effective than TIPS alone [57,58].
Retrograde or antegrade transvenous embolization of gastric varices

The American College of Radiology Appropriateness Criteria Committee on interventional radiology recently recognized BRTO as an alternative to TIPS in specific clinical situations for the treatment of GV. As per current conservative practice, BRTO is reserved for those patients who are ineligible for TIPS. However, with improvements in the understanding of the hemodynamic physiology associated with variceal disease, this has changed to incorporate a combination of endovascular therapies. A meta-analysis of post-procedure outcomes in 1016 patients who underwent BRTO for the management of bleeding GV demonstrated technical success, i.e., complete thrombosis of the GV on short-term follow-up imaging and control of active bleeding, in 96.4% of patients. Absence of rebleeding and no bleeding in high-risk GV was notable in 97.3% on follow-up. However, most studies were retrospective in nature and included patients who underwent primary prophylactic BRTO for high-risk GV [59]. In another meta-analysis of clinical outcomes in GV bleeding among 353 patients undergoing TIPS (n = 143) or BRTO (n = 210), no significant differences were notable with respect to technical success, haemostasis, and complication rates between the two treatments. Nevertheless, rebleeding and hepatic encephalopathy were significantly less frequent in those who underwent BRTO [60]. Adverse events associated with conventional BRTO include fever, chest pain, gastrointestinal symptoms, haemoglobinuria, ascites, and pleural effusion. It was shown that the occlusion of a large gastrorenal shunt could increase the hepatic venous pressure gradient by up to 44% from baseline. BRTO was found to aggravate pre-existing esophageal varices (in 30 to 68% of cases), leading to variceal bleeding, although associated deaths have not been reported. In this context, a pre-shunt-occlusion endoscopy and prophylactic band ligation of large or high-risk esophageal varices are prudent.
In some patients with GV bleeding, a combination of endovascular procedures could be more efficacious than a single treatment, depending on the variceal collateral pathway anatomy. For example, as per the afferent flow classification, patients with Kiyosue type 1 GV can be easily managed with shunt embolization alone. In contrast, TIPS placement would benefit those patients with GV and multiple associated collateral afferents in the absence of a dominant shunt (Type 2 of the Kiyosue classification). Alternatively, in patients with afferent and efferent shunts as well as multiple collaterals (such as Kiyosue or Saad-Caldwell Type C2), a combination of TIPS and shunt embolization could be more beneficial. Shunt embolization along with TIPS placement negates the high flow through the shunt, reduces rebleeding rates, improves TIPS efficacy, shunt patency, and flow, and decreases the incidence of hepatic encephalopathy [32,58,59]. In patients with large portosystemic shunts, it is not uncommon to notice an attenuated portal vein that is difficult to cannulate for the TIPS procedure. Shunt embolization improves portal vein inflow and increases portal vein diameter, making a technically challenging TIPS procedure far easier to perform. A combination of multiple embolization techniques, such as inflow modulation through coils or balloon occlusion followed by sclerosant injection and outflow modulation utilizing a plug, can lead to complete embolization of the variceal system with a reduction in sclerosant migration to untargeted regions. There are no published multicenter series or randomized trials on TIPS combined with shunt embolization procedures. Saad and colleagues reported outcomes in 36 patients undergoing BRTO for gastric variceal bleeding, of whom 9 underwent simultaneous TIPS placement. The ascites- and hydrothorax-free rates for BRTO versus BRTO + TIPS at six months and one year were 58% and 29% compared to 100% and 100%, respectively. A significant reduction in recurrence of haemorrhage was also noted in the combination group, demonstrating that TIPS mitigated the portal hypertension (PH) burden developing after BRTO. Another prospective randomized controlled trial of TIPS alone versus TIPS with adjunctive left gastric vein embolization found a significant reduction in the 180-day overall rebleeding rate in the embolization group [61,62]. In a meta-analysis that compared the incidence of shunt dysfunction, variceal rebleeding, encephalopathy, and death between patients treated with TIPS alone and those treated with TIPS combined with variceal embolization, it was shown that variceal embolization during the TIPS procedure improved the prevention of rebleeding, but no significant differences were identified concerning shunt dysfunction, encephalopathy, or mortality [63]. Thus, the treatment of gastric variceal bleeding has evolved through the years and now extends beyond standard recommendations to better suit the individual patient, guided by the hemodynamic classification and with reasonable control of portal hypertensive complications. An algorithm for treatment decisions regarding gastric variceal bleeding is shown in Fig. 6.
Conclusion
Gastric variceal haemorrhage is associated with higher rebleeding rates and mortality than esophageal variceal bleeding. Endoscopic cyanoacrylate glue therapy is the current standard recommendation for the management of gastric variceal bleeding. However, with a better understanding of the anatomic and hemodynamic components of the gastric variceal system, advanced options for improving clinical outcomes are evolving. These include EUS-assisted combination approaches and multiple endovascular techniques, including TIPS and shunt embolization or their combinations, that can be offered to patients depending on the underlying liver disease severity, collateral pathway anatomy, affordability, and availability of technical expertise.
Traceable calibration system for non-conventional current sensors with analogue or digital output
— This paper describes the setup of a traceable calibration system for conventional or non-conventional current sensors, including sensors with digital output. The system was recently developed at PTB within the frame of the European project "FutureGrid II". Details of the system components are presented. The absolute phase errors of the two-channel generator relative to the pulse-per-second time reference of the global positioning system can be configured to almost zero. The accuracies of the reference current transformers are within ±10 µA/A and µrad at power frequency. The sampled value receiver box is first validated for the sample rate of 4 kHz according to the IEC standard 61869-9 [1].
I. INTRODUCTION

Digitalization is at the core of the future electrical power grid. The instrumentation in power grid substations is moving from an "iron and copper" state towards a more multifaceted and intelligent grid, where stability under increasingly complex and challenging conditions is required for real-time capable control and monitoring systems. The European project "FutureGrid II" is committed to developing new techniques and calibration services, with special emphasis on accurate timing, for measurement equipment with digital input and output according to the IEC 61850-9-2 protocol [2]. Such services are not yet widely available in national metrology institutes worldwide [3]. In general, instrumentation in electricity transmission and distribution systems is responsible for monitoring, from general metering to protection and grid diagnostics. Intensive research and development of new technologies for power generation, e.g., renewable technologies such as solar, wind, hydro, geothermal, and ocean-thermal conversion plants, intensify the need for supervision of Power Quality (PQ) and Phasor Measurement Units (PMUs) [4]. In this context, the concept of digital instrumentation, i.e., instrument transformers (ITs) associated with a merging unit (MU) or stand-alone merging unit (SAMU) [5], emerges. The evolution towards an electrical power grid with digital instrumentation is in full swing and poses a wide range of challenges for all stakeholders. Investments in innovative technologies today create future-proof power grids characterized by reliability, efficiency, and sustainability [6], [7].
The FutureGrid II project contributes to long-term research and innovation activities, the European Union's energy transition, and the modernisation of the electricity grid. One of the objectives of FutureGrid II is to develop new test systems for the dynamic characterisation of ITs for PQ measurements and to establish the necessary metrological infrastructure and standards. The calibration systems support dynamic testing of emerging non-conventional transformers, i.e., analogue and digital ITs, including tests with typical waveforms associated with PQ and synchronised phasor measurements. In this paper, the setup of a calibration system for rated currents up to 2 kA is introduced for any kind of analogue current sensor (conventional or non-conventional current transformers) or current transformers (CTs) with digital output, including the associated MU/SAMU.
II. SETUP OF THE CALIBRATION SYSTEM FOR THE CTS
The setup of the calibration system for the current transformers is shown in Fig. 1. The calibration system is appropriate for any kind of analogue current sensor (conventional or non-conventional CTs) or CTs with digital output, which include the associated MU/SAMU. The calibration system is mainly made up of a high current generation system (marked as a red block), a set of analogue reference CTs (marked as a green block) with associated precision resistors [8], and a precision 2-channel measuring system (marked as a purple block) [9], [10]. The synchronization signals (marked as "Sync." in blue) and the SV receiver box (shown as a grey block) are necessary when the output of the device under test (DUT) is a sampled value (SV) data stream.
The high current generation system consists of a programmable two-channel arbitrary waveform generator, a lowpass filter, a transconductance power amplifier, and the high current generating transformers. Firstly, the arbitrary waveform generator, e.g., the Agilent 33500B Series with a resolution of 16 bit, generates different test waveforms, which reproduce PQ phenomena. The lowpass filter is used to limit the slew rate and the bandwidth of the generated signal with adjustable amplitudes. The generated signal is in turn amplified by the transconductance power amplifier (up to 270 Vrms / 70 Arms, DC to 15 kHz) and by the high current generation transformers with several rated current ranges from 5 A to 1 kA. The laboratory generation capabilities were tested with frequencies ranging from 50 Hz to 5 kHz. Sinusoidal waveforms up to 2 kA, dual-tone or multi-tone waveforms up to 2 kA, and amplitude-modulated waveforms were generated. The limitation at frequencies above 500 Hz is caused by the internal stray inductance of the high current generation transformer.
Synchronization plays a significant role in the phase measurement processes of the SV-based devices and in the digital signal processing between different devices within the substation. The synchronization signals can be obtained from two laboratory global positioning system (GPS) receivers: the "Meinberg GPS 162" and the "NI-PXI-6683". The synchronization signals are regarded as the time references for the calibration systems and are transmitted as 10 MHz, pulse per second (PPS), and IEEE 1588-2008 (PTPv2) protocol signals. Besides, the White Rabbit LANs [11], which are completely Ethernet-based networks, are used for time data transmission and synchronization in the laboratory with sub-nanosecond precision.
The deduction of the calibration error for the current transformers is split into two parts. For conventional CT calibrations, the DUT is regarded as an analogue CT. The complex voltage ratio ΓXN of the measuring system, ΓXN = UX / UN, is measured by a digitizer and two calibrated shunts: ZN and ZX. Using the ratio FIU,N of the reference current-to-voltage (C-to-V) transformer, FIU,N = UN / Iin = UN / Iout · Iout / Iin = ZN · FI,N, and the ratio FIU,X of the DUT C-to-V transformer, FIU,X = UX / Iin = UX / Iout · Iout / Iin = ZX · FI,X, the current ratio FI,X of the DUT CT can be calculated as follows:

FI,X = ΓXN · (ZN / ZX) · FI,N.  (1)

Besides, FI,X for a CT (DUT) can be defined as

FI,X = (1 + εi) · e^(jδi) / Kn,  (2)

where Kn is the transformer ratio of the rated primary and secondary currents, Kn = IP,r / IS,r, and εi and δi represent the ratio error and the phase error of the CT. By combining (1) and (2), the errors of the analogue CT are deduced.
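As an illustration, a minimal Python sketch (hypothetical variable names, not the authors' LabVIEW code) that combines (1) and (2) to recover the ratio and phase errors of an analogue CT:

```python
import numpy as np

def analogue_ct_errors(gamma_xn, z_n, z_x, f_in, k_n):
    """Errors of an analogue CT, combining (1) and (2).

    gamma_xn : complex voltage ratio UX / UN from the digitizer
    z_n, z_x : complex impedances of the calibrated shunts
    f_in     : complex ratio FI,N of the reference CT
    k_n      : rated transformation ratio Kn = IP,r / IS,r
    """
    f_ix = gamma_xn * (z_n / z_x) * f_in   # current ratio of the DUT, eq. (1)
    eps_i = abs(f_ix) * k_n - 1.0          # ratio error from eq. (2)
    delta_i = np.angle(f_ix)               # phase error in rad
    return eps_i, delta_i
```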
For a digital instrument transformer, the ratio FI,X can be expressed by Imu / Iin (or equivalently Umu / Uin). Imu represents the complex fundamental rms current that is calculated from the Discrete Fourier Transform (DFT) of the SVs. Iin refers to the primary current measured by means of the reference CT FI,N, the highly accurate shunt ZN, and the measured voltage UN of the digitiser. The measured complex primary current is determined according to Iin = UN / (ZN · FI,N). In order to relate the two measurements, a sinusoidal signal Uref is generated as the phase reference by the waveform generator, programmed without any phase offset (arg{Uref} = 0°) at channel CH I. This phase reference, which is synchronised to the GPS PPS signal, is connected to the digitiser channel UX (UX = Uref). The complex voltage ratio ΓXN measured by the digitizer is then ΓXN = UX / UN = Uref / (Iin · ZN · FI,N). The ratio error εi of the digital DUT can be calculated as follows:

εi = (|Imu| − |Iin|) / |Iin|,  (3)

where the rms current |Iin| is determined by |Iin| = |Uref| / (|ΓXN| · |ZN| · |FI,N|). The phase error δi of the digital DUT can be calculated as

δi = arg{Imu} − arg{Iin},  (4)

where the absolute phase of the complex current arg{Iin} is determined by arg{Iin} = arg{Uref} − (arg{ΓXN} + arg{ZN} + arg{FI,N}).
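Similarly, a short sketch of the error deduction (3) and (4) for a digital DUT, assuming the complex quantities ΓXN, ZN, FI,N (from prior calibration) and Imu (from the DFT of the SVs) are available:

```python
import numpy as np

def digital_dut_errors(gamma_xn, z_n, f_in, i_mu, u_ref=1.0):
    """Ratio and phase error of a digital DUT, following (3) and (4).

    gamma_xn : complex voltage ratio UX / UN measured by the digitizer
    z_n      : complex impedance of the reference shunt
    f_in     : complex ratio FI,N of the reference CT
    i_mu     : complex fundamental current from the DFT of the SVs
    u_ref    : rms value of the phase reference (arg{Uref} = 0)
    """
    i_in = u_ref / (gamma_xn * z_n * f_in)        # complex primary current
    eps_i = (abs(i_mu) - abs(i_in)) / abs(i_in)   # ratio error, eq. (3)
    delta_i = np.angle(i_mu) - np.angle(i_in)     # phase error, eq. (4)
    return eps_i, delta_i
```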
III. COMPONENTS OF THE CALIBRATION SYSTEM
The detailed components of the current transformer calibration system shown in Fig. 1 are a programmable two-channel generator for the generation system, a reference current-to-voltage (C-to-V) transformer set serving as the analogue reference CTs, and an SV receiver box for the CTs with digital output.
A. Program of the two-channel generator
The LabVIEW program is designed to control and extend the waveform generation of the two-channel generator with three main functions: hardware communication, generation setups, and data stream generation. The flow chart of the software is shown in Fig. 2. Based on the generation setups, e.g., the sample rate and the time duration TW (shown in the purple block), the arbitrary waveforms are calculated as a data set. Finally, this data set is sent to the generator.
The phase corrections are used for the alignment of the arbitrary waveforms to the reference PPS signal. The phase corrections are differentiated into standard corrections and optionally usable, small non-standard corrections for compensating the generator-internal phase shifts. To avoid a time delay between the reference PPS and the generated signals, a numerical method is designed. The "ideal" mathematical time-quantized voltage waveform can be expressed as

u(tk) = A · sin(2πf · tk + φ0 + φPi/N + φstep + y),

with φPi/N = π / N ("Pi / N" phase correction [12], [13]), φstep = 2π / N ("1-step" phase correction [12], [13]), N = fs / f (samples per period – SpP), and y referring to the user-defined non-standard phase corrections.
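A minimal sketch of the time-quantized waveform generation, assuming (as reconstructed above) that the standard corrections enter as additive phase offsets:

```python
import numpy as np

def quantized_waveform(a, f, fs, phi0=0.0, pi_n=True, one_step=True, y=0.0):
    """One second of a time-quantized sine with the standard phase corrections.

    a, f, fs : amplitude, signal frequency, sample rate
    pi_n     : apply the "Pi / N" correction (pi / N)
    one_step : apply the "1-step" correction (2*pi / N)
    y        : optional user-defined non-standard correction in rad
    """
    n_spp = fs / f                       # samples per period (SpP)
    phase = phi0 + y
    if pi_n:
        phase += np.pi / n_spp
    if one_step:
        phase += 2.0 * np.pi / n_spp
    t = np.arange(int(fs)) / fs          # time grid aligned to the PPS edge
    return a * np.sin(2.0 * np.pi * f * t + phase)
```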
B. The reference C-to-V transformer
The reference C-to-V transformer consists of a current transformer and an associated precise measuring resistor. The rated primary currents of the C-to-V transformer set range from 8.3 A to 1500 A. Four commercial zero-flux CTs with rated currents of 50 A (CT50), 200 A (CT200 [8]), 600 A (CT600), and 2 kA (CT1500) were used to establish the reference CTs. To convert the different secondary currents of these CTs into a 1 V output voltage, a resistor box was built, containing six precision resistors from 1 Ω to 20 Ω.
As an example, the symmetrical winding configuration of the added primary of the CT1500 is illustrated in detail in Fig. 3 (a) from a vertical view; the connection scheme for the different primary windings Np = 1 to 4 of the CT1500 is also shown in Fig. 3. The usability and expected metrological characteristics of the window-type zero-flux transformer were improved by implementing several symmetrical primary windings on the CT.
The resistor box consists of two power supplies, two separate channels, and six precision resistors. The power supply with ±15 V DC voltage operates the CTs. The integration of two channels into one box provides a simple and monolithic calibration facility for the reference CT as against the DUT. The nominal values of the resistors were selected from 1 Ω to 20 Ω. The detailed design of one measuring resistor is shown schematically in Fig. 4 from various views. Each measuring resistor Rm is made up of a number of resistors R in parallel, an associated printed circuit board (PCB), and a large heatsink. Each single resistor R was selected with a power rating of 1.5 W. Connecting the resistors R in parallel for one Rm value, e.g., twenty 20 Ω resistors in parallel for a 1 Ω measuring resistor, helps reduce the power loss inside each resistor. The PCB is designed to minimize the ac errors (εm and δm) of the measuring resistor by minimizing the effective magnetic field at the resistors. The heatsink is then mounted around the resistors.
C. SV receiver box
The self-designed SV receiver is developed to connect SV-based devices with the LabVIEW-based software platform in a computer with integrated analysis of the SVs. The SVs, compliant with the standard IEC 61869-9, are sent from the SV-based devices, e.g., SAMU/MU, through an Ethernet port. The SV receiver transmits the SVs to the computer through a USB port according to the IEC 61850-9-2 communication protocol.
The flow chart of the LabVIEW-based software platform, shown in Fig. 5, illustrates the detailed processes according to the different function modules. Firstly, the VISA resource name and parameters (SpP; A, a, NF, and q: the resampling parameters; N, Fs: the sampling parameters) are required for the data acquisition. After the transfer of the data (SVs), the channel for the synchronisation (SynChnnel) is selected and the frequency of the signal is determined. The resampling processing [14] can be chosen when the detected signal frequency is not synchronised to the given sampling parameters. The FFT spectrum analysis is used to determine the rms values and phase angles of the voltages and currents of the SVs in the frequency domain. Based on these values, the required current Imu and its phase arg{Imu} can be extracted for calculating the current error and phase displacement — see (3) and (4) — of the SV-based device.
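For illustration, a compact sketch of the FFT-based extraction of the fundamental rms value and phase from one SV channel (assuming an integer number of periods in the record, so that no resampling is needed):

```python
import numpy as np

def fundamental_phasor(samples, fs, f0=50.0):
    """Extract the rms value and phase of the fundamental from one SV channel.

    samples : real-valued sample array (one synchronised record)
    fs      : sample rate in Hz, f0 : fundamental frequency in Hz
    """
    n = len(samples)
    spectrum = np.fft.rfft(samples) / n
    k = int(round(f0 * n / fs))          # bin index of the fundamental
    phasor = 2.0 * spectrum[k]           # single-sided complex amplitude
    rms = abs(phasor) / np.sqrt(2.0)
    phase = np.angle(phasor)             # arg{Imu} relative to the record start
    return rms, phase
```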
IV. VALIDATIONS OF THE CALIBRATION SYSTEM
The validations of the CT calibration system, corresponding to the components described in Section III, are divided into three parts: results of the programmable two-channel generator, characterisation of the reference C-to-V transformer set, and validation of the SVs.
A. Calibration results of the programmable two-channel generator
The calibrations with the standard phase corrections (Pi / N & 1-step phase corrections) for the programmable two-channel generator covered two aspects: the characterization of the absolute phase shift between the generated signals of the generator and the GPS PPS time reference, and the characterization of the relative phase shift between the generated signals of the two channels. The absolute phase error Δφabs was determined from the time difference Δtabs between the zero crossing of the generated phase reference signal, tCHI, and the positive edge of the PPS clock, tPPS. An oscilloscope was used for determining this time difference Δtabs = tCHI − tPPS. The relative phase error Δφrel between the two channels was measured by the precision 2-channel measuring system [15].
The investigated parameters for the calibration measurements are classified into two groups for each characterization aspect: the generation parameters of the SV data stream (fs, i.e., the SpS, and TW) and the general settings of the signal parameters (f0, A0, φ0 and fh, Ah, φh for a dual-tone signal with the h-th overtone, as well as f, A, φ for a sinusoidal or square signal). a) Absolute phase errors Δφabs: A measurement result for deriving the absolute phase errors Δφabs is shown in Fig. 6, which is an annotated oscilloscope screenshot. The parameter values used here were regarded as default values for the general measurements. Since the resolution of the oscilloscope is in the nanosecond range, the oscilloscope measurement almost perfectly overlaps the generated signals from the two-channel generator. For the measurement shown in Fig. 6, the time difference Δtabs (marked in red in Fig. 6) is obtained from the generated signal of one channel (shown as a pink curve) and the reference PPS clock (shown as a yellow curve). To avoid reading errors due to reflection of the electric signals in the coaxial cable, the times tCHI / tCHII and tPPS were all directly determined at the beginning of the rising edge of the square wave signals or at the beginning of the positive edge for the sinusoidal signals. Finally, the absolute phase errors were calculated as Δφabs = 2πf · Δtabs.
Preliminary investigations of the absolute phase errors Δφabs of the generated signals from the two-channel generator showed that the amplitude linearity (A0), the phase linearity (φ0) of the generated signals, and the time window for the SV generation of the generator (TW = 1 s or 10 s) had no significant influence on Δφabs. Moreover, the absolute phase errors of a sinusoidal signal were almost the same as those of its corresponding dual-tone signals (Ah = 10 % A0, h up to 51). By varying the fundamental frequency of the signals between 50 Hz, 60 Hz and 100 Hz, the absolute phase errors (Δφabs_50Hz = −7.7 μrad, Δφabs_60Hz = −10.6 μrad, Δφabs_100Hz = −15.6 μrad) differed by less than 10 μrad with SpS = 100,000.
In order to better recognize the results on the oscilloscope's display, square wave signals were generated for the measurements of the SV generation with various SpS. To avoid disturbance of the signals at power frequency from the surroundings, the signal frequency was set to 52 Hz. Moreover, a different measurement setup, e.g., different cable lengths for the connections, leads to a small drift of the absolute phase errors in the µrad range. The results for different SpS ranging from 5,000 to 500,000 with a fixed measurement setup are shown in TABLE I. The generated signals are square wave signals with parameters U = 5 V and f0 = 52 Hz. The results in TABLE I demonstrate that the absolute phase errors of the two channels of the generator were almost identical with each other for a given SpS. With increasing SpS, the absolute phase errors changed from around −1846 μrad at SpS = 5,000 to around +38 μrad at SpS = 500,000. After several attempts, the absolute phase errors approached zero at SpS = 116,000. For the calibrations of a CT with a digital output, the absolute phase errors in TABLE I can be used directly as phase corrections to the PPS signal, arg{UPPS}, for the reference phase, arg{Uref}, in (4). b) Relative phase errors Δφrel: The calibration measurements covered five aspects: the amplitude linearity (from 2 % · A0 to 100 % · A0) of the investigated signal, the phase linearity (φ0 from 0 degrees to 180 degrees) of the investigated signal, various generator clock rates (fs from 10 kHz to 500 kHz) for the SV generation, various time windows (TW from 1 s to 10 s) for the SV generation, and the influence of a dual-tone signal with odd harmonics (up to the 49th order). One generated signal of the generator was set as an invariant reference signal (A0 = 5 V, f0 = 52 Hz), which was connected to the reference channel of the measuring system. The parameters fs (default: fs = 100 kHz), TW (default: TW = 1 s), and the option for the Pi / N & 1-step phase corrections were programmed to be varied contemporaneously for both channels of the generator. As a result, the relative phase errors between the generated signals of the two-channel generator stay below 10 µrad for the complete investigations.
B. Characterisation of the reference C-to-V transformer
The final accomplished set of reference current transformers and the associated precise measuring resistor box are presented in Fig. 7. The reference CTs, shown in Fig. 7 (above) from left to right, are respectively the CT50, CT200, CT1500 and CT600. The front panel of the associated precise measuring resistor box is presented in Fig. 7 (below). Following the previous work [8], the characterisation of the C-to-V transformers was carried out by separating the behaviour at power frequency and the frequency response. The errors of the current transformers (εi(f) and δi(f)) are represented by εi(f) = εi(f0) + Δεi(f) and δi(f) = δi(f0) + Δδi(f), where εi(f0) and δi(f0) refer to the CT errors at power frequency and Δεi(f) and Δδi(f) refer to the frequency response of the CTs. A step-up calibration was used in particular to determine the frequency response of the CTs. So far, the calibration measurements at power frequency of the reference current transformer set have been completed. The errors of the reference current transformer set were within ±10 µA/A and µrad at 50 Hz. In addition, initial frequency response measurements of the first two CTs (CT50 and CT200) showed that the ac errors at a frequency of 12 kHz were below 0.1 % and 0.2 crad, with expanded uncertainties below 0.01 % and 0.03 crad (k = 2).
Furthermore, the errors of the self-developed measuring resistors are represented by a ratio error εR and a time constant τZ, where εR was obtained by calibration at power frequency and τZ was derived from the phase behaviour of the frequency response measurements. The calibrated results of the six measuring resistors are listed in TABLE II. The ratio errors of the six measuring resistors were below ±2 μΩ/Ω at power frequency, with expanded uncertainties below ±3·10⁻⁶ (k = 2). The time constants were below ±1 ns, with expanded uncertainties below ±4 ns.
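The two parameters map onto a simple complex impedance model; a short sketch follows (the model form, with the phase growing as ω·τZ, is an assumption consistent with the εR/τZ characterisation above):

```python
import numpy as np

def shunt_impedance(r_nom, eps_r, tau_z, f):
    """Complex impedance of a measuring resistor characterised by a
    ratio error eps_r and a time constant tau_z (phase ~ omega * tau_z)."""
    omega = 2.0 * np.pi * f
    magnitude = r_nom * (1.0 + eps_r)
    return magnitude * np.exp(1j * omega * tau_z)

# Example (illustrative values): 1-ohm shunt with eps_r = -1.5e-6 and
# tau_z = 0.8 ns, evaluated at 50 Hz
z = shunt_impedance(1.0, -1.5e-6, 0.8e-9, 50.0)
print(abs(z), np.angle(z))   # phase ~ 2*pi*50*0.8e-9 ~ 0.25 urad
```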
C. Validation of the SVs
To validate the accuracy of the SVs received from the SV receiver box, the plausibility of the generated SVs needs to be validated in the first place. Therefore, the validation measurements of the SVs are divided into two parts: the validation of the SV generation and the validation of the SV receiving.
For laboratory validation of the SV generation, a microcontroller-based SV generator box and the corresponding LabVIEW-based program were developed for sending the SV data over Ethernet using the IEC 61850-9-2 communication protocol. The basic module box is a 32-bit ARM Cortex-M4 CPU with an Ethernet and a USB port. Three-phase four-wire (L1, L2, L3, LN) currents and voltages with 1 mA current resolution and 10 mV voltage resolution are programmed as the SV data stream through the USB port. A one-period signal is initially programmed into the SV generator box and repetitively sent to the SV-based devices. The beginning of an SV data set is signalled by a rising edge on the PPS output. The sample rates for the SV generation were configured as 4000 Hz (Application Service Data Unit: ASDU = 1), 4800 Hz (ASDU = 1 or ASDU = 2), 5760 Hz (ASDU = 1) and 14400 Hz (ASDU = 6). In addition, the SV generator box can be considered a substitute for devices with digital output (e.g., a SAMU) and can be used for the validation of an SV-based measuring device (e.g., digital energy meters) without any numerical loss.
The validation of the SV generation mainly deals with the round-off function in the LabVIEW program. The errors were determined from the difference between the currents and voltages of the SVs directly generated by LabVIEW and the SVs sent to the SV-based devices. The results of the SV generation for L1 are presented in Fig. 8 as an example. Dual-tone signals (f0 = 50 Hz, I0 = 1 kA, φ0_I = 0°, U0 = 100 kV, φ0_U = 0°) with the 3rd harmonic (10 % amplitude, zero phase offset) were generated for the three-phase currents and voltages.

Fig. 8. The errors of the generated voltages and currents (L1) in the SV generator program.

As shown in Fig. 8, the voltage errors stay within ±5 mV and the current errors stay within ±0.5 mA, as anticipated due to the 1 mA current resolution and 10 mV voltage resolution of the standardised data stream [1], [2]. It can therefore be concluded that the sampled values sent by the SV generator box are plausible and numerically accurate within their resolutions. Corresponding to the SV generation for Fig. 8, the received SVs of L1 from the SV receiver box are shown in Fig. 9. The diagram shows that the SV receiver box received the repeated one-period signal (t > 0.02 s) from the SV generator box. Moreover, the differences between the sent and received SVs equalled zero. This means that the SV receiver box receives exactly what is sent to it. Eventually, similar results were obtained for L2, L3 and LN as well.
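The round-off bound can be reproduced in a few lines; a sketch (with illustrative signal parameters) quantizing a dual-tone current to the 1 mA SV resolution:

```python
import numpy as np

def sv_roundoff_error(signal, resolution):
    """Round-off error introduced by quantizing a waveform to the SV
    integer resolution (e.g., 1 mA for currents, 10 mV for voltages)."""
    quantized = np.round(signal / resolution) * resolution
    return quantized - signal            # stays within +/- resolution / 2

# Example: dual-tone current with a 3rd harmonic at 10 % amplitude
fs, f0 = 4000, 50.0
t = np.arange(fs) / fs
i_a = 1000.0 * np.sqrt(2) * (np.sin(2 * np.pi * f0 * t)
                             + 0.1 * np.sin(2 * np.pi * 3 * f0 * t))
err = sv_roundoff_error(i_a, 1e-3)       # 1 mA resolution -> |err| <= 0.5 mA
assert np.max(np.abs(err)) <= 0.5e-3 + 1e-12
```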
V. CONCLUSION
In conclusion, the proposed calibration system has been applied for calibrating at least three different types of analogue current sensors (conventional inductive CTs, electronic CTs and Rogowski coils) and two SAMUs from different companies, as planned in FutureGrid II. The absolute phase errors of the two-channel generator relative to the GPS PPS time reference mainly depend on the SpS and can be configured to almost zero. The errors of the reference current transformer set are within ±10 µA/A and µrad at power frequency. According to IEC 61869-13 [5] for a SAMU and IEC 61869-6 [16] for a low-power instrument transformer (LPIT), the anti-aliasing filter attenuation at half the sampling rate (up to 7.2 kHz for sampling rates up to 14.4 kHz) shall be greater than or equal to 40 dB. The uncertainties caused by the two-channel generator, the set of reference current transformers and the precision 2-channel measuring system are well below 10⁻³ up to 12 kHz. Compared to the limits set by the internal anti-aliasing filter of the DUT, the uncertainties of the proposed measuring system strictly meet the requirements. The SV receiver box is first validated for all sample rates with 1 ASDU. Additionally, preliminary calibrations of a SAMU are currently being executed.
Further complete calibrations of the integrated C-to-V transformer set and the SAMU, as well as the corresponding measurement uncertainties, will be accomplished in future work. The SV receiver box will be extended to further sample rates with 2 ASDUs and 6 ASDUs according to [1]. Moreover, small phase errors of the phase reference signal of the generator can be software-compensated with a non-standard phase correction.
ACKNOWLEDGMENT

This project has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme.
Stable ellipticity-induced Alfven eigenmodes in the Joint European Torus
An external antenna excites stable eigenmodes in elongated Ohmically heated plasmas in the Joint European Torus (JET) [P.-H. Rebut, R. J. Bickerton, and B. E. Keen, Nucl. Fusion 25, 1011 (1985)]. The frequency of the modes (240–290 kHz) falls in the gap in the magnetohydrodynamic (MHD) continuum that is produced by ellipticity. Some modes are very weakly damped (γ/ω ≲ 10⁻³). © 1997 American Institute of Physics. [S1070-664X(97)01210-X]
I. INTRODUCTION
In a cylinder, the spectrum of Alfvén waves is continuous in the ideal magnetohydrodynamic (MHD) model. In tokamaks, departures from cylindrical symmetry create gaps in the continuum. MHD theory predicts Alfvén eigenmodes (AE) in the gaps produced by beta (BAE),1 toroidicity (TAE),2 and ellipticity (EAE).3 Unstable modes with frequencies similar to the expected BAE4 and TAE4,5 frequencies were first observed during neutral-beam heating. A possible EAE driven by beam ions was also reported.1 In recent months, tail ions that are accelerated by ion cyclotron waves have destabilized EAE in the Joint European Torus (JET) and in the Japan Atomic Energy Research Institute Tokamak-60 Upgrade (JT-60U).6 Measurements of unstable, fast-ion driven instabilities are complemented by studies of stable modes. In JET, an external antenna excited stable TAE7 and kinetic AE.8 The first observation of stable EAE was also briefly reported.9 This paper further documents the identification of the mode as an EAE. In addition, the first systematic measurements of the damping rate are presented and initial comparisons with theoretical models are given.
II. EXPERIMENT
The modes are excited by passing ~5 A through two saddle coils on the bottom of JET.9 For the experiments reported here, the antenna phasing is adjusted to excite predominantly modes with toroidal mode numbers n of ±2. The excitation frequency is swept (typically between 150 and 300 kHz) and the driven response of the plasma is extracted from background noise using a set of synchronous detectors that provide the real and imaginary components of the signal. For these experiments, data from twelve electron cyclotron emission (ECE) radiometer signals,10 four ordinary-mode reflectometer signals,11 and a toroidal array of nine magnetic probes are archived.
The detector response to the antenna current can be described as a transfer function H(ω). An EAE resonance in the transfer function is shown in Fig. 1, corresponding to a mode at a frequency of 262 kHz. In the complex plane, the magnetic probe signal encircles the pole at p = iω0 + γ, where ω0 = 2π f_exp is the (real) resonant frequency and γ is the (imaginary) damping rate. The data from a complete set of diagnostic signals {x_i} are analyzed by simultaneously fitting the measured transfer functions H(ω, x_i) to a rational fraction, H = B/A, where B and A are complex polynomials.9 The denominator A is assumed the same for all the signals and determines the characteristics of the resonance. Here A is chosen to be of second order to describe a single resonance. The numerators B(ω, x_i) (chosen of 5th order in this case to account for direct coupling between the antenna and the detectors) are proportional to the strength of the response. In particular, for the ECE measurements, the residues B are related to the wave amplitude as a function of space, i.e., the radial eigenfunction. For the measurements presented here, the ECE and reflectometer signals are relatively weak, so the magnetic probe data govern the determination of f_exp and γ.
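To illustrate the fitting procedure, a minimal single-channel sketch (synthetic, not the authors' code) that fits H = B/A with a second-order denominator by linearized least squares and reads f0 and γ off the complex pole:

```python
import numpy as np

def fit_single_pole(freqs, h, num_order=5):
    """Fit H(w) = B(w) / A(w), with A monic of second order and B of
    order `num_order`, via the linearized least-squares problem H*A = B.
    Returns f0 (Hz) and the damping rate gamma from the complex pole."""
    w = 2.0 * np.pi * np.asarray(freqs)
    h = np.asarray(h, dtype=complex)
    # Unknowns: numerator coefficients b_0..b_num, then a1, a0 of
    # A(w) = w**2 + a1*w + a0.  From H*A = B:  H*w**2 = B - a1*H*w - a0*H.
    cols = [w**k for k in range(num_order + 1)] + [-h * w, -h]
    m = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(m, h * w**2, rcond=None)
    a1, a0 = coef[-2], coef[-1]
    roots = np.roots([1.0, a1, a0])
    wp = roots[np.argmax(roots.real)]   # pole at w = w0 + i*gamma
    return wp.real / (2.0 * np.pi), wp.imag
```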
Determination of the MHD gap structure requires knowledge of the profiles of the safety factor q and the mass density ρ. The q profile is calculated by the equilibrium reconstruction code EFIT,12 using magnetics data and the sawtooth inversion radius (from ECE) as input to the code. The mass density is inferred from measurements of the electron density by six interferometer chords;13 in these deuterium plasmas with few high-Z impurities and little hydrogen the mass density is approximately ρ ≈ 2 m_p n_e, where m_p is the proton mass. Systematic uncertainties in the data contribute more to the uncertainty in the calculated gap structure than random errors. At the plasma edge, the density inferred from interferometric measurements can differ by as much as 50% from Thomson scattering measurements, yielding a ~20% variation in the predicted frequency. At the center, uncertainty in the q profile typically generates ~10% uncertainty in the continuum frequency. The uncertainty in the local magnetic shear is particularly large (~50% in the plasma interior). Corrections associated with the Doppler shift are negligible (~1–2 kHz). The measured frequency f_exp generally lies in the computed ellipticity-induced gap in the Alfvén continuum (Fig. 2). The center of the EAE gap occurs at a frequency of f_EAE = v_A/(2πqR),3 where v_A is the Alfvén speed. In Fig. 3, all of the measurements of f_exp are compared with f_EAE at s ≈ 0.95, f_edge. In 80% of the cases, the measured frequency lies in the computed EAE gap; in the remaining cases, f_exp is from 1–9% higher than the calculated continuum at the upper edge of the EAE gap, but this is within the estimated uncertainty of the calculated value. The correlation of f_exp with f_edge (r = 0.53) is stronger than the correlation with the EAE frequency at s = 0.5, f_middle. Averaging over the data, the ratio f_exp/f_edge = 1.06 ± 0.

[Fig. 2 caption: Comparison of the measured frequency of the mode in Fig. 1 to the n = 2 Alfvén continuum as calculated by the CSCAS code.15 The frequency falls in the gap associated with ellipticity (EAE); the toroidicity-induced gap is also shown (TAE). The radial coordinate is the square root of the normalized poloidal flux s = √((Ψ − Ψ0)/(Ψ1 − Ψ0)).]

A second resonance at lower frequency is sometimes also observed (Fig. 3). This lower-frequency mode falls in the gap created by toroidicity and is therefore identified as a TAE.7 Further confirmation that the observed resonances are global Alfvén eigenmodes is obtained from the ECE measurements (Fig. 4). Although the signals are too weak to obtain an accurate profile of the radial eigenfunction, the observation of measurable residues on several detectors confirms that the eigenfunction is globally extended, as expected for an EAE. For the case shown in Fig. 4, the ECE signal is largest where the frequency of the eigenmode intersects the m = 3 Alfvén continuum in the middle of the plasma. Calculations of the expected eigenfunction with the CASTOR16 and PENN17 codes also predict a large amplitude near this intersection point (Fig. 4); however, the radius of the largest ECE signal does not coincide with the radius of the continuum crossing for all the modes in our EAE database.
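For a sense of scale, a short sketch (illustrative JET-like parameter values, not taken from the paper's database) evaluating the gap-center estimate f_EAE = v_A/(2πqR):

```python
import numpy as np

MU0 = 4e-7 * np.pi        # vacuum permeability, H/m
M_P = 1.6726e-27          # proton mass, kg

def f_eae(b_t, n_e, q, r0):
    """EAE gap-center frequency f_EAE = v_A / (2*pi*q*R).

    b_t : toroidal field (T), n_e : electron density (m^-3),
    q   : local safety factor, r0 : major radius (m).
    Deuterium plasma assumed, so mass density rho ~ 2*m_p*n_e.
    """
    rho = 2.0 * M_P * n_e
    v_a = b_t / np.sqrt(MU0 * rho)       # Alfven speed
    return v_a / (2.0 * np.pi * q * r0)

# Illustrative numbers: B = 2.8 T, n_e = 3e19 m^-3, q = 2, R = 3 m
print(f_eae(2.8, 3e19, 2.0, 3.0) / 1e3, "kHz")   # ~210 kHz, same ballpark
```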
The measured damping rates (Fig. 5) vary considerably, from values as low as γ/ω = 8×10⁻⁴ to values as large as 3%. The dependence of γ/ω on plasma parameters is complicated. Even during nominally steady-state conditions in the same discharge, γ/ω can double on successive frequency sweeps. For our dataset, the correlation of γ/ω with v_A, n_e(0), a, κ, δ, q95, I_p, and T_e is weak (r² < 0.22); the correlation with the magnetic shear at s = 0.50, s50, and with the shear at the edge, s95, is also weak. For the weakly damped modes (γ/ω < 1%), the strongest correlation in the dataset is with the toroidal field (r = −0.72). This dependence may reflect an underlying dependence of the damping rate on the gyroradius. [The correlation with √T_e(0)/B², however, is weaker.] No correlation with the nonideal parameter18 ∝ s95 q95 √T_e/B_T is observed.
III. THEORY
Possible EAE damping mechanisms include trapped electron collisional absorption,19,20 continuum damping,21,22 radiative damping,18 and, more generally, Landau damping through mode conversion.23 (Ion Landau damping24 should be negligible in these Ohmically-heated discharges.) A formalism for calculating the expected damping rate associated with electron collisional and radiative damping in realistic geometry was developed by Mett et al.18 The theory only treats the interaction of a single pair of poloidal harmonics. This "high-n" assumption is of dubious validity for the n = 2 modes considered here,25 although the theory did successfully predict the stability threshold of n = 4 TAE modes in DIII-D (to within a factor of two).18 We have applied this theory to our data. The frequencies at the top and bottom of the gap are obtained from the calculations of the gap structure.

[Fig. 4 caption: The error bars are derived from the covariance matrix of the fitting routine.9 (b) Poloidal decomposition of the radial magnetic field B⊥ calculated by CASTOR.16 The eigenfunction is multiplied by the derivative of the electron temperature since the expected ECE fluctuation is −ξ·∇T_e. (c) Binormal electric field calculated by the kinetic version of PENN,17 multiplied by ∇T_e. (The binormal component of E is approximately proportional to B⊥.) In (b) and (c), only the largest amplitude harmonics are shown: m = 1 (solid), 2 (dash), 3 (dash-dotted), 4 (long dash), 5 (solid), 6 (dot), 7 (dash).]

Two different radial locations are selected for this evaluation: near s = 0.95 and at the gap adjacent to the interior continuum crossing (for example, for the case shown in Fig. 2, the continuum frequencies are measured at s = 0.66). The results of this analysis are shown in Fig. 5 for the interior gaps. The results for s = 0.95 are similar. Clearly, this simple theory cannot explain the observations. On the other hand, the predictions are of the right order of magnitude, so it is possible that electron collisional and radiative damping are important damping mechanisms.
Comparisons that properly treat the mode structure are computationally expensive, so only a single discharge with both an EAE and a TAE resonance is analyzed in this study. Initially, the PENN code17 found eigenmodes at frequencies that are consistent with the experimental values, but the predicted damping of the EAE exceeded the experimental value (γ/ω = 0.14 ± 0.06%) by a factor of 5–10. A numerical convergence study performed a posteriori showed that higher numerical resolution was in fact required to represent correctly the mode coupling occurring in the plasma core (s < 0.2); using a densified mesh with 96, 128, or 192 radial mesh points finally yielded a theoretically converged value (0.26 ± 0.04%) which is in acceptable agreement with the experimental measurement. Initial calculations with the CASTOR code16 correctly predicted the frequency of the TAE, but the frequency of the computed EAE was only ~80% of the experimental value. Judicious reduction of the density near the edge by 20% (which is within experimental uncertainties) yielded satisfactory agreement with the measured frequencies; however, the predicted damping (~1%) still exceeded the experimental value. The CASTOR damping prediction is large because the computed EAE singularity occurs within a dominant poloidal harmonic. Further tailoring of the profile to shift the location of the singularity could yield a smaller damping rate.
IV. CONCLUSION
Eigenmodes with frequencies that lie in the ellipticity-induced gap in the Alfvén continuum are observed in JET. The damping rates of these EAE span from γ/ω ≲ 10⁻³ to values ≲ 0.1, i.e., the same range as the TAE.8 Although the predicted destabilizing term produced by energetic ions is a factor of two smaller for the EAE,24 the measured damping rates vary by orders of magnitude, so the EAE could prove dangerous in a reactor. The damping seems to depend sensitively on subtle details of the plasma profiles, thus making stability projections problematic.
Knowledge, attitudes, and impact of COVID-19 pandemic among neurology patients in Jordan: a cross-sectional study
Background The impacts of the COVID-19 pandemic on health services offered to patients with non-communicable diseases, including chronic neurological illnesses, are diverse and universal. We used a self-reported questionnaire to investigate these impacts on neurology patients in Jordan and assess their knowledge and attitudes towards the pandemic. Results Most respondents had positive attitudes towards the COVID-19 pandemic, with 96% reporting they believed in the seriousness of the pandemic and adhered to prevention measures. Nearly 97% resorted to the internet and media outlets for medical information about the pandemic. About one in five clinic visitors had their appointments delayed due to interruption of health services. A similar portion of patients with MS, epilepsy, and migraine or tension headache reported medication interruptions during the pandemic. One in two patients reported new events or worsening illness since the start of the pandemic, and sleep disturbances were reported by nearly one in three patients who had epilepsy or headache. Conclusion The COVID-19 pandemic’s impacts on patients with neurological illnesses in Jordan were deep and diverse. Meanwhile, the majority of surveyed neurology patients demonstrated a positive attitude towards the pandemic.
Background
The impact of pandemics on healthcare systems is welldocumented, particularly in countries with limited resources. Routine health services decreased by an estimated 18% during the 2014-2015 Ebola outbreak in West Africa, resulting in thousands of potentially preventable deaths [1]. Also, following the severe acute respiratory syndrome (SARS) outbreak in China, clinic and emergency room visits at a hospital in Taipei City dropped to 55% and 45%, respectively, in 2003 compared with the previous year [2]. A study in Qatar revealed that the overall utilization of primary health care services declined to 50% in April of 2020 during the surge of local Coronavirus disease of 2019 (COVID-19) spread [3]. In Spain, a negative effect was observed on up to 85% of healthcare quality standards in Catalonia in March and April of 2020 [4]. Also, in the USA, a CDC report found a substantial reduction in pediatric vaccine orders following the COVID-19 emergency declaration in March of 2020 [5].
A growing body of literature suggests that severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of COVID-19, has neurotropic characteristics [6][7][8][9]. Neurologists have additional considerations regarding the impact of the virus on their patients. Many neurological diseases necessitate long-term and profound immune suppression [10]. In addition, patients with neuromuscular diseases represent a particularly vulnerable group in whom the infection can be potentially fatal [11]. The most immediate and possibly the broadest short-term impact of the pandemic on neurology patients could be the limitations on accessibility to healthcare and medications, especially in communities with uncontrolled spread or as a byproduct of strict prevention measures.
In Jordan, the authorities have since March 17, 2020, imposed a number of local and nationwide curfews and lockdowns in a bid to curb local spread and prevent overwhelming the healthcare system [12]. Subsequently, many hospitals had to discontinue outpatient and clinic services, and elective procedures were postponed for weeks or months. Also, many patients faced difficulties refilling prescriptions and obtaining regular medications. On the other hand, an online survey of 5274 persons from Jordan found that approximately four out of every ten participants experienced quarantine-related anxiety [13], illustrating one aspect of the pandemic's impact on the health of the Jordanian population. This study aimed to explore the attitude towards the COVID-19 pandemic and its impacts on the health of patients with neurological illnesses including multiple sclerosis (MS), epilepsy, and primary tension or migraine headache.
Methods
This is a cross-sectional study that was conducted between November and December of 2020. Patients aged ≥ 18 years who presented with a neurological complaint at outpatient neurology clinics at the hospital affiliated with the second and last authors during the study period were invited to participate in this study. This hospital is a tertiary facility, and it is the main governmental hospital in the city. After obtaining ethical approval, a paper-based questionnaire was administered to patients visiting the hospital's neurology clinics. Written consent was obtained from each patient prior to the administration of the questionnaire. The questionnaire included questions about the attitude of patients towards the pandemic and the impact they feel it had on their lives and illnesses. Specific questions were included in the questionnaire relevant to those with an established diagnosis of MS, epilepsy, or primary tension or migraine headaches. Additionally, as sleeping disturbances can exacerbate epilepsy and headache, patients with these two conditions were asked about the occurrence of sleep disturbances during the pandemic and the predisposing factors. Patients were excluded if they presented with non-neurological complaints. Additionally, patients with advanced dementia or severe intellectual disability were excluded if their conditions precluded meaningful communication or affected their ability to answer the questionnaire independently and reliably. Statistical analysis of the collected data was conducted using IBM SPSS software version 25 (SPSS Inc., Chicago, IL, USA).
Descriptive statistics were performed for all variables to calculate frequencies and percentages.
Results
A total of 562 patients presented to the neurology outpatient clinics during the study period, of whom 506 (90.03%) responded to the questionnaire. Patients under 40 years of age constituted over half the sample, and men constituted 45.45% of the sample. Only five patients (0.98%) reported a previously documented SARS-CoV-2 infection. The majority of patients (81.42%) presented to the clinic for follow-up, while 18.57% stated that this was their first visit. One fifth (19.36%) of patients reported that their visit was delayed due to the pandemic and the lockdowns. Most patients (88.14%) believed in the existence of the COVID-19 virus and adhered to preventive measures, and 79.84% agreed that lockdowns were necessary to control the spread of the pandemic among the population. Only 1.97% of patients obtained pandemic-related information from doctors and medical sources, while the vast majority depended on the internet and the media to seek necessary information. Table 1 illustrates the characteristics of the study sample.
Among the 506 participants, 80 (15.81%) had MS. About 73.75% of patients with MS were female and 77.50% were 40 years old or younger. Thirty-five patients (43.75%) reported experiencing at least one MS relapse since the start of the pandemic. Of those, 30 (88.24%) were admitted to the hospital, while three (8.82%) declined admission over concerns regarding the COVID-19 pandemic, and one (2.94%) did not seek medical advice for non-pandemic-related reasons. Seventy-one (88.75%) patients with MS had been on a disease-modifying treatment (DMT), while nine (11.25%) were not receiving any treatment for either medical or financial reasons. Beta-interferon (n = 35, 43.75%) and Fingolimod (n = 30, 37.50%) were the most frequently used DMTs, followed by Dimethyl Fumarate (n = 4, 5.00%) and Natalizumab (n = 2, 2.50%). Regarding adherence to DMT during the pandemic, 13 (16.25%) patients stated that they had discontinued the DMT during the pandemic, with four (30.76%) of them stating that noncompliance was related to concerns over the immunosuppressive side effects, and three (23.07%) reporting inability to obtain the medication due to the lockdown (Table 2).
A total of 150 patients had epilepsy, accounting for 29.64% of the study sample. Most patients with epilepsy were younger than 40 (70.00%); males and females constituted 51.33% and 48.66%, respectively. Of this sample, 80 patients (53.33%) reported an increased frequency of seizures since the start of the pandemic. Interruption of anti-seizure medication (ASM) intake was reported by 30 patients (20.00%), of whom 76.66% had increased seizures. In addition, sleep disturbances were reported by over a quarter of patients with epilepsy, and the majority ascribed these disturbances to the impact of the pandemic (Table 2).
In addition, 40 patients had primary migraine or tension-type headache, of whom 37 (92.50%) had migraine headache and three (7.50%) had tension-type headache. Females represented 82.22% of patients in this group, and 65% of all patients were younger than 40. An increased frequency of headache since the start of the pandemic was reported by 25 (62.50%) patients in this group, and 25.00% reported interruption of their regular medication during the same period. Changes in sleep patterns were reported by 37.50% of patients in this group, and 50.00% attributed these changes to the impact of the pandemic. Of note, eight out of nine (88.9%) patients with decreased total sleep hours experienced worsening headaches (Table 2). Figure 1 demonstrates the frequency of reported treatment discontinuation and worsening clinical course since the start of the pandemic in the three common neurological illnesses described in the previous paragraphs.
Discussion
In this survey, most respondents had positive attitudes towards the COVID-19 pandemic. At the same time, the negative impacts of the pandemic on patients with neurological illnesses in Jordan were evident. Understanding the attitudes and beliefs of a population towards the pandemic has significant implications for the planning and implementation of mitigation strategies and for vaccination campaigns. We found that most patients (96%) who participated in the survey believed in the seriousness of the pandemic and demonstrated a positive attitude towards it, such as adherence to prevention measures and support of national infection control plans. A previous national survey in April of 2020 revealed similar findings [14]. Therefore, despite different methodologies and target populations, the findings of our study reflect that awareness of and positive attitudes towards the pandemic among Jordanians remain high. We also found that nearly 97% of patients resorted to the internet and media outlets for medical information about the pandemic, a critical finding in an era where public perceptions about the pandemic are increasingly shaped by social media [15]. The findings of this study also reveal some aspects of how the COVID-19 pandemic impacted health services in Jordan. Nearly one in five clinic visitors had their appointments delayed due to the interruptions of health services caused by the pandemic. Also, a similar percentage of patients with MS, epilepsy, and migraine or tension headache reported medication interruptions because of the pandemic and quarantine measures. On the other hand, nearly one in three patients with epilepsy or headache reported sleep disturbances, and half of those believed the pandemic was responsible. This is significant since sleep disturbance is a known trigger in both epilepsy and migraine headache [16,17]. Finally, nearly one in two surveyed patients reported new events or worsening illnesses since the start of the pandemic. This finding is most likely the outcome of a multitude of factors, such as medication interruptions, sleep disturbances, new social and financial stressors, limitations on accessibility to health services, pandemic-related anxiety, lifestyle changes, and others [18].
Surveys from other countries and regions reported varying but broadly consistent findings underscoring the impacts of the COVID-19 pandemic. For instance, recently published surveys reported that between 4% and 35% of persons with epilepsy had seizure worsening during the pandemic, and the worsening was mainly correlated with epilepsy severity, sleep disturbances, and COVID-19-related factors [16,18]. Also, in a UK-based study that surveyed persons with epilepsy, a third reported difficulty accessing medical services, with 8% having had an appointment canceled. Meanwhile, medication shortages were noted by approximately 30% of neurologists in a survey of the American Epilepsy Society members [19]. On the other hand, a survey of 176 MS patients from Saudi Arabia found that 15% of the patients had a relapse but did not seek medical help because of the pandemic, while 15.9% stopped their DMTs, and 35.2% reported missing drug infusions or refills [20]. Moreover, in a survey of 1018 persons with migraine headache from Kuwait, 59.6% of sampled patients reported an increase in migraine frequency, and 78.1% reported sleep disturbances [17]. This study is not without limitations. Although recruiting patients from outpatient clinics reduced selection bias compared with online surveys, the study remains based on a single center, and medical diagnoses were ascertained retrospectively from patient files. Additionally, the establishment of a direct causal relationship through observational surveys is difficult. Nonetheless, this study is a stepping stone for future efforts in this regard.
Conclusion
The impacts of the COVID-19 pandemic on neurology patients in Jordan are diverse and evident across the spectrum of neurological illnesses. More studies are necessary to further delineate the impacts of the pandemic on other aspects of health services in the country and, importantly, to draw appropriate conclusions for the future.
Genomic analysis of avian-pathogenic Escherichia coli (APEC) isolated from diseased chicken
Background Avian pathogenic Escherichia coli (APEC) can cause various extraintestinal infections in chicken, resulting in massive economic losses in the poultry industry. Apart from that, some avian E. coli strains may have zoonotic potential, making poultry a possible source of infection for humans. Due to its extreme genetic diversity, this pathotype remains poorly defined. This study aimed to investigate the diversity of colibacillosis-associated E. coli isolates from Central European countries with a focus on the Czech Republic. Results Out of 95 preliminarily characterized clinical isolates, 32 were selected for whole-genome sequencing. A multiresistant phenotype was detected in a majority of them, and the predominant resistance to β-lactams and quinolones was widely associated with TEM-type beta-lactamase genes and chromosomal gyrA mutations, respectively. The phylogenetic analysis confirmed a great diversity of isolates, which were derived from nearly all phylogenetic groups, with a predominance of the B2, B1 and C phylogroups. Clusters of closely related isolates within ST23 (phylogroup C) and ST429 (phylogroup B2) indicated a long-term local spread of these clones. Besides, the ST429 cluster carried bla CMY-2,-59 genes for AmpC beta-lactamase, and isolates of both clusters were generally well-equipped with virulence-associated genes, with considerable differences in the distribution of certain virulence-associated genes between phylogenetically distant lineages. Other important and potentially zoonotic APEC STs were detected, including ST117, ST354 and ST95, showing several molecular features typical for human ExPEC. Conclusions The results support the concept of local spread of virulent APEC clones, as well as of the zoonotic potential of specific poultry-associated lineages, and highlight the need to investigate the possible source of these pathogenic strains.
Background
Avian colibacillosis is a complex of several localized or systemic syndromes, affecting poultry of all age and production categories. It comprises yolk sac infection and omphalitis, leading to increased mortality rates in newly hatched chicks, cellulitis in broilers, or reproductive tract infections in laying hens. Other forms of manifestation include swollen head syndrome (SHS), respiratory infections and septicemia, which frequently result in death or chronic forms of infection. Avian colibacillosis thus represents a great economic burden for the poultry industry (1). Despite its importance, the pathogenesis of these infections is intriguing and not well understood. For a long time APEC strains were considered merely opportunistic pathogens, predominantly, but not exclusively, associated with O1, O2, O8, O78 and several other serogroups (2). It has nevertheless been shown that disease-associated E. coli strains encode multiple putative virulence genes and significantly differ from commensals, particularly in the presence of ColV plasmid-associated genes, possible markers of poultry-adapted pathogenic strains (3)(4)(5).
The ability to cause colibacillosis in chicken defines the APEC (avian-pathogenic E. coli) pathotype. However, not every strain isolated from diseased chicken carries typical virulence-associated genes, underlining the opportunistic character of some types of E. coli infections (6). On the other hand, APEC can be found also in the gut of healthy chicken (7,8). It has been suggested by Maturana et al. (9) that the APEC population is composed of distinct subpathotypes associated with different syndromes, similar to the human extraintestinal pathogenic E. coli (ExPEC). Interestingly, the authors showed that SHS and omphalitis isolates formed two distinct groups differing in virulence, suggesting a primary and an opportunistic character of those infections, respectively. Similarly, a chronic salpingitis-peritonitis syndrome resulting from an ascending infection and an acute peritonitis without salpingitis, probably originating from respiratory infection or gut translocation after a stress insult, can be distinguished in layers (10)(11)(12).
There is a close genetic relationship between APEC and human ExPEC, and a zoonotic potential of poultry strains has been implicated. ExPEC are the main cause of urinary tract infections (as so-called uropathogenic E. coli, UPEC) in humans and of meningitis in neonates (neonatal-meningitis E. coli, NMEC), and are also associated with bacteremia, sepsis, cellulitis and other, sometimes fatal, infections (13). Similar to APEC, these strains are characterized by the presence of various virulence-associated genes participating in adhesion and colonization of different tissues, invasion of internal organs, iron acquisition and evasion of host immune responses. ExPEC are typically associated with the phylogenetic groups B2 and D, in contrast to commensal and intestinal pathogenic strains derived from groups A and B1 (14), and to APEC, which are highly variable in distribution across various phylogenetic groups (15). Although there is no specific set of genes to define the subpathotypes (16,17), APEC, UPEC and NMEC generally form genetically distinct groups. There is, however, a substantial overlap, especially within the B2 phylogenetic group, which comprises strains isolated from both humans and chickens, showing high virulence in both chicken and mammalian models with low or no host specificity (18,19,17).
Moreover, an isolate showing high virulence in a rat meningitis model has been found in the faeces of a healthy chicken (5), another finding implying the potential importance of poultry and poultry products as a source of human pathogens.
Recently, several highly virulent and resistant ExPEC lineages with worldwide distribution have emerged (e.g. ST131, ST95 etc.) (20). Whereas some of them are associated exclusively with human infections, others are frequently isolated from diseased poultry or poultry products (21)(22)(23)(24)(25)(26). It is, however, difficult to assess the real importance of poultry as a source of human infections. Mechanisms of transmission of pathogenic clones through the production chain to humans are very complex and not fully elucidated, as are the relationships between the genetic "armory" of virulence and resistance-associated genes and the pathogenesis of the disease. Whole-genome sequencing (WGS) represents a revolutionary tool to study these mechanisms in their complexity (27). Moreover, the immense variability of the APEC pathotype and differences in the geographic distribution of specific clones underline the importance of mapping the local situation. While several papers have reported the occurrence of highly pathogenic APEC clones in different countries, information for Central Europe has been sporadic or lacking (28).
Sample collection and preliminary characterization
A total of 95 isolates were subjected to preliminary characterization including serogrouping, antimicrobial resistance (AMR) testing and PCR detection of virulence and antibiotic resistance genes. The disc diffusion test showed that 69.5% were resistant to three or more groups of antimicrobials, which we considered as a criterion of multiresistance. Resistance to ampicillin was recorded in 78 isolates (82.0%), followed by resistance to nalidixic acid (62 isolates; 65.3%), sulphonamides (45; 47.4%) and sulphonamides-trimethoprim (28; 29.5%). Nineteen isolates (20.0%) showed reduced susceptibility to ciprofloxacin (additional file 1, Figure 1). Using four antisera (O1, O8, O18 and O78), 49 isolates (52%) were typeable, with the predominant serogroups being O1 (30; 32%) and O8 (13; 14%); only four and two isolates reacted positively against antisera O78 and O18, respectively. Some isolates failed to be typed by WGS (the results are summarized in Table 2, supplementary material B). Except for serogroup O8 (7 isolates), the remaining serogroups were only represented by one or two isolates. The predominance of the O8 serogroup appeared to be a selection bias, since only Czech isolates were selected for sequencing. Of the O8 serogroup, six isolates belonged to the O8:H9 serotype, most of them to the ST23 type. In two cases (O1 and O78) the isolates reacted positively in the agglutination test, but the identity of the in silico-identified genes was below the threshold.
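The multiresistance criterion above (resistance to three or more antimicrobial groups) is simple enough to state as code; the following minimal sketch assumes hypothetical isolate names and drug-group labels and is not the authors' procedure:

# Disc-diffusion results coded as "R" (resistant) or "S" (susceptible);
# isolate names and antimicrobial groups are illustrative only.
isolates = {
    "iso1": {"beta-lactams": "R", "quinolones": "R", "sulphonamides": "R"},
    "iso2": {"beta-lactams": "R", "quinolones": "S", "sulphonamides": "S"},
}

def is_multiresistant(profile, threshold=3):
    # Resistant to `threshold` or more antimicrobial groups.
    return sum(1 for result in profile.values() if result == "R") >= threshold

for name, profile in isolates.items():
    print(name, is_multiresistant(profile))  # iso1: True, iso2: False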
The core genome consisted of 2763 genes (55.28 kbp). The phylogenetic tree based on the core SNP analysis basically corresponded to the structure of E. coli phylogeny. Groups F, D and clade I were represented by only a few isolates and did not form any distinct clusters, except for a minor subcluster of the two group D isolates; the two ST117 isolates from the F phylogroup were unrelated to the other isolate of the F phylogroup (ST354) and formed their own distinct clade. In the B2 cluster two subclusters (B2a, B2b) were found; the B2b subcluster was formed by four closely related ST429 isolates and one ST4110 isolate. Another cluster included isolates from phylogroups A, C and B1. Interestingly, all isolates of the C group belonged to ST23 and the O8:H9 serotype (with one exception of the O78:H9 serotype).

Identification of resistance genes

Resistance to β-lactams was largely associated with TEM-type beta-lactamase genes; rarer variants, among them bla TEM-30, were each detected in a single isolate (1/32; 3.0%). In addition, all isolates showed the presence of genes encoding components of various multidrug efflux pumps, participating in resistance to aminoglycosides, macrolides and fluoroquinolones. Except for qnrS1, which is associated with partial resistance to fluoroquinolones, no other PMQR gene was detected. Reduced susceptibility to quinolones in most isolates appeared to be due to chromosomal mutations, especially in the gyrA gene (21; 65.6%), and to a lesser extent also in parC (5; 15.6%) and parE (1; 3.0%). In five ST23 isolates (15.6%), a mutation in the ampC promoter was detected. (For an overview of resistance genes, please see Table 1, additional file 1.)
Identification of virulence genes
The genomic analysis confirmed a great diversity of the selected isolates (see Figure 2 and supplementary material, file 3). Overall, factors associated with adhesion and invasion, as well as siderophores, were found in most isolates; more than 90% of isolates encoded F1 fimbriae, curli, E. coli common pilus and enterobactin. All but one isolate carried the ibeB gene, while ibeA was present mostly in the B2 and F phylogenetic groups, but not in isolates from other groups. The siderophore system salmochelin (81%) and the serum resistance-associated proteins Iss (87.5%) and TraT (78%) were present in most isolates, with a generally equal distribution in all phylogenetic groups. The full SitABCD iron transport system was detected in 78% of isolates, outer membrane protease (OmpT) and colicin V synthesis protein (CvaC) in 68.8% and 59% of isolates, respectively.
Discussion
The aim of the current study was to evaluate the diversity of colibacillosis-associated isolates from Central Europe. Indeed, the analysis showed an immense phenotypic and genotypic variability, the isolates differing greatly in their antimicrobial-resistance phenotype, virulence gene profile and plasmid content, together having little in common. As generally acknowledged, there is no specific combination of virulence genes that would accurately define the APEC pathotype (16). The most prevalent APEC genes are also frequently present in commensal strains. There is an abundance of adhesins and iron-transporting systems, which may be considered essential prerequisites of extraintestinal pathogenicity in all types of avian and mammalian disease, but also fitness factors enabling asymptomatic colonization of healthy hosts and effective transmission. The presence of Col-V-associated genes such as iroN, iss, iutA, ompT etc. is characteristic for most APEC, more so than for UPEC and NMEC (15); nevertheless, their exact role in pathogenesis remains unclear or controversial (30,31). Col-V-like plasmids are, however, acknowledged as markers of poultry-adapted pathogenic strains (5,22).
As expected, the phylogenetic analysis also revealed a substantial diversity of isolates, which originated from all phylogenetic groups with the exception of group E. The most prevalent was the B2 phylogroup, which is, along with D, considered a typical group for human ExPEC (14). However, the second most prevalent phylogroup was B1 (7 isolates), a group commonly associated with intestinal pathogenic or commensal fecal strains (32).
Interestingly, there were notable differences in virulence trait distribution among phylogenetic groups, although the isolates had been collected from the same type of infection. The idea of pathogenic strains causing the same clinical disease with quite different combinations of virulence genes with alternative functions has been proposed by Mokady et al. (33) and points out the importance of horizontal gene transfer, which enables rapid adaptation to new niches by expression of certain genes in a different genetic background (34). Notably, typical Col-V plasmid-associated genes such as ompT, iss, cvaC, iro and sit (but surprisingly not iut and iuc for aerobactin) were equally distributed among isolates from all phylogenetic groups.
Despite the overall diversity, the phylogenetic analysis revealed two clusters (ST429, group B2, and ST23, group C), both containing four similar isolates that were obtained from different farms in Northern Moravia. Two ST23 isolates identical according to the core genome analysis were collected on the same day on two different farms, indicating a possible clonal spread in the locality. Colibacillosis outbreaks caused by a specific pathogenic clone have been repeatedly reported (e.g. 12,35). On the other hand, a closely related isolate (25 SNPs difference) had been collected on an unrelated farm approximately half a year before. A similar situation was observed in the ST429 cluster: the most similar isolates were from the same date and were separated from the other isolates of this cluster (26-61 SNPs difference) by a span of several months. One may imagine that these isolates had a common origin; however, the question whether such clones become established somehow in the production chain and circulate between flocks or farms for a long time period, or whether repeated introduction occurs from a specific source, remains unanswered. Evidence for "pseudo-vertical" spread through the production pyramid has been proposed recently (36). The problem of a possible reservoir of pathogenic strains for Northern Moravian farms should be addressed more closely in the future.
Both ST429 and ST23 are considered predominant APEC lineages that are frequently isolated from poultry with clinically manifested disease (35,37), but also from poultry products (26). Although representing quite unrelated APEC clades, they both appear to be poultry-specific, with little pathogenic potential for humans (7,17). In fact, the APEC strain χ7122 (ST23) has been shown to be phylogenetically closer to human ETEC (without any enterotoxin production) than to ExPEC (38). Therefore, in our collection, one may consider the two clusters, ST429 and ST23, representatives of phylogenetically distant lineages presumably associated with the same disease, again underlining the importance of the accessory genome in the virulence potential of APEC. The ST429 isolates had a slightly higher average number of virulence-associated genes than the ST23 isolates (172 vs. 154), including genes encoding capsule production (kpsM, T, D, neuC), invasins (ibeA, ompA) and iron-binding systems (aerobactin, yersiniabactin, chu) that the ST23 cluster (though not all ST23 isolates) lacked. In contrast, ST23 isolates were characterized by the presence of Stg fimbriae and ETT2-related genes. This transport system, even in a degenerate state, has been reported to enhance virulence in APEC (39). Both sequence types coded for curli, F1 fimbriae, salmochelin, OmpT, TraT and Iss; however, only Iro, OmpT and Iss have been reported to occur in significantly higher prevalence in APEC than in avian-faecal E. coli (AFEC) (4). Nevertheless, this probably supports the idea of the feasibility and usefulness of PCR typing targeting such potential markers of APEC derived from distant phylogenetic groups (e.g. 3).
Two isolates were assigned to ST117 (phylogenetic group F). Recent studies indicate that this sequence type comprises important APEC lineages that are repeatedly reported from colibacillosis outbreaks in different countries (25,37,(40)(41)(42), but are also highlighted as potential zoonotic pathogens, as they contain ExPEC-related virulence genes and have been isolated from both retail poultry meat and human clinical urinary tract infections (43). The remaining phylogroup F isolate was ST354, another potentially zoonotic ST, reported particularly from human and animal healthcare facilities and characterized by common resistance to antimicrobials including fluoroquinolones (44,45). This isolate carried bla CMY-2,-59 and multiple adhesin genes including K99/F5 fimbriae, which were not found anywhere else. Both ST117 and ST354 were highly prevalent among ESBL/AmpC-positive chicken isolates, and it has been proposed that these lineages exhibit effective host colonization and persistence in the environment (41,45).
ST95 is probably the most important pandemic ExPEC lineage that is frequently isolated from chickens (23,25,46). In fact, it may represent, along with the closely related ST140, that part of the B2 phylogroup where human ExPEC and APEC form a single "subpathotype" of genetically indistinguishable strains (17)(18)(19). In humans, ST95 has been associated with bloodstream infections, UTIs and meningitis, often characterized by serogroups O1, O2 and O45, flagellar antigen H7 and the K1 capsule (a typical feature of NMEC) and, in contrast to other pandemic lineages, a relatively low tendency to acquire antimicrobial resistance (20,47). Indeed, not every ST95 seems to be zoonotic, as was shown with APEC O1 in a murine infection model (48). On the other hand, our ST95 isolate fulfilled the molecular criteria for UPEC as defined by Johnson et al. (49).
The antimicrobial-resistance profile ranged from full susceptibility to all antimicrobials tested to multidrug resistance, with dominating resistance to β-lactams (ampicillin) and first-generation quinolones (nalidixic acid). Resistance to β-lactams was associated largely with TEM-type β-lactamase production. No selection procedure to obtain ESBL/AmpC-producing isolates had been used and we did not detect any bla CTX-M gene, while four isolates carried the bla CMY-2 gene. This gene, along with bla CTX-M-1, is the most common ESBL/AmpC β-lactamase in poultry E. coli isolates (50). While in most quinolone-resistant isolates a chromosomal mutation in the gyrA gene was detected, seven isolates, both susceptible and resistant, carried qnrS1. Co-occurrence of bla CMY-2,-59 and qnrS1 was observed in two isolates from the ST429 cluster, and all but one ST23 isolate carried the remaining qnrS1 genes. One may assume that the aforementioned fact that these STs are not commonly associated with human disease does not make them epidemiologically irrelevant, for they may still serve as a source of resistance or virulence determinants in horizontal gene transfer. Indeed, the importance of horizontal gene transfer may be assumed from the detection of a multitude of replicons previously associated with both resistance and virulence gene spread (51)(52)(53)(54)(55).
Conclusions
Despite its limitations due to the relatively small number of completely sequenced isolates, this study can be considered a basic overview of APEC diversity and a delineation of the paths to be followed in more extensive monitoring of virulent clones.

Methods

Sample collection and preliminary characterization

Within the preliminary characterization of 95 isolates, a slide agglutination test with four commercial antisera (O1, O8, O18 and O78) was performed according to the manufacturer's instructions (Denka Seiken, Japan). The presence of several selected resistance and virulence-associated genes (additional file 3, Table 3) was detected by PCR. After this preliminary characterization, 32 strains were selected for whole-genome sequencing. We selected isolates from chicken originating from one producer. To encompass the greatest possible diversity, we excluded isolates from the same individual, the same farm or the same date of isolation if they showed an identical resistance phenotype and gene profile (a schematic sketch of this selection rule is given after the sequencing paragraph below).

DNA extraction and whole-genome sequencing

The NucleoSpin Tissue DNA extraction kit (Macherey-Nagel, Germany), following the manufacturer's instructions, was used to obtain pure DNA. The DNA libraries were prepared with the Nextera XT Library preparation kit (Illumina, USA). Finally, Illumina NextSeq and MiSeq platforms were used for the whole-genome sequencing to obtain 2 x 150-bp or 2 x 300-bp paired-end reads, respectively.
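The isolate-selection rule described above can be written as a short filter. This is an assumed reconstruction for illustration, with hypothetical field names, not the authors' actual code:

def select_for_sequencing(isolates):
    # isolates: list of dicts with keys "individual", "farm", "date",
    # "phenotype" (string) and "genes" (frozenset of detected genes).
    selected, seen = [], set()
    for iso in isolates:
        keys = {
            (field, iso[field], iso["phenotype"], iso["genes"])
            for field in ("individual", "farm", "date")
        }
        if keys & seen:
            continue  # same individual/farm/date with an identical profile: exclude
        selected.append(iso)
        seen |= keys
    return selected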
Data processing
Adaptor residues and low-quality (Q ≤ 20) ends were removed from the reads. The threshold for gene identity was set to 95%. The Clermont typing tool was used to classify isolates into phylogenetic groups (64) (http://clermontyping.iameresearch.center/). In order to investigate the genetic relationships between isolates, genomes were annotated using Prokka v1.13 (65) and the core genome alignment was performed using Roary (66). The core genome alignment was used to determine the single nucleotide polymorphism (SNP) distance using snp-dists (https://github.com/tseemann/snpdists). The phylogenetic tree was constructed using RAxML v8.2.10 with the GTR+GAMMA+I model (67) and was then visualised via iTOL (68) (https://itol.embl.de/). The raw sequencing data were deposited in GenBank under BioProject PRJNA553636, and the corresponding SRA accession numbers for each sample can be found in Table 4 (additional file 4).
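For illustration, the pairwise SNP-distance step can be reproduced in a few lines of Python; this is a simplified stand-in for the snp-dists tool, using a hypothetical three-isolate alignment and ignoring gaps and ambiguous bases:

from itertools import combinations

def snp_distance(seq_a, seq_b):
    # Count aligned positions that differ, considering unambiguous bases only.
    return sum(1 for a, b in zip(seq_a, seq_b)
               if a != b and a in "ACGT" and b in "ACGT")

# Hypothetical core-genome alignment: isolate name -> aligned sequence.
alignment = {"isoA": "ACGTAC", "isoB": "ACGTTC", "isoC": "ACCTTC"}
for (na, sa), (nb, sb) in combinations(alignment.items(), 2):
    print(na, nb, snp_distance(sa, sb))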
Ethics approval and consent to participate
The samples were collected from diseased or dead animals by practicing veterinarians in cooperation with the farm owners and with their consent.
Consent for publication
Not applicable.
Availability of data and materials
The data supporting conclusions of this article are available in the GenBank https://www.ncbi.nlm.nih.gov/bioproject/PRJNA553636/.
Competing interests
The authors declare they have no conflict of interest.
Contemporary Intra-Core Relations and World Systems Theory
One of the great strengths of world-systems theory (wst) is the fact that it insists upon the need to analyse contemporary dynamics within a long historical perspective. It argues that we can make sense of historical continuity and change through its concepts of core/periphery relations reproducing themselves across time. And it also identifies a recurrent pattern, or series of patterns, in intra-core relations in the Modern World System since the 16th century involving a plurality of core powers both competing and co-operating with each other. Unlike, say, liberal international relations theory, wst sees intra-core relations as being marked by recurrent structural conflict as core powers compete with each other. But unlike realist international relations theory, wst does not derive its theory of structural conflict between core powers from purely political drives for power-maximisation on the part of states. Instead wst identifies the sources of conflict in the compulsions of capitalism as a socio-economic as well as an interstate system. In this paper, we will accept wst's theory of the sources of structural conflict amongst core powers within what Wallerstein calls the Modern World System. Our critique will be directed towards wst's theorisation of resulting conflicts as a recurrent pattern of hegemonic cycles.
This paper focuses upon one small region of World-Systems Theory (wst) but one that is important for analysis of the contemporary world: the dynamics of intra-core relations.
I will try to address three questions:
1. Does the wst theory of the historically cyclical patterns of intra-core relations provide us with a persuasive framework for understanding contemporary core dynamics?
2. More specifically, can the reach and depth of the power of the United States within the contemporary core be captured by wst's theory of capitalist hegemons and their rise and decline?
3. Is wst's insistence that core-wide world empires cannot be established in the modern world system valid?
In addressing these issues, I will begin by outlining the general approach of wst to the analysis of intra-core relations, focusing in particular upon wst's concept of core hegemons and their rise and fall. I will then look at the arguments of wst as to why a capitalist world empire is impossible. I will then go on to examine how we might conceive of the victory of a World-Empire. And I will then turn to examine the current situation and the character of the power of the US today.
The Mainstream wst Theory of Intra-Core Relations and Hegemonic Cycles
All the main trends in wst agree on the idea that within the Modern World System there have been recurrent cyclical patterns in intra-core relationships. The cycles can be thought of as beginning when one core power rises to a dominant position within the hierarchy, becoming a 'hegemon' and establishing some order and stability to the core as other states adapt to the new hegemon's regime. This phase is followed by attempts on the part of other core powers to innovate and challenge the hegemon. As this challenge mounts, the core enters a phase of instability and conflict, typically resolved by intra-core wars which eventually throw up a new hegemon while the previous hegemon declines.¹ Within the broad field of wst we can distinguish two contrasting emphases in the ways in which these cycles are theorised. One emphasis is close to realist theories of international relations, stressing the determinant as being the military-political capacities of core states. Writers like Modelski and Thompson, along with Gilpin, see the economic dimension as being subordinated to and structured by this issue of military-political capacity. But what might be called the mainstream of wst, represented by Wallerstein, Chase-Dunn and Arrighi, emphasises capitalist economic systems as the determinant element in the competition, understanding these economic systems in a Marxist sense as production systems generating streams of surplus value. They by no means ignore the role of military-political power, but they view its role as an indispensable support for the struggle for dominance at the level of production. Thus we can summarise their theory of the hegemonic cycles as having two main components:

a. A constant search by a plurality of core powers to gain dominance in the most sophisticated and desirable capital-intensive products. Hegemons are those capitalist powers which achieve dominance in this production field, thus positioning themselves at the top of the international division of labour, penetrating the markets of other core states, gaining the largest streams of surplus value and being able to set the framework for other core states in the economic field.
b. Military-political action is viewed mainly as a buttress or support for this economic dominance, protecting the core economy from external attack or internal challenge and removing obstacles to the flow of its products across the system (Wallerstein 1984).
It is this very specific definition of hegemony which results in the wst mainstream's identification of the three hegemonic powers as Holland, Britain and the United States. The military-political perspective of Modelski and Thompson focuses on sea power rather than dominance in capital-intensive commodities as the key to hegemony, and this gives Portugal a place on the list before Holland. But with either version we should note that the idea of hegemonic cycles in the core derives from the identification of hegemons and their fates.
This mainstream wst conception is perfectly coherent internally. But it is important to note that it employs a highly restricted concept of hegemony, and one anchored in production systems. It is on the basis of that specific and restricted concept of hegemony that wst can derive its historical chain of hegemons and the cyclical patterns of their rise and decline. But wst also, as an inevitable consequence of its specific theory of hegemonic cycles, downplays other aspects of intra-core relations and is predisposed towards certain expectations of the contemporary dynamics rather than others. Three specific consequences of these kinds are important:

a. The equation sign between the three powers designated as successive hegemons tends towards downplaying some radical differences between the three hegemonies in terms of the type of capitalism, in the nature of the core context in which the hegemons operate and the distinctive political capacities of the successive hegemons.
b. It tends to downplay the possibility that a hegemon with great political capacities may be able to exploit feedback mechanisms from the interstate system onto productive systems other than the traditional feedback mechanisms of intra-core wars.
c. It predisposes Wallerstein, Chase-Dunn and Arrighi, in their analysis of contemporary developments in the 1980s and 1990s, to view the US as having entered a phase of hegemonic decline after its dominance in capital-intensive production for core markets was challenged by German and Japanese capitalism in the 1970s.
¹ wst authors have also noted and explored other cyclical patterns and regularities, such as regularities of quantitative economic cycles (Kondratieff waves, with their A phase of growth and their B phase of depression), which they link with theories of co-operation/tension within the core, and quantitative regularities in the cycles of core warfare. But we will not consider these issues here.
The US as a Sui Generis Hegemon: Is it a Cycle-Breaker?

Wallerstein, Chase-Dunn and especially Arrighi do, of course, note various differences between the successive hegemonies, both in terms of their own attributes and the contexts in which they have operated. But they have underestimated the qualitative differences between the US and Britain, either by overplaying British power in the 19th century, or by underplaying US power in the second half of the twentieth century, or both. They have thereby tended to ignore the possibility that the peculiarities of US hegemonic capacities could disrupt the cyclical pattern by which wst has characterised core dynamics. We will briefly outline some central peculiarities of US hegemony since 1945.

The Unipolar Core

Since 1945 US dominance within the core has been qualitatively different from that of Britain in the 19th century, not to speak of Holland in the 17th century. The political dimension of the Britain-core relationship in the 19th century and the US-core relationship in the second half of the 20th century has been radically different. The British relationship was marked by balance of power mechanisms (political multipolarity); the American relationship since 1945 has been marked by political unipolarity.

Britain never could, and never tried to, suppress political multipolarity within the core. Apart from ensuring the security of its access to the continent through the Scheldt and Belgium, Britain had only a 'negative' political goal within the continental core: that of ensuring that no single continental power dominated the continent. Britain's lack of both political capacity and political ambition to dominate the continental core was an important reason why Britain was accepted as the leader of the international political economy by other core powers. That leadership operated within a balance of power international political mechanism.

Since 1945, the US has suppressed the balance of power mechanism within the core, brigading all other core powers into essentially bilateral security alliances dominated by the US and taking over the political leadership functions of the other core powers in the field of international politics. A hub-and-spokes structure of intra-core political/military relations thus ensued after 1945, with the primary political relationship of each core power being its subordinate link with Washington. There were, of course, variations in this political subordination: it was most marked in the case of the two other strongest core economies, Germany and Japan, less marked in the case of France. We will look at the modalities of this US political dominance later, but there is surely no doubt that it constitutes a radical difference with the British 19th-century case, and it is not just a difference in the quantitative power resources of the hegemon: it is a radical difference in the structure of intra-core politics.

The Structural Character of US Political Subordination of the Core

US political dominance over the core does not simply derive from the US's quantitatively greater military power resources. It derives from how those military resources are deployed to politically shape the foreign and security policy context facing other core states. By shaping this context the US has indirectly shaped the actual substance of the foreign policies of other core states. Let us note some key features of this shaping activity:

a. The US has the ability to shape and control the regional strategic environment of the West European powers and Japan. In the case of Western Europe this has been achieved through making Western Europe strategically dependent upon the US-Soviet and now US-Russia relationship; in the case of Japan, through making it dependent first on the US-Soviet relationship in the Cold War but now also on the US-China relationship. This strategic dependence of the allies is reinforced by the Treaty obligations on both Germany and Japan not to develop their own strategic nuclear capacities. It may be further reinforced by US development in the future of anti-ballistic missile capacities. Insofar as neither Germany nor Japan can break out of this strategic dependence on the relationship between the US and their neighbouring nuclear powers, their security is dependent upon the US.

b. The US has the ability to control, through its military-political reach, the regional peripheries of its major allies. In the West European case, the US has long controlled the Mediterranean area and it has now also extended its military-political predominance across South East and Eastern Europe through both NATO enlargement and the Partnership for Peace, as well as through bilateral agreements. On the Pacific Rim it has important military-political bridgeheads in South Korea and South East Asia and privileged security relationships with Australia and New Zealand. As a result of this US military-political predominance in the hinterlands of the other core centres, it can steer events in those hinterlands either to the benefit, or to the detriment, of these other core states. The US has demonstrated this capacity rather dramatically in the Yugoslav wars of the 1990s: from its refusal to use its resources to maintain Yugoslav unity in 1990-1991, to its drive for a unitary independent Bosnia (entailing a Bosnian war) at the start of 1992, to its success in persuading the Bosnian government to reject EU efforts to bring the war to an end, to its readiness to bring the war to an end once the EU states had accepted the dominance of NATO in the Yugoslav and wider European theatres, to its capacity to lead the EU states into a war with the Yugoslav state in 1999. The US has similarly acquired predominant regional military-political influence over such parts of the Japanese hinterland as the Philippines, Thailand, Indonesia, Taiwan and South Korea.
c. The US has the ability to control the sources of, and transport routes for, crucial energy and other strategic materials supplies needed by its allies, through its positions in the Middle East and its sea and air dominance in the Mediterranean, the Indian Ocean, the Pacific and the Atlantic (it has also, of course, been seeking to extend its control into the Caspian area in the recent past). Interruptions of supplies can have very grave consequences for the other core states, but they are dependent upon the US to assure these supplies.
d. Very importantly, it has also had the capacity to homogenise the political cultures of its allies around sets of political values articulated to serve US interests, symbolic structures rooted in the US victory over Japan and Germany in the second world war, embodying such highly sensitive symbols as 'Munich,' 'Hitler,' ethnicist nationalism and exterminism, totalitarianism versus freedom, democracy, individual rights, one universalist humanity, etc. This value structure has been repeatedly and effectively embedded within the national political cultures of its allies through repeated international political polarisations during and after the Cold War (notably recently in the drive against Iraq and in the various Yugoslav wars). It is a structure of political values which throws the main allied powers (Germany and Japan) into a very vulnerable international position, and it has also repeatedly demonstrated the US's capacity to trump the rival potential centre of internationalist liberal and democratic universalism, France.
Taken together, these four US capacities have reduced the foreign policy and power projection autonomy of its allies to near zero. This marks, at the very least, a profound, structural modification in the inter-state system in comparison with earlier epochs. Behind unipolarity lies a series of structural dependencies of other core states upon the US for their political security.
The Regime-Making Capacities of the United States
wst argues that each hegemon establishes an international regime of accumulation suited to its dominance in a particular set of capital-intensive commodities; the other core powers adapt to that regime and then launch a competitive challenge within it. The regime then is eventually reshaped through intra-core wars. But there have been striking differences between the regime-making capacities of the US and of Britain.
Britain established both a regime for trade and a regime for monetary relations: the Free Trade principle and the Gold Standard principle. But the other core powers were not brigaded by British power into accepting these regimes. They 'voluntarily' accepted them (or didn't, as the case may be). And Britain unilaterally committed itself to these regimes: free trade was a unilateral decision by Britain, not a reciprocal bargain; and the same was true of the Gold Standard.
The USA has been able to operate quite differently: it has imposed international regimes on the other core powers and has had the capacity both to stand above its own international regimes and to adapt them to suit its perceived interests, or to create entirely new regimes.
a. Trade Regimes: Thus the USA was never a unilateral free trader. It has adopted the ideology of free trade in the post-war period, but it has restricted its implementation in very important ways and has continually demonstrated its readiness, if necessary, to flout free trade principles and pursue a policy of reciprocity rather than most favoured nation (MFN) status in trade relations. At the start of the 1990s the GATT was the embodiment of free trade principles, but it was far from being the organiser of actual trade relations as a whole: on some estimates it embraced no more than about 5 percent of all international trade.
Thus the US has both presided over a (partial) free trade regime for the rest of the world and simultaneously given itself the right both to control the scope of that regime and to flout its own regime, where necessary, to suit its own interests.
This pattern has been applied throughout the post-1945 period and has been very evident in relation to the major institutional development in the field of economic relations in the 1990s: the emergence of the WTO. The US Congress's ratification of the WTO Treaty explicitly makes US acceptance of its jurisdiction conditional upon the WTO's being 'fair' to US interests. And all who follow international trade policy know that the word 'fair' in this context means serving and defending US economic interests. And for successive US administrations since the late 1980s, this conditional general stance towards the GATT/WTO has been combined in US trade policy with an explicit determination to flout GATT/WTO rules where these are deemed 'unfair' to US interests, an approach which Jagdish Bhagwati has aptly called 'aggressive unilateralism.' Bhagwati highlights the creation and use of the so-called Super 301 and Special 301 laws, but to these could be added other instruments of US unilateralism on international economic law, such as its use of anti-dumping instruments and countervailing duties. All these instruments have been placed in the service of US claims to have unilateral national authority to judge which kinds of behaviour by other states in economic policy are 'unfair' to the US, regardless of what rules are laid down within the GATT/WTO framework. And the use of these instruments has been far from marginal in US international economic policy. As Miles Kahler (1995) points out, 'the number of actions brought against "unfair" trading practices-anti-dumping, countervailing duties (subsidies) and section 301-increased dramatically' during the 1990s (p. 46). In the words of Pietro Nivola (1993), 'no other economic regulatory programme took on such an increase in case-loads' (p. 21).
And this refusal to be bound by global economic law has been combined with vigorous attempts in some fields to extend the jurisdictional reach of US domestic economic laws internationally, applying them to non-American corporations operating outside the United States. Of actions in this field, Kahler (1995) reports that 'Here the list was long' (p. 46).
b. International Monetary Relations: the contrast is equally striking, and structurally similar, in international monetary relations. The international monetary system established at Bretton Woods was always conditionally and partially implemented, and although it did begin with the US accepting a discipline upon its dollar policy through the gold link, when that discipline was perceived by the US government in the 1970s to be detrimental to US interests, it was simply scrapped through unilateral action by the US against opposition from all other core states; from then on the international monetary system became a pure dollar standard, thus manipulable by the US government as it wished.
This dollar standard international monetary system has enabled the US to escape from the usual balance of payments constraints upon a state's economic management, and has also enabled the US to escape the consequences of large swings in dollar exchange rates with other currencies, such as the D-mark and the Yen. It has thus been able to swing the dollar up or down against other currencies in line with purely US economic or political objectives.
John Williamson (1977), an insider in the diplomacy that led to the US's imposition of the dollar standard in the mid-1970s, has expressed what was at stake clearly: "The central political fact is that a dollar standard places the direction of world monetary policy in the hands of a single country, which thereby acquires great influence over the economic destiny of others. It is one thing to sacrifice sovereignty in the interests of interdependence; it is quite another when the relationship is one way. The difference is that between the EEC and a colonial empire…. The fact is that acceptance of a dollar standard necessarily implies a degree of asymmetry in power which, although it actually existed in the early post-war years, had vanished by the time that the world found itself sliding to a reluctant dollar standard" (p. 37).

c. International Financial Regimes: The same pattern has applied to the international financial regime: when the US government decided that the Bretton Woods system of state control over international financial flows was detrimental to US interests, it had the capacity in the 1970s to transform the regime, placing international financial flows in the hands of private financial operators and markets, and establishing New York as the international financial centre from the early 1980s. Since the 1970s it has also involved effectively dismantling the financial regimes of its allies (ending capital controls).
d. Product and Asset Market Regimes: US regime-shaping capacities have extended also to all other areas of international economic flows and international markets. Markets are often treated as if they were spheres of exchange autonomous from state policy, but in the modern world they are highly complex mechanisms grounded in intricate networks of public and private law, institutions and conventions. The state executives and big businesses of the core states work together to seek to shape markets in their own interests. And in this field the US has demonstrated great and continuing influence. Since the launching of the Uruguay Round in the mid-1980s it has been engaging in an extremely wide-ranging and remarkably successful effort to restructure both product and asset markets within other states, bringing their legal rules and institutions into line with the perceived interests of US business expansion into those states. These so-called 'behind the border' international regimes are another distinctive feature of the phase of US hegemony.
Giovanni Arrighi, who, more than other wst theorists, has understood some crucial distinctive features of US global power, provides us with an interesting perspective on this. He calls American capitalism 'autocentric' in its relation to the international political economy, while British capitalism was, in an important sense, shaped by the distinctive relationship of each of its parts with the world economy. The 'autocentric' character of US capitalism, made possible not only by its internal characteristics but also by its extraordinary power vis-a-vis the rest of the world explained above, has involved an ambitious agenda of, in Arrighi's words, 'internalising the world economy within and in line with the structures of American capitalism.' Arrighi stresses internalisation within the organisational domains of US MNCs, but US restructuring of the social relations of production abroad has been far more extensive than that.
We do not wish to suggest that these capacities to restructure the internal regimes of its allies have been absolute; absolutely not. And we will not, at this stage, consider how extensive they have been.
This international regime-shaping capacity in the international political economy has been, of course, linked to the overwhelming military-political dominance of the USA over the core discussed earlier. Both have given the USA historically egregious power capacities enabling it to respond assertively to the challenges to its hegemony in the field of capital-intensive production, using its strength outside this field to strike back on many fronts in order to prepare the way for its hegemonic restoration in the productive field. These feedback effects have not applied to other core powers and have not been given due weight by wst authors, although Arrighi has been sensitive to some important aspects of them.

US Feedback Mechanisms for Cycle-Breaking

wst's focus upon a definition of hegemony centred upon production systems has thus been combined with an inadequate stress on the mechanisms available to the US, and not available to earlier hegemons, for responding to challenges from core competitors in the sphere of production and striking back. We can think of these mechanisms as a kind of feedback from outside the productive sector onto the course of events within the productive sector. The most important of these mechanisms has been the US's extraordinary military-political reach; but also of great importance has been its power over the monetary-financial system. Both these mechanisms have given the US the ability to change and rechange the rules of the game in the sphere of production and commodity exchange in order to create the conditions for rebuilding US hegemony in the narrow sense in which it has been used by wst.

The potency of the military-political levers during the Cold War has been stressed by Samuel Huntington (1973) in an important article in the 1970s: Western Europe, Latin America, East Asia, and much of South Asia, the Middle East and Africa fell within what was euphemistically referred to as 'the Free World' and what was, in fact, a security zone. The governments within this zone found it in their interest: (a) to accept an explicit or implicit guarantee by Washington of the independence of their country and, in some cases, the authority of the government; (b) to permit access to their country to a variety of US governmental and non-governmental organisations pursuing goals which those organisations considered important…. The great bulk of the countries of Europe and the Third World…. found the advantages of transnational access to outweigh the costs of attempting to stop it (p.).

And as David Rothkopf (1998) has added, in the post-war years "Pax Americana came with an implicit price tag to nations that accepted the US security umbrella. If a country depended on the United States for security protection, it dealt with the United States on trade and commercial matters."

A very important indirect effect of US military-political capacity has been its control over energy and strategic mineral sources and transport routes, the most dramatic example being its use of the oil price rises in the early 1970s.

The potency of the monetary-financial levers has been equally striking, with the US government demonstrating repeatedly that, through the threat or actual use of US control over the international monetary and financial regime, it can profoundly negatively affect the economic outcomes of allied economies, disrupting their macro-economic strategies: what I have described elsewhere as the Dollar-Wall Street Regime constructed in the 1970s and early 1980s (Gowan 1999). Examples of such strategies would include monetary pressure on the French economy to defeat the Keynesian growth strategy of the early 1980s, and the manipulation of the Dollar-Yen exchange rate to exert intense pressure on Japan's trade position in order to gain an opening of Japanese finance to US financial operators in the 1980s and to gain various kinds of managed trade agreements with Japan in the 1990s. Linked to the security pact tactic, the US in the 1980s and 1990s added the use of economic statecraft in the monetary and financial field to encourage states to 'deal' with it on restructuring their approaches to economic policy and organisation.

Taken together, these levers have enabled the US to 'internalise' the international political economy, as Arrighi puts it, to a considerable extent or, to express the same idea in another way, to make significant inroads into the capacity of its allies to manage their own internal affairs autonomously.

The Mistake about US Hegemonic Decline

Aggregating all these distinctive features of US hegemony, we can see how, when faced with serious challenges to its dominance in capital-intensive sectors in the 1970s, the US has had a very wide range of instruments, essentially derived from its structural power over the inter-state system of the core, with which to strike back at competitors. These instruments have been largely ignored or downplayed by mainstream wst. And even Arrighi, who stresses them more than others, still remains wedded to the thesis of precipitate US hegemonic decline.

Arrighi's account of the supposed decline focuses upon financialisation. He provides a brilliant account of the way in which earlier hegemonic powers, when faced with defeat in product markets, switched to financialisation and to gaining profits from the competitive success of their rivals. This pattern fits Genoa, Holland and Britain. Chase-Dunn provides a supporting theorisation with his strong emphasis on capital mobility across the inter-state system. He adds to Arrighi's argument by saying that the declining hegemon's domestic capitals are not prepared to foot the bill for the mobilisation of state resources to re-subordinate rivals by military means. Arrighi then suggests that the international financialisation which we have witnessed since the 1970s has essentially been a repeat of this earlier cyclical pattern of financialisation. But this has not been the case: quite the opposite. First, the financialisation process was initiated as much by the US state as by US capitals. Secondly, it should be understood as part and parcel of the US state's drive to construct the Dollar-Wall Street Regime as a weapon for the US fight-back. Thirdly, US leadership of international monetary and financial relations has been a double lever for this fight-back: both an instrument of pressure upon other core states, as we have suggested above, and also an instrument for providing the US state with the financial resources for massively strengthening its state military-political capacity in the 1980s.

With all these instruments the US has thus been able to 'hold the line' against its allied competitors, and during the 1990s it has been able to pressure its allies into accepting its own internally generated new leading sectors of capital-intensive industries as the 'hegemonic' industrial driving forces of the new phase of the world economy: the 'information' and telecommunication industries.
part 2: wst and the possibility of capitalist world empires
Our critique of wst analysis of contemporary intra-core relations suggests that the scheme of hegemonic cycles in a politically pluralistic core may need structural modification in the light of the characteristics of US hegemony. Some writers, particularly American realists, go much further and insist that the advanced capitalist core today is organised as an American world-empire.
Zbigniew Brzezinski has recently forcefully advanced this argument that today we have US imperial dominance over its European and East Asian allies. He underlines the fact that "the scope and pervasiveness of American global power today are unique…. Its military legions are firmly perched on the western and eastern extremities of Eurasia, and they also control the Persian Gulf. American vassals and tributaries, some yearning to be embraced by even more formal ties to Washington, dot the entire Eurasian continent, as the map on page 22 shows" (Brzezinski 1997). What the map in question shows is areas of US 'geopolitical preponderance' and other areas of US 'political influence.' The whole of Western Europe, Japan, South Korea and Australia and New Zealand, as well as some parts of the Middle East and Canada, fall into the category of US geopolitical preponderance, not just influence.
Kenneth Waltz and Paul Wolfowitz have claimed that the Bush and Clinton administrations have been guided precisely by the goal of establishing political dominance over the rest of the core. The famous 1992 Bush administration document on American Grand Strategy for the post-Cold War world order frankly placed at the very centre of US strategic priorities the subordination of the rest of the core, in the version of the text leaked to the New York Times early in 1992.² This advocated as a central goal 'discouraging the advanced industrialised nations from…even aspiring to a larger global or regional role.' Waltz (2000) points out that despite protests at the time that the document was only a draft, 'its tenets continue to guide American policy.' The chair of the interagency committee which produced the 1992 Grand Strategy, Paul Wolfowitz, agrees with Waltz both that the 1992 strategy guidelines have guided US policy and that they have been centred on creating a Pax Americana in the sense of maintaining the subordination of the allies. He adds that 'just seven years later' many of those who criticised the document at the time 'seem quite comfortable with the idea of a Pax Americana…Today the criticism of Pax Americana comes mainly from the isolationist right, from Patrick Buchanan' (Wolfowitz 2000).
The concept of world-empires plays a prominent role in wst. When Wallerstein first launched wst upon the world in 1974 he argued that historically world systems have taken two forms: world economies and world empires. At the start of Volume One of Wallerstein's Modern World System, he draws this distinction very sharply (Wallerstein 1974). A world-economy, he explains, is an 'economic' unit, while a world-empire is a 'political' unit in which one political centre dominated the entire world system.
Chase-Dunn and Hall have modified Wallerstein's original conception, arguing that the concept of a World Empire should be defined as one power dominating the core rather than the entire international division of labour involving the whole periphery as well. As they put it: 'There have not been true "world-empires" in the sense that a single state encompassed an entire trade network…. rather, so-called world-empires have a relatively high degree of control over a relatively large proportion of a world system. The term we prefer, because it is more precise, is core-wide empire' (Chase-Dunn and Hall 1997:210).
They also acknowledge that there have been a series of attempts by capitalist powers to precisely achieve, through war, a capitalist world empire. They mention in particular the Napoleonic attempt and the German attempt in the first part of the 20th century (Chase-Dunn and Hall 1997; see also Chase-Dunn 1998).
2. This was the Draft of the Pentagon Defence Planning Guide.
Furthermore, Chase-Dunn (1998), in his book Global Formation, gives an even clearer and more analytically operational concept of a capitalist world empire: he says it is 'the formation of a core state large enough to end the operation of the balance of power system' (p. 147). This is precisely the condition which has applied in the core since 1945. Thus, Chase-Dunn's reformulation sharply raises the question whether what we have today is precisely just such a world empire dominating the core.
Yet a consistent and distinctive feature of wst since 1974 has been the insistence of Wallerstein and Chase-Dunn on the theoretical impossibility of a capitalist world-empire.
Thus, even while Chase-Dunn defines a world empire as a condition where a single core state suppresses the balance of power mechanism within the core—a very weak definition of a world empire—he does not acknowledge that the US has effectively achieved this since 1945. And like Wallerstein and other mainstream wst theorists he resolutely argues that in the modern, capitalist world system a core-wide empire is theoretically impossible. We will therefore examine in some detail the arguments of wst theorists as to why a capitalist world empire should be ruled out in the contemporary world.
wst authors reach this conclusion by various significantly different, though overlapping routes. Wallerstein acknowledges that both world economies and world empires seek the extraction of economic surplus. But he says that world empires employ a different mode of extraction, a statist tributary mode, while world economies use market exchange mechanisms. And since, for Wallerstein, market mechanisms are integral to capitalism, capitalist world empires are contradictions in terms. His conclusions as to the impossibility of a world empire are thus contained in his premises. He excludes ab initio the possibility that world empires could be other than tributary states.
As he explains: Political empires are a primitive means of economic domination. It is the social achievement of the modern world, if you will, to have invented the technology that makes it possible to increase the flow of the surplus from the lower strata to the upper strata, from the periphery to the center, from the majority to the minority, by eliminating the "waste" of too cumbersome a political superstructure. (Chase-Dunn: –)
In Rise and Demise Chase-Dunn and Hall make a similar point. They state: 'Capitalists prefer a multicentric international political system. Hence the most powerful states in the modern inter-state system do not try to create a core-wide empire but seek rather to sustain the interstate system. This is because their main method of accumulation is commodity production, which contrasts with precapitalist systems, in which state power itself was the main basis of accumulation, through taxes or tribute. Phrased differently, capitalist states are qualitatively different from tributary states' (Chase-Dunn and Hall 1997:33; see also Chase-Dunn 1990). This argument is re-iterated in slightly different terms towards the end of their book, when they say that in the modern world system, unlike earlier ones, a hegemonic power 'never takes over the other core states. This is not merely a systematic difference in the degree of peak political concentration. The whole nature of the process of rise and fall is different in the modern world-system. The structural difference is primarily due to the relatively much greater importance that capital accumulation has in the modern world system' (Chase-Dunn and Hall 1997:210). There is, indeed, a slightly different stress here from Wallerstein, particularly in the implicit idea of Chase-Dunn and Hall that core capitalists will display solidarity against a world empire being established by a hegemon since it would restrict their freedom of movement as capitals and block their scope for exploiting inter-state arbitrage, a point to which we will return.
But in Chase-Dunn's earlier book, Global Formation, he provides a much more specified and testable series of arguments as to why the modern capitalist core will successfully resist the establishment of a world empire. His argumentative route passes from an initial acceptance that a capitalist core-wide empire involving capitalist market exchange is in principle possible to deploying a series of arguments to the effect that there are overwhelmingly powerful forces built into the structure of the modern world system preventing this theoretical possibility from occurring. Some of these arguments derive resistances to world empire from structural characteristics of the inter-state system in the modern world. Others focus upon structural features of capitalism as an economic and social power system of production and upon the derived interest perceptions of capitalists.
While Chase-Dunn presents his argumentation as a set of reasons why a core-wide empire is impossible, we can re-angle his claims to present them as the necessary preconditions for achieving a core-wide empire. Some of these are preconditions in the inter-state system; others are preconditions concerning capitalist impulses and interests. We can summarise these as follows:
a. Inter-state system preconditions:
1. An empire-state would have to be strong enough to suppress the balance of power system and establish a unipolar organisation of core politics.
2. It would have to find ways of preventing the diffusion of military technologies to other core states, to prevent them mounting a military challenge to the empire-state.
3. It would have to be able to suppress the possibility of other core states using their sovereignty to experiment and innovate to challenge the hegemon in the productive field.
4. It would have to be able to prevent counter-tendencies and movements towards world government from other core capitalists and states, perhaps in alliance with other, subordinate social groups.
b. Capitalist interest/incentive preconditions:
1. It would have to prevent international capitalists from ganging up to weaken its control over the international political economy in order to protect their own freedom of movement and of operations from its predatory demands.
2. It would have to convince international capitalists that the world-empire would avoid undermining the basis of capitalist social domination within other core and periphery states, avoiding, for example, the possibility of transnational anti-systemic movements challenging both the empire and capitalism.
These arguments of Chase-Dunn are important. We can agree that many of them do indeed offer us a theory of the preconditions for a secure, long-term, core-wide empire, highlighting important internal tensions in any such project. But after examining each in turn, we will question some of the premises underlying Chase-Dunn's theorisation.
Inter-state Preconditions: The maintenance of unipolarity in the core, preventing other core states from allying against the world-empire project, is clearly a fundamental precondition. But Chase-Dunn's argument that the empire state would have to prevent the diffusion of military technological knowledge across the core—something that Chase-Dunn considers impossible in the modern world—is surely one-sided. The empire state would simply have to maintain at any one time a decisive technological lead sufficient to deter any challenge at any given time. This would indeed be a precondition, but one linked as much to relative resources for military research and development as to capacities to block information flows in this area.
The third point in this area—suppression of effective competitive challenges in the productive sector from other sovereign core nation states—is clearly fundamental. We can express this as the ability of the empire state effectively to control socio-economic developments and outcomes within juridically sovereign core states. Many would regard such a task as a contradiction in terms and thus a decisive basis for ruling out a world empire in which juridically sovereign states are retained in the core. We shall return to this subject later.
The fourth point—the empire-state's ability to prevent the other core states from transforming the world dominated by a single empire-state into a world state—is also, of course, fundamental.
Capitalist interest/incentive preconditions: This set of arguments essentially rests upon the idea that the interests/incentives of core capitals, including those of the incipient empire state, would be radically opposed to any such world empire project because of the systemic needs of capitalism as such. As Chase-Dunn and Hall (1997:33) put it in the quotation above, 'capitalists prefer a multicentric international political system.' They do so for both economic and political reasons.
Freedom of international movement of capital is important both to exploit unevenness and as a decisive source of structural power over geographically immobile labour. Both depend upon real competition between core states in the international political economy. This competition offers capital the chance for regime arbitrage across states, checks the ability of any state, not least the empire-state, to impose restrictions and extra fiscal and other burdens on capital, and drives labour constantly to accept restructuring of production within any state for fear of capital migration. Thus the maintenance of inter-state competition is necessary for the preservation of the social domination of capital.
But the inter-state system is not only a lever for negatively disciplining the working class and other subordinate groups in the economic system. It also provides a basis for subordination through providing strong 'vertical' political identities between different social groups within a given state: identities based on the supposed priority of racial/ethnic, cultural, or religious bonds between social classes within the state overriding other social divisions. The resulting 'state-worship' based upon the state's supposed embodiment of the values of the ethnic, cultural or religious community is a further source of social subordination to the rule of capitalism, and one that depends upon the maintenance of the authority and capacity of nation states and thus of the inter-state system. Insofar as a set of core nation states seemed to be subordinated to an empire state, there could be the risk of movements by subordinate classes across core states to mount challenges to the empire state with potentially anti-capitalist dynamics.
These arguments carry great force. But they rest quite strongly upon two premises. The first is that world-empires and sovereign states are necessarily mutually exclusive, polar opposites. And the second is that there is a structural tension between capitalists and states which a fortiori must be particularly strong as between capitalists and an empire state. Both these premises are weak in the contemporary world.
A World Empire of Juridically Sovereign States?
The liberal tradition tends to place juridical relations on a higher plane than political relations. It thus assumes that a world empire in a political sense presupposes juridically imperial relations. The European Empires of the first half of the 20th century were indeed juridically anchored, and liberalism typically assumes that their replacement with a new juridical order of sovereign states encompassing the globe ended the possibility of an era of empires of any kind.
But this concept of an empire presupposes that an imperial relation is one of hierarchical command-compliance: a centre gives an order and the subordinates follow it—a juridical empire is simply the most formalised form of such a hierarchical command empire.
But a systems approach to the organisation of politics and political economies can offer us a very different, more indirect but also more robust and effective form of imperial control, one in which the empire state has sufficient capacity to design the core as a system of interactions which systematically tends to produce outcomes re-enforcing the power and interests of the empire-state.
Joseph Nye (1990) discusses this variant in his book, Bound to Lead, as follows: Command power can rest on inducements ("carrots") or threats ("sticks"). But there is also an indirect way to exercise power. A country may achieve the outcomes it prefers in world politics because other countries want to follow it or have agreed to a system that produces such effects. In this sense, it is just as important to set the agenda and structure the situations in world politics as it is to get others to change in particular situations. This aspect of power—that is, getting others to want what you want—might be called indirect or cooptive power behaviour. It is in contrast to the active command power behaviour of getting others to do what you want (pp. –).
One central consequence of Nye's concept is that it suggests the possibility that a world empire can be an inter-state system and international political economy shaped and structured in ways that generate empire-state re-enforcing agendas and outcomes. We can call this an Empire-System.
Let us take some simple examples of how an Empire-System could work. If the empire state can shape the geopolitical environment of other core states in such a way that their security is threatened in ways that require the military resources of the empire state, these other core states will want what the empire-state wants. Or if the other core states' financial sectors' stability is bound up with the safety of their loans to empire-state companies and individuals whose prosperity in turn hinges upon rising prices on the empire-state's securities markets, those other core states will want what the government of the empire-state wants: a priority for stability on the empire-state's financial markets. Or if other core states' capitals view their continuing expansion as dependent upon further opening of 'emerging markets' in the semi-periphery, and if the most potent instrument for such opening is the empire-state's manipulation of the international monetary and financial regime, the other core states will want what the empire state wants.
Of course, in reality, a core-wide empire in contemporary conditions would not be exclusively an Empire-System of this sort. It would also possess various instruments of command power and indeed of covert action and surveillance within the core to assure its dominance. But the main form of its dominance would be indirect, of the Empire-System type, even if the Empire-System rested upon foundations of extraordinary military-political capacity and reach.
The Empire State as Friend or Foe of Capital?
The idea that there is a deep antagonism between private business and the state runs deep in Anglo-American liberalism and it has been radicalised in the neo-liberal ideologies of the contemporary period. This preconception can lead one to think that capital would be especially hostile to an imperial super-state.
One referent for this supposed antagonism lies, of course, in the counterposition between private-property-market mechanisms of supplying goods and services and state provision of goods and services. But to define the capitalist state as first and foremost a provider of goods and services is, to say the least, somewhat one-sided. Another referent is the trade-off between state revenue and retained private income. But this can scarcely be seen as a radical opposition between state and capital given that the bulk of such taxation is spent upon infrastructures necessary for the reproduction of the private sector itself.
There are, of course, very strong grounds for arguing the opposite case, namely that in the contemporary core there is a symbiotic relationship between capitalist states and capitalist classes. Arrighi has stressed the closeness of this relationship, pointing out that markets are simply a mediating level in capitalist reproduction rather than an autonomous governing framework for capital accumulation. He emphasises this with some striking formulations by Braudel on the relationships between capitalism and markets.
Braudel argues that the market should be seen as the 'middle layer' of the modern economy; beneath it is the layer of production and subsistence; and above it is the layer which Braudel calls capitalism—or as he expresses it, the 'anti-market.' Braudel (1982) says of this: "Above [the lowest layer], comes the favoured terrain of the market economy, with its many horizontal communications between different markets: here a degree of automatic co-ordination usually links supply, demand and prices. Then alongside, or rather above this layer, comes the zone of the anti-market, where the great predators roam and the law of the jungle operates. This—today as in the past, before and after the industrial revolution—is the real home of capitalism" (pp. 229–230). Elsewhere Braudel (1977) adds: 'Capitalism only triumphs when it becomes identified with the state, when it is the state' (pp. 64–65).
In this context, it is perfectly possible to envisage possible bases for strong co-operation between the capitals of the core and an emergent empire-state. Let us mention some of them:
a. If the empire-state presents itself as the champion of the most unrestricted rights of capital over labour within all the states of the core, this empire state should expect a warm reception from capitals across the core.
b. If the empire-state offers itself as an instrument for expanding the reach of all core capitals into the semi-periphery and periphery it should also expect a warm reception from capitals across the core.
c. If the empire-state offers a new model of capitalist organisation which brings very large additional pecuniary rewards to leading social groups within other core states it can hope to create a broad constituency of social support in the business classes across the core.
d. If the empire-state offers a mechanism for managing the world economy and world politics which is sufficiently cognisant of trans-core business interests the empire-state may be strongly preferred to the risks of institutionalised world government by core business and political elites.
Rather than opting for a capitalist world empire, capitalists are, in the view of Chase-Dunn and Hall, more likely to accept moves towards world government, despite the risks these steps could involve of generating social movements challenging the capitalist market. Thus, in Rise and Demise, speaking of the weak forms of global governance supplied by the Concert of Europe, the League of Nations and the UN, Chase-Dunn and Hall (1997) continue: 'Though these weak forms of global governance did not much alter the pattern of hegemonic rise and fall in the cycle of world wars over the past 200 years, the spiraling strengthening of global governance might, if it continues, eventually lead to a world state that can effectively prevent warfare among core states' (240). But they underestimate the extent to which the world-empire project can remain an attractive alternative even for the capitalists of competitive core states. One of the reasons for that attractiveness is precisely given by Chase-Dunn and Hall when they point out that a 'world state would likely be dominated by the hegemony of global capital for a time. However, if the fascist alternative were avoided, it might undergo a reform process that would lead to global democratic socialism' (Chase-Dunn and Hall 1997:240).
In conclusion, insofar as Chase-Dunn is arguing that a precondition for a capitalist world empire is that the empire-state must be perceived by strategic sectors of core-wide capital as its champion, we could agree with him. But insofar as he argues that this is a theoretical impossibility we would disagree.
wst theorists do not seem to have adequately explored the possibility that within the Modern World System a core-wide empire is, under certain conditions, very much a theoretical possibility. The key attributes of a state seeking to become an empire-state in contemporary conditions are:
a. It must have the resources to organise its empire as a System-Empire, not just as a Command (or juridical) Empire.
b. It must have the capacity to rally strategic constituencies of core-wide capital to its empire project.
Of course, the long-term sustainability of the world empire would require many other preconditions: the empire-state would have to use its extraordinary dominance to ensure the continued ascendancy of its capitals in key production sectors. It would have to assure its capacity to extract sufficient resources from the reproduction process to sustain its military-political reach and ascendancy, and it would be faced by the constant danger that its own public policy blunders could drag it down to defeat.
We will now turn to consideration of whether such an empire actually exists, as Zbigniew Brzezinski would have us believe.
part 3: current intra-core dynamics: the united states as a new world-empire?
One of the most striking areas of weakness in Western social science analysis in the last quarter of a century has been its inability to reach anything like a stable, minimal agreement on the role and capacity of the United States in international relations. Within a decade opinion has swung wildly from images of the US as being in terminal hegemonic decline to images of it as a colossus dominating the planet. And there has generally been no minimal agreement, even within each of the various intellectual paradigms, on the criteria for making analytical judgments on this topic.
Mainstream wst at least has had the merit of maintaining over decades a fairly clear and stable set of theoretical and analytical criteria for approaching this topic. It has ruled out the theoretical possibility of a world empire, it has provided clear criteria for identifying hegemonic status and it has judged, on the basis of its criteria, that since the 1970s the US has been in hegemonic decline.
The performance of American capitalism in the 1990s would also seem to provide wst with evidence that the United States is bouncing back and has entered a phase of hegemonic revival—something not excluded as a possibility in wst. In the capital-intensive information and telecommunication industries which seem to be revolutionising international economics, the US seems to possess a substantial competitive advantage. And more than ever it seems to possess the military-political capacity to ensure the diffusion of its products in these fields on a global scale.
But our analysis in this paper suggests that the United States occupies a place within the contemporary core qualitatively different from the place suggested by the concept of hegemon which mainstream wst advances. It possesses strong elements of what we have called a capitalist world empire.
We will focus here on some critical issues on which a judgement of the nature of US dominance would depend. We argued above that the success of the American state's project for establishing and consolidating a capitalist world empire must depend upon achieving four critical goals:
a. It must have the capacity to rally strategic constituencies of core-wide capital to its empire project.
We will focus here on some critical issues on which a judgement of the nature of US dominance would depend.We argued above that the success of usually links supply, demand and prices.Th en alongside, or rather above this layer, comes the zone of the anti-market, where the great predators roam and the law of the jungle operates.Th is-today as in the past, before and after the industrial revolution-is the real home of capitalism" (pp.229-230).Elsewhere Braudel (1977) adds: 'Capitalism only triumphs when it becomes identifi ed with the state, when it is the state ' (pp. 64-65).
In this context, it is perfectly possible to envisage possible bases for strong co-operation between the capitals of the core and an emergent empire-state.Let us mention some of them: a.If the empire-state presents itself as the champion of the most unrestricted rights of capital over labour within all the states of the core, this empire state should expect a warm reception from capitals across the core.
b.If the empire-state off ers itself as an instrument for expanding the reach of all core capitals into the semi-periphery and periphery it should also expect a warm reception from capitals across the core.
c.If the empire-state off ers a new model of capitalist organisation which brings very large additional pecuniary rewards to leading social groups within other core states it can hope to create a broad constituency of social support in the business classes across the core.
d.If the empire-state off ers a mechanism for managing the world economy and world politics which is suffi ciently cognisant of trans-core business interests the empire-state may be strongly preferred to the risks of institutionalised world government by core business and political elites.
In conclusion, insofar as Chase-Dunn is arguing that a precondition for a capitalist world empire is that the empire-state must be perceived by strategic sectors of core-wide capital as its champion, we could agree with him.But insofar as he argues that this is a theoretical impossibility we would disagree.
wst theorists do not seem to have adequately explored the possibility that within the Modern World System, a core wide empire is, under certain conditions, very much a theoretical possibility.Th e key attributes of a state seeking to become an empire-state in contemporary conditions are: a.It must have the resources to organise its empire as a System-Empire not just as a Command (or juridical) Empire.
b.It must have the capacity to rally strategic constituencies of core-wide capital to its empire project.
b. It must have, and must be able to deploy effectively, the resources to organise its empire as a System-Empire, not just as a Command Empire.
c. Success in these two fields must be complemented by its ability to sustain, in the long term, its ascendancy in the most dynamic sectors of capital-intensive production.
d. Success must also include an effective set of mechanisms for demonstrating that such an empire-system is optimal for managing transnational class relations between capitalism and subordinate classes, coping with future anti-systemic movements.
International Social Coalition Building
In pursuing their world-empire project over the last twenty years, the United States' business and political elites have sought to rally support as the champions not just of American business interests but of business interests and the strengthening of capitalism as a social system on a world-wide scale. This, we have argued, is a necessary condition for any capitalist world-empire project.
On the face of it, this task might seem a daunting one. After all, every European or Asian business person knows very well that the US government aggressively supports its own businesses against the international competition wherever it can, a feature that has been particularly pronounced in the Clinton administration. Yet the US has shown that it has very great capacities to present itself as the leader of global capitalist interests in a number of ways:
a. The champion of the rights of capital over labour. Business in other parts of the core and semi-periphery is not simply or mainly pre-occupied with competitive challenges from US businesses. It is daily concerned with maintaining its stable social ascendancy over labour. The US stands as an example and a champion of the most unrestricted rights of capital over labour within all the states of the core. Its programmes processed through the IMF and World Bank in the former Soviet Bloc, in the semi-periphery and periphery demonstrate that. And its programmes for privatising utilities, freeing transnational private financial operations, placing the financial sector in the driving seat and re-accenting capitalism towards securities-market-centred, share-holder value buttressed by private pension funds have great attractions for core capitalists. The US programme offers very substantial rewards to the rentier interests of business executives and others. Thus, insofar as the German government fully adopted the US programme for shareholder capitalism, a German business executive could hope to see his or her income at least doubling.
b. Strengthening Core Capital's Expansion into the Semi-Periphery and Periphery. A second very important basis for the US being able to present itself as the champion of core capital as a whole lies in its ability to demonstrate its leadership on the global expansion of core capitals into regions outside the core. Since the days of the Reagan administration, the US has driven forward a programme which offers the semi-periphery and periphery only one path towards economic development: that of opening its domestic assets to the entry of core capitals for FDI-led growth and for portfolio inflows to compensate for domestic financial and fiscal strains. This has been a powerful programmatic link between the interests of the United States and its businesses on the one hand and the businesses of the rest of the core on the other.
c. Bargaining Power with the Strongest Non-American Core Businesses. A much more narrowly focused but extremely important aspect of US coalition-building is its ability to accept or deny the most influential groups of multinational corporations based in other core states secure insertion into the US market itself. Any European or Japanese company seeking global ascendancy in its sector must gain a strong, secure presence within the United States. Achieving this is as much a political as a purely economic task. The capacity of the Deutsche Bank to buy a large US bank or of Daimler Benz to buy a large US car producer depends upon a willingness to accept American approaches to developments in their own countries, for example, a readiness on the part of the Deutsche Bank to move away from the closed system of German corporate governance involving inter-locking bank-industrial structures. The same applies to Japanese companies.
d. Being able to resist pressures from other parts of the core for collegial, institutionalised forms of global government by offering core capitals sufficient scope for their own expansion within an empire-state framework of global governance.
This has been, perhaps, the most sensitive area in the efforts of the US to consolidate its global social coalition in the 1990s. Its operations in international monetary, financial and trade and investment policy at an international level have frequently aroused suspicion on the part of the capitals as well as the governments of other parts of the core that US power is being used narrowly to favour its own capitals and clients. At a more immediate level, a powerful compensating factor mitigating resentments among other core capitals against US economic nationalism has been the boom in the American economy itself, which has offered wide profitable opportunities for capitals across the core and which has thus eased international business tensions.
All these factors, then, have enabled the United States to gain very broad social support from the business classes of the rest of the core for its world-empire project in the 1990s. No clearer demonstration of that is needed than the fact that the media empires of the core have been prepared to thematise the American project not as a Pax Americana but as an agentless process of 'globalisation' that we must all accept and live within.
Progress Towards an Empire-System
We have argued that in the contemporary world, a core-wide empire cannot be sustainable simply as a Command Empire, whereby the empire-state is reliant upon carrots and sticks to maintain its dominance over the rest of the core. These command capacities should be confined largely to crisis situations, while the normal functioning of the order leaves them in the background and can rely upon the shaping of the power-relevant environments of other core powers to make them 'want what the US wants', in the phrase of Joseph Nye. We will now investigate the extent to which the US has been able to advance and consolidate this Empire-System in the 1990s.
a. Preventing Other Core Powers from Gaining Regional Geostrategic Autonomy. The Bush administration's 1992 Grand Strategy document was surely right to prioritise the risk of the West European and Japanese parts of the core acquiring regional political autonomy. One very important dimension of this is geostrategic autonomy. This could be achieved through Germany leading Western Europe into a strategic security partnership with Russia and through Japan entering a strategic security partnership with China. Such partnerships would not, of course, be directed against the United States. They would simply give priority to the formation of a security community of the states involved. In the event of achieving this, the relevant core states would lose their geostrategic dependence on the US relationships respectively with Russia and China.
During the 1990s, the US has successfully prevented this eventuality from arising. The exclusion of Russia from an enlarging NATO striking out of area at a state with friendly relations with Russia—namely Serbia—in the Kosovo war has indeed gone a great distance towards rebuilding Europe's bipolar structure. At the same time the United States has been strengthened in its efforts to secure a belt of pro-US states between Russia and Germany. A further step to consolidate this pattern of Western Europe's strategic dependence on the US would, paradoxically, need to be for the US to have the capacity to demonstrate to Russia that its position in the international order can best be secured through privileging its relations with the US rather than with Germany and Western Europe.
In the Pacific region, there is little risk of Japan seeking to break out of its strategic dependence upon the United States–China relationship because of the many potential conflicts of political interest with a China which is becoming increasingly powerful within the whole region.
b. Preventing European Political Unity. A very important and too little recognised feature of US political dominance in Europe during the Cold War was the fact that NATO Western Europe was actually politically fragmented, with each fragment having its main political link with the US rather than with other West European fragments. The EU created the illusion that this was not so. This political fragmentation of Western Europe has continued through the 1990s, but significant counter-tendencies are emerging, focused upon a much more political Franco-German axis. The driving forces behind these tendencies lie first in the common commitment to the Euro and to giving it an adequate political anchorage; and secondly, in the common concern at their vulnerability to events in East Central, South Eastern and Eastern Europe which the West European states do not control (and which the United States exerts increasing influence over). These pressures are leading to efforts to build an inner core within the EU and to giving that core (with or without Britain) some collective military capacity. This shows it to be a cohesive political group around the Euro, turns it towards being a West European caucus within NATO and gives it, through its collective military instruments, the potential to wield greater influence around Western Europe's immediate hinterland. A secure world-empire would need to contain such pressures.
c. Preventing Pacific Regional Political-Economy Integration. The greatest challenge to a consolidated World Empire in the Pacific region would come from the capacity of Japan and China and the ASEAN states to form a stable regional political-economy bloc, whether involving monetary and financial integration or a so-called 'Free Trade Area' (i.e. a zone of relatively protected investment and trade linkages). The United States, whose economic penetration of the region has been weak, has worked hard to prevent such a development. It succeeded triumphantly (with West European support) in preventing Japan from establishing a regional financial and monetary shield in the autumn of 1997 and in subsequently greatly strengthening US economic penetration of the region as a result of the financial crisis of 1997–8 and the IMF (i.e. US Treasury) policies in that crisis. But Japanese efforts to build such a financial and perhaps monetary shield have been relaunched in 2000, with support from China and with some initial success. Nevertheless, access to the US market remains sufficiently critical for so many of these economies that the US retains substantial leverage at a political-economy as well as a military-political level.
d. Maintaining International Monetary and Financial Leverage. A US World-Empire project would have to combine the military-political dimension with continued dominance over international monetary and financial relations. Both Japan and Western Europe have taken steps, in different ways, to protect themselves from the US use of economic statecraft in this field to exert pressure on the rest of the core.
In the West European case, this has been attempted through the European Monetary System and its successor, the Euro. The final implementation of the Euro in July 2002 will supply a very substantial shield for Western Europe, particularly when it is combined with an integrated, deep and liquid EU financial system. The strength of this shield will be all the greater in that, despite all the talk of economic globalisation, the European economy is becoming an increasingly closed one, less and less reliant upon transatlantic trade.
As far as Japan is concerned, it has not made any serious attempt to turn the Yen into a significant international reserve currency or to construct a yen bloc as a shield against US economic statecraft. This would be too risky a step, threatening heavy retaliation. Instead the Japanese government has used its enormous financial power to build up very large positions in the US financial market, especially in the Treasury bond markets. Such is the size of these Japanese holdings in the dollar area that their liquidation could deliver a substantial shock to the dollar. In other words, the Japanese government has acquired leverage over US dollar policy.
e. Gaining Strategic Control over the International Division of Labour. A fully-fledged World-Empire project would give the United States the capacity not just to use the market mechanism to assure its ascendancy in product and services markets but to acquire a more structured ascendancy in the markets of the rest of the core. Yet there is continued resistance to efforts in this direction from both Japan and Western Europe. One striking symptom of this is the instability and tension surrounding the functioning of the World Trade Organisation. Another is the series of battles raging over biotechnology industries. A third is the very important conflicts over corporate governance issues and the capacity of foreign capitals to engage in hostile takeovers of important domestic companies. A fourth is the constant efforts of the US to enlarge the reach of US domestic jurisdiction over the political economies of the rest of the core.
The general direction of US policy in these areas is that of re-engineering the internal social relations of the rest of the core in such a way as to enable US capitalism to use its huge financial resources to centralise and concentrate capital effortlessly across the core in the sectors considered vital for US ascendancy. But the US is still a long way from achieving this, even if it has progressed far down this road in the case of Britain.
This is the area where a capitalist world empire does seem to reach its limits as a result of the necessary continued existence of an inter-state system of parcelised legal sovereignties within the core. The capacity of other core states to use their legal and administrative autonomy as well as their economic capacity and cultural/political identities to resist pressures in this area has been demonstrated in the cases of both Japan and Germany over the last two decades. This capacity for resistance is not limitless. The Japanese financial crisis of 1998 demonstrated the US's ability to enlarge the frontier of its penetration into Japan. But it remains very great.
Assuring US Ascendancy in the Field of Production
The extraordinary advances made by the United States during the 1990s have received great impetus from both the macro-economic dynamism of the US economy in the context of continuing stagnation in Japan and Western Europe and from the perceived emergence of a new wave of growth-generating capital-intensive industries within the United States. These two factors have dazzled the capitalists of the rest of the core. But they may not be as solidly based as they seem.
There is now widespread agreement that the US boom has been fed by some features which are not only unsustainable but potentially very dangerous: a strongly speculative boom on the stock market, which itself has become an ever-more central mechanism in the American economy, a huge growth in private indebtedness, with much of the debt being tied to stock market speculation, and very large levels of US international debt and US trade deficits. A sudden shock could therefore swiftly transform the boom into a very savage financial crisis and deep recession with multiple consequences for the world economy.
Secondly, the supposed new growth motors of information industries and telecommunications may not have the long-term effects of sustained productivity gains necessary for what wst theorists call the A Phase of a new K-wave, in other words a new long boom anchored in a new US hegemony in the key productive sectors. Studies of the impact of information technology on productivity do not indicate unequivocally its capacity to be the necessary growth motor for a new long boom.
Thirdly, there are very real doubts about the new American business system of share-holder value. While this system is extremely attractive at a pecuniary level to business classes throughout the core, and while it offers great opportunities for US money capital to extend its sway over productive assets in other countries, there must be serious doubts as to whether it is an effective business system for generating long-term large investments in fixed capital, geared to sustaining US innovation and productive ascendancy. If German and Japanese capitalisms can resist the seductions of dramatic short-term financial gains and maintain business systems more geared to long-term investment in innovations, they may well be able to remount a challenge to the US in the productive sector quite rapidly (O'Sullivan 2000).
Coping with Future Anti-Systemic Movements
Too often overlooked in assessments of American resurgence in the 1990s has been one absolutely central feature of the period: the collapse of Communism. This has not simply led to a scramble for gain in the former Soviet Bloc; it has given a unique accent to transnational class relations because it has resulted in the disorientation and disorganisation of labour on an international scale. This has been a fundamental social basis for the extraordinary advance of the new Pax Americana or empire project.
That project's advance has required that the states and capitalist classes of the rest of the core find it relatively risk-free to accent their efforts towards bandwagoning with the US programme of unfettered capitalism, American style. The weakness of labour has made that emphasis relatively easy to achieve. But in the event of a restabilisation of labour and renewed pressure from that quarter, core and semi-periphery capitalist states will face a trade-off between making further adaptations towards the regime goals of the US and making adaptations to the domestic pressures from labour, even at the cost of disrupting US regimes. A process can occur somewhat similar to the processes leading to the disintegration of the Gold Standard and free trade in the interwar period, as states in Europe had to cope with the rise of labour then. And, of course, core and semi-periphery states can also use the risk of a challenge from labour as a way of resisting US pressures to accept imperial regimes.
While a revival of the strength of labour may seem to many a fanciful prospect at this moment of post-modernist play and senses of endings, there remain both strong sociological and economic bases for such a resurgence, and also still very substantial resources of the most subversive strands of the modernist project available for challenging the narrow strip of liberal individualist universalism through which the current imperial project is ideologically legitimated.
Such a revival of the challenge from labour could also be used by core powers to advance a programme of more collegial and institutionalised world government against the unipolar, US-governance instruments which have been unchallenged in the 1990s.
conclusion
wst's historical theorisation of intra-core relations has been a very great scientific achievement. It provides us with a comprehensive research agenda on this topic, even if it underplays the radical differences between the hegemony of Britain and the United States, down-grades some central features of US hegemonic capacities and rules out too glibly the possibility of a contemporary capitalist world empire. Furthermore, the work of Arrighi contains many insights and leads upon which to draw for developing a more adequate analysis of contemporary dynamics. And Chase-Dunn's and Hall's work has helped to transform wst's study of these issues from being a brilliant schema outlined by Wallerstein into a very serious scholarly research programme.
Equivelar toroids with few flag-orbits
An $(n+1)$-toroid is a quotient of a tessellation of the $n$-dimensional Euclidean space by a lattice group. Toroids are generalizations of maps on the torus to higher dimensions and also provide examples of abstract polytopes. Equivelar toroids are those that are induced by regular tessellations. In this paper we present a classification of equivelar $(n+1)$-toroids with at most $n$ flag-orbits; in particular, we discuss a classification of $2$-orbit toroids of arbitrary dimension.
Introduction
The study of symmetric discrete objects from both combinatorial and geometric approaches has been of interest in recent years. In particular, symmetric maps on surfaces have been one of the most studied topics.
In [5] Coxeter and Moser present the classification of regular (reflexible) and chiral (regular irreflexible) maps on the torus. All such maps arise as quotients of a regular tessellation of the Euclidean plane. Several results regarding a classification of highly symmetric maps on surfaces of small genus have been obtained since; see [2, 3].
When looking for generalizations of maps to higher dimensions, the approach of abstract polytopes has been one of the most studied. The number of recent contributions to the theory of symmetric abstract polytopes is large. Many such results may be found in [13].
One concept that generalizes maps in a combinatorial way while keeping the topological idea behind maps is that of tessellations of space forms (see [13, Chapter 6]). Euclidean space forms are probably the most studied in the setting of symmetric tessellations. Among those, the $n$-dimensional torus is probably the best understood.
When talking about symmetric structures on the $n$-dimensional torus, much of the work follows the ideas introduced by Coxeter and Moser in [5]. Toroids are generalizations to higher dimensions of maps on the torus and may be regarded as tessellations of the $n$-dimensional torus.
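To fix ideas (the notation in this sketch is ours, consistent with the sources cited above), such toroids arise as quotients: if $U$ is a tessellation of $E^n$ and $\Lambda$ is a rank-$n$ lattice of translations preserving $U$, then
\[ U/\Lambda \]
is a tessellation of the $n$-torus $E^n/\Lambda$, that is, an $(n+1)$-toroid, whose faces are the $\Lambda$-orbits of the faces of $U$. For example, the quotient of the square tessellation $\{4,4\}$ of $E^2$ by the lattice generated by $(s,0)$ and $(0,s)$ is the toroidal map usually denoted $\{4,4\}_{(s,0)}$ (the notation varies slightly among the sources cited).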
Several classification results of highly symmetric toroids have been developed in the recent years. In [12] McMullen and Schulte classify regular toroids of arbitrary dimension. They also show that there are no chiral toroids of dimension higher than 3. In [9] Hartley, McMullen and Schulte extend this result and show that the only Euclidean space form that admits chiral tessellations is the n-torus and prove that this is only possible when n = 2.
In [1] Brehm and Kühnel classify the equivelar maps in the two dimensional torus. This classification was also achieved by Hubard et al. in [10]. They also extend the techniques to classify equivelar tessellations of the 3 dimensional torus (rank-4 toroids).
As a consequence of their results in [10], Hubard, Orbanić, Pellicer and Weiss found that there are no equivelar 2-orbit $(3+1)$-toroids. Therefore, they mentioned that even though a classification of $(n+1)$-toroids for $n \geq 4$ seems to be too hard to achieve with their techniques, it would be of interest to obtain a classification of 2-orbit toroids of rank $n+1$ for $n \geq 4$.
In this paper we give a classification of equivelar $(n+1)$-toroids with at most $n$ orbits. In particular, 2-orbit $(n+1)$-toroids are classified for arbitrary dimension. If $n \leq 3$ the classification is a consequence of the results in [5] and [10]. The main results of this article can be summarized in the following theorem.

Theorem 1. Let $n \geq 4$. The classification of equivelar $(n+1)$-toroids is explained in Table 1.

Table 1: Classification of equivelar $(n+1)$-toroids with at most $n$ flag-orbits.

Type $\{4, 3^{n-2}, 4\}$: If $n$ is odd, there are no 2-orbit toroids. If $n$ is even, there is one family of toroids in class $2_{\{1,2,\dots,n-1\}}$. There are no $k$-orbit toroids if $2 < k < n$. There are five infinite families of toroids with $n$ flag-orbits, all with the same symmetry type.

Types $\{3,3,4,3\}$ and $\{3,4,3,3\}$ (here $n = 4$): There is one family of 3-orbit toroids, and there are five infinite families of toroids with 4 flag-orbits, all with the same symmetry type.
Basic notions

2.1 Tessellations of the Euclidean Space
In this section we introduce basic concepts about Euclidean tessellations and toroids. We focus mainly on those with a high degree of symmetry. Readers interested in further details are referred to [13, Chapter 6] and [10].
A convex $n$-polytope $P$ is the convex hull of a finite set of points of $E^n$ such that the interior of $P$ is non-empty. If $P$ is a convex $n$-polytope, then $F \subseteq P$ is a face of $P$ if $F = P \cap \Pi$ for some hyperplane $\Pi$ that leaves $P$ contained in one of the closed half-spaces determined by $\Pi$. If the affine dimension of a face $F$ is $i$ for some $i \in \{0, \dots, n-1\}$, then $F$ is an $i$-face. The $0$-faces, $1$-faces and $(n-1)$-faces of a convex $n$-polytope are also called vertices, edges and facets, respectively. We usually consider a convex $n$-polytope $P$ itself as its (unique) $n$-face. Readers interested in more details concerning basic notions of convex polytopes are referred to [11, Chapter 5].
A tessellation of the Euclidean space $E^n$ (or a Euclidean tessellation) is a family $U$ of convex $n$-polytopes that is locally finite, meaning that every compact set of $E^n$ meets only finitely many members of $U$. We also require that $U$ covers the space and tiles it in a face-to-face manner. That is, every two members of $U$ that have non-empty intersection have disjoint interiors, and they meet in a common $i$-face for some $i \in \{0, \dots, n-1\}$. If $U$ is a Euclidean tessellation, the elements of $U$ are called cells.
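For a concrete instance of this definition (a standard example; the explicit description is ours, not taken from the original text), consider the cubic tessellation of $E^n$,
\[ U = \{\, [0,1]^n + v \;:\; v \in \mathbb{Z}^n \,\}, \]
whose cells are the integer translates of the unit cube. Any compact set meets only finitely many translates, so $U$ is locally finite, and any two cells that intersect have disjoint interiors and meet in a common $i$-face, so $U$ is face-to-face.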
A flag in a convex n-polytope is an (n + 1)-tuple of mutually incident faces containing exactly one face of each dimension, including the polytope itself. This definition extends naturally to tessellations of E^n: a flag of a tessellation U is a flag of any cell of U. It is sometimes useful to identify a flag Φ of U with the (non-regular) n-simplex induced by the centroids of the faces of Φ. Observe that given a flag Φ of U and i ∈ {0, . . . , n}, there exists exactly one flag Φ^i of U that differs from Φ only in the face of dimension i. In this situation we say that Φ and Φ^i are adjacent (or i-adjacent if we want to emphasize i).
A symmetry of a tessellation U is an isometry of E n that preserves U. The group of symmetries of U is denoted by G(U). It is not hard to see that G(U) acts freely on the set of flags of U. A tessellation U is regular if the action of G(U) on the flags of U is transitive.
The dual tessellation of a regular Euclidean tessellation U, usually denoted by U * , is the tessellation whose cells are the polytopes given by the convex hull of the centroids of the cells of U incident to a common vertex of U. A tessellation is self-dual if it is isometric to its dual.
The Schläfli type (or type, for short) of a convex n-polytope P is defined recursively as follows. If n = 2 then P is a convex p-gon for some p, and we say that P has Schläfli type {p}. For n ≥ 3, whenever all the facets of a convex n-polytope have type {p_1, . . . , p_{n−2}} and there are exactly p_{n−1} facets around each (n − 3)-face of P, we say that P has Schläfli type {p_1, . . . , p_{n−1}}. Observe that not every convex polytope has a well-defined Schläfli type; however, all regular convex polytopes do. The notion of Schläfli type extends to tessellations in a natural way. We say that U has (Schläfli) type {p_1, . . . , p_n} if all the cells of U have type {p_1, . . . , p_{n−1}} and the number of cells around each (n − 2)-face of U is p_n.

Regular tessellations are well known. There exists a self-dual regular tessellation of E^n with cubes, with type {4, 3^{n−2}, 4}. Here 3^{n−2} denotes a sequence of 3s of length n − 2; if there is no possible confusion, in this work we will use exponents to denote a sequence of equal symbols. In E^2 there exists a regular tessellation with equilateral triangles and type {3, 6}, and a tessellation with regular hexagons and type {6, 3}. These two are dual to each other. In E^4 there is another pair of regular tessellations, one with 24-cells as facets and type {3, 4, 3, 3}, and its dual of type {3, 3, 4, 3}, whose cells are four-dimensional cross-polytopes. These tessellations are unique up to similarity and they complete the list of regular tessellations of the Euclidean n-space [4, Table II]. In Table 2 we give explicit coordinates for the vertex set of one of each pair of dual regular tessellations. Those coordinates determine the tessellation uniquely.
If U is a regular tessellation of E^n and Φ is a fixed base flag, there exist symmetries R_0, . . . , R_n of U such that ΦR_i = Φ^i. The symmetries R_0, . . . , R_n are the reflections on the facets of the n-simplex induced by Φ and generate the group G(U). If U has type {p_1, . . . , p_n}, the group G(U) with the generators R_0, . . . , R_n is the string Coxeter group [p_1, . . . , p_n], meaning that ⟨R_0, . . . , R_n | (R_iR_j)^{p_{i,j}} = id⟩ is a presentation for the group, where p_{i,j} = p_{j,i}, p_{i,i} = 1, p_{i,j} = 2 if |i − j| > 1, and p_{i−1,i} = p_i. Moreover, the group of symmetries of the cell contained in the base flag is ⟨R_0, . . . , R_{n−1}⟩ ≅ [p_1, . . . , p_{n−1}], and the stabilizer of the vertex of Φ is ⟨R_1, . . . , R_n⟩ ≅ [p_2, . . . , p_n] (see [13, Section 3A] for details).
Note that if R 0 , . . . , R n denote the distinguished generators of U with respect to a base flag Φ, then R n , . . . , R 0 act as distinguished generators of U * with respect to some flag. A consequence of this is that G(U) = G(U * ). In Table 2 we give coordinates and explicit expressions for R 0 , . . . , R n for one of each dual pair of regular tessellations.
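Table 2 itself is not reproduced in this extract. For orientation only, one standard choice of distinguished generators for {4, 3^{n−2}, 4} with vertex set ℤ^n and base vertex o (an assumption of ours, consistent with the description of G_o in Section 3, and not a quotation of the lost table) is:

\[
\begin{aligned}
R_0 &\colon (x_1,\dots,x_n) \mapsto (1-x_1,\,x_2,\dots,x_n),\\
R_i &\colon (x_1,\dots,x_n) \mapsto (x_1,\dots,x_{i+1},x_i,\dots,x_n) \qquad (1 \le i \le n-1),\\
R_n &\colon (x_1,\dots,x_n) \mapsto (x_1,\dots,x_{n-1},-x_n).
\end{aligned}
\]

Here R_1, . . . , R_{n−1} generate the copy of S_n permuting coordinates, and R_n changes the sign of the last coordinate, as used below.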
2.2 Equivelar toroids
Toroids generalize maps on the 2-dimensional torus to higher dimensions. In this section we will discuss the basic results about toroids and their symmetries, with special interest in those induced by regular tessellations of E^n.
Let 0 ≤ d ≤ n. A rank-d lattice group in E^n is a group generated by d translations with linearly independent translation vectors. If Λ̄ = ⟨t_1, . . . , t_d⟩ is a lattice group and v_i is the translation vector of t_i, then the lattice Λ induced by Λ̄ is the orbit of the origin o under Λ̄, that is,

Λ = oΛ̄ = {a_1v_1 + · · · + a_dv_d : a_1, . . . , a_d ∈ ℤ}.

In this case we say that {v_1, . . . , v_d} is a basis for Λ.
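To illustrate the definition (an example of ours, not from the paper): in E^2, the rank-2 lattice group Λ̄ = ⟨t_{(1,1)}, t_{(1,−1)}⟩ induces the lattice

\[
\Lambda = \{a(1,1) + b(1,-1) : a,b \in \mathbb{Z}\}
        = \{(x_1,x_2) \in \mathbb{Z}^2 : x_1 \equiv x_2 \ (\mathrm{mod}\ 2)\},
\]

with basis {(1, 1), (1, −1)}.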
Let T(U) denote the group of translations of a Euclidean tessellation U. A toroid of rank (n + 1), or (n + 1)-toroid, is the quotient of a tessellation U of E^n by a rank-n lattice group Λ̄ ≤ T(U). We say that Λ̄ (or the corresponding lattice Λ) induces the toroid, and we denote the latter by U/Λ; in what follows we will often not distinguish notationally between a lattice group and its lattice. If U is regular of type {p_1, . . . , p_n}, we say that the toroid is equivelar of (Schläfli) type {p_1, . . . , p_n}; in this situation we also denote the toroid induced by Λ by {p_1, . . . , p_n}_Λ (cf. [5, Chapter 7] and [13, Chapter 6]). An (n + 1)-toroid may be regarded as a tessellation of the n-dimensional torus E^n/Λ.
For all regular tessellations U except {6, 3} and {3, 4, 3, 3}, the group of translations T(U) acts transitively on the vertex set of U. Therefore the vertex set of U may be identified with the lattice associated to T(U), and the group of automorphisms G(U) is of the form T(U) ⋊ G_o(U), where G_o(U) denotes the stabilizer of the origin o (see for example [13, Chapter 6]). From now on, we restrict our study to regular tessellations whose vertex set is a lattice. The results regarding toroids of type {6, 3} and {3, 4, 3, 3} may be recovered by duality.
Let Isom(E^n) denote the group of isometries of E^n. If t_v is the translation by a vector v and S ∈ Isom(E^n) fixes the origin o, then S^{−1}t_vS = t_{vS}, the translation by vS. In other words, if Λ is the lattice associated to the lattice group Λ̄, then ΛS is the lattice associated to S^{−1}Λ̄S. Therefore, if there exists an isometry mapping a lattice Λ to another lattice Λ′, then there exists an isometry S that fixes o and maps Λ to Λ′. In this case the corresponding tori E^n/Λ and E^n/Λ′ are isometric. Geometrically this means that S maps fundamental regions of Λ to fundamental regions of Λ′. This implies that two toroids U/Λ and U/Λ′ are isometric if and only if there exists an isometry S ∈ G_o(U) mapping Λ to Λ′ or, equivalently, S^{−1}ΛS = Λ′.
With the notation given above, when Λ = Λ′, an isometry S of E^n induces an isometry S̄ of E^n/Λ that makes the diagram (1) commutative if and only if S normalizes Λ. Furthermore, two isometries of E^n induce the same isometry of E^n/Λ if and only if they differ by an element of Λ. In particular, all the elements of Λ induce the trivial isometry of E^n/Λ. This implies that the group Norm_{Isom(E^n)}(Λ)/Λ acts as a group of isometries of E^n/Λ. It can be proven that every isometry of E^n/Λ arises this way, that is, Isom(E^n/Λ) ≅ Norm_{Isom(E^n)}(Λ)/Λ (see [15, p. 336] and [13, Section 6A]).
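The commutative diagram (1) referred to above is not shown in this copy; reconstructed from the surrounding text (with π denoting the natural quotient map), it presumably reads:

\[
\begin{array}{ccc}
\mathbb{E}^n & \xrightarrow{\;S\;} & \mathbb{E}^n\\
\pi \downarrow\;\; & & \;\;\downarrow \pi\\
\mathbb{E}^n/\Lambda & \xrightarrow{\;\bar S\;} & \mathbb{E}^n/\Lambda
\end{array}
\qquad (1)
\]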
With the previous discussion in mind, it makes sense to define the group of automorphisms of a toroid U/Λ, denoted by Aut(U/Λ), as the quotient Norm_{G(U)}(Λ)/Λ. Intuitively speaking, Norm_{G(U)}(Λ) consists of the symmetries of U that are compatible with the quotient by Λ. In the same sense, two toroids U/Λ and U/Λ′ are isomorphic if Λ and Λ′ are conjugate in Isom(E^n).
If an isometry S normalizes Λ, then we say that S induces or projects to an automorphism of U/Λ (namely, the automorphism ΛS ∈ Norm_{G(U)}(Λ)/Λ). If S ∈ G_o(U), then S normalizes Λ if and only if S preserves Λ. Since every element of G(U) may be written as a product tS with t ∈ T(U) and S ∈ G_o(U), and every translation normalizes Λ, an isometry tS induces an automorphism of U/Λ if and only if S preserves Λ. Therefore we may restrict our analysis to elements of G_o(U).
The i-faces of a toroid U/Λ are the orbits of the i-faces of U under Λ. Whenever all the vertices on each cell of U are distinct under the action of Λ, the set of faces of U/Λ has the structure of an abstract polytope (in the sense of [13]). In this case, the symmetry properties of U/Λ as an abstract polytope coincide with those as a toroid. However, even when U/Λ is not an abstract polytope, we may define the set of flags of U/Λ as the set of orbits of flags of U under Λ. The group Aut(U/Λ) has a well-defined action on the set of flags of U/Λ given by (ΦΛ)(ΛS) = ΦSΛ, with Φ a flag of U and ΛS ∈ Aut(U/Λ). In the future we shall abuse notation slightly and write simply ΦΛS = ΦSΛ. We say that a toroid U/Λ is a k-orbit toroid if Aut(U/Λ) has k orbits on flags. Following [13], regular toroids are precisely 1-orbit toroids.
Observe that every translation of U induces an automorphism of U/Λ, and the translations of Λ induce the trivial automorphism. Also, the central inversion χ : x ↦ −x of E^n is always an automorphism of U (see [10, Table 1]) and preserves every lattice Λ, so it projects to an automorphism of every toroid. This implies that ⟨T(U), χ⟩ ≤ Norm_{G(U)}(Λ). Furthermore, since χ normalizes T(U) and T(U) ∩ ⟨χ⟩ = {id}, it follows that ⟨T(U), χ⟩ = T(U) ⋊ ⟨χ⟩. Therefore, groups of automorphisms of toroids are induced by groups K such that T(U) ⋊ ⟨χ⟩ ≤ K ≤ G(U). By the Correspondence Theorem for groups, those groups K with T(U) ⋊ ⟨χ⟩ ≤ K ≤ G(U) are in one-to-one correspondence with groups K̄ such that ⟨χ⟩ ≤ K̄ ≤ G_o(U). In this correspondence, the group K̄ corresponds to the group T(U) ⋊ K̄.
Recall that if Λ′ = S^{−1}ΛS for some S ∈ G(U), then the toroids U/Λ and U/Λ′ are isomorphic. Furthermore, the corresponding automorphism groups are K/Λ and (S^{−1}KS)/Λ′, for some group T(U) ⋊ ⟨χ⟩ ≤ K ≤ G(U). Hence, in order to classify toroids up to isomorphism it is sufficient to determine their automorphism groups up to conjugacy.
According to the discussion above, we only need to find the conjugacy classes of groups K̄ such that ⟨χ⟩ ≤ K̄ ≤ G_o(U). Furthermore, according to [14], the number of flag-orbits of a toroid U/Λ under Aut(U/Λ) is the same as the index of K in G(U), which in turn equals the index of K̄ in G_o(U).
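In display form (a restatement of the correspondence just described, under the assumption that Norm_{G(U)}(Λ) = T(U) ⋊ K̄):

\[
\operatorname{Aut}(\mathcal U/\Lambda) = \bigl(T(\mathcal U) \rtimes \bar K\bigr)/\Lambda,
\qquad
\#\{\text{flag-orbits of } \mathcal U/\Lambda\} = [G(\mathcal U) : K] = [G_o(\mathcal U) : \bar K].
\]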
We summarize the discussion above in the following lemma. This is essentially Lemma 6 of [10].
Lemma 2. With the notation above, the following statements hold.

1. …

2. Since all lattices are centrally symmetric, χ : x ↦ −x always projects to an automorphism of U/Λ.

An important class of lattices consists of those that are preserved by the reflection on a hyperplane. Assume that Π is a hyperplane that contains o; a lattice Λ is a vertical translation lattice with respect to Π if Λ = ⋃_{k∈ℤ}(Λ_0 + kv) for some vector v ∈ Π^⊥, where Λ_0 = Λ ∩ Π. Vertical translation lattices with respect to a hyperplane Π are trivially preserved by the reflection on Π. However, even those lattices that are not vertical translation lattices carry an interesting structure, which is described in the following lemma.
Lemma 3. Let Λ be a rank-n lattice in E^n. Let R be the reflection on a hyperplane Π that contains o, and assume that Λ is preserved by R, that is, ΛR = Λ. Let Λ_0 = Λ ∩ Π. Then the following statements hold:

1. If w ∈ Λ ∖ Π is a point with minimum (positive) distance d to Π, then Λ = ⋃_{k∈ℤ}(Λ_0 + kw).

2. The point w may be chosen in Π^⊥ if and only if Λ is a vertical translation lattice with respect to Π.

3. If Λ is not a vertical translation lattice and {v_1, . . . , v_{n−1}} is a basis for Λ_0, then w may be chosen of the form w = (α_1v_1 + · · · + α_{n−1}v_{n−1} + u)/2, where u is a shortest non-zero vector of Λ ∩ Π^⊥ and α_1, . . . , α_{n−1} ∈ {0, 1} are not all zero. Furthermore, the choice of such α_1, . . . , α_{n−1} is unique.
Proof. Let w be a vector in Λ ∖ Π with minimum (positive) distance d to Π. It is clear that ⋃_{k∈ℤ}(Λ_0 + kw) ⊆ Λ. Assume the other inclusion does not hold. In this situation there must be a point v of Λ strictly between the hyperplanes Π + kw and Π + (k + 1)w for some k ∈ ℤ. This implies that the point v − kw is a point of Λ ∖ Π whose distance to Π is strictly less than d, which contradicts the choice of w. This proves part 1.
If w may be chosen in Π^⊥, then Λ is a vertical translation lattice. Conversely, if Λ is a vertical translation lattice, then the minimality of d implies that w may be chosen in Π^⊥. This proves part 2.
Assume that Λ is not a vertical translation lattice. Let {v_1, . . . , v_{n−1}} be a basis for Λ_0 and let w ∈ Λ ∖ Π be such that d = d(w, Π) is minimum among the points of Λ ∖ Π. Let u = w − wR and note that |u| = 2d; hence u is a closest point of Λ ∩ Π^⊥ to o other than o itself. Observe that 2w − u ∈ Λ_0, thus there exist m_1, . . . , m_{n−1} ∈ ℤ such that 2w − u = m_1v_1 + · · · + m_{n−1}v_{n−1}. For 1 ≤ i ≤ n − 1 let k_i ∈ ℤ be even with α_i := m_i − k_i ∈ {0, 1}, and set w_1 = w − ½(k_1v_1 + · · · + k_{n−1}v_{n−1}). Observe that Λ = ⋃_{k∈ℤ}(Λ_0 + kw_1), since w and w_1 differ by an element of Λ_0, and hence d(Π, w_1) = d; moreover 2w_1 − u = α_1v_1 + · · · + α_{n−1}v_{n−1}. Now, observe that if α_i = 0 for all 1 ≤ i ≤ n − 1, then m_1, . . . , m_{n−1} are all even, and this would imply that ½u ∈ Λ, contradicting that Λ is not a vertical translation lattice. Finally, suppose there exist α′_1, . . . , α′_{n−1} ∈ {0, 1}, not all zero, such that w′ = ½(α′_1v_1 + · · · + α′_{n−1}v_{n−1} + u) also satisfies part 1. Since w_1 and w′ lie on the same side of Π at distance d, we have w_1 − w′ ∈ Λ_0, that is, ½∑(α_i − α′_i)v_i ∈ Λ_0; since {v_1, . . . , v_{n−1}} is a basis for Λ_0 and α_i − α′_i ∈ {−1, 0, 1}, it follows that α_i = α′_i for every i. This proves the uniqueness claim.
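A small worked example of Lemma 3 (ours, for illustration): take n = 2, let Π be the x-axis, R : (x, y) ↦ (x, −y), and let Λ = {(x, y) ∈ ℤ^2 : x ≡ y (mod 2)}, which satisfies ΛR = Λ. Then

\[
\Lambda_0 = \Lambda \cap \Pi = \mathbb{Z}(2,0), \quad v_1 = (2,0), \quad
w = (1,1), \quad d = 1, \quad
u = w - wR = (0,2), \quad
2w - u = (2,0) = 1 \cdot v_1,
\]

so m_1 = 1, α_1 = 1 and indeed w = ½(α_1v_1 + u); moreover ½u = (0, 1) ∉ Λ, so Λ is not a vertical translation lattice, as part 3 requires.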
The automorphism group

Since the vertex set of U = {4, 3^{n−2}, 4} is a lattice, according to the discussion in Section 2.2 the symmetry group of U is the semidirect product T(U) ⋊ G_o, where T(U) is the translation group of U and G_o is the stabilizer of the base vertex o, that is, the group of the vertex figure of U at o.
The group G_o is the string Coxeter group [3^{n−2}, 4], generated by reflections R_1, . . . , R_n. The group S_n = ⟨R_1, . . . , R_{n−1}⟩ is isomorphic to the symmetric group on n symbols, acting on the points of E^n by permutation of coordinates. Observe that R_n together with its conjugates under S_n generates a group isomorphic to C_2^n; we denote this group by C_2^n, and we identify each of its elements with a vector with entries ±1, acting on E^n by the corresponding sign changes of coordinates; for instance, the vector (1^{n−1}, −1) corresponds to R_n. It is not hard to see that G_o = C_2^n ⋊ S_n. Finally, observe that the action of S_n on C_2^n by conjugation is precisely the action of S_n permuting the coordinates of the vectors in C_2^n. We will use this structure to prove some results regarding the conjugacy classes of subgroups of G_o. During the discussion we will identify the elements of S_n with permutations and the elements of C_2^n with vectors with entries ±1, as described above.
Lemma 4. Let n ≥ 3 and let H be a subgroup of C_2^n. Let A_n denote the rotational subgroup of S_n, that is, the group consisting of the orientation-preserving isometries in S_n. If H is normalized by A_n, then one of the following holds:

H = {1}, H = ⟨χ⟩, H = (C_2^n)^+, or H = C_2^n,

where χ denotes the central inversion x ↦ −x and (C_2^n)^+ denotes the subgroup of C_2^n consisting of the vectors with an even number of entries equal to −1.
Proof. We use the structure discussed above for the group C_2^n ⋊ S_n. In this context, the group A_n corresponds to the alternating group, and hence we only have to classify the subgroups H ≤ C_2^n preserved by the action of A_n on the coordinates of their elements. Suppose that H ≠ {1} and define m to be the minimum positive number of −1 entries in a non-trivial vector of H. Let A ∈ H be the transformation given by the vector (a_1, . . . , a_n) and assume that (a_1, . . . , a_n) has precisely m entries equal to −1.
If m = 1, then A = E_i, the reflection given by the vector that has 1 in all its coordinates except the i-th. Since A_n acts transitively on the coordinates, E_j ∈ H for every j ∈ {1, . . . , n}. Since H is preserved by A_n and the set {E_j : 1 ≤ j ≤ n} generates C_2^n, H must be C_2^n. If m = 2, then A = E_iE_j, the transformation given by the vector whose i-th and j-th coordinates are −1. Proceeding in a similar way as before we may show that {E_iE_j : 1 ≤ i < j ≤ n} ⊆ H, which implies that (C_2^n)^+ ≤ H. However, by the minimality of m we must have H = (C_2^n)^+. If 2 < m < n, without loss of generality we may assume that a_i = −1 if and only if 1 ≤ i ≤ m, that is, A is the transformation induced by the vector (−1^m, 1^{n−m}). Since H is preserved by A_n, the transformation A′ given by (1, −1^m, 1^{n−m−1}) belongs to H. Therefore AA′ ∈ H is given by the vector (−1, 1^{m−1}, −1, 1^{n−m−1}), which contradicts the minimality of m.
Finally, if m = n, then H = ⟨χ⟩.
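For instance (an example of ours), for n = 3 the four subgroups of Lemma 4 are

\[
\{1\}, \quad
\langle\chi\rangle = \{(1,1,1), (-1,-1,-1)\}, \quad
(C_2^3)^+ = \{(1,1,1), (-1,-1,1), (-1,1,-1), (1,-1,-1)\}, \quad
C_2^3,
\]

and, e.g., the A_3-orbit of (−1, −1, 1) already generates (C_2^3)^+.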
Recall that if a group G is the semidirect product N ⋊ H, there exists a natural mapping η : G → H whose kernel is N. Furthermore, if K ≤ G, then the restriction of η to K has kernel K ∩ N; in particular, if G is finite we have

[G : K] = [N : K ∩ N][H : η(K)].   (2)

3.1 Cubic toroids with 2 orbits
In this section we classify 2-orbit toroids of type {4, 3^{n−2}, 4}. According to Lemma 2, to do so we need to find lattices preserved by index-two subgroups of [4, 3^{n−2}, 4]. The following results enumerate such groups.
Lemma 5. Let n ≥ 3. If K ≤ C_2^n ⋊ S_n is an index-two subgroup, then (C_2^n)^+ ⋊ A_n ≤ K.
Proof. Let η : C_2^n ⋊ S_n → S_n be the natural mapping. By Equation (2), [S_n : η(K)] ≤ 2, therefore A_n ≤ η(K). This implies that |K ∩ C_2^n| ≥ 2^{n−1}. Since K and C_2^n are normal subgroups, K ∩ C_2^n must be preserved by conjugation under A_n, and by Lemma 4, (C_2^n)^+ ≤ K.
Since A_n ≤ η(K), for every 3-cycle S in A_n there exists A ∈ C_2^n such that AS ∈ K. This implies that (AS)^2 = A(SAS^{−1})S^2 ∈ K. However, A(SAS^{−1}) ∈ (C_2^n)^+ ≤ K, which implies that S^2 = S^{−1} ∈ K. Since this holds for every 3-cycle S, we get A_n ≤ K, which implies that (C_2^n)^+ ⋊ A_n ≤ K, as desired.
Corollary 6. If K is an index-two subgroup of C_2^n ⋊ S_n, then K is one of the following: (C_2^n ⋊ S_n)^+, C_2^n ⋊ A_n, or (C_2^n)^+ ⋊ S_n.
Proof. It is clear that those three groups are distinct. By Lemma 5, K must contain (C_2^n)^+ ⋊ A_n, which is a normal subgroup of C_2^n ⋊ S_n of index 4. By the Correspondence Theorem for groups, there are at most three index-two subgroups containing (C_2^n)^+ ⋊ A_n.
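Corollary 6 can be checked computationally for small n. The following GAP sketch (ours, not from the paper) verifies for n = 4 that C_2^n ⋊ S_n, realized as the wreath product C_2 ≀ S_n, has exactly three subgroups of index two:

    # Illustrative check of Corollary 6 for n = 4 (not from the paper).
    G := WreathProduct(Group((1,2)), SymmetricGroup(4));;   # C_2^4 : S_4
    Print(Size(G), "\n");                                   # 384 = 2^4 * 4!
    # Index-two subgroups contain the derived subgroup; here G/[G,G] = C2 x C2
    Print(Index(G, DerivedSubgroup(G)), "\n");              # 4
    twos := Filtered(ConjugacyClassesSubgroups(G),
                     c -> Index(G, Representative(c)) = 2);;
    Print(Length(twos), "\n");                              # 3, as in Corollary 6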
Corollary 6 determines all the subgroups of C_2^n ⋊ S_n of index 2. According to Lemma 2, by classifying the lattices preserved by such groups we obtain a classification of 2-orbit toroids.
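The lattices Λ_{(1^k,0^{n−k})} used below are nowhere defined in this extract; in the notation of [12] and [13] (an assumption on our part) they are the sublattices of ℤ^n generated by all coordinate permutations and sign changes of the indicated vector, so that

\[
\Lambda_{(1,0^{n-1})} = \mathbb{Z}^n, \qquad
\Lambda_{(1,1,0^{n-2})} = \Bigl\{x \in \mathbb{Z}^n : \sum_i x_i \ \text{is even}\Bigr\}, \qquad
\Lambda_{(1^n)} = \{x \in \mathbb{Z}^n : x_1 \equiv \cdots \equiv x_n \ (\mathrm{mod}\ 2)\}.
\]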
By Lemma 5, every lattice preserved by an index-2 subgroup of C_2^n ⋊ S_n is also invariant under (C_2^n)^+ ⋊ A_n. Therefore it is useful to know all those lattices. Consider the vectors v_1 := (1, 1, . . . , 1), v_k := (−1, 1^{k−2}, −1, 1^{n−k}) for 2 ≤ k ≤ n, and w_k := (1^{k−1}, −1, 1^{n−k}) for 1 ≤ k ≤ n. Let Λ_0 and Λ_1 be the lattices whose bases are {v_i : 1 ≤ i ≤ n} and {w_i : 1 ≤ i ≤ n}, respectively. First observe that for every s ∈ ℕ we have 2sΛ_{(1,1,0^{n−2})} ⊆ sΛ_0 ∩ sΛ_1; in particular, 2e_j + 2e_k ∈ Λ_0 ∩ Λ_1 for j, k ∈ {1, . . . , n}, j ≠ k. This implies that Λ_0 and Λ_1 are preserved by E_iE_j for every i, j ∈ {1, . . . , n}, i < j. Similar arguments prove that Λ_0 and Λ_1 are also preserved by A_n. Now it is easy to classify the lattices preserved by (C_2^n)^+ ⋊ A_n.

Lemma 7. Let Λ be a rank-n integer lattice preserved by (C_2^n)^+ ⋊ A_n. Then Λ is an integer multiple of one of the lattices Λ_{(1,0^{n−1})}, Λ_{(1,1,0^{n−2})}, Λ_{(1^n)}, Λ_0 or Λ_1.
Proof. If every point of Λ is of the form (±s, . . . , ±s) + x with x ∈ 4sΛ_{(1,0^{n−1})}, there are several cases to consider. Assume there exist points p, q ∈ Λ with p = (p_1, . . . , p_n) + x_p and q = (q_1, . . . , q_n) + x_q, where |p_i| = |q_i| = s for i ∈ {1, . . . , n} and x_p, x_q ∈ 4sΛ_{(1,0^{n−1})}, such that the number of entries of (p_1, . . . , p_n) equal to −s is even and the number of entries of (q_1, . . . , q_n) equal to −s is odd. By performing a permutation of coordinates and an even number of sign changes (if needed), we may assume that p = (−s^k, s^{n−k}) + x_p and q = (−s^m, s^{n−m}) + x_q with k even, m odd and k < m. In this situation p − q = (0^k, (2s)^{m−k}, 0^{n−m}) + (x_p − x_q), and since m − k is odd and 2se_i + 2se_j ∈ Λ for every i, j ∈ {1, . . . , n}, we get 2se_i ∈ Λ for every i ∈ {1, . . . , n}. This implies that Λ is preserved by C_2^n ⋊ A_n. As before, this implies that Λ is sΛ_{(1^k,0^{n−k})} for k ∈ {1, 2, n}. Now, with the notation used above, the other possibility is that for every point p = (p_1, . . . , p_n) + x_p, the parity of the number of entries of (p_1, . . . , p_n) equal to −s does not depend on p. Recall that 2sΛ_{(1,1,0^{n−2})} ⊆ sΛ_0 ∩ sΛ_1. Hence, if the parity is even then all such points (p_1, . . . , p_n) belong to sΛ_0, and if the parity is odd then all belong to sΛ_1. Furthermore, since 4sΛ_{(1,0^{n−1})} ⊆ sΛ_0 ∩ sΛ_1, we have Λ ⊆ sΛ_i for exactly one i ∈ {0, 1}. The other inclusion follows from the fact that 2s(e_i + e_j) ∈ Λ for every i, j ∈ {1, . . . , n}, which implies that Λ contains either {sv_i : 1 ≤ i ≤ n} or {sw_i : 1 ≤ i ≤ n}.
We say that a 2-orbit toroid belongs to class 2_I for I ⊆ {0, . . . , n} if, given a flag Φ and i ∈ I, the i-adjacent flag Φ^i belongs to the same orbit as Φ. In particular, chiral toroids are those in class 2_∅.
Theorem 8. Let n ≥ 3. If n is odd, there are no 2-orbit toroids of type {4, 3^{n−2}, 4}. If n is even, the 2-orbit toroids of type {4, 3^{n−2}, 4} are precisely those induced by the integer multiples of Λ_1.

Proof. According to Lemma 2, every 2-orbit cubic toroid must be given by a lattice Λ preserved by an index-two subgroup K of G_o containing χ. By Lemma 5, the only possibilities for K are (C_2^n ⋊ S_n)^+, C_2^n ⋊ A_n and (C_2^n)^+ ⋊ S_n. However, K cannot be (C_2^n ⋊ S_n)^+, since this would induce a chiral (n + 1)-toroid, and these are known not to exist if n ≥ 3 (see [12, Theorem 9.1] and [13, Section 6D]). As mentioned before, if Λ is a lattice preserved by C_2^n ⋊ A_n, then Λ is an integer multiple of a lattice Λ_{(1^k,0^{n−k})} for k ∈ {1, 2, n}, but such lattices induce regular toroids. Therefore, the only possibility for K is (C_2^n)^+ ⋊ S_n. Note that if n is odd, then χ ∉ (C_2^n)^+ ⋊ S_n. This implies that there are no 2-orbit (n + 1)-toroids whenever n is odd. If n is even, then Λ must be preserved by (C_2^n)^+ ⋊ S_n. In particular, Λ must be preserved by (C_2^n)^+ ⋊ A_n, and those lattices are classified in Lemma 7. Since the lattices sΛ_{(1^k,0^{n−k})} induce regular toroids for all s ∈ ℤ, Λ must be an integer multiple of Λ_0 or an integer multiple of Λ_1. Finally, observe that both Λ_0 and Λ_1 are preserved by S_n, and since they are isometric lattices, they produce isomorphic toroids. Consequently, every 2-orbit toroid is induced by an integer multiple of Λ_1, as stated.
3.2 Cubic (n + 1)-toroids with k flag-orbits for 2 < k < n

Now we proceed to classify toroids of type {4, 3^{n−2}, 4} with k orbits for 2 < k < n. The following lemma is the key to this classification.

Lemma 9. Let n ≥ 5. If K ≤ C_2^n ⋊ S_n has index k for some 2 < k < n, then K = (C_2^n)^+ ⋊ A_n.
Proof. Assume K is as above and let η : C_2^n ⋊ S_n → S_n be the projection to S_n. By Equation (2), [S_n : η(K)] ≤ k, and since n ≥ 5, η(K) must contain A_n. Let K_0 = K ∩ C_2^n. Observe that K_0 is normalized by A_n, and those groups are classified in Lemma 4. If (C_2^n)^+ ≤ K_0, then we may proceed as in the proof of Lemma 5 and conclude that (C_2^n)^+ ⋊ A_n ⊆ K and k ≤ 4, which implies that K = (C_2^n)^+ ⋊ A_n. If K_0 ∈ {{1}, ⟨χ⟩}, then |K| = |K_0||η(K)| ≤ 2n!, hence

n > [C_2^n ⋊ S_n : K] = 2^n n!/|K| ≥ 2^n n!/(2n!) = 2^{n−1},

which is a contradiction since n ≥ 5.
As an immediate corollary we have the following result.

Corollary 10. Let n ≥ 5. There are no k-orbit toroids of type {4, 3^{n−2}, 4} with 2 < k < n.
So far we have classified cubic 2-orbit (n + 1)-toroids for arbitrary n ≥ 3. We also proved the non-existence of cubic k-orbit (n + 1)-toroids for 2 < k < n when n ≥ 5. These results, together with those in [13, Section 6D] and [10], almost complete the classification of equivelar few-orbit toroids of type {4, 3^{n−2}, 4}. In order to complete this classification we need to classify four-dimensional toroids with three flag-orbits and n-dimensional toroids with n flag-orbits.
The following results classify cubic (4 + 1)-toroids with three flag-orbits. To give this classification, first consider the group D_4 ≤ S_4 generated by the reflections …. This group acts on the coordinates of the points of E^4 as the dihedral group acts on the vertices of a square labeled by 1, 2, 3, 4.

Lemma 11. If K is a subgroup of C_2^4 ⋊ S_4 of index 3, then up to conjugacy, K = C_2^4 ⋊ D_4.

Proof. Let K be a subgroup of C_2^4 ⋊ S_4 of index 3. Since K is a 2-Sylow subgroup of C_2^4 ⋊ S_4 and C_2^4 is a normal 2-subgroup, C_2^4 must be contained in K, and hence K = C_2^4 ⋊ η(K), where η is the natural mapping to S_4. Furthermore, η(K) must have index 3 in S_4, which implies that, up to conjugacy, η(K) is D_4.

Lemma 12. Assume Λ is an integer lattice preserved by the group C_2^4 ⋊ D_4. Then Λ is an integer multiple of one of the following: …

Proof. It is clear that all those lattices are preserved by C_2^4 ⋊ D_4. Assume that Λ is a lattice preserved by C_2^4 ⋊ D_4, let s be the minimum positive value among all the coordinates of the points in Λ, and take (s, s_2, s_3, s_4) ∈ Λ. Since Λ is preserved by C_2^4 and D_4 acts transitively on the coordinates of the elements in Λ, proceeding as before we may conclude that (0^{j−1}, 2s, 0^{4−j}) ∈ Λ for all 1 ≤ j ≤ 4. Therefore, we may assume that s_i ∈ {0, s} for i ∈ {2, 3, 4}.
3.3 Cubic (n + 1)-toroids with n flag-orbits
Now we proceed to classify cubic (n + 1)-toroids with n flag-orbits for n ≥ 4. The strategy is essentially the same as before: first we determine the index-n subgroups of C_2^n ⋊ S_n, and then we determine the lattices preserved by each of them. In order to classify subgroups of C_2^n ⋊ S_n of index n we use the following result regarding permutation groups, which is a consequence of [7, Theorem 5.2B].
Theorem 14. Let n ≥ 5 and let S_n be the group of permutations of {1, . . . , n}. If G is a subgroup of S_n of index n, then up to conjugacy, one of the following holds:

• The group G is the stabilizer of {n}, and hence isomorphic to S_{n−1}.
• n = 6 and G acts on {1, 2, . . . , 6} as the group PGL_2(5) acts on the points of the projective line over GF(5), the field with 5 elements.
Now we may classify the subgroups of C_2^n ⋊ S_n of index n.
Lemma 15. Let n ≥ 4. If K is a subgroup of C_2^n ⋊ S_n of index n, then up to conjugacy, one of the following holds:

• K = C_2^n ⋊ S_{n−1};
• n = 6 and K = C_2^6 ⋊ PGL_2(5);
• n = 4 and K = (C_2^4)^+ ⋊ A_4.
Here S_{n−1} denotes the stabilizer of the last coordinate in S_n, and PGL_2(5) denotes the subgroup of S_6 that acts on the coordinates of E^6 as PGL_2(5).
Proof. The result is a consequence of Theorem 14. By Equation (2), [S_n : η(K)] ≤ n, which implies that η(K) is S_n, A_n, or one of the groups described in Theorem 14. Let K_0 = K ∩ C_2^n and recall that |K| = |η(K)||K_0|. If η(K) = S_n, then |K| = (n − 1)!2^n = n!|K_0|, which implies that |K_0| = 2^n/n. However, if n ≥ 4 we have 2 < 2^n/n < 2^{n−1}, and this is impossible since K_0 is a subgroup of C_2^n normalized by S_n, which by Lemma 4 must be one among C_2^n, (C_2^n)^+, ⟨χ⟩ and {1}. Proceeding in a similar way we may see that if η(K) = A_n, the unique possibility for K_0 is (C_2^n)^+, and this is only possible if n = 4. With arguments similar to those used in the proof of Lemma 5 we may conclude that K = (C_2^4)^+ ⋊ A_4. Finally, if η(K) has index n in S_n, then |K| = (n − 1)!2^n = |η(K)||K_0| = (n − 1)!|K_0|, therefore K_0 must be C_2^n and K = C_2^n ⋊ S_{n−1}, or n = 6 and K = C_2^6 ⋊ PGL_2(5).
Now we proceed to classify the lattices preserved by subgroups of C_2^n ⋊ S_n of index n. Lattices preserved by (C_2^4)^+ ⋊ A_4 are described in Lemma 7. It only remains to determine the rank-n lattices preserved by C_2^n ⋊ S_{n−1} and the rank-6 lattices preserved by C_2^6 ⋊ PGL_2(5). Such lattices are described in the following results.
Lemma 16. If Λ is a rank-6 lattice preserved by C_2^6 ⋊ PGL_2(5), then Λ is an integer multiple of one of the lattices Λ_{(1,0^5)}, Λ_{(1,1,0^4)} or Λ_{(1^6)}.

Proof. Let Λ be a rank-6 lattice preserved by C_2^6 ⋊ PGL_2(5). Let s be the minimum positive value among the absolute values of the entries of the vectors in Λ. As before, since PGL_2(5) acts transitively on the coordinates of E^6, all the entries of any vector of Λ must be multiples of s. Let (s_1, . . . , s_6) ∈ Λ. Without loss of generality, we may assume that s_1 = s. Since Λ is preserved by C_2^6, this implies that (−s, s_2, . . . , s_6) ∈ Λ. Hence 2se_1 ∈ Λ, and therefore 2se_i ∈ Λ for i ∈ {1, . . . , 6}.
We may take (s_1, . . . , s_6) ∈ Λ such that all its entries are either 0 or s. Furthermore, since the action of PGL_2(5) is 3-transitive on the coordinates of E^6, we may assume that (s_1, . . . , s_6) is of the form (s^k, 0^{6−k}). Among all non-zero points in Λ of this form, take one for which k is minimum.
Finally, if k = 6, proceeding in a similar way to that used when k = 2, we may conclude that Λ = sΛ_{(1^6)}.
The previous lemma implies that all (6 + 1)-toroids induced by lattices preserved by C_2^6 ⋊ PGL_2(5) are regular toroids. Since all (4 + 1)-toroids induced by lattices preserved by (C_2^4)^+ ⋊ A_4 are regular or 2-orbit toroids, Lemma 15 and the fact that C_2^n ⋊ S_{n−1} is maximal in C_2^n ⋊ S_n imply that if there exist (n + 1)-toroids with n flag-orbits, they must be induced by lattices preserved by C_2^n ⋊ S_{n−1} that do not induce regular toroids. We now proceed to classify those lattices.
We use Lemma 3 to classify lattices preserved by C_2^n ⋊ S_{n−1}. We shall assume that S_{n−1} is precisely the stabilizer of the last coordinate of E^n. Let Π be the hyperplane x_n = 0 and let R be the reflection through Π, that is, R : (x_1, . . . , x_n) ↦ (x_1, . . . , x_{n−1}, −x_n). If Λ is a lattice preserved by C_2^n ⋊ S_{n−1}, then Λ is a lattice preserved by R. Since S_{n−1} preserves Π, if Λ_0 = Λ ∩ Π, then Λ_0 must be a rank-(n − 1) lattice preserved by the restriction of the action of C_2^n ⋊ S_{n−1} to Π, and therefore Λ_0 must be one among sΛ_{(1,0^{n−2})}, sΛ_{(1,1,0^{n−3})} and sΛ_{(1^{n−1})} for some positive integer s.
If Λ is a vertical translation lattice, then Λ = ⋃_{k∈ℤ}(Λ_0 + k(de_n)) for some d ∈ ℕ. All the lattices of this form are preserved by C_2^n ⋊ S_{n−1}; we only need to determine those values of d for which the resulting lattice induces a regular toroid. If Λ_0 = sΛ_{(1,0^{n−2})}, then d = s implies that Λ = sΛ_{(1,0^{n−1})}. If Λ_0 is sΛ_{(1,1,0^{n−3})} or sΛ_{(1^{n−1})}, then any value of d gives a lattice that induces an n-orbit toroid. Therefore, for the following discussion we shall assume that Λ is not a vertical translation lattice.
4 Toroids of type {3, 3, 4, 3}

In this section we classify the equivelar toroids that are quotients of the regular tessellation of E^4 of type {3, 3, 4, 3}. We will follow the same ideas we used in the classification of the cubic toroids. Recall that the vertex set of {3, 3, 4, 3} is the lattice Λ_{(1,1,1,1)}, and therefore its automorphism group has the form T(U) ⋊ G_o(U) = ⟨R_0, . . . , R_4⟩. Its vertex stabilizer ⟨R_1, . . . , R_4⟩, with R_1, R_2, R_3, R_4 defined as in Table 2, is isomorphic to [3, 4, 3], the automorphism group of the 24-cell {3, 4, 3}. To proceed as in the case of the cubic toroids, we need some facts about the structure of [3, 4, 3], which we present in the following paragraphs.
Finally, it is not hard to see that ⟨R_1, R_2⟩ acts as the full symmetric group on the elements of the partition; this observation will prove very useful when computing the symmetry types of the 3-orbit non-cubic toroids.
A straightforward application of the ConjugacyClassesSubgroups procedure in the GAP system [8] can be used to find representatives of the conjugacy classes of subgroups of index 2, 3 and 4 that contain χ. Those representatives are listed in Table 3. Now we can study the lattices contained in Λ_{(1,1,1,1)} that are invariant under the representatives of the aforementioned conjugacy classes.
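For readers who wish to reproduce this computation, a sketch of such a GAP session (our reconstruction, under stated assumptions; the paper's actual code is not given) could build [3, 4, 3] from its Coxeter presentation and filter the subgroup classes by index and by containment of the central inversion χ:

    # Sketch only: subgroups of [3,4,3] of index 2, 3 or 4 containing
    # chi, the unique central involution. Not the authors' code.
    F := FreeGroup("r1", "r2", "r3", "r4");;
    rels := [F.1^2, F.2^2, F.3^2, F.4^2,
             (F.1*F.2)^3, (F.2*F.3)^4, (F.3*F.4)^3,
             (F.1*F.3)^2, (F.1*F.4)^2, (F.2*F.4)^2];;
    W := F / rels;;                              # the Coxeter group [3,4,3]
    P := Image(IsomorphismPermGroup(W));;        # finite, of order 1152
    chi := First(Elements(Centre(P)), x -> Order(x) = 2);;
    reps := Filtered(ConjugacyClassesSubgroups(P),
              c -> Index(P, Representative(c)) in [2, 3, 4]
                    and chi in Representative(c));;
    Print(Length(reps), "\n");

Since χ is central, membership of χ in a representative does not depend on the chosen representative of the class.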
Finally, by the preceding discussion, if a toroid {3, 3, 4, 3}_Λ is such that Λ is preserved by ((C_2^4)^+ ⋊ S_4) ⋊ ⟨R_1R_2⟩, then Λ must be an integer multiple of one of the isometric lattices Λ_0 and Λ_1, which are known to be invariant under the action of (C_2^4)^+ ⋊ S_4. An easy computation shows that these lattices are also invariant under R_1R_2, from which it follows that for every positive integer s, sΛ_1 and sΛ_0 are invariant under the action of ((C_2^4)^+ ⋊ S_4) ⋊ ⟨R_1R_2⟩. Consequently the toroid {3, 3, 4, 3}_Λ, with Λ equal to sΛ_1 or sΛ_0 for some integer s, has two flag-orbits.
As every sublattice of Λ_{(1,1,1,1)} that is invariant under ((C_2^4)^+ ⋊ A_4) ⋊ ⟨R_1R_2⟩ is invariant under one of the subgroups that yield either a regular or a 2-orbit toroid, we can state the following theorem.
Thus, we have proved the following result.

Theorem 20. Up to isomorphism, among the equivelar toroids of type {3, 3, 4, 3} there are two families of regular toroids, one family of 2-orbit toroids, and two families of 3-orbit toroids.
5 Symmetry type of few-orbit toroids
In this section we determine the symmetry type of each family of few-orbit toroids classified in Sections 3 and 4. Following [6], the symmetry type graph T(U/Λ) of a toroid U/Λ is the labeled pre-graph (that is, semi-edges and multiple edges are allowed) whose vertex set is the set of orbits of flags of U/Λ, and such that there is an edge (or semi-edge) labeled i between orbits O_1 and O_2 if and only if there exists a flag Φ such that Φ ∈ O_1 and Φ^i ∈ O_2. Observe that since (Φ^i)S = (ΦS)^i for every flag Φ and every automorphism S, the symmetry type graph does not depend on the representatives of the flag-orbits. We shall agree on using a semi-edge labeled i instead of a loop whenever Φ and Φ^i belong to the same orbit. Note that this definition is slightly different from that of [6]; however, it is easy to see that both are equivalent. According to this definition, the symmetry type graph of a regular n-toroid consists of only one vertex and n semi-edges labeled with {0, . . . , n − 1}. If an n-toroid is in class 2_I for I ⊆ {0, . . . , n − 1}, then its symmetry type graph is composed of only two vertices, with edges whose labels lie in {0, . . . , n − 1} \ I and semi-edges on each vertex labeled with the elements of I. In this sense, the symmetry type graph generalizes the notion of toroids in class 2_I to toroids with more than two flag-orbits. Observe that the symmetry type graph of the dual of an n-toroid is just the pre-graph with the same vertex set as the symmetry type graph of the toroid and with an edge labeled n − i − 1 for every i-labeled edge.
Symmetry type graphs describe not only the number of flag-orbits of a toroid but also the local arrangement of the orbits. In order to determine the symmetry type graph of few-orbit toroids we use the following result, which is essentially [6, Proposition 1] in the language of toroids.
Lemma 21. Let U/Λ be a toroid with symmetry type graph T(U/Λ). Let T_i(U/Λ) be the subgraph of T(U/Λ) obtained by erasing the edges labeled i from T(U/Λ). Then U/Λ is i-face-transitive if and only if T_i(U/Λ) is connected.
Recall that the group of automorphisms of a toroid U/Λ is the group Norm_{G(U)}(Λ)/Λ. If U is a regular tessellation of E^n then, up to duality, T(U) acts transitively on the vertices of U. This implies that all the flag-orbits of U/Λ occur at the base vertex of U/Λ. Furthermore, with the correspondence introduced in Lemma 2 between Norm_{G(U)}(Λ) and a subgroup N of the vertex stabilizer of G(U), the configuration of flag-orbits of U/Λ around the base vertex is the same as the configuration of orbits of flags of U containing the base vertex under the action of N. We use Lemma 21 and the previous observation, together with the results of [6] on symmetry type graphs of highly symmetric maniplexes, to determine the symmetry type graphs of the few-orbit toroids.
5.1 Cubic toroids
Symmetry type graphs of regular and 2-orbit toroids were already described above. Since k-orbit (n + 1)-toroids do not exist for 2 < k < n unless n = 4, we only need to determine the symmetry type graphs of 3-orbit (4 + 1)-toroids and of n-orbit (n + 1)-toroids for n even and n ≥ 4.
Recall that if U/Λ is a cubic toroid, then Aut(U/Λ) = Norm_{G(U)}(Λ)/Λ = (T(U) ⋊ N)/Λ for a certain group N with N ≤ S, where S denotes the stabilizer of the vertex o in G(U). Note that the symmetry type graph depends only on N; in particular, all (n + 1)-toroids with n orbits share the same symmetry type graph.
Let U/Λ be a (4 + 1)-toroid with 3 flag-orbits. According to Lemma 11, the group N is the group C_2^4 ⋊ D_4, where D_4 acts on the coordinates of E^4 as the dihedral group. Note that since T(U) ≤ Norm_{G(U)}(Λ), Aut(U/Λ) acts transitively on vertices and on cells of U/Λ. Vertex-transitivity of U/Λ implies that every edge of U is in the same orbit as one joining the vertex o with the vertex ±e_i for some i ∈ {1, 2, 3, 4}. Since C_2^4 ≤ N, we may assume that the sign is positive. However, D_4 acts transitively on the points e_i for i ∈ {1, 2, 3, 4}. This implies that U/Λ is edge-transitive. A similar argument can be used to prove that U/Λ is transitive on rank-3 faces. Since there is no 3-orbit toroid that is i-face-transitive for every i ∈ {0, . . . , 4} (see [6, Theorem 1]), the symmetry type graph of U/Λ must be that of Figure 1a.
To determine the symmetry type graph of an n-orbit (n + 1)-toroid we shall change the approach slightly. Instead of looking at the action of the vertex stabilizer of Aut(U/Λ), we will use the stabilizer of a cell. One way to imagine this is not to think of o as a vertex of U/Λ but as the center of a cell (alternatively, we may just take the dual of U/Λ, which is isomorphic to U/Λ since U is self-dual). With this in mind we may think that the group N = C_2^n ⋊ S_{n−1} acts on a cell. Note that even though our results regarding cubic n-orbit (n + 1)-toroids are for n ≥ 4, the ideas apply as well when n = 3. In particular, they give a way to classify the 4-toroids in class 3 listed in Tables 2 and 3 of [10]. Observe that the symmetry type graph of such toroids is that of Figure 2 for n = 3 (see [10, Fig. 4]).
The observation in the previous paragraph allows us to give the following inductive argument to determine the symmetry type graph of any (n + 1)-toroid with n orbits. Let n ≥ 3 and let U/Λ be an (n + 1)-toroid with n orbits. We will show that the symmetry type graph of U/Λ is that of Figure 2. The case n = 3 is explained above; assume then that n ≥ 4. As said before, we shall think of o as the center of a cell C_o whose edges are segments of length 1. Observe that the group N = C_2^n ⋊ S_{n−1} acts transitively on the flags containing the facet F of C_o lying in the hyperplane x_n = ½. Moreover, N acts transitively on the set of facets of C_o contained in the hyperplanes x_i = ±½ for i ≠ n. This implies that all the other flag-orbits of U/Λ have representatives among the flags containing the facet FR_{n−1} of C_o. This facet is contained in the hyperplane x_{n−1} = ½. However, the stabilizer of this facet under the action of N is precisely C_2^{n−1} ⋊ S_{n−2}, where C_2^{n−1} denotes the group generated by the reflections on the coordinate hyperplanes x_i = 0 for i ≠ n − 1, and S_{n−2} denotes the point-wise stabilizer of the (n − 1)-th and n-th coordinates. By the inductive hypothesis, the arrangement of flag-orbits around FR_{n−1} induces the graph of Figure 2 with n − 1 vertices, implying that the symmetry type graph of U/Λ is that of Figure 2. These results are summarized in the following theorem.
Theorem 22. The symmetry type graph of a cubic (4 + 1)-toroid with three flag-orbits is the graph shown in Figure 1a. If n ≥ 3, the symmetry type graph of a cubic (n + 1)-toroid with n flag-orbits is the graph with n vertices shown in Figure 2.
5.2 Toroids of type {3, 3, 4, 3}

According to Theorem 20, there are two families of regular toroids of type {3, 3, 4, 3}, one family of 2-orbit toroids and two families of 3-orbit toroids. The symmetry type graphs of the regular toroids and of the toroids in class 2_{3,4} were already described above. It only remains to determine the symmetry type graphs of the 3-orbit toroids.
Just as in the case of cubic toroids, we will determine the transitivity of the automorphism group on the set of i-faces of each toroid and use the results of [6] to determine its symmetry type.
Recall that the vertex set of the tessellation U = {3, 3, 4, 3} is a lattice. Since every translation of G(U) induces an automorphism of any toroid U/Λ, the group Aut(U/Λ) acts transitively on vertices.
The discussion in the previous paragraphs and Propositions 3 and 4 of [6] imply that the symmetry type graph of a toroid U/Λ whose automorphism group is (T(U) ⋊ K)/Λ with K = ((C_2^4)^+ ⋊ D_4) ⋊ ⟨R_1, R_2⟩ is the graph in Figure 1a.
The Kāmasiddhistuti of King Vatsarāja
This essay concerns a pūjāstuti1 that guides its reciter through the mental or actual worship of the goddess Nityā. The text is composed in the first person, but the author does not name himself in the text. The text is named Vāmakeśvarīstuti and attributed to Mahārājādhirāja Vidyādharacakravartin Vatsarāja in the colophon of the sole palm-leaf manuscript of the text available to me. However, the last verse of the text calls it Kāmeśvarīstuti and describes it using two adjectives, kāmasiddhi and atimaṅgalakāmadhenu. It is not unnatural, I think, to name this stuti using its first adjective.2 The manuscript containing this stuti text is preserved in the National Archives, Kathmandu. It bears accession number 1–1077 and can be found microfilmed under NGMPP reel number A 39/15. The same manuscript also contains a paddhati text called Aśeṣakulavallarī that dwells on the worship of the goddess Tripurā, but this text remains incomplete as the folios following the sixteenth are absent. Our text begins on the verso of the first folio and ends in the third line of the recto of the fourth, with a colophon and a decorative symbol. The other text immediately follows in the same hand with a salutation to the goddess Tripurā. The manuscript is written in a variety of North Indian script close to Newari with frequent use of pṛṣṭhamātrās. It is possible that this manuscript was copied by an immigrant or pilgrim in the Kathmandu valley. It measures 33 × 4.5 cm and has a binding hole to the left of the centre. It bears foliation in numerals in the left margin and in numbers in the right margin of verso folios. The text in the manuscript is dotted with scribal errors, but no secunda manus corrections are seen. On palaeographical grounds I place the manuscript in the late fourteenth century. This manuscript contains 46 verses of the stuti, and one more verse (numbered here as 38a) can be retrieved from a citation.3 A little less than half of the stuti, covering the first 21 verses, is in Anuṣṭubh metre and the rest in
Vasantatilakā. Verses 31 and 32 form a yugalaka as the finite verb comes only in the second verse. The author plays now and again with syllabic rhyming (anuprāsa), and his language is beautiful, though sometimes elliptical.
The stuti opens with a pair of verses invoking Paramaśiva and Nityā Śakti. These verses already tell us of the poet's understanding of the nature of Nityā and of the inseparability of Paramaśiva and Śakti, a point highlighted in the second half of the text, particularly verses 31-32 and 42. In verse 3 the poet states that he approaches the temple of Mṛḍānī from the west gate (paścimadvāra).4 The next two verses invoke Gaṇeśa and Kṣetreśa. The latter, who has the form of Bhairava, can be identified as Baṭuka. Gaṇeśa and Baṭuka together are identified as the goddess's sons in Śākta systems and serve as her doorkeepers.5 To our surprise, verse 6 invokes the Vaiṣṇava doorkeepers Śaṅkhanidhi and Padmanidhi, who bear the Vaiṣṇava emblems of the conch and lotus on their heads.6 Verses 7-9 invoke respectively three goddesses: Padmā, a Vaiṣṇava version of Durgā carrying a conch and discus, and Bhāratī. Verses 10 and 11 invoke Manobhava, namely, the Indian love-god Kāmadeva, and describe him as the beloved husband of Rati and Prīti.7 Here we are told that the love-god forms the circular base of the Śrīcakra, the maṇḍala of the goddess Nityā Sundarī. With these verses the text enters the process of installation of various deities in the Śrīcakra. It does not specify where these deities are installed, but from the order of verses we know that we are starting from the periphery and moving towards the centre. Verses 12-14 respectively praise eight siddhis, beginning with Aṇimā (in personified forms), eight mother-goddesses, and the deities of ten gestures of the goddess.8 Verses 15 and 16 venerate sixteen goddesses of attraction (ākarṣaṇa) and eight powers of the bodiless love-god (anaṅgaśakti), respectively, all in personified forms.9 We know from the Vāmakeśvaratantra and other Tripurā texts that these are installed on the petals of the sixteen- and eight-petalled lotuses. The next four verses, 17-20, respectively praise the set of fourteen goddesses/powers (śaktis) headed by Sarvasaṃkṣobhaṇī,10 ten Kula-goddesses (kuleśvarī) headed by Sarvasiddhipradā,11 ten goddesses headed by Sarvajñā,12 and eight goddesses of speech, headed by Vaśinī.13 They are stationed in the four consecutive retinues of fourteen, ten, another ten, and eight triangles.

All deities in a group (see verses 12-20) are visualised in the same way; for example, all mother-goddesses (mātṛ) have the same appearance.14 Verse 21 invokes the deities of the four weapons of the goddess and asks for their permission. It is known from other sources that they are placed around the central triangle (cf., e.g., Vāmakeśvaratantra 1.179-180). The next three verses, 22-24, praise Kāmeśvarī, Vajreśvarī, and Bhagamālinī, and urge them to fulfill the reciter's desires. Unlike previous ones, these verses also name the three corners of the central triangle as the homes of these goddesses. Verse 25 is in praise of Nityā Sundarī, the goddess in the centre. From here onward, until the second to last verse (45), the poet praises Nityā in various ways. He first invokes the goddess as Nityā (verse 25) and later as Śrīsundarī (verse 30), and describes her as "the felicitous banner of the Love-god." Verses 25-28 describe the beauty of the goddess, and verses 29-45, with the exception of verse 33 (which describes the Śrīcakra made of 43 triangles as her abode), exalt her in various ways, identifying her as the ultimate reality of the external as well as internal worlds. She is described as the primordial light (ādyamahas) and paramārthavidyā, which can be interpreted as the highest mantra, the mantra leading to the highest, or the ultimate gnosis. The last verse is a fine eulogy of the stuti itself, describing its reward and thus encouraging people to recite it. It has been already pointed out by Sanderson and also Golovkova that the mature cult of Tripurasundarī developed against the backdrop of the nityā cult, evidence for which is available in the Nityākaulatantra and the Siddhakhaṇḍa of the Manthānabhairavatantra. In those texts Tripurasundarī is accompanied by a retinue of eleven and nine nityās, respectively, and worshipped with Kāmadeva.15 Our text identifies Kāmadeva as the husband of Rati and Prīti, places him on the base of the Śrīcakra (cf. verses 10-11), and installs Nityā Sundarī at the altar of worship in the centre of the maṇḍala without a consort, independent and supreme. However, in verses 31-32 she is described as devamahiṣī, although it is said that their body is one and undifferentiated. In verse 2 the poet names the goddess Nityā and invokes her as the Śakti of Paramaśiva possessing all powers and carrying out the five tasks (pañcakṛtya) for him. In verse 34 the poet invokes her as Maheśvarī but states that some royal people in this world call her Lakṣmī and Parā Prakṛti.

4 This should be the intended meaning, because one is supposed to enter a temple from the western or southern gate facing east or north. Therefore, many of the early Śaiva-Śākta temples, even though they face east, have an older western or southern entry. For more discussion, see Goodall et al. 2005, 103-107 and Goodall et al. 2015, 366 (Niśvāsa, Uttarasūtra 3:8 and annotation thereon). Another possible interpretation of paścimadvāra is "the last door to resort to." Perhaps the poet is punning.

5 For Gaṇeśa and Baṭuka as the Goddess's sons, see, e.g., Jayaratha on Tantrāloka 1.6b.

6 Śaṃkhanidhi and Padmanidhi have strong associations with the cult of Yakṣas. In the Meghadūta, Kālidāsa's Yakṣa tells the cloud-messenger that the marks of conch (śaṃkha) and lotus (padma) are painted on the sides of the gate of his house in the city of Alakā, as he provides a number of clues for the identification of his house. In the form of emblems as well as human forms, Śaṃkhanidhi and Padmanidhi are depicted in the Ajaṇṭā caves and are associated with Yakṣa deities (cf. Bautze-Picron 2002, 225-231). Besides, the Buddhist Vasudhārā Dhāraṇī enjoins worship of Śaṅkhanidhāna and Padmanidhāna with the goddess Vasudhārā encircled by a group of eight unspecified Yakṣiṇīs. Some other texts name Śaṃkhanidhi and Padmanidhi as male consorts of Vasudhārā and Vasumatī, respectively. Anyway, these two are adopted by the Vaiṣṇavas as doorkeepers or attendants of Viṣṇu along with the other pairs of Jaya and Vijaya, Caṇḍa and Pracaṇḍa, Nanda and Sunanda. They also feature in some comparatively late Tantric texts of other traditions, particularly those from the south. They are listed also among the twelve Vaiṣṇava nidhis found in some Puranic and Vaiṣṇava texts. Professor Dominic Goodall kindly informs me (personal communication of November 20, 2019) that what is now called the Kailāsanātha temple in Kancheepuram seems to have Śaṅkhanidhi and Padmanidhi framing the doorway. According to him, that temple now has an eastern entrance to the enclosure, but there is an older western entry, now blocked up. For an example of images of Śaṃkhanidhi and Padmanidhi from Anurādhapur, Sri Lanka, see Paranavitana 1955.

8 It is possible that these three sets of deities are installed on the three lines forming the outermost retinue of the rectangular boundary. The Vāmakeśvaratantra, also known as Nityāṣoḍaśikārṇava, enjoins installing the eight mother-goddesses as well as the eight siddhis in the four directions and four sub-directions, and does not instruct one to worship the goddesses of the gestures. Bhāskararāya (p. 99), however, mentions that according to some other system the outermost boundary is made of three lines and these three sets of goddesses are installed there. According to its commentators, the Vāmakeśvaratantra teaches that one should build the boundary with only two lines. Although the Vāmakeśvaratantra does not assign a place for the gestures (mudrā) in the maṇḍala, it does describe them and asks the worshipper to use them during the worship. As found in the third chapter of the Vāmakeśvaratantra, these ten gestures are trikhaṇḍā, kṣobhiṇī, vidrāviṇī, ākarṣiṇī, āveśakarī, unmādinī, mahāṅkuśā, khecarī, bīja, and yoni. As listed in many texts, including the …, the eight siddhis are aṇimā, laghimā, mahimā, īśitva, vaśitva, prāpti, prākāmya, …; the … makes them ten by adding two more, bhukti and icchā, and prescribes worshipping them in ten directions. According to the latter (1.156-157), the eight mother-goddesses are Brahmāṇī, Māheśī, Kaumārī, Vaiṣṇavī, Vārāhī, Indrāṇī, Cāmuṇḍā, and Mahālakṣmī.

9 These are not individually named in this text, but, as listed in the Vāmakeśvaratantra, the first set is made of Kāmākarṣiṇī, Budhyākarṣiṇī, Ahaṃkārākarṣiṇī, Śabdākarṣiṇī, Sparśākarṣiṇī, Rūpākarṣiṇī, Rasākarṣiṇī, Gandhākarṣiṇī, Cittākarṣiṇī, Dhairyākarṣiṇī, Smṛtyākarṣiṇī, Nāmākarṣiṇī, Bījākarṣiṇī, Ātmākarṣiṇī, Amṛtākarṣiṇī, …, and the second set is made of Anaṅgakusumā, Anaṅgamekhalā, Anaṅgamadanā, Madanāturā, Anaṅgarekhā, Anaṅgaveginī, Anaṅgāṅkuśā, and Anaṅgamālinī (cf. 1.163-164).

10 We know only the name of the first from this text, but the rest can be known from the Vāmakeśvaratantra (1.165-168). They are: Sarvavidrāviṇī, Sarvākarṣiṇī, Sarvāhlādinī, Sarvasaṃmohinī, Sarvastambhanī, Sarvajambhanī, Sarvatovaśinī, Sarvarañjanī, Sarvonmādinī, Sarvārthasādhanī, Sarvasampattipūraṇī, Sarvamantramayī, and Sarvadvandvakṣayaṃkarī.

11 Again, the list can be completed with the help of the Vāmakeśvaratantra, but these goddesses are here simply called śaktis. The other nine following Sarvasiddhipradā are: Sarvasampatpradā, Sarvapriyaṃkarī, Sarvamaṅgalakāriṇī, Sarvakāmapradā, Sarvaduḥkhavimocinī, Sarvamṛtyupraśamanī, Sarvavighnanivāriṇī, Sarvāṅgasundarī, and Sarvasaubhāgyadāyinī (cf. 1.169-171).

12 Sarvajñā is followed by Sarvaśakti, Sarvaiśvaryapradāyinī, Sarvajñānamayī, Sarvavyādhivināśinī, Sarvādhārasvarūpā, Sarvapāpaharā, Sarvānandamayī, Sarvarakṣāsvarūpiṇī, and Sarvepsitaphalapradā (cf. Vāmakeśvaratantra 1.173-175).

13 The names of these eight can be retrieved from the mantroddhāra section of the Vāmakeśvaratantra (cf. 1.77-80). They are Vaśinī, Kāmeśvarī, Modinī, Vimalā, Aruṇā, Jayinī, Sarveśvarī, and Kaulinī.

14 Neither the Vāmakeśvaratantra nor any of the paddhatis of that tradition give visualisations of these deities.
In verse 40 she is described as Atibhavā, highlighting her transcendent nature, and in verse 42 she is invoked again as Gaurī.
It is thus clear that the poet of our text is a Śaiva devotee of the goddess Nityā. It is important to note that in the system known to our poet there is only one Nityā, simply called Sundarī, and that the Śrīcakra is also already known. Our poet appears unaware of the sixteen nityās, who are worshipped in the tradition of the Vāmakeśvaratantra. It thus appears that the tradition this stuti text represents is different from both the cult of nityās and that of Tripurā. The inclusion of Śaṅkhanidhi and Padmanidhi (verse 6), Padmā (verse 7), and the Vaiṣṇava Durgā (verse 8) suggests that the goddess Nityā is somehow linked to the Vaiṣṇava tradition as well. In fact, in verse 34 the poet mentions that some people call her Lakṣmī and Parā Prakṛti, but we are not aware of the survival of any Vaiṣṇava paddhati of Nityā. Now I come to the issue of the poet's identity. The fact that he is a king and was perhaps somewhat distressed at the time of composition of the stuti can be known from the text itself (cf. verse 40). Furthermore, in the colophon the text is attributed to Mahārājādhirāja Vidyādharacakravarti Vatsarāja.16 Apparently, the first epithet is royal (he is the king of great kings), while the second is mantric: he is sovereign among the vidyādharas, who are supposed to possess esoteric mantric knowledge and due to this have supernatural powers. Vatsarāja is his personal name. The most famous Vatsarāja, the mythical king of Ujjayinī, does not fit the context. Another is King Vatsarāja of the Gurjara-Pratihāra dynasty (c. 775-805 ce), the father of Mahārājādhirāja Nāgabhaṭa II (805-833 ce). Vatsarāja is always called paramamāheśvara, but in the Pratāpagaḍh Stone Inscription of Mahendrapāla II (dated Year 1003 = 946 ce) Nāgabhaṭa II is called paramabhagavatībhakta.17 It may be a coincidence, but the latter's mother is named Sundarī. In any case, this Vatsarāja could be our poet.18 Our text represents an archaic tradition that does not even know the name Tripurasundarī, and thus this date in the early ninth century ce fits it well.

15 Cf. Sanderson 2009, 47-49; Golovkova 2012, 816-817.

16 It is interesting to note that a fifteenth-century inscription from Vijayanagara remembers a king called Vatsarāja blessed by Tripurāmbā. As Sinopoli (2010, 22) …

17 … Sircar 1983, 251.

18 There is another poet of the same name who flourished in the second half of the twelfth and the first quarter of the thirteenth century ce (cf. Dalal 1918, vi-vii), but he is a minister, not a king. He served the Kālañjara King Paramardideva and wrote some dramatic pieces. Six such pieces have been published in one volume under the title Rūpakaṣaṭkam (see Dalal 1918). He does not mention Nityā, Sundarī, or Tripurasundarī in his dramas.

19 The manuscript begins with an invocation, ||oṃ namo gaṇapataye||, preceded by a siddhi sign. I do not think that this invocation is part of the text.

20 The manuscript reads niḥśāmānanda- and I have emended it to niḥsīmānanda. I have found this compound used in at least one more text, the Adhikaraṇasārāvalī of Vedāntadeśika.

21 The five tasks of Śiva include punishment (nigraha) and grace (anugraha) …

I worship those eight goddesses of speech, Vaśinī and others, whose complexion is red. They carry in their four lovely hands a bow, arrows, a book, and a rosary. May the three-eyed goddess Bhagamālinī give the glory of good fortune. She possesses abundant miraculous power and is as lovely as the moon.
She is stationed in the left corner [of the central triangle] and holds in the row of her arms a snare, a goad, a sugarcane, ropes, a book, and a sword. I uninterruptedly bow to Nityā, who has a form worthy of worship. She has ascended the shining throne made of the sun, moon, and fire. She holds in her hands a hook, a snare, arrows, and a bow, and carries the crescent moon on her crest. She is pure and clean, and her eyes, adorned with the tips of the locks of hair, are very beautiful. Her body is beautiful and bears the hue of vermillion. Its middle part is slim, [and] she is the repository of beauty. She is slightly bent like a young elephant because of her pitcher-like breasts, resembling the temples of a young elephant. Her eyes are moving and wide like those of a deer. She is moon-faced, her smiles are gentle, and she serves as the felicitous banner of the Love-god. I seek refuge with the glorious goddess Sundarī, the benefactress of prosperity, the secret heart, whose heart is soaked with compassion. She is blazing with an utmost tenacity steeped in joy, and consequently beaming with plenteous light that shimmers spontaneously. O goddess, though you are one and simple,28 you are [also] nine,29 you are ten, you are again ten, and again you are fourteen. Thus you, the benefactor of poets, dwell in the sea of Śaktis marked with forty-three triangles.

You are the goddess of prosperity, and prosperities depend on you. You are the goddess of speech, and authority and words depend on you. You are the goddess of wisdom, and wise ideas depend on you. You are the foremost fortress, and towns depend on you. You are the primordial power, and yours are all the properties of power. What is the use of any further explanation: this entire world is nothing but you.

27 These two verses depict the goddess as the stream of consciousness or immortality in the human body, known widely as Kuṇḍalinī, originating from the brahmarandhra, the abode of Śiva, flowing through various channels and reaching to the six bases. It is in this light that these verses should be read.

28 I have conjectured api in place of asi to provide a concessive tone. Perhaps this is not even necessary. In any case, on her own the goddess is singular and unembellished, but the poet appears to imply that all goddesses in different retinues of the Śrīcakra are her projections.

29 The central triangle and the immediately following retinue of eight triangles are obviously counted together as nine.

The underlying digit of the moon (antaścarī śaśikalā) in all likelihood is the sixteenth, innermost digit, beyond the waning and waxing process.

32 The late Pundit Vraja Vallabha Dwivedi (1985, 45) presents this verse in his preface (originally written in 1968) to the Nityāṣoḍaśikārṇava as cited in the Aruṇāmodinī commentary of the Saundaryalaharī and attributed to the Kāmasiddhistotra of Vatsarāja (cf. Śāstrī 1957, 221), and suggests that it should be located in the Nepalese palm-leaf manuscript of the text (the same manuscript I am editing now). However, in 1983 in the Luptāgamasaṃgraha, a collection of citations from lost Āgamic texts he prepared, he writes that the verse is not found in the palm-leaf manuscript and so must come from a different text (cf. Dwivedi 1983, 25). I think Dwivedi arrived at this conclusion without reading the implied name of the stuti. The author of the Aruṇāmodinī writes that it is a verse from the Kāmasiddhistotra of Vatsarāja, and the same name is alluded to in the last verse of our text.
I conclude that the verse therefore belongs to this text even though it is not found in the palm-leaf manuscript. I assume that it was dropped in the process of transmission. It is thus just possible that there are still a few more verses missing from the latter part of the stuti.
36 The original reading of the manuscript gaurīti is unmetrical. The scribe has corrected it to gaur iti, which is just possible, but I conjecture naur iti because of the following word nāviketi. Thus the syllabic rhyme of the line is also restored.
37 Thus, there are three deities in this tradition who can be called by this name: the chief goddess Nityā, one of the goddesses in the central triangle, and one of the goddesses of speech in the retinue of eight triangles.

iti śrīmahārājādhirājavidyādharacakravartivatsarājaviracitā śrīvāmakeśvarīstutiḥ samāptā ||⊗||

Here ends the Vāmakeśvarīstuti composed by Vatsarāja, the king of great kings, the sovereign among the vidyādharas.
Continuity properties of best analytic approximation
Let A be the operator which assigns to each m × n matrix-valued function on the unit circle with entries in H^∞ + C its unique superoptimal approximant in the space of bounded analytic m × n matrix-valued functions in the open unit disc. We study the continuity of A with respect to various norms. Our main result is that, for a class of norms satisfying certain natural axioms, A is continuous at any function whose superoptimal singular values are non-zero and is such that certain associated integer indices are all equal to 1. We also obtain necessary conditions for continuity of A at a point and a sufficient condition for the continuity of the superoptimal singular values.
Introduction
The problem of finding a best uniform approximation of a given bounded function on the unit circle by an analytic function in the unit disc is a natural one from the viewpoint of pure mathematics, and it also has engineering applications, for example in H^∞ control [F], broadband impedance matching [He] and robust identification [Par]. In these contexts, to effect a design or construct a model, one must compute such a best approximation, and in order that numerical computations have validity it is important that the solution to be computed depend continuously on the input data, for otherwise the imperfect precision of floating point arithmetic may lead to highly inaccurate results. It is therefore somewhat disconcerting that, with respect to the L^∞ norm, the operator of best analytic approximation is discontinuous everywhere except at points of H^∞ [M, Pa]. Nevertheless engineers regularly compute such approximations and appear to find the results reliable. A way to account for this would be to show that best analytic approximation is continuous on suitable Banach subspaces of L^∞(T) with norms which majorise the uniform norm, or at least, is continuous at most points of the space. One can expect that most functions of engineering interest will lie in one of these well-behaved subspaces, and that the errors introduced by computer arithmetic will result in perturbations which are small in the associated norm. We are thus led to ask for which Banach spaces X ⊂ L^∞(T) the operator A of best analytic approximation maps X into X and is continuous at a generic point of X (in some sense). This question has been thoroughly analysed for the case of scalar-valued functions. It was shown in [P1] that, for spaces X ⊂ H^∞ + C satisfying some natural axioms, the restriction of A to X is continuous with respect to the norm of X at a function φ if and only if ‖H_φ‖ is a simple singular value of the Hankel operator H_φ.

Date: March 30, 2022. Mathematics Subject Classification (AMS): 30E10, 47B35, 93B36. V. V. Peller's research was supported by an NSF grant in Modern Analysis. N. J. Young wishes to thank the Mathematics Departments of Kansas State University and the University of California at San Diego, and the Mathematical Sciences Research Institute for hospitality while this work was carried out. Research at MSRI is supported in part by NSF grant DMS-9022140.
Analogous questions for matrix-valued functions are also of interest, particularly for their relevance to engineering applications. They are a good deal more complicated than in the scalar case. To begin with, there is typically no unique best analytic approximation in the matrix case when we measure closeness by the L^∞ norm. In order to specify an approximation uniquely, and so obtain a well-formulated question of continuity, we can use a more stringent criterion of approximation. The notion of a superoptimal approximation is a natural one for matrix-valued functions: by requiring the minimisation of the suprema of all singular values of the error function, it yields a unique best approximant in many cases. Here is a precise definition.
Denote by M_{m,n} the space of m × n complex matrices endowed with the operator norm as a space of linear operators from C^n to C^m with their standard inner products. Let H^∞(M_{m,n}) denote the space of bounded analytic M_{m,n}-valued functions on the unit disc D with supremum norm

‖F‖_{H^∞} = sup_{z ∈ D} ‖F(z)‖_{M_{m,n}}.

Similarly, L^∞(M_{m,n}) denotes the space of essentially bounded Lebesgue measurable M_{m,n}-valued functions on T with essential supremum norm. By Fatou's theorem [H, p. 34] functions in H^∞(M_{m,n}) have radial limits a.e. on T, so that H^∞(M_{m,n}) can be embedded isometrically in L^∞(M_{m,n}), and we shall often tacitly regard elements of H^∞(M_{m,n}) as functions on the unit circle. Where there is no risk of confusion we shall sometimes write H^∞, L^∞ for H^∞(M_{m,n}), L^∞(M_{m,n}). We define H^∞ + C to be the space of (matrix-valued) functions on T which are expressible as the sum of an H^∞ function and a continuous function on T. For any matrix A we denote the transpose of A by A^t and the singular values or s-numbers of A by s_0(A) ≥ s_1(A) ≥ s_2(A) ≥ ⋯. A function Q ∈ H^∞(M_{m,n}) is a superoptimal approximant of Φ if the sequence

( ess sup_{z ∈ T} s_0(Φ(z) − Q(z)), ess sup_{z ∈ T} s_1(Φ(z) − Q(z)), ... )

is a minimum over Q ∈ H^∞ with respect to the lexicographic ordering. It was proved in [PY1] that if an m × n matrix function Φ is in H^∞ + C then there is a unique superoptimal approximant to Φ in H^∞(M_{m,n}). We shall denote this approximant by AΦ. In [PY1], in addition to proving uniqueness, we obtained detailed structural information about the "superoptimal error" Φ − AΦ and we established several heredity results (that is, theorems of the form "Φ ∈ X implies AΦ ∈ X" for various function spaces X). In any space which does have this heredity property it is natural to ask whether A acts continuously. We shall show that for a substantial class of norms there are many continuity points of A. We cannot, however, expect A to be continuous everywhere: it is shown in [P1] that, for scalar functions, A is discontinuous with respect to virtually any norm at every φ for which ‖H_φ‖ is a multiple singular value of H_φ, and it follows that (matricial) A is discontinuous at the matrix function diag{φ, 0, ⋯}.
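Here is a minimal worked illustration of the lexicographic criterion (an added example; it uses only the definitions above):

```latex
% Take the 2x2 diagonal function
\[ \Phi = \begin{pmatrix} \bar z & 0 \\ 0 & 0 \end{pmatrix} . \]
% Every Q = diag(0, c) with |c| <= 1 is an optimal approximant in the L^infinity sense:
\[ \operatorname*{ess\,sup}_{z \in \mathbb T} \, s_0\bigl(\Phi(z) - Q(z)\bigr) = \max(1, |c|) = 1 , \]
% but the second singular value distinguishes these approximants,
\[ \operatorname*{ess\,sup}_{z \in \mathbb T} \, s_1\bigl(\Phi(z) - Q(z)\bigr) = |c| , \]
% so the lexicographic minimum (t_0, t_1) = (1, 0) is attained only at c = 0:
% A\Phi = 0 is the superoptimal approximant, with t_0 = 1 and t_1 = 0.
```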
We shall study spaces X ⊂ L^2(T) of functions for which the following axioms hold. Denote by P_+, P_− the orthogonal projections from L^2(T) onto the Hardy space H^2 and its orthogonal complement H^2_− in L^2(T). For a space X ⊂ L^2(T) we denote by X_+ the range P_+X. The axioms are:

(A1) if f ∈ X then f̄ ∈ X and P_+f ∈ X;
(A2) X is a Banach algebra with respect to pointwise multiplication;
(A3) the set of trigonometric polynomials is dense in X;
(A4) every multiplicative linear functional on X is of the form f ↦ f(ζ) for some ζ ∈ T;
(A5) if f ∈ X_+ and h ∈ H^∞ then P_+(hf) ∈ X_+.
The following fact is well known.
Lemma 1.1. X_+ with the restriction of ‖·‖_X is a commutative Banach algebra whose maximal ideal space is the closed unit disc clos D.
Proof. By the Closed Graph Theorem P + is continuous on X, and so its range X + is a closed subspace of X. Functions in X + are continuous on T (the Gelfand topology of X on T is compact and refines the natural topology, hence coincides with it), and their negative Fourier coefficients vanish. Hence X + ⊂ A(D), the disc algebra. It follows that X + = X ∩ A(D), and so X + is a subalgebra of X. Clearly the maximal ideal space M of X + contains clos D, which is the maximal ideal space of A(D). Since X + is generated as a Banach algebra by the single element z, M is naturally identified with σ X + (z), the spectrum of z in X + . Since X + is a subalgebra of X we have ∂σ X + (z) ⊂ ∂σ X (z) = ∂T = T (∂ denotes boundary). That is, M contains clos D and ∂M ⊂ T. Hence M = clos D.
For a space X of functions and a matrix-valued function Φ we write Φ ∈ X to mean that each entry of Φ belongs to X. We denote by X(M_{m,n}) the space of m × n matrix-valued functions whose entries belong to X, endowed with the corresponding norm. We recall that the space QC of quasicontinuous functions is defined to be (H^∞ + C) ∩ conj(H^∞ + C).

It transpires that the analysis of the continuity of A involves certain integer indices associated with a matrix function. These indices were introduced in [PY1], and depend on the notion of a thematic factorization, which is a type of diagonalization of a superoptimal error function Φ − AΦ. A thematic function is a function V ∈ L^∞(M_{n,n}) for some n ∈ N which is unitary-valued a.e. on T and of the form V = (v Θ̄), where v and Θ are inner and co-outer functions of types n × 1 and n × (n − 1) respectively. We shall assume henceforth that m ≤ n. By [PY1, Theorem 2.1] the singular values s_j(Φ(z) − AΦ(z)) are constant a.e. on T; their values t_0 ≥ t_1 ≥ ⋯ ≥ t_{m−1} are the superoptimal singular values of Φ. Moreover, according to [PY1, Theorem 4.1], Φ − AΦ admits a factorization of the form (1.1), built out of thematic factors, in which the middle term D of type m × n is given by

D = diag(t_0u_0, t_1u_1, ..., t_{m−1}u_{m−1}), padded with n − m zero columns,

for some unimodular functions u_0, ..., u_{m−1} ∈ QC; here W^t_0, W̃^t_j, V_0 and Ṽ_j are thematic functions for 1 ≤ j ≤ m − 1. We call (1.1) a thematic factorization of Φ − AΦ, and we define the index of t_j in this factorization to be the modulus of the winding number of u_j (or, alternatively, as the Fredholm index of the Toeplitz operator T_{u_j}). Numerous properties of these indices were established in [PY3]. In Section 1 we prove continuity of A with respect to a wide class of norms at functions whose superoptimal singular values are nonzero and whose indices are all 1. For the Besov norm B^1_1 we obtain a continuity result even in the presence of zero superoptimal singular values. In Section 2 we consider the converse problem, and derive some necessary conditions for continuity points of A in the case of square matrix functions. In Section 3 we present sufficient conditions for the continuity of the superoptimal singular values themselves.
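To make the notion of index concrete, here is a simple added computation with a model symbol (not taken from the argument above):

```latex
% For the scalar unimodular symbol u(z) = \bar z^{\,k}, k >= 1, the winding number
% about 0 is -k, so the index of the corresponding t_j is |-k| = k.
% Equivalently, the Toeplitz operator T_u f = P_+(u f) on H^2 satisfies
\[ \operatorname{Ker} T_{\bar z^{\,k}} = \operatorname{span}\{\, 1, z, \dots, z^{k-1} \,\}, \qquad \dim \operatorname{Ker} T_{\bar z^{\,k}} = k , \]
% and T_{\bar z^{\,k}} is surjective, so its Fredholm index equals k.
% The hypothesis of the main theorem (all indices equal to 1) corresponds to k = 1,
% i.e. to symbols u_j that wind round 0 exactly once, like \bar z itself.
```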
Sufficient conditions for continuity
Let X be a space of functions on T invariant under A (e.g. one satisfying the above axioms). As we noted above, even in the scalar case A is discontinuous with respect to virtually any norm at any Φ such that ‖H_Φ‖ is a multiple singular value of H_Φ [P1]. In the scalar case, for many spaces X the converse also holds. That is, if ‖H_Φ‖ is a simple singular value then Φ is a continuity point of A with respect to the norm of X. For matrix functions the situation is more complicated, but we do have the following sufficient condition.
Theorem 2.1. Let X be a space of functions on T satisfying Axioms (A1) to (A5), let Φ ∈ X(M_{m,n}), m ≤ n, and let t_0, t_1, ..., t_{m−1} be the superoptimal singular values of Φ. Suppose that t_{m−1} ≠ 0. If Φ − AΦ has a thematic factorization with indices

k_j = 1, 0 ≤ j ≤ m − 1,    (2.2)

then Φ is a continuity point of the operator A of superoptimal approximation in X(M_{m,n}).
As we have observed in [PY3], (2.2) implies that all thematic factorizations of Φ − AΦ have indices equal to 1.
The proof of the theorem will be based on the recursive construction of AΦ given in [PY2], which in turn was based on the proof in [PY1] that AΦ is well defined. Let us briefly recall the construction of AΦ. The first step is to find a Schmidt pair {v, w} of the compact Hankel operator H_Φ corresponding to the singular value ‖H_Φ‖. One then solves certain linear equations for Q (these equations always have a solution; in fact Q = AΦ satisfies them, and in the proof of the theorem below we shall even give an explicit rank-two solution for Q).
Next let v_{(i)}, w_{(i)} be the inner factors of v and z̄w̄, and define α and β accordingly. The strategy of the proof is simply to show that α, β and Q can be chosen to depend continuously on Φ and then to use induction on m. In order to do this we have to study some properties of maximizing vectors for H_Φ.
It is easy to see from the axioms (A1)-(A5) that H*_ΦH_Φ is also a compact operator on X_+(C^n). Denote this operator on X_+(C^n) by R. We can identify the dual space of X_+(C^n) with a space X*_+(C^n). Since R is a compact operator on X_+(C^n), it follows from the Riesz-Schauder theorem that R* is compact on X*_+(C^n), and if λ > 0, then λ is an eigenvalue of R if and only if λ is an eigenvalue of R* of the same multiplicity (see [Yo], Ch. X, §5). Clearly, every eigenvector of R is an eigenvector of H*_ΦH_Φ and every eigenvector of H*_ΦH_Φ is an eigenvector of R*. It follows from the Riesz-Schauder theorem that R, H*_ΦH_Φ, and R* have the same eigenvectors corresponding to positive eigenvalues.
Theorem 2.2. Let Φ be a function in X(M_{m,n}), m ≤ n, with superoptimal singular values t_0, ..., t_{m−1}, t_0 ≠ 0. Suppose that Φ − AΦ has a thematic factorization whose indices k_j are equal to 1 whenever t_j = t_0. Let {v, w} be a Schmidt pair of H_Φ corresponding to ‖H_Φ‖. Then v and z̄w̄ are co-outer and v(ζ) ≠ 0 for any ζ ∈ T.
Clearly it is sufficient to prove that v(1) ≠ 0. We shall deduce Theorem 2.2 from the following lemma, whose proof is similar to that of Lemma 3.2 of [PK].
Since the right-hand side of (2.5) clearly determines a continuous linear functional on X_+(C^n), while on the other hand it is easy to see that the two sides agree, this proves (2.5).
To complete the proof of the lemma, we have to verify the asserted identity for any f ∈ X_+(C^n). We may assume for convenience that t_0 = 1. A direct computation then reduces the identity to (2.5), and the result follows.
Then, as we have already mentioned in the proof of the lemma, v = hv_{(i)} and z̄w̄ = hw_{(i)}, where v_{(i)} and w_{(i)} are inner and co-outer in H^2(C^n). Then hv_{(i)} is also a maximizing vector for H_Φ, and Φ − AΦ has a thematic factorization with index equal to dim Ker T_{u_0}, where u_0 is the corresponding unimodular symbol; it follows that k_0 = dim Ker T_{u_0} ≥ 2, since obviously q and zq belong to Ker T_{u_0}. This contradicts the hypotheses of Theorem 2.2, and so v(1) ≠ 0. In similar fashion, the relation (2.7) shows that Ker T_{u_0} contains h, θ̄_1h and θ̄_2h. Thus, if v_{(i)} or w_{(i)} is not co-outer, we have again contradicted dim Ker T_{u_0} = 1. Hence v, w are co-outer.
Lemma 2.5. Let n > 1 and let φ be an inner function in X_+(C^n). Then 0 is an isolated spectral point of the operators T^X_{φ̄}T^X_{φ^t} on X_+(C^n) and T_{φ̄}T_{φ^t} on H^2(C^n).

Proof. Let us prove the lemma for the operator T^X_{φ̄}T^X_{φ^t}. The proof for T_{φ̄}T_{φ^t} is exactly the same.
Let us observe that we may assume that φ is co-outer. Indeed, if φ = ϑτ, where ϑ is a scalar inner function and τ is an inner co-outer function, then it follows from axiom (A5) that τ ∈ X_+(C^n), and clearly T^X_{φ̄}T^X_{φ^t} = T^X_{τ̄}T^X_{τ^t}.

For an inner function φ ∈ H^∞(C^n) we denote by L_φ the kernel of T_{φ^t} and by P_φ the orthogonal projection from H^2(C^n) onto L_φ. Consider a simple closed positively oriented Jordan curve Ω which lies in the resolvent sets of T_{φ̄}T_{φ^t} and T^X_{φ̄}T^X_{φ^t}, encircles zero but does not wind round any other point of the spectra of T_{φ̄}T_{φ^t} and T^X_{φ̄}T^X_{φ^t}. Consider the projection P^X_φ from X_+(C^n) onto L^X_φ defined by the Riesz integral

P^X_φ = (2πi)^{−1} ∮_Ω (λ − T^X_{φ̄}T^X_{φ^t})^{−1} dλ.    (2.8)

Suppose now that {φ^{(k)}}_{k≥1} is a sequence of inner functions in X_+(C^n) which converges to φ in the norm. Then T^X_{(φ^{(k)})^t}T^X_{φ̄^{(k)}} → T^X_{φ^t}T^X_{φ̄} in the norm of L(X_+(C^n)). As in the proof of Lemma 1.5, T^X_{φ^t}T^X_{φ̄} is invertible, and hence there is a neighbourhood U of zero which lies in the resolvent set of T^X_{φ^t}T^X_{φ̄} and of T^X_{(φ^{(k)})^t}T^X_{φ̄^{(k)}} for all sufficiently large k. Without loss of generality we may assume that this holds for all values of k. Choose a simple closed contour Ω lying in U and winding round 0. Then 0 is the only point inside or on Ω of the spectra of T_{φ̄^{(k)}}T_{(φ^{(k)})^t} and T^X_{φ̄^{(k)}}T^X_{(φ^{(k)})^t}. We can therefore define projections P_φ, P^X_φ, P_{φ^{(k)}}, P^X_{φ^{(k)}} by integrals as in (2.8), all using the same contour Ω. It is then easy to see from (2.8) that P^X_{φ^{(k)}} → P^X_φ in the operator norm.
Lemma 2.6. Let V = (φ φ̄_c) be unitary-valued on T, where φ_c is inner and co-outer, and let {φ^{(k)}}_{k≥1} be inner functions in X_+(C^n) converging to φ in X(C^n). Then there exist inner co-outer functions φ^{(k)}_c such that V^{(k)} = (φ^{(k)} φ̄^{(k)}_c) → V in X(M_{n,n}).

Proof. It was shown in [PY1] (see the proof of Theorem 1.1) that, for a given inner column φ, one can construct an inner co-outer α such that (φ ᾱ) is unitary-valued on T and the columns of α have the form P_φC_1, P_φC_2, ..., P_φC_{n−1}, where C_1, C_2, ..., C_{n−1} are constant column functions. By [PY1, Corollary 1.6], φ_c = αU for some constant unitary U. Hence the columns of φ_c also have the form P_φC_j for some constants C_j. Consider the subspace P_φC^n of H^2(C^n), where we identify C ∈ C^n with a constant function in H^2(C^n). This space has the remarkable property that the pointwise and H^2 inner products coincide on it. That is, if f_j = P_φC_j, j = 1, 2, where C_1, C_2 ∈ C^n, then

(f_1(z), f_2(z))_{C^n} = (f_1, f_2)_{H^2(C^n)}

for almost all z ∈ T. To see this, note that L_φ is a closed z-invariant subspace of H^2(C^n), and so is of the form ΘH^2(C^p) for some natural number p and some n × p inner function Θ. Then P_φC = Θ Θ̂(0)*C for any C ∈ C^n, and so, since Θ(z) is isometric for almost all z ∈ T, both inner products equal (Θ̂(0)*C_1, Θ̂(0)*C_2)_{C^p} for almost all z ∈ T. It follows that any unit vector in P_φC^n is an inner column function, and any orthonormal sequence (with respect to the inner product of H^2(C^n)) of vectors in P_φC^n constitutes the columns of an inner function.

Now let P_φC_j, 1 ≤ j ≤ n − 1, be the columns of φ_c as above, and consider the functions P_{φ^{(k)}}C_1, P_{φ^{(k)}}C_2, ..., P_{φ^{(k)}}C_{n−1}. Clearly P_{φ^{(k)}}C_j → P_φC_j in X(C^n), by the proof of Lemma 2.5. It follows that for large values of k the inner products (P_{φ^{(k)}}C_{j_1}, P_{φ^{(k)}}C_{j_2})_{H^2(C^n)} are small for j_1 ≠ j_2 and are close to 1 if j_1 = j_2. We shall show that the desired φ^{(k)}_c can be obtained by orthonormalising the P_{φ^{(k)}}C_j.

Pick M > 1 such that ‖P_{φ^{(k)}}C_j‖_{X(C^n)} ≤ M for all k ∈ N and 1 ≤ j < n. By the equivalence of norms on finite-dimensional spaces there exists K > 0 such that, for any (n−1)-square matrix T = (t_{ij}),

‖T‖ ≤ K max_{i,j} |t_{ij}|    (2.10)

(here ‖·‖ is the operator norm on L(C^{n−1})). Let 0 < ε < 1. Choose k_0 such that k ≥ k_0 implies

‖P_{φ^{(k)}}C_j − P_φC_j‖_{X(C^n)} < ε    (2.11)

and

|(P_{φ^{(k)}}C_j, P_{φ^{(k)}}C_i)_{H^2(C^n)} − δ_{ij}| < ε/K, i, j = 1, ..., n − 1.    (2.12)

Fix k ≥ k_0 and let T : C^{n−1} → P_{φ^{(k)}}C^n be the operator which maps the jth standard basis vector e_j of C^{n−1} to P_{φ^{(k)}}C_j. The matrix of T*T ∈ L(C^{n−1}) is the Gram matrix ((P_{φ^{(k)}}C_j, P_{φ^{(k)}}C_i)), and so by (2.10) and (2.12) we have ‖T*T − I‖ < ε.
Let the polar decomposition of T be T = U(T*T)^{1/2}, so that U = T(T*T)^{−1/2}. Then U is unitary, so that Ue_1, ..., Ue_{n−1} are orthonormal in P_{φ^{(k)}}C^n. Let φ^{(k)}_c be the n × (n−1) matrix with columns Ue_1, ..., Ue_{n−1}. By the remark above, φ^{(k)}_c is inner. By the fact that P_{φ^{(k)}}C^n ⊂ L_{φ^{(k)}}, the columns of φ̄^{(k)}_c are pointwise orthogonal to φ^{(k)}, and hence V^{(k)} = (φ^{(k)} φ̄^{(k)}_c) is unitary-valued on T. Since ‖T*T − I‖ < ε, the quantity ‖Ue_j − P_{φ^{(k)}}C_j‖_{X(C^n)} is small for k ≥ k_0; on combining this inequality with (2.11) we obtain that the jth column of φ^{(k)}_c tends to the jth column of φ_c with respect to the norm of X(C^n). Hence V^{(k)} → V in X(M_{n,n}). Finally, it follows from [P3, Lemma 1.2] that φ^{(k)}_c is co-outer.

Corollary 2.7. Suppose φ is co-outer and V^{(k)} is constructed as in Lemma 2.6. For sufficiently large k, φ^{(k)} is co-outer and so V^{(k)} is thematic.
Proof. By [PY1, Theorem 1.2], det V is constant, hence has zero winding number about 0. Since det V^{(k)} → det V uniformly on T, det V^{(k)} also has zero winding number about 0 for sufficiently large k. Again by [PY1, Theorem 1.2], φ^{(k)} is co-outer.
Lemma 2.8. Let E, F be Banach spaces, let T : E → F be a surjective continuous linear mapping and let x ∈ E, y ∈ F be such that Tx = y. Let ε > 0 and let T′ ∈ L(E, F). There exists δ > 0 such that, whenever ‖T′ − T‖ < δ, the equation T′x′ = y has a solution x′ ∈ E satisfying ‖x − x′‖ < ε.

Proof. We can suppose ‖x‖ = 1. By the Open Mapping Theorem there exists c > 0 such that the ball of radius c in F is contained in the image under T of the unit ball in E. Take δ = min(c/2, cε/2). Then, whenever ‖T′ − T‖ < δ,

‖T′ξ − Tξ‖ ≤ δ‖ξ‖ for all ξ ∈ E,    (2.14)

and thus T′ maps the closed unit ball of E to a superset of the closed ball of radius c/2 in F. Since ‖(T − T′)x‖ < δ, it follows that there exists ξ ∈ E such that ‖ξ‖ < 2δ/c ≤ ε and T′ξ = (T − T′)x. Then x′ := x + ξ has the stated properties: T′x′ = T′x + (T − T′)x = Tx = y and ‖x − x′‖ = ‖ξ‖ < ε.
Lemma 2.9. Let φ, f ∈ X_+(C^n) be such that φ^t f = 1, and let ε > 0. There exists δ > 0 such that, whenever ψ ∈ X_+(C^n) and ‖φ − ψ‖_X < δ, there exists g ∈ X_+(C^n) with ψ^t g = 1 and ‖f − g‖_X < ε.

Proof. Let T = T^X_{φ^t} : X_+(C^n) → X_+, so that Tx = φ^t x for x ∈ X_+(C^n). Then T is a surjective continuous linear mapping and Tf = 1. By Lemma 2.8 there exists δ such that ‖T′ − T‖ < δ implies that the equation T′g = 1 has a solution g ∈ X_+(C^n) satisfying ‖f − g‖_X < ε. If ψ ∈ X_+(C^n) is such that ‖φ − ψ‖_X < δ then ‖T^X_{ψ^t} − T^X_{φ^t}‖ < δ, and so the lemma applies to T′ = T^X_{ψ^t}; that is, there exists g ∈ X_+(C^n) such that ψ^t g = 1 and ‖f − g‖_X < ε.
Proof of Theorem 2.1. We proceed by induction on m.
Let {Φ^{(k)}}_{k≥1} be a sequence of functions in X such that ‖Φ − Φ^{(k)}‖_{X(M_{m,n})} → 0. We shall show that some subsequence of AΦ^{(k)} converges to AΦ in the norm of X: this will suffice to establish the continuity of A at Φ. Let v^{(k)} be a co-outer maximizing vector for the operator H_{Φ^{(k)}} on H^2(C^n). We can take it that the norm of v^{(k)} in H^2(C^n) is 1.

Let Ω be a positively oriented Jordan contour which winds once round the largest eigenvalue t_0² of H*_ΦH_Φ, contains no eigenvalues and encircles no other eigenvalues. It is easy to see from the axioms (A1)-(A5) that the operators H*_{Φ^{(k)}}H_{Φ^{(k)}} converge to H*_ΦH_Φ in the operator norm of X_+(C^n). It follows that for large values of k there are no points of the spectrum of H*_{Φ^{(k)}}H_{Φ^{(k)}} on Ω, so that the corresponding Riesz projections P^{(k)} and P are defined by integrals round Ω and P^{(k)} → P in the operator norm. The vectors Pv^{(k)} belong to the finite-dimensional subspace of maximizing vectors of H*_ΦH_Φ. Therefore there exists a convergent subsequence of the sequence {Pv^{(k)}}_{k≥0}. Without loss of generality we may assume that the sequence {Pv^{(k)}}_{k≥0} converges in X(C^n) to a vector v, which is a maximizing vector of H_Φ.

We also need the other Schmidt vectors corresponding to v and v^{(k)}. We may assume that H_{Φ^{(k)}} ≠ 0 for all k. Let w and w^{(k)} be the corresponding Schmidt vectors. The v^{(k)} are co-outer by choice; the same is true of w^{(k)} for sufficiently large k by Corollary 2.7.

Now let us show that Theorem 2.1 holds when m = 1. In this case w and w^{(k)} are scalar functions in X. By [AAK], |w(z)| = ‖v(z)‖ a.e. on T. By continuity, equality holds at all points of T. By Theorem 2.2, v (and hence also w) is non-zero at every point of the maximal ideal space T of X. Thus 1/w ∈ X. By virtue of the continuity of inversion in Banach algebras we deduce that 1/w^{(k)} ∈ X for sufficiently large k, and 1/w^{(k)} → 1/w in X. Again by [AAK], AΦ and AΦ^{(k)} are given by explicit equations in terms of v, w and v^{(k)}, w^{(k)}, the latter for large k. From these equations it is clear that AΦ^{(k)} → AΦ in X. Thus the case m = 1 is established.

Now consider m > 1 and suppose the theorem true for m − 1. We prove the induction step by block-diagonalisation of Φ − AΦ. Let v, w be as above and let h be the outer factor of v. Once again by [AAK], h is also the outer factor of z̄w̄. It is given explicitly by the formula [H]

h = e^{u+iũ}, where u = log ‖v(·)‖

and ũ is the harmonic conjugate of u, ũ = −i(2P_+ − I)u.
Since v ∈ X_+(C^n) it is clear from axioms (A1) and (A2) that ‖v(·)‖² ∈ X. By Theorem 2.2, ‖v(·)‖² does not vanish on T, and so its spectrum in the Banach algebra X is a compact interval of the positive real numbers. By the analytic functional calculus, u = ½ log ‖v(·)‖² ∈ X. By (A1) we have also ũ ∈ X. Thus h = e^{u+iũ} ∈ X. The above construction also makes it clear that if v^{(k)}, h^{(k)} are the corresponding entities for Φ^{(k)}, so that v^{(k)} → v in X, then h^{(k)} → h in X. Indeed, since P_+ maps X into itself, it follows from the Closed Graph Theorem that P_+ is continuous on X, and hence the Hilbert transform u ↦ ũ is continuous on X.
Note also that since |h| = ‖v(·)‖ is bounded away from zero, h is invertible in X. By Theorem 2.2, v_{(i)} and w_{(i)} are co-outer.
By Lemma 2.6 we can find thematic functions V^{(k)}, W^{(k)}_t such that V^{(k)} → V, W^{(k)}_t → W_t and Q^{(k)} → Q in X. We can do this using a formula for Q which we gave in [PY2, Sec. 2, Remark 3]. Let y_1, y_2 be as in that remark. Then y_1, y_2 ∈ X, and from the fact that the equations (2.16) are consistent (they hold with Q = AΦ) we have y^t_2 v_{(i)} = w^t_{(i)} y_1 (= w^t_{(i)} Q v_{(i)}). The components of v_{(i)} are elements of the Banach algebra X_+. By Theorem 2.2 they do not vanish simultaneously at any point of T, nor (since v_{(i)} is co-outer) do they at any point of D. Hence they do not all belong to any maximal ideal of X_+ (see Lemma 1.1), and so the ideal they generate in X_+ is the whole algebra. Thus there exists f_1 ∈ X_+(C^n) such that f^t_1 v_{(i)} = 1. Likewise there exists f_2 ∈ X_+(C^m) such that f^t_2 w_{(i)} = 1. It is simple to verify that these data give a solution of (2.16).

Now perform a similar construction to obtain Q^{(k)}. Let y^{(k)}_1, y^{(k)}_2 be the corresponding functions for Φ^{(k)}. Then y^{(k)}_1 → y_1 and y^{(k)}_2 → y_2 in X. Apply Lemma 2.9 to f = f_1, φ = v_{(i)}: for any N ∈ N there exists δ_N > 0 such that ‖v_{(i)} − ψ‖_X < δ_N implies that there exists g ∈ X_+(C^n) with g^tψ = 1 and ‖f_1 − g‖_X < 1/N. Define a sequence of integers (k_N) and functions f^{(k_N)}_1 ∈ X_+(C^n) inductively as follows. Let k_1 = 1; given k_N, choose k_{N+1} > k_N with ‖v_{(i)} − v^{(k_{N+1})}_{(i)}‖_X < δ_{N+1}. Then there exists f^{(k_{N+1})}_1 ∈ X_+(C^n) such that (f^{(k_{N+1})}_1)^t v^{(k_{N+1})}_{(i)} = 1 and ‖f_1 − f^{(k_{N+1})}_1‖_X < 1/(N+1). Passing to the subsequence (Φ^{(k_N)}) of (Φ^{(k)}), we may assume that f^{(k)}_1 → f_1 in X, and similarly f^{(k)}_2 → f_2.

Let Ψ and Ψ^{(k)} be the (m−1) × (n−1) functions arising in the block-diagonalisation of Φ − Q and Φ^{(k)} − Q^{(k)}. Then Ψ^{(k)} → Ψ in X(M_{m−1,n−1}). It is shown in [PY1, PY2] that Φ − AΦ is carried by the thematic factors into the block diagonal form (2.18) with diagonal blocks t_0u_0 and Ψ − AΨ, where u_0 is a badly approximable unimodular function. It follows that the superoptimal singular values of Ψ are t_1, ..., t_{m−1} and are non-zero. Furthermore, every thematic factorization of Ψ − AΨ gives rise to one of Φ − AΦ, and hence the indices in any thematic factorization of Ψ − AΨ are all equal to 1. By the inductive hypothesis A is continuous at Ψ, and hence AΨ^{(k)} → AΨ in X(M_{m−1,n−1}). By [PY2], the relations (2.19) express AΦ and AΦ^{(k)} in terms of AΨ and AΨ^{(k)}, and it follows that AΦ^{(k)} → AΦ in X(M_{m,n}).

What if one of the superoptimal singular values t_j of Φ is 0? One can see by considering diagonal examples such as diag{z̄, 0} that it is important whether A is continuous at 0 in (scalar) X, or equivalently whether A is bounded. This is not always so for spaces satisfying (A1) to (A5) (see [P2]), and so the conclusion of Theorem 2.1 does not follow if the condition t_{m−1} ≠ 0 is relaxed. There is one case when it does.
Theorem 2.10. Let X be the Besov space B^1_1 and let Φ ∈ X(M_{m,n}). If Φ − AΦ has a thematic factorization in which the indices corresponding to non-zero superoptimal singular values are all equal to 1, then Φ is a continuity point of the operator A of superoptimal approximation in X(M_{m,n}).
Proof. The fact that this statement is true in the case Φ = O is Theorem 5.6 of [PY1]. Note that X satisfies axioms (A1) to (A5). Let Φ have superoptimal singular values t_0, ..., t_{m−1}. Let r be the number of nonzero superoptimal singular values of Φ: r = inf{j : t_j = 0}. We prove the result by induction on r. As in the proof of Theorem 2.1, let {Φ^{(k)}} be a sequence in X(M_{m,n}) with ‖Φ − Φ^{(k)}‖_X → 0. If r = 0 then t_0 = 0, so that Φ ∈ X_+ and A(Φ + G) = Φ + AG for every G; continuity at Φ therefore follows from the case Φ = O. Hence A is continuous at Φ.

Now consider r ≥ 1 and suppose the assertion holds for r − 1. Since t_0 ≠ 0, the compact operator H_Φ is not zero and so H*_ΦH_Φ has a finite-dimensional eigenspace corresponding to t_0². We now proceed as in the proof of Theorem 2.1: pick Schmidt vectors v, v^{(k)}, w, w^{(k)}, thematic functions V, V^{(k)}, W_t, W^{(k)}_t and L^∞ functions Q, Q^{(k)}, Ψ, Ψ^{(k)} exactly as described above. Once again (2.18) holds, and the indices corresponding to any nonzero superoptimal singular value in any thematic factorization of Ψ − AΨ are all 1. Moreover, the superoptimal singular values of Ψ are t_1, ..., t_{m−1}, so that Ψ has r − 1 nonzero superoptimal singular values. By the inductive hypothesis AΨ^{(k)} → AΨ in X(M_{m−1,n−1}). The relations (2.19) now show that AΦ^{(k)} → AΦ in X(M_{m,n}) as k → ∞. Thus A is continuous at Φ.
Necessary conditions for continuity
It is conceivable that the sufficient condition for continuity of A which we established in Theorem 2.1 is also necessary for functions belonging to a space X satisfying our axioms A1 to A5. We can prove it for square matrix functions whose superoptimal singular values are all nonzero.
Lemma 2.1. Let Φ ∈ X be of type n × n, and let ε > 0. Suppose that all n superoptimal singular values of Φ are nonzero and that A is continuous at Φ with respect to the norm of X. Then there exists Ψ ∈ X such that ‖Φ − Ψ‖_X < ε, all n superoptimal singular values of Ψ are nonzero and all n indices of Ψ are equal to 1.
Proof. Since A is continuous at Φ the same is true for the mapping G ↦ det(G − AG), which maps X(M_{n,n}) to the space of constant functions in X; it maps G to the product of the superoptimal singular values of G. The latter mapping is nonzero at Φ, by hypothesis, and hence there exists ε_1 > 0 such that the product of the superoptimal singular values of G is nonzero whenever ‖Φ − G‖_X < ε_1. It will therefore suffice to prove by induction on n the following Assertion: Let Φ ∈ X be of type n × n, and let ε, ε_1 > 0. Suppose that all n superoptimal singular values of G are nonzero whenever ‖Φ − G‖_X < ε_1. Then there exists Ψ ∈ X such that ‖Φ − Ψ‖_X < ε, all n superoptimal singular values of Ψ are nonzero and all n indices of Ψ are equal to 1.
To prove this we show first that there exists Υ ∈ X such that ‖H_Υ‖ > ‖H_Φ‖, ‖Υ − Φ‖_X is arbitrarily small and H_{Υ−Φ} has rank one. Indeed, if H_Φ has maximising vector v, H_Φv = z̄ḡ for some g ∈ H^2 and ζ ∈ D is a point at which v is non-zero, then it suffices to take for Υ a suitable rank-one perturbation of Φ, where η ∈ C^n is a non-zero vector of suitably small norm satisfying η^t g(ζ) > 0; a computation of (H_Υv, z̄ḡ) then shows that ‖H_Υ‖ > ‖H_Φ‖. By choosing η small we can ensure that Υ and Φ are close in any norm, in particular the X norm. Υ thus has the properties claimed.

Since H_Υ is a rank-one perturbation of H_Φ, we have s_1(H_Υ) ≤ s_0(H_Φ) < ‖H_Υ‖, so that the maximising subspace of H_Υ is one-dimensional. Let the superoptimal singular values of Υ be t^♯_j, j ≥ 0. The index of t^♯_0 = s_0(H_Υ) in any thematic factorisation of Υ is 1; for suppose otherwise. Then we have a factorisation in which V, W_t are thematic functions, u is a badly approximable unimodular function and the winding number of u about 0 is less than −1, so that dim Ker T_u > 1. It is easy to see that {Vf : f ∈ Ker T_u} is a space of maximising vectors of H_Υ, and this contradicts the simplicity of the singular value s_0(H_Υ). Thus the index of t^♯_0 is 1; moreover, by [PY3], it does not depend on the choice of thematic factorisation. The case n = 1 of the Assertion is established by the choice of Ψ equal to Υ.

Now consider n > 1 and suppose it true for n − 1. Pick Υ as above with ‖Υ − Φ‖_X < ε/2, and pick a thematic factorization (2.1) of Υ − AΥ, so that, in the notation of (2.4), the block F satisfies ‖F‖_∞ < t^♯_0. Since multiplication is continuous in the normed algebra X(M_{n,n}), there exists K > 1 such that ‖W*GV*‖_X ≤ K‖G‖_X for all G ∈ X(M_{n,n}). Let δ > 0 be so small that Kδ < ε/2. For E ∈ X(M_{n−1,n−1}), let Φ_E denote the function obtained from the factorization of Υ − AΥ on replacing F by E. We claim that, for any E ∈ X(M_{n−1,n−1}) such that ‖F − E‖_X < δ, the superoptimal singular values of E are all nonzero. Here v_{(i)}h is the inner-outer factorization of a maximising vector v of H_Φ (see [PY1, Section 2, or PY2]). This v satisfies the requisite extremal relations; thus AΥ is a best (though typically not a superoptimal) analytic approximation to Φ_E, and (2.2) is a first stage thematic factorization of Φ_E − AΥ. It follows from [PY1, Lemma 2.4] that the superoptimal singular values of E are those of Φ_E, all but the first. However, ‖Φ − Φ_E‖_X < ε_1 for δ small. By hypothesis the superoptimal singular values of Φ_E are nonzero, and hence those of E are also. This establishes the claim.

By the inductive hypothesis there exists G ∈ X(M_{n−1,n−1}) with ‖F − G‖_X < δ such that all superoptimal singular values of G are nonzero and all n − 1 indices of G are 1. Let Ψ = Φ_G. By the above, the superoptimal singular values of Ψ consist of t^♯_0 and those of G, hence are all nonzero. By (2.3), ‖Φ − Ψ‖_X < ε. Any thematic factorisation of G − AG induces one of Ψ − AΨ, where we use the notation (2.4) for V, W. Since the indices of t^♯_0u and G − AG are all 1, so are those of Ψ − AΨ. The Assertion follows by induction.
Theorem 2.2. Let X be a space of functions on T satisfying Axioms (A1) to (A5), let Φ ∈ X be of type n × n and suppose that the superoptimal singular values of Φ are all nonzero. If A is continuous at Φ then all indices in any thematic factorisation of Φ − AΦ are equal to 1.
Proof. Thematic functions have constant determinant [PY1, Theorem 1.2]. Hence det(Φ − AΦ) is a function of nonzero constant modulus on T whose winding number about 0 is the sum of the indices in any thematic factorisation of Φ − AΦ. Thus the winding number is n if and only if all the indices in any thematic factorisation are equal to 1. By Lemma 2.1 and the continuity of A at Φ, Φ − AΦ is the limit in the norm of X of a sequence of functions Ψ − AΨ in which all indices are defined and equal to 1, hence such that det(Ψ − AΨ) has winding number n. It follows that det(Φ − AΦ) has winding number n.
Remark. The proof shows a slightly stronger statement: if A is continuous at Φ as a mapping from X to BMO (which is a weaker hypothesis than continuity from X to X) then the same conclusion holds.
As we mentioned in our discussion of sufficiency, continuity of A at functions which have some superoptimal singular value equal to zero is related to the boundedness properties of scalar A on X.
Theorem 2.3. Let X be one of the Besov spaces B^s_p, s > 1/p, or the Hölder-Zygmund spaces λ_α, Λ_α, α > 0. Then A is discontinuous at any matrix-valued function in X which has a zero superoptimal singular value.
Proof. It is shown in [P2] that A is unbounded on these spaces. Let Φ ∈ X(M_{m,n}). We can suppose that m ≤ n. Let t_r = 0 for some r ≤ m, but t_j ≠ 0 for j < r. We suppose r ≥ 1: the modifications for the case r = 0 (i.e. Φ ∈ H^∞) are easy. Consider a thematic factorisation of Φ − AΦ as in (2.1). By [P1], for 0 < δ < t_0 we may pick a scalar function ψ_δ ∈ X such that ‖ψ_δ‖_X < δ and ‖Aψ_δ‖_X ≥ 1. Let Φ_δ be obtained from this factorisation by replacing the zero diagonal entry in position r by ψ_δ. Clearly ‖Φ − Φ_δ‖_X → 0 as δ → 0. If we solve the superoptimal analytic approximation problem for Φ_δ by successive diagonalisation then for the first r stages it proceeds exactly as for Φ (a detailed proof of this statement would be along the same lines as the proof of Lemma 2.1). It follows that AΦ_δ differs from AΦ by a term involving Aψ_δ. Since ‖Aψ_δ‖_X ≥ 1, it cannot be true that AΦ_δ → AΦ in X. Thus A is discontinuous on X at Φ.
Continuity of superoptimal singular values
The first superoptimal singular value t_0 of Φ ∈ H^∞ + C is equal to ‖H_Φ‖, hence is continuous with respect to the L^∞ norm. Is the same true for the other superoptimal singular values? Or at least with respect to one of the norms ‖·‖_X discussed above? We will not venture a guess as to the answer to this question, but we can at least prove continuity with respect to ‖·‖_X under the same hypothesis as in Theorem 2.1. For Φ ∈ H^∞ + C we shall denote by t_j(Φ) the jth superoptimal singular value of Φ.
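The continuity of t_0 can even be made quantitative; the following added one-line estimate uses only the Nehari bound ‖H_G‖ ≤ ‖G‖_∞:

```latex
% Since t_0(\Phi) = \|H_\Phi\| and the Hankel operator depends linearly on its symbol,
\[ \bigl| t_0(\Phi) - t_0(\Psi) \bigr| = \bigl| \, \|H_\Phi\| - \|H_\Psi\| \, \bigr| \le \|H_{\Phi - \Psi}\| \le \|\Phi - \Psi\|_{L^\infty} , \]
% so t_0 is in fact Lipschitz with respect to the L^\infty norm.
```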
Lemma 3.1. Let X ⊂ H^∞ + C be a normed algebra of functions on T whose norm majorises the L^∞ norm and which is invariant under A. If Φ ∈ X is a point of continuity of A in X then Φ is also a point of continuity of each of the superoptimal singular values t_j(·) with respect to ‖·‖_X.
Proof. We recall that, for any matrix A of type m × n and any integer p, 2 ≤ p ≤ m, the pth exterior power ∧^pA is defined to be the matrix of type (m choose p) × (n choose p) whose entries are the p × p minors of A. Consider an m × n matrix function G ∈ X, m ≤ n, and any integer p, 2 ≤ p ≤ m. Define (∧^pG)(z) to be ∧^p(G(z)). Since the entries of ∧^pG are polynomials in those of G we have ∧^pG ∈ X and the mapping G ↦ ∧^pG is continuous with respect to the X norms. Thus, if A is continuous at Φ, so is the mapping G ↦ ‖∧^p(G − AG)‖_∞. It is immediate from consideration of thematic factorisations that ‖∧^p(G − AG)‖_∞ equals the product of the first p superoptimal singular values of G. Hence t_0(·), t_0(·)t_1(·), t_0(·)t_1(·)t_2(·), ... are all continuous at Φ. The result now follows from the following simple observation, which is valid for any topological space. If f_0 ≥ f_1 ≥ f_2 ≥ ⋯ ≥ 0 are real-valued functions such that f_0, f_0f_1, f_0f_1f_2, ... are all continuous at a point x then each f_j is continuous at x (consider separately the two cases f_{j−1}(x) ≠ 0 and f_{j−1}(x) = 0).

Theorem 3.2. Let X be a space of functions on T satisfying Axioms (A1) to (A5) and let Φ ∈ X(M_{m,n}), m ≤ n. Suppose that either t_{m−1} ≠ 0 or X is the Besov space B^1_1. If Φ − AΦ has a thematic factorisation with indices corresponding to nonzero superoptimal singular values all equal to 1 then t_j(·) is continuous at Φ with respect to ‖·‖_X for 0 ≤ j < m.
The proof is immediate from Theorems 2.1 and 2.10 and the foregoing Lemma.
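To spell out the exterior-power step in the proof of Lemma 3.1, here is the underlying multilinear-algebra identity (an added illustration):

```latex
% For a matrix A with s-numbers s_0(A) >= s_1(A) >= ..., the singular values of
% the p-th exterior power are the products s_{i_1}(A) ... s_{i_p}(A), i_1 < ... < i_p, so
\[ \|\wedge^p A\| = s_0(A)\, s_1(A) \cdots s_{p-1}(A) . \]
% Applying this pointwise to G - AG, whose singular values are constant a.e. on T,
\[ \|\wedge^p (G - AG)\|_\infty = t_0(G)\, t_1(G) \cdots t_{p-1}(G) , \]
% and whenever t_{j-1} \ne 0 one recovers the individual value as the quotient
\[ t_j = \frac{t_0 t_1 \cdots t_j}{t_0 t_1 \cdots t_{j-1}} . \]
```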
Impact of Supine Radiographs to Assess Curve Flexibility in the Treatment of Adolescent Idiopathic Scoliosis
Study Design: Retrospective cohort study. Objectives: The purpose of the study is to evaluate the role of supine radiographs in determining flexibility of thoracic and thoracolumbar curves. Methods: Ninety operative AIS patients with 2-year follow-up from a single institution were queried and classified into MT structural and TL structural groups. Equations were derived using linear regression to compute cut-off values for MT and TL curves. Thresholds were externally validated in a separate database of 60 AIS patients, and positive and negative predictive values were determined for each curve. Results: MT supine values were highly predictive of MT side-bending values (TL group: 0.63, P < 0.001; MT group: 0.66, P = 0.006). Similarly, TL supine values were highly predictive of TL side-bending values (TL group: 0.56, P = 0.001; MT group: 0.68, P = 0.001). From our derived equations, MT and TL curves were considered structural on supine films if they were ≥ 30° and ≥ 35°, respectively. Contingency table analysis of the external validity sample showed that supine films were highly predictive of structurality of the MT curve (Sensitivity = 0.91, PPV = 0.95, NPV = 0.81) and the TL curve (Sensitivity = 0.77, PPV = 0.81, NPV = 0.94). ROC analysis revealed that the area under the curve for MT structurality from supine films was 0.931 (SEM: 0.03, CI: 0.86-0.99, P < 0.001) and for TL structurality from supine films was 0.922 (SEM: 0.03, CI: 0.84-0.98, P < 0.001). Conclusions: A single preoperative supine radiograph is highly predictive of side-bending radiographs to assess curve flexibility in AIS. A cut-off of ≥ 30° for MT and ≥ 35° for TL curves on supine radiographs can determine curve structurality.
Introduction
Adolescent idiopathic scoliosis (AIS) is a tri-planar deformity of the spine and the rib cage. Surgical intervention is usually indicated if the primary curve exceeds 45°-50°, because the long-term natural history of untreated idiopathic scoliosis dictates that such curves progress even after reaching skeletal maturity. [1][2][3][4] Surgery involves instrumentation and fusion of the involved spinal segments with a primary goal of halting curve progression during periods of growth and achieving a stable, well-balanced spine in the coronal and sagittal planes with minimum levels of fusion. Selective thoracic or lumbar fusion has been shown to result in favorable long-term outcomes. [5][6][7][8] Conventionally, a decision to proceed with selective fusion depends on the ratio of the thoracic and lumbar curve magnitudes and their respective flexibility as measured on supine side-bending radiographs. 6,9 This is because, if performed in improperly selected patients, selective fusion can lead to decompensation of the unfused curves and progression of deformity.

Flexibility of a spinal curve has traditionally been determined by supine side-bending radiographs. [10][11][12] As has been previously reported, spinal flexibility differs in the proximal thoracic (PT), main thoracic (MT) and thoracolumbar (TL) regions. 11,[13][14][15] Klepps et al reported that in order to achieve maximal preoperative correction, thoracic fulcrum-bending radiographs should be obtained for evaluating main thoracic curves, whereas side-bending radiographs should continue to be used for evaluating both upper thoracic and thoracolumbar/lumbar curves. 11 The major drawback of supine side-bending radiographs is that they are technician- and patient-dependent and can yield variable results depending on the curve type, apex of the deformity, and age of the patient. 16 Various modalities to determine flexibility have been studied, showing variable flexibility, including side-bending, push-prone, traction, and fulcrum-bending radiographs. Therefore, it is important to choose one method that is simple, reproducible, not technician- or patient-dependent, and, most importantly, reliable to assess spine flexibility. Although previous studies have reported the effectiveness of a single supine radiograph in determining curve flexibility, its current applicability is relatively unknown. 10,17 The purpose of this study is twofold: 1) to evaluate the role of a single supine radiograph in determining flexibility in structural MT and TL curves, and 2) to establish cut-off values in supine radiographs that determine the structurality of a curve separately for structural MT and TL curves.
Flexibility of a spinal curve has been traditionally determined by supine side-bending radiographs. [10][11][12] As has been previously reported, spinal flexibility differs in the proximal thoracic (PT), main thoracic (MT) and thoracolumbar (TL) regions. 11,[13][14][15] Klepps et al reported that in order to achieve maximal preoperative correction, thoracic fulcrum-bending radiographs should be obtained for evaluating main thoracic curves, whereas side-bending radiographs should continue to be used for evaluating both upper thoracic and thoracolumbar/ lumbar curves. 11 The major draw back of supine side-bending radiographs is that it is technician-and patient-dependent and can yield variable results depending on the curve type, apex of the deformity, and age of the patient. 16 Various modalities to determine flexibility have been studied, showing variable flexibility, including side bending, pushprone, traction, and fulcrum bending. Therefore, it is important to choose one method that is simple, reproducible, not technician-or patient-dependent, and most importantly, reliable to assess spine flexibility. Although previous studies have reported the effectiveness of a single supine radiograph in determining curve flexibility, its current applicability is relatively unknown. 10,17 The purpose of this study is twofold: 1) to evaluate the role of a single supine radiograph in determining flexibility in structural MT and TL curves, and 2) to establish cut-off values in supine radiographs that determine the structurality of a curve separately for structural MT and TL curves.
Data Collection
An IRB-approved retrospective review of data from operated AIS patients with minimum 2-year follow-up from a single institution was carried out. Patients were included if they had Lenke curve types of 1, 2, 5, or 6, along with availability of preoperative standing anteroposterior (AP) and lateral radiographs, and supine side-bending and supine AP radiographs. Non-AIS curve types, atypical curves and patients with previous fusions were excluded. Data collected included demographic parameters (age, sex, BMI, Risser) and radiographic measurements. Coronal Cobb angles were measured for PT, MT, and TL curves separately on standing, supine, and side-bending radiographs. Similarly, sagittal measurements including T2-T5 kyphosis, T5-T12 kyphosis, T10-L2 kyphosis and lumbar lordosis (L1-L5) were measured. Radiographic measurements were performed by dedicated spine research fellows and any discrepancies were confirmed by an attending spine surgeon. For supine side-bending radiographs, patients were instructed to relax, and left and right maximal passive side-bending was then performed by supervised, trained radiographic technicians. Based on the Lenke classification, patients were divided into 2 groups: MT Structural (Lenke types 1 and 2) and TL Structural (Lenke types 5 and 6).
Statistical Analysis
Pearson correlation coefficients were determined between standing, supine, and side-bending radiographs for MT and TL curves in each group. For the TL group, linear regression modeling was used to derive an equation demonstrating the relationship between TL Cobb angles on supine films and TL Cobb angles on side-bending films. Similarly, for the MT group, linear regression modeling was used to derive an equation demonstrating the relationship between MT Cobb angles on supine films and MT Cobb angles on side-bending films. These equations were then used to establish cut-off values for determining curve structurality on supine radiographs by computing the supine Cobb angle when the side-bending Cobb angle was 25°. A value of 25° was chosen as this is the current gold-standard value on side-bending radiographs to determine structurality of any curve according to the Lenke classification. 12 Using a separate set of 60 AIS patients from our institution, these supine thresholds were externally validated via comparison to the Lenke classification, in which a structural curve was defined as ≥ 25° on side-bending radiographs. Receiver operating characteristic (ROC) curves were obtained, and cross tabulation was performed to determine positive and negative predictive values for each structural curve type. P-values of < 0.05 were considered significant. All statistics were conducted with SPSS 25 software.
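The cut-off derivation described above can be sketched as follows; the paired angles and the resulting coefficients in this snippet are hypothetical placeholders, not the study's data:

```python
# Illustrative sketch (not the study's actual code): derive a supine cut-off
# by fitting a regression of side-bending on supine Cobb angles and then
# inverting the fit at the 25-degree Lenke threshold.
import numpy as np
from scipy import stats

# Hypothetical paired measurements (degrees): supine vs. side-bending Cobb angles
supine = np.array([22, 28, 31, 35, 40, 44, 50, 55], dtype=float)
side_bending = np.array([14, 20, 24, 26, 33, 36, 41, 47], dtype=float)

# Fit side_bending = slope * supine + intercept
res = stats.linregress(supine, side_bending)

# Invert the fitted line at the 25-degree side-bending threshold used by the
# Lenke classification to obtain the corresponding supine cut-off.
threshold_sb = 25.0
supine_cutoff = (threshold_sb - res.intercept) / res.slope
print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, "
      f"supine cut-off = {supine_cutoff:.1f} degrees")
```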
Results
Ninety patients were included in the study with a mean age of 15.5 years. For patients in the TL group, MT and TL Cobb angles on supine radiographs were highly correlated with MT and TL side-bending values, respectively (MT Cobb: r = 0.64, P < 0.001; TL Cobb: r = 0.56, P = 0.001, Table 1). Using linear regression, an equation relating the TL Cobb angle on supine films (TL supine) to the TL Cobb angle on side-bending films (TL SB) was derived. Similarly, for patients in the MT group, MT and TL Cobb angles on supine radiographs were highly correlated with side-bending values (MT Cobb: r = 0.66, P = 0.006; TL Cobb: r = 0.68, P = 0.001, Table 2). Using linear regression, an equation relating the MT Cobb angle on supine films (MT supine) to the MT Cobb angle on side-bending films (MT SB) was derived. Based on these derived equations using a side-bending value of 25°, MT and TL curves are considered structural when Cobb angles on supine radiographs are ≥ 30° and ≥ 35°, respectively. For predicting the structural nature of TL curves, the AUC was 0.922 (SEM: 0.03, CI: 0.84-0.98, P < 0.001, Figure 1). For predicting the structural nature of MT curves, the AUC was 0.931 (SEM: 0.03, CI: 0.86-0.99, P < 0.001, Figure 2).
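The validation metrics reported here reduce to standard 2 × 2 contingency-table arithmetic; the counts in the sketch below are hypothetical, since only the derived values are reported:

```python
# Hedged sketch of the contingency-table arithmetic behind the validation step.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),   # structural curves correctly flagged
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # flagged curves that are truly structural
        "npv": tn / (tn + fn),
    }

# Example: supine >= 30 degrees as the test, side-bending >= 25 degrees as truth
# (hypothetical counts for a 60-patient validation sample)
print(diagnostic_metrics(tp=39, fp=2, fn=4, tn=15))
```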
Discussion
The primary goal of AIS surgery is maximal correction of the tri-planar deformity with the least number of fused segments, leaving more mobile segments. It involves several correction maneuvers using segmental instrumentation. Selective fusion has been adopted by many surgeons, especially after spontaneous correction of the unfused curves was described in several studies. 9,18 Assessment of the flexibility of curves in AIS is essential to determine the structural nature of curves and hence to select the proper instrumented levels. Furthermore, evaluating the flexibility of curves preoperatively helps to anticipate the response of curves to surgical correction and to avoid decompensation. 10 Spine flexibility represents the ratio between the displacement of the spine and the force applied to produce this change. 19 Clinically, it is defined as the percentage of change in Cobb angle between the upright standing position and the corrected posture or under reduction force. 20 Various methods have been used to assess the flexibility of compensatory curves. The use of AP side-bending radiographs is a standard method of assessing flexibility. The patient is asked to make a maximal effort when bending into and then away from the separate curves. Since this involves the patient's voluntary effort and technician guidance, it is subject to inconsistency. 10 Luk et al have advocated the fulcrum-bending method for flexibility evaluation, as this technique has been shown to be predictive of curve correction through posterior techniques. 21 Push-prone radiographs are also a good method for assessing flexibility, especially for patients who are unable to make a full bending effort. 22 However, these techniques either involve the patient's voluntary effort or the physician's/technician's skill and experience, and are therefore subject to inconsistency. In addition, the ability of these preoperative radiographs to predict intraoperative correction is limited. The reduction force applied during these methods does not account for the intraoperative correction obtained through anesthesia, release of soft tissue, gravity, new segmental instrumentation technology, and powerful derotation and translation techniques. 10 Institutions implement different protocols in performing these examinations, and this prevents physicians from different hospitals from comparing results and building a formidable multi-institution database. 23 The supine radiograph, on the other hand, is a standard technique that is independent of patient cooperation and operator skill. It exposes the patient to less radiation, since it involves only a single film to evaluate all the curves, compared with the previously mentioned techniques. 11,21,22,24 In a study on AIS patients, Cheh et al showed that a single preoperative supine radiograph was highly predictive of side-bending films and even showed a better negative predictive value for determining structurality of the minor curves compared to side-bending radiographs. 10 Our study is similar in methodology to that of Cheh et al. In addition, we have performed an external validation of our initial results on a separate database to improve the accuracy of our results. In our study, we demonstrated a correlation of magnitudes of thoracic and lumbar curves between supine and side-bending films for structural MT and TL curve types. Using this correlation, we calculated new cut-off Cobb angle values on supine radiographs (30° for structural MT and 35° for structural TL).
These values were externally validated in a separate database, and we found that for structural MT curves, the PPV was 0.95 and the NPV was 0.81 to predict structurality of MT curves. For structural TL curves, the PPV was 0.81 and the NPV was 0.94 to predict structurality of TL curves. Furthermore, ROC analysis of supine radiographs as a tool to predict the structural nature of the curves revealed high statistical significance and efficiency. The area under the curve for supine films predicting the structural nature of curves was 0.931 for MT curves and 0.922 for TL curves.
There is no single way to assess the structural nature of a spinal curve in AIS. Our study does not propose that supine radiographs replace side-bending radiographs for determining flexibility. However, supine radiographs are an excellent modality to assess whether a curve is structural or not. The question arises of how exactly to define the role of flexibility x-rays in preoperative planning in AIS patients in today's era of powerful modern instrumentation and derotation techniques. The authors believe that the role of evaluating bending films is mainly twofold: 1) to evaluate the disc below the lower end vertebra in structural TL curves, especially when there is a dilemma in choosing the lowest instrumented vertebra (LIV) as L3 or L4, and 2) in severe long-standing rigid curves, to quantify the amount of flexibility, which cannot be determined by supine radiographs. Moreover, since thoracic and lumbar curves bend differently, a single cut-off value to determine their structurality is suboptimal.
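For reference, the flexibility index invoked in this discussion has the standard form below (restated for clarity; this is the conventional definition, not a formula derived from our data):

```latex
\[ \text{Flexibility (\%)} \;=\; \frac{\mathrm{Cobb}_{\text{standing}} - \mathrm{Cobb}_{\text{bending}}}{\mathrm{Cobb}_{\text{standing}}} \times 100 . \]
```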
Our study is not without limitations. Our sample size is relatively small, especially for the purpose of external validation. Although the supine radiograph is a simple, standard method that can be used for preoperative planning of surgical instrumentation, it cannot quantify flexibility. Lastly, our study does not compare the use of supine radiographs with other established techniques like push-prone or traction under general anesthesia for predicting curve flexibility. Nonetheless, supine radiographs show excellent predictability for determining the structural nature of a curve in AIS patients.
Conclusions
Our study demonstrates that a single preoperative supine radiograph is highly predictive of side-bending radiographs in assessing curve flexibility in patients with AIS. Furthermore, supine radiographs can determine curve structurality using cut-off values of 30° for MT curves and 35° for TL curves.
Antitumor‐specific T‐cell responses induced by oncolytic adenovirus ONCOS‐102 (AdV5/3‐D24‐GM‐CSF) in peritoneal mesothelioma mouse model
Oncolytic adenoviral immunotherapy activates the innate immune system with subsequent induction of adaptive tumor-specific immune responses to fight cancer. Hence, oncolytic viruses not only eradicate cancer cells by direct lysis but also generate an antitumor immune response, allowing for long-lasting cancer control and tumor reduction. Their therapeutic effect can be further enhanced by arming the oncolytic adenovirus with costimulatory transgenes and/or coadministration with other antitumor therapies. ONCOS-102 has already been found to be well tolerated and efficacious against some types of treatment-refractory tumors, including mesothelin-positive ovarian cancer (NCT01598129). It induced local and systemic CD8+ T-cell immunity and upregulated programmed death ligand 1. These results strongly advocate the use of ONCOS-102 in combination with other therapeutic strategies in advanced and refractory tumors, especially those expressing the mesothelin antigen. The in vivo work presented herein describes the ability of the oncolytic adenovirus ONCOS-102 to induce mesothelin-specific T-cells after administration of the virus in Bagg albino (BALB/c) mice with mesothelin-positive tumors. We also demonstrate the effectiveness of the interferon-γ (IFN-γ) enzyme-linked immunospot (ELISPOT) assay to detect the induction of T-cells recognizing mesothelin, hexon, and E1A antigens in ONCOS-102-treated mesothelioma-bearing BALB/c mice. Thus, the ELISPOT assay could be useful to monitor the progress of therapy with ONCOS-102.
Mesothelin is overexpressed in mesothelioma, ovarian, and lung adenocarcinoma. Therefore, it could potentially be used as a tumor marker or as an antigenic target of vaccines. 4 No effective therapeutic modalities exist for malignant mesothelioma apart from surgical resection in 10% to 15% of the patients. In the advanced disease, chemotherapy has a marginal effect and the prognosis is extremely poor. 5 Hence, there is an urgent need for new and more effective therapies. Oncolytic adenoviruses are promising immunotherapeutic agents for advanced and treatment-refractory cancer patients. Their antitumor activity is based on the direct lysis of cancer cells and the induction of systemic antitumor immunity. [6][7][8][9] Oncolysis leads to the release of tumor epitopes that can be processed by antigen-presenting cells [10][11][12][13][14][15][16][17][18] to activate antigen-specific CD4+ and CD8+ T-cell responses. Immunogenic cell death leads to changes in cell surface structure, such as exposure of calreticulin on the outer plasma membrane and subsequent release of high-mobility group box 1 protein and adenosine triphosphate. 19 Activated CD8+ T-cells can expand into cytotoxic effector cells and infiltrate tumors, where they mediate antitumor immunity after antigen recognition.
Granulocyte-macrophage colony-stimulating factor (GM-CSF) mediates antitumor effects by mobilizing and maturing dendritic cells as well as increasing the activity of cytotoxic T-cells. [15][16][17] Systemic use of recombinant GM-CSF is associated with well-
Splenocytes were isolated to determine counts of T-cells responding to mesothelin, human adenovirus 5 E1A, and hexon peptides by secretion of IFN-γ. Harvested splenocytes were stimulated with peptide pools spanning the complete murine mesothelin protein sequence and the human adenovirus 5 E1A and hexon proteins. IFN-γ production by T-cells was evaluated using an IFN-γ ELISPOT kit (Abcam, Cambridge, UK). A single-cell suspension of 2.5 × 10^5 splenocytes/well was plated in RPMI medium including 200 ng of peptide. After incubating overnight at 37°C and 5% CO2, plates were washed and stained with biotinylated anti-mouse IFN-γ and incubated for 2 hours, followed by a streptavidin-enzyme conjugate. The spots were counted using an ELISpot Reader (AID, Strassberg, Germany).
Statistical analysis
Statistical significance was analyzed by using the Mann-Whitney test.
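A minimal sketch of this comparison on hypothetical spot counts (assuming SciPy is available; the values below are illustrative, not study data):

```python
# Mann-Whitney U test on hypothetical IFN-gamma ELISPOT spot counts
# (spots per 2.5e5 splenocytes) for mock- vs. ONCOS-102-treated mice.
from scipy.stats import mannwhitneyu

mock_treated = [3, 5, 2, 4, 6]
oncos_treated = [42, 57, 38, 61, 49]

stat, p_value = mannwhitneyu(mock_treated, oncos_treated, alternative="two-sided")
print(f"U={stat}, two-sided P={p_value:.4f}")  # significant when P <= .05
```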
All statistical analyses, calculations, and tests were performed using GraphPad Prism 7 (GraphPad Software, San Diego, CA). Results are presented as mean ± standard deviation. All P values were two-sided and considered statistically significant when ≤ .05.

Oncolytic adenoviruses are immunotherapeutic agents with the ability to prime and boost immune responses, leading to the development of anticancer immunity. 15 ONCOS-102 is an engineered adenovirus (Ad5/3) that codes for GM-CSF. Its chimeric 5/3 capsid contains a fiber with a C-terminal knob derived from serotype 3, which binds to the tumor-associated desmoglein 2 receptor instead of the coxsackie-adenovirus receptor, which is found to be downregulated in advanced tumors. 7 The 24-bp deletion in the Rb binding site of the E1A gene causes the virus to replicate selectively in cells with p16-Rb pathway defects, which includes most cancers. 20 ONCOS-102 causes immunogenic cancer cell death. 19 T-cell epitopes have been identified that could be used to prime antigen-specific T-cells and challenge adoptively transferred T-cells in vivo. These epitopes span conserved regions of the hexon protein and would be useful to monitor immune response before and after immunotherapy. 24 As the capsid of ONCOS-102 contains hexon protein, which plays an important role in virus entry into cells, and E1A protein, which binds to the pRb/p300 family of histone acetyltransferases and induces p53-dependent apoptosis in cancer cells, 5 they were used to stimulate the splenocytes in the ELISPOT assay.
Mice treated with the virus generated specific T-cells against hexon and E1A antigens, as can be seen in Figure 1, in which the signal was detected only from ONCOS-102-treated mice. This is not surprising, as viral capsid components such as hexon, 22
Robotic-assisted minimally invasive Ivor Lewis esophagectomy within the prospective multicenter German da Vinci Xi registry trial
Abstract Purpose Robotic-assisted minimally invasive esophagectomy (RAMIE) has become one standard approach for the operative treatment of esophageal tumors at specialized centers. Here, we report the results of a prospective multicenter registry for standardized RAMIE. Methods The German da Vinci Xi registry trial included all consecutive patients who underwent RAMIE at five tertiary university centers between Oct 17, 2017, and Jun 5, 2020. RAMIE was performed according to a standard technique using an intrathoracic circular stapled esophagogastrostomy. Results A total of 220 patients were included. The median age was 64 years. Total minimally invasive RAMIE was accomplished in 85.9%; hybrid resection with robotic-assisted thoracic approach was accomplished in an additional 11.4%. A circular stapler size of ≥28 mm was used in 84%, and the median blood loss and operative time were 200 (IQR: 80–400) ml and 425 (IQR: 335–527) min, respectively. The rate of anastomotic leakage was 13.2% (n=29), whereas the two centers with >70 cases each had rates of 7.0% and 12.0%. Pneumonia occurred in 19.5% of patients, and the 90-day mortality was 3.6%. Cumulative sum analysis of the operative time indicated the end of the learning curve after 22 cases. Conclusions High-quality multicenter registry data confirm that RAMIE is a safe procedure and can be reproduced with acceptable leak rates in a multicenter setting. The learning curve is comparably low for experienced robotic surgeons. Supplementary Information The online version contains supplementary material available at 10.1007/s00423-022-02520-w.
Introduction
Currently, esophagectomy within a multimodal treatment plan is the preferred management of patients with resectable esophageal cancer [16]. The introduction of minimally invasive techniques for esophagectomy has revolutionized surgical treatment, leading to lower perioperative morbidity and better quality of life [18,21,26]. Hybrid laparoscopic/thoracoscopic minimally invasive esophagectomy (MIE) and robotic-assisted minimally invasive esophagectomy (RAMIE) have both led to a significant reduction in pulmonary infections and postoperative pain in randomized clinical trials [2,19,34] while maintaining oncologic radicality [24]. Some key advantages of the robotic-assisted technique, especially during transthoracic resection and reconstruction, are an increased range of motion of the instruments within the rigid thoracic cage, the optional use of three arms, and an improved surgical view with standard 3DHD visualization [13]. Although several techniques of reconstruction after MIE have been reported, the majority of European centers favor minimally invasive intrathoracic esophagogastrostomy [14,35]. The use of a circular stapler appears to be advantageous with regard to the anastomotic leak (AL) rate, although this question has not yet been conclusively clarified [5]. Experienced centers have published the first larger single-center reports of RAMIE with excellent oncological results and low mortality rates of 1-3% [25,33].
The German da Vinci Xi Registry trial was set up in 2017 in five tertiary German university centers. The aim was multicenter prospectively monitored data collection to assess the outcomes of robotic-assisted abdominal and thoracic surgery. After individual experiences in the initial period, the centers agreed on a basic consensus technique for RAMIE, which is based on an Ivor Lewis reconstruction with intrathoracic circular stapled end-to-side esophagogastrostomy [7].
The aim of the present study was to demonstrate the safety, learning curve, and short-term results of this prospective multicenter RAMIE program.
Study design
The study was designed as a multicenter prospective registry investigator-initiated trial. Five German university centers at Berlin, Dresden, Hamburg, Heidelberg, and Kiel participated in the prospective German da Vinci Xi Registry trial. The trial was in compliance with the ethical principles of Helsinki, and the protocol was approved by the responsible independent ethics committees of all participating centers, i.e., the local ethics committee at TU Dresden (EK296072017), Christian Albrechts University in Kiel (AZD421/13, D451/19), Heidelberg University Faculty of Medicine (S-341/2017), Charité University Hospital in Berlin (EA4/084/17) and University Medical Center Hamburg-Eppendorf (PV5591).
The study was supported with a research grant by Intuitive (Sunnyvale, CA, US).
Patients
All consecutive patients who underwent elective RAMIE with an intrathoracic circular stapled end-to-side esophagogastrostomy at each study site between Oct 15, 2017, and Jun 5, 2020, were considered for inclusion if they met the following criteria: age ≥18 years and written informed consent. All patients with esophageal cancer or an indication for esophagectomy who were suitable for a minimally invasive approach using gastric tube reconstruction were considered for the primary robotic-assisted minimally invasive approach. There were no standard selection criteria for avoiding the robotic approach. The da Vinci Xi robotic surgical systems were available for all scheduled patients without limitations. One center was delayed in starting RAMIE because of a concurrent randomized trial of open versus total minimally invasive laparothoracoscopic esophagectomy. Surgeons' and patients' preferences as well as availability of the da Vinci robotic system were taken into account regarding the choice of the procedure in the latter center. The exclusion criteria were emergency operations, patients with a survival prognosis of less than 1 month, operations for which the da Vinci Xi system was not approved (according to the manufacturer), and pregnancy. The study was managed and monitored by the local study centers at the participating sites. Intraoperative documentation was performed by authorized senior surgeons.
Surgical technique
The basic steps of the surgical technique used for all operations were published recently [7,9]. After such consensus of the basic surgical steps was obtained, a proctoring during initial operations was conducted by the most experienced surgeons [14]. All operations were performed on a da Vinci Xi surgical system (Intuitive, Sunnyvale, CA, US). Briefly, the patients were placed in a supine and 15°-20° reverse Trendelenburg position for the abdominal part. The four da Vinci Xi trocars were inserted on a horizontal line usually above the umbilicus supplemented with one additional assistant trocar. Lymphadenectomy included lymph node stations along the hepatic and splenic arteries centrally toward the left gastric artery and the celiac trunk. The left gastric vessels were divided using clips, and the lymph node package ventral to the aorta at the hiatus was resected en bloc with the esophagogastric junction. The gastric tube was trimmed to a semicircumference of 45 mm using linear stapling devices 45 or 60 mm in length beginning at the angular notch. The semicircumference of 45 mm was calculated to allow a circular anastomosis with a 28- to 29-mm diameter. Indocyanine green (ICG) fluorescence analysis was routinely performed to define the optimal perfusion margin of the gastric tube. For the thoracic part, the patients were turned into a left lateral to semiprone position, and the right lung was not ventilated. The four da Vinci Xi trocars were placed in a banana-shaped fashion between the fourth and tenth intercostal spaces of the right hemithorax. One additional assistant trocar was used. After docking, the azygos vein was divided, and the esophagus was dissected, including the adjacent paraaortal and mediastinal lymph node stations. The thoracic duct was identified and clipped. If preoperative imaging analysis indicated lymph node metastases along the recurrent nerves, an extended paratracheal lymphadenectomy was performed, e.g., lymph node level two. After complete resection, the esophagus was divided and closed with a purse-string suture. The proximal resection margin of the esophagus was assessed by intraoperative frozen section analysis. At this point, the esophageal resection specimen was extracted using a minithoracotomy at a trocar site, and the stapler anvil was inserted into the oral esophageal stump. Consecutively, the end-to-side esophagogastrostomy was stapled through the minithoracotomy, and the proximal part of the stomach was stapled 2 cm from the esophagogastrostomy using a linear stapling device. The esophagogastrostomy anastomosis was reinforced using either a robotic-assisted running suture or an omental wrap or both. A nasogastric tube was intraoperatively placed distally to the esophagogastrostomy, and one to two chest tubes were placed according to the SOP at each center at the time of operation.
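As a quick check on the stated dimensions (our arithmetic, not a calculation reported by the authors): a gastric tube trimmed to a semicircumference of 45 mm has a full circumference of 90 mm, corresponding to a diameter of

$$d = \frac{90\ \text{mm}}{\pi} \approx 28.6\ \text{mm},$$

which is consistent with the 28- to 29-mm circular anastomosis targeted above.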
Definition of postoperative morbidity
Postoperative morbidity was assessed using the classification of postoperative complications according to Clavien-Dindo [6], with substantiation by the "Japan Clinical Oncology Group" [12]. The following complications were considered: anastomotic leak (AL), pneumonia, respiratory insufficiency, readmission to the intensive care unit (ICU), disorder of gastrointestinal passage, recurrent laryngeal nerve palsy, cardiac arrhythmia, chylothorax, postoperative hemorrhage, wound dehiscence, surgical site infection, intrathoracic fluid collections, mediastinitis, empyema/pyothorax, thromboembolic events, acute renal failure, cardiac decompensation, bacteremia/sepsis, and multiple organ failure.
Based on the recommendations of the Esophagectomy Complications Consensus Group (ECCG), AL was defined as a full-thickness gastrointestinal defect involving the esophagus, anastomosis, stapler line, or conduit, irrespective of presentation or method of identification [17]. Diagnosis of AL was made on the basis of contrast leakage on CT, endoscopy, or both. For the definition of pneumonia, at least one major and two minor criteria had to be fulfilled (major: new infiltrate in chest imaging; minor: fever >38.5°C or hypothermia <36.5°C, newly elevated or persistently high infection markers (leucocytes/C-reactive protein/procalcitonin), productive cough with sputum, pathogen detection) [11].
Textbook outcome was defined as R0 resection with no conversion, a lymph node yield ≥15, no complications of Clavien-Dindo ≥3a, no reinterventions or reoperations, no readmission to the ICU, length of hospital stay ≤21 days, no hospital readmission, and no mortality within 90 days postoperatively [14].
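For clarity, this composite definition can be read as a conjunction of the nine criteria. The sketch below is purely illustrative; the field names are invented, reinterventions and reoperations are collapsed into a single flag, and Clavien-Dindo grading is reduced to one boolean:

```python
from dataclasses import dataclass

@dataclass
class Case:
    r0_resection: bool            # microscopically tumor-free margins
    conversion: bool              # conversion to open surgery
    lymph_node_yield: int
    cd_3a_or_higher: bool         # any Clavien-Dindo >= 3a complication
    reintervention: bool          # reinterventions or reoperations
    icu_readmission: bool
    hospital_stay_days: int
    readmission_90d: bool
    death_90d: bool

def textbook_outcome(c: Case) -> bool:
    """True only if all nine criteria from the definition above hold."""
    return (c.r0_resection
            and not c.conversion
            and c.lymph_node_yield >= 15
            and not c.cd_3a_or_higher
            and not c.reintervention
            and not c.icu_readmission
            and c.hospital_stay_days <= 21
            and not c.readmission_90d
            and not c.death_90d)
```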
Statistical analysis
The SPSS (version 27.0.0.0) software package was used for statistical calculation and data plots. The significance level for all calculations was set at p=0.05. The operative time was defined from the start of the operative procedure with a skin incision at the abdomen until skin suturing of the thoracic part, including docking and repositioning times. For learning curve analysis, a subgrouping of 15 consecutive patients was performed for each center. Moreover, we accomplished a case grouping (n=15) for each surgeon with more than 30 cases and studied the median reduction in the operative time of subsequent cases compared with the initial 15 cases. For further investigation of the learning curve, a cumulative sum (CUSUM) analysis of the total operative time was performed. This technique is a graphical method to transform raw data into a running total of differences from the group average. Therefore, a chronological arrangement of all cases from the first to the last by the center (or by the leading surgeon, respectively) was performed. Then, CUSUM values were calculated according to the following formula: CUSUM = Σ(xᵢ − μ), where xᵢ is the total operative time of the individual case and μ is the mean operative time of the corresponding center or leading surgeon [30,36]. Finally, the CUSUM values were plotted on the vertical axis according to their case number on the horizontal axis. Learning curves could be determined by visual interpretation of the chart. The end of the learning curve was predefined as inflection of the curve to a plateau or decrease.
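A minimal sketch of this CUSUM construction (illustrative Python, not the SPSS workflow actually used; the operative times below are invented):

```python
import numpy as np

def cusum(operative_times):
    """CUSUM_n = sum_{i<=n}(x_i - mu), with mu the mean total operative time."""
    x = np.asarray(operative_times, dtype=float)
    return np.cumsum(x - x.mean())

def learning_curve_end(operative_times):
    """Approximate the inflection point as the peak of the CUSUM curve
    (the study read this off the plotted chart visually); 1-based case number."""
    return int(np.argmax(cusum(operative_times))) + 1

times = [560, 540, 520, 530, 500, 470, 450, 430, 420, 410]  # invented, chronological
print(learning_curve_end(times))
```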
Continuous variables are presented as medians with interquartile ranges (IQRs). The evaluation for nonparametric variables was performed with the Mann-Whitney U test. Univariate analysis was computed using cross tabulation and the chi-square test or Fisher's exact test.
Patient characteristics and histopathological results
In total, 220 patients were included in the analysis (center 1: 72; center 2: 41; center 3: 83; center 4: 10; center 5: 14). The median age of the patients was 64 years (IQR 58-72), and 85.5% (n=188) were male. Two-thirds of the patients had significant comorbidities (ASA ≥III), and the median BMI was 26.2 kg/m² (IQR 23.6-29.4). No further information on race or ethnicity of the patients was collected routinely in the Xi trial. The indications for esophagectomy were adenocarcinoma in 81.4%, squamous cell carcinoma in 15.5%, and other diseases (malignant and nonmalignant) in 3.2% of the cases. Most of the patients had neoadjuvant treatment, including chemoradiation (32.7%) or chemotherapy (47.7%) (Table 1). The distribution of pTNM stage and UICC stage is shown in Table 1.
Surgical technique
A totally robotic-assisted operation of both the abdominal and thoracic part (RAMIE) was accomplished in 189 cases (85.9%), whereas a hybrid minimally invasive approach (hRAMIE) with a robotic thoracic part and open abdominal part was performed in 25 cases (11.4%). A hybrid robotic abdominal operation with open thoracotomy was performed in 6 cases (2.7%). Laparoscopy or thoracoscopy was not used alternatively for hybrid approaches. The main reasons for a hybrid approach were a learning phase strategy (n=14), extended dissection for lymph node metastasis (n=4), and adhesions/prior surgery (n=4). Overall conversion to an open procedure was necessary in 16 cases (7.3%). The most frequent reasons for conversion were adhesions (n=5) and intraoperative bleeding (n=4). Extended thoracic resection because of lung infiltration was necessary in 7 cases. One center routinely placed jejunal feeding tubes during the abdominal part. A circular stapler size of ≥28 mm was used in most cases (84%). The median blood loss was 200 ml (IQR 80-400), and the median operative time (OT) was 425 min (IQR 335-527) ( Table 2).
Oncological resection with microscopically tumor-free margins (R0) was achieved in 92.9% of patients. Reresection because of a tumor-infiltrated resection margin reported by the intraoperative frozen section examination (which was available for all cases) was required in only two cases. The median number of resected lymph nodes was 25 (IQR 19-30) ( Table 2).
Postoperative short-term outcome
Twenty-one percent (n=46 patients) of the total cohort developed major postoperative complications (CDC grade ≥3b) (Table 3). The rate of postoperative anastomotic leakage was 13.2% (n=29). Most patients (82.8%; n=24) with AL were successfully treated using endoluminal approaches (predominantly endoluminal vacuum therapy), whereas reoperation was indicated in 5 cases. In most of these cases, an esophageal diversion was performed and a jejunal feeding tube was inserted. Furthermore, 27 patients underwent postoperative endoscopic interventions for indications other than AL. The rate of postoperative AL differed between the centers, and the two centers with >70 cases had leak rates of 7.0% and 12.0% (p=0.213), respectively (Fig. 1).
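To make the between-center comparison concrete, the 2×2 table can be approximately back-calculated from the reported rates (7.0% of 72 cases and 12.0% of 83 cases). The counts below are rounded reconstructions, and the paper's own p=0.213 came from the tests named in the methods, so this is an illustrative recomputation only:

```python
# Approximate, back-calculated counts; illustrative only.
from scipy.stats import fisher_exact

#           leak  no leak
table = [[ 5, 67],   # ~7.0% of 72 cases
         [10, 73]]   # ~12.0% of 83 cases
odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")
```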
Overall, 67 patients (31.6%) had a defined textbook outcome with an optimal intra- and postoperative course.
The impact of the learning curve on operative time
The median total operative time within the first 15 cases of each center (n=69) was 488 min, which was significantly longer than that in the consecutive case groupings (p=0.024). The median operative time subsequently dropped (Fig. 2a). Similar results were observed if the median operative time of the thoracic part only was analyzed: the median operative time of cases 1-15 was longer than that of the consecutive case groupings (p=0.023). However, a significant reduction in the operative time for the abdominal part was not seen until a caseload >60 (p≤0.001) (Suppl. Fig. 1). The three most experienced surgeons of the participating centers significantly reduced their individual median operative time by approximately 4.8% after cases 16-30 (n=45) and by approximately 11.6% after >30 cases (n=50) (p≤0.021). Likewise, there was a trend toward a further operative time reduction from cases 16-30 to cases >30, but without statistical significance (p=0.057) (Fig. 2b). The pooled CUSUM graph for all centers showed a peak (inflection point) with a slow decrease after 22 cases, indicating the end of the learning curve for the total operative time (Fig. 3a). The CUSUM graphs for the three centers with more than 22 RAMIE procedures revealed different end points of the learning curve: the inflection point in center 1 was at 22 cases, center 2 reached a plateau after 13 cases, and center 3 reached a plateau after just 10 cases (Fig. 3b). The CUSUM analysis for the leading surgeons of the three most experienced centers showed the same end points of the learning curve for surgeons B and C as for their related centers 2 and 3 (Fig. 3c). This finding is not surprising because the leading surgeon in these two centers performed (nearly) all procedures (78.3% and 100%, respectively). In center 1, three surgeons routinely performed RAMIE, which explains the longer learning curve for this center. The point of inflection for leading surgeon A of center 1 was at the 9th case (Fig. 3c).
The impact of the learning curve on intraoperative findings, postoperative course, and mortality
Based on the CUSUM analysis, perioperative outcome parameters were compared between the pre- and post-learning-curve cohorts (≤22 and >22 cases). The latter cohort was operated on with less blood loss (p<0.001), a shorter operative time (p<0.001), and a lower rate of postoperative pneumonia (p=0.046). Additionally, there was a trend toward a lower conversion rate (11.1 to 4.6%; p=0.061) and 90-day readmission rate (12.2 to 5.4%; p=0.059) after >22 cases. Other outcome parameters, including major complications (CDC ≥3b), AL rate, textbook outcome, and intensive care parameters, were not significantly different between the two groups (Table 4).
All operated cases (n=220) were included in a univariate analysis to identify predictive factors for the development of AL (Suppl. Tab. S1). However, none of the tested variables significantly correlated with the occurrence of AL.
Discussion
Fig. 1 Cumulative occurrence of anastomotic leakage per case number stratified by center

This is the first report of a prospective multicenter registry trial evaluating the short-term outcome of RAMIE with an intrathoracic circular stapled anastomosis. Because of the potentially beneficial effects of the RAMIE approach on short-term patient outcome, the participating university centers agreed on a prospective multicenter registry study to evaluate the safety and potential benefits of RAMIE during the implementation phase and beyond with a standardized technique. The aim of this registry study was to generate data to assess the da Vinci Xi surgical system for esophagectomy regarding clinical outcome. The multicenter RAMIE program included a uniform technique with an intrathoracic (Ivor Lewis) circular stapled esophagogastrostomy using a minithoracotomy. According to present knowledge, circular stapled anastomosis is the most frequently performed anastomotic technique during RAMIE [14].
Overall, approximately 80% of the operations were minimally invasive using the da Vinci Xi robotic system (fully robotic), and the thoracic part, including the anastomosis, was robotic-assisted in 94% (207) of the cases; by comparison, in thoracoscopic approaches (MIE), the conversion rate was 14% [2,28]. This result demonstrates that the RAMIE technique is feasible in most cases. The rate of a fully robotic-assisted approach was higher than that in a recent international registry report, where only 54% of the cases were not hybrid procedures [14]. Other high-volume centers for RAMIE combine an abdominal open or laparoscopic part with the robotic-assisted thoracic part [25]. A direct comparison of an open abdominal operation phase and a total RAMIE revealed no significant differences regarding oncological radicality and recurrence-free survival, suggesting that robotic-assisted abdominal lymphadenectomy is adequate [23]. The oncological quality in the present study, as indicated by the R0 resection rate (93%) and the median number of resected lymph nodes (n=25), is comparable with recent single-center series [14,25,34].
The current results from leading esophageal surgery centers support the use of a circular stapled esophagogastrostomy, especially for minimally invasive intrathoracic anastomoses [5,22]. The stapler diameter should be selected according to the individual anatomical situation of the patient but with preference for the largest possible diameter; however, a significant difference regarding anastomotic leakage and stricture was not identified between 25- and 28-mm diameter sizes [29]. In our analysis, in 84% of all cases, a stapler size equal to or greater than 28 mm was used for esophagogastrostomy, which could contribute to the markedly low leakage rate.
According to the available randomized data, the strength of MIE/RAMIE is the lower postoperative morbidity, especially a reduced rate of pulmonary complications. A recent propensity score-matched comparison and meta-analysis concluded that RAMIE has significantly lower rates of pneumonia or pulmonary complications than laparoscopic MIE and should potentially be considered the standard technique for esophagectomy [31,37]. In the present trial, the rates of postoperative pneumonia and anastomotic leakage were 19.5% and 13.2%, respectively, compared with 23% and 20%, respectively, in the international registry (out of the 331 fully robotic Ivor Lewis cases) [14].
Interestingly, the present analysis showed that key characteristics and complications such as operative time, blood loss, the rate of pneumonia, and anastomotic leakage can be further improved after a learning experience of 22 cases, which was the initial CUSUM-based learning curve plateau for all five centers. The CUSUM analysis was designed for detecting minor changes in datasets to visualize trends describing the learning curve [8]. Interestingly, in another German single-center analysis, a caseload of 22 was also necessary to overcome the learning curve for the RAMIE procedure [1]. A comparable number of cases for completion of the learning curve (20-24 cases) has been reported by other centers, especially for experienced robotic-assisted surgeons [10,15,32]. In contrast, MIE was usually coupled with longer learning processes with flat learning curves; 54-119 cases were reported to be required to reach a stable plateau [3]. Robotic-assisted surgery instead displays distinctly steeper learning curves, likely due to special da Vinci surgical system training programs and the existing competence of most participating surgeons in MIE surgery [27]. The present study further confirms that single experienced surgeons can reach the plateau for RAMIE within a proctored program even earlier.
Prior experience in robotic-assisted surgery seems to be of high importance. Increased overall and pulmonary complication and reoperation rates were observed when MIE was implemented in nationwide practice, in contrast to the excellent short-term outcomes of the TIME trial setting. The authors concluded that this may reflect the completion of the MIE procedure by nonexpert surgeons in a nonstandardized fashion outside of high-volume centers [20]. Therefore, recommendations toward RAMIE should be given after considering the center volume and experience of the leading surgeons.
The advantage of the study design is the multicenter setting with a uniform technique and the high quality of the data, which were prospectively recorded and closely monitored. On the other hand, the data are limited by a heterogeneous set of lead surgeons and assistants in different centers, and minor modifications of the standard operative techniques were observed (e.g., insertion of jejunal feeding tubes or differences in the number of chest drains or the use of oral antibiotics on the day before the operation).
Conclusions
In conclusion, the present high-quality multicenter registry data confirm that RAMIE is a safe procedure and can be reproduced with acceptable leak rates and promising short-term results in a multicenter setting. The learning curve is comparably low at approximately 22 cases for experienced surgeons and in a setting with interinstitutional proctoring.
Discontinuities and Discrepancies in the Hybridization Process of Nangnang Culture
This article marks an attempt to approach one of the most controversial topics in ancient Korean history, Nangnang, in order to find an alternative narrative that can explain some processes in the development of this territory from the point of view of the local people. The tendency to label and consider Nangnang merely as a "Han commandery" risks transmitting a distorted picture of the evolution of its local culture, inasmuch as the expression "Han commandery" itself appears to be obsolete and inadequate at capturing what this complex society was. It is likely that a community or even a kingdom called "Nangnang" existed well before the Han invasion, and the migratory flow of people from other regions of the continent led to the progressive remodeling of local culture and institutions and embedded them in a more international network. The same phenomenon also occurred in areas that were not under the direct control of the Han Empire. The best efforts at this sort of approach to Nangnang's history until now can be credited to Pai Hyung Il (2000), who tried to reevaluate the cultural legacy of the "commandery" in the broader context of early Northeast Asian civilization.
Introduction
This article marks an attempt to approach one of the most controversial topics in ancient Korean history, Nangnang, in order to find an alternative narrative that can explain some processes in the development of this territory from the point of view of the local people. The tendency to label and consider Nangnang merely as a "Han commandery" risks transmitting a distorted picture of the evolution of its local culture, inasmuch as the expression "Han commandery" itself appears to be obsolete and inadequate at capturing what this complex society was. It is likely that a community or even a kingdom called "Nangnang" existed well before the Han invasion, 1 and the migratory flow of people from other regions of the continent led to the progressive remodeling of local culture and institutions and embedded them in a more international network. The same phenomenon also occurred in areas that were not under the direct control of the Han Empire.
The best efforts at this sort of approach to Nangnang's history until now can be credited to Pai Hyung Il (2000), 2 who tried to reevaluate the cultural legacy of the "commandery" in the broader context of early Northeast Asian civilization. 3 In her book Constructing Korean Origins, she offers an alternative explanation for the problem, applying acculturation theory to Nangnang. In this article, we will focus more on the analysis of "discontinuities" and "discrepancies" in Nangnang's history to better reconstruct the evolution of its past. More specifically, we take inspiration from the approach of Edward Said and his concept of "discrepant experience," which proposed to explore different narratives of the past, not just those based on the historical accounts of the victors. In this attempt, we also argue that it is unrealistic to interpret Nangnang's history within uninterrupted continuity boundaries; rather, we should consider it as a succession of rifts, adjustments, and continuous modulations. In fact, accounts tend to vastly oversimplify some crucial problems to project an idea of uniformity and linearity. In this article, as the title indicates, I will also adopt the word "hybridization"-referring to the opening and contamination of Nangnang culture by continental trends-instead of "Hanization" 4 (which refers more specifically to the Han Empire or Han ethnicity; I find it more pertinent than the more general "sinicization," even if the latter is much more common, and a hypothetical expression constructed on the term Central Plain, Zhongyuan, could be philologically more correct but has not been used so far in the current literature), as the latter word would imply a unilateral and irreversible process of acculturation and influence from China, which erroneously transmits the idea of a progressive and unidirectional replacement of local culture and customs. Hybridization, which also includes that of material culture, can be a consequence of various factors; for example, objects traded from other territories may have inspired artisans to incorporate stylistic features that originated far away. 5 This term conveys the idea of a cultural syncretism more than a process of imposition of norms and cultural aspects over a non-Han population. Especially after the establishment of the Han Empire, the construction of long-distance communication systems and the resettlement of people in different territories for military or economic purposes created the conditions for a larger and stronger circulation of ideas and objects, and the culture of the Nangnang region reflects these syncretistic qualities. 6
Preliminary remarks
For a period of 423 years, beginning with the installation of the "four Han commanderies" from 109 BC to AD 314, part of the territory of the northwestern Korean peninsula-which also encompasses the P'yŏngyang area-was under the political or at least cultural influence of external, nonlocal communities. This is basically what we know primarily from Chinese written sources. Wang proposes to call this significantly long phase of Northeast Asian history the "Period of the Nangnang Commandery," 7 as Nangnang (c. Lelang) was the most enduring and influential of these "four Han commanderies." This phase of hypothetical external interference can potentially be extended another eighty-five years, from 194 to 109 BC, to include the period in which Wiman (c. Wei Man) was designated a "foreign vassal" by the Han dynasty. 8 In addition, Yi notes that several titles and official positions (such as Nangnanggong, Nangnang kungong, Nangnang kunwang, Nangnang t'aesu, Nangnang wang) were conferred by the Chinese court as symbols of prestige and authority even after the fall of these commanderies; 9 this means that, at least partially, this phase of influence lasted beyond the year 314. 10 The history of this phase has traditionally been reconstructed based on elements of homogeneity, coherency, and continuity. On the contrary, it has proven to be one of the most fragmentary and discontinuous chapters in the peninsula's history, too difficult even for the ancient historians themselves to have left us a clear account of it. On the basis of historical events, Komai delineated a five-period division of the history of Lelang, 11 from its establishment until its collapse; however, considering the turmoil the continent experienced from the first century BC to the fourth century AD, it is likely that Lelang's history witnessed even more ruptures and changes. Moreover, even if we are certain that the Central Plain cast its influence on this territory, it is hard to define the degree to which this territory was subject to the fickle political climate of the rest of the continent. It is at once one of the most crucial, yet most delicate chapters in all of premodern Korean history, an important key to understanding its early development. Nevertheless, it has generally been neglected in traditional history books (Samguk Sagi and Samguk Yusa, for example), and in most recent history textbooks. From the point of view of ancient local historians, Nangnang has been described as a constant menace. In the "Silla Pongi" of the Samguk Sagi alone, we find six passages referring to hostilities that took place from the time of Pak Hyŏkkŏse to the era of Yuri Isagŭm.

5 We prefer here the word "hybridization" to "creolization" or other similar terms, as it better conveys the idea of a contamination of tastes and trends in a noncolonial context. According to the definition of "creolization" by Françoise Vergès, "it occurred under a situation of severe constraints, under the yoke of slavery, colonialism and racism, in situations of deep inequalities, of forced circumstance and of survival strategies." See Vergès, F., Creolization. 6 In this article, we prefer the Korean reading "Nangnang" over the Chinese "Lelang," first of all because the use of the term "Lelang" itself implies origins in Chinese culture; however, before lending its name to the commandery, Nangnang was a toponym, and the name-unrelated to the role the territory would later play-likely existed even before the Han Empire.
The following is the first one recounted in the pages of the book: "In the third year of Pak Hyŏkkŏse, Nangnang led its army to Silla and tried to attack it." 12 From the first volume of the Samguk Sagi, Nangnang people's behavior is described as "no different from that of thieves" (無異於盜). We also discover in the Samguk Sagi some episodes of military conflict with Paekche from the time of King Onjo onwards, although there is some doubt as to the reliability of the proposed dating and, more generally, the contents themselves. 13 Thus, though extremely relevant for researching the origins and the identity of Nangnang, some passages of the Samguk Sagi and Samguk Yusa appear to be contradictory and incoherent, and thus not fully credible.
"According to these sources, Nangnang may be identified with P'yŏngyang-sŏng. But other sources maintain that Nangnang was the land of Malgal at the foot of Mt. Chungdu, and the Salsu is now the Taedong river. It is hard to tell who is right." 14 In this passage, Iryŏn sincerely confesses the limits of his knowledge with regard to Nangnang, as there are many theories to its location, to the point that he himself doesn't know which one is correct. In a further passage, Iryŏn seems to complain about the chaotic state of the information on Nangnang he has gathered. "King Onjo of Paekche said that Nangnang is in the east and Malgal is in the north. This would certainly make Nangnang a ter-13 An-sik Mun, "Samguk sagi ch'ogi kirog-e poinŭn Nangnang-ŭi silch'e-e taehayŏ" [On the Real Existence of Nangnang on the Basis of the First Accounts in the Samguk Sagi], Han'guk chŏnt'ong munhwa yŏn 'gu, 2008, 198. As Kang notes ("The Real Existence," 136-7), it is likely that some of the first passages of the Silla and Paekche pongi of the Samguk Sagi referring to Nangnang may be somehow incorrect, as they treat Nangnang as a neighbor of Silla and Paekche, while in fact its territory was significantly far from the southern part of Korea. Possibly, he states, it could be a fabrication by later scholars, who probably replaced Chinhan with Nangnang in the records (ibid., 140). 14 據上諸文, 樂浪卽平壤城, 宜矣. 或云樂浪, 中頭山下靺鞨之界, 薩水今大同江也, 未詳孰是. Samguk Yusa, Wonder 1, Nangnangguk. The translations of the passages from the Samguk Yusa in this article are adaptations from the English version of Ha Tae-Hung and Grafton K. Mintz (Yonsei Press, 1972).
ritory under the rule of the commandery of Nangnang at the time of the Han Empire. But the people of Silla called their own country Nangnang and even now call a noblewoman 'a lady of Nangnang.' This is shown by the fact that King T'aejo called his daughter, whom he gave in marriage to the surrendered king of Silla, the princess of Nangnang." 15 The Samguk Sagi has also been marred with various inconsistencies as well as probable mistakes, such as we find in the following passage: "In the east, there is [the territory of] Nangnang, and in the north, [that of] Mohe [Malgal]." 16 As already reported in a note on Yi Pyŏng-dos's translation of the Samguk Sagi, there is an evident contradiction in the text, as Nangnang should be in the north, and Malgal in the east. At any rate, these examples show that there was already an active debate about Nangnang among scholars of the Koryŏ period, and even by then, many doubts concerning Nangnang were far from being solved. In recent times, research in this field has failed to produce an exhaustive reconstruction of this phase. By way of example, in a 593-page book titled Ancient History of Korea (Han'guk kodaesa), written by Sin Hyŏng-sik in 1999, there is neither a chapter nor even a single section dedicated specifically to the history of Nangnang; moreover, in the fourth edition of the Outline of Korean History ( , an entire chapter, entitled "The Northern Territory," is dedicated to Nangnang archaeology. So far, the most detailed and complete English-language work on Nangnang is the 2013 book The Han Commanderies in Early Korean History, edited by M. E. Byington. Interestingly, the fact that it forms a volume of the Early Korean Project series seems to reinstate Nangnang as part of the history of the peninsula, and avoids estranging it from Korea's past, as previous books have done. From the very first studies-which date back to the late Chosŏn period and the first archaeological surveys by Sekino Tadashi in 1909-until modern times, 17 much of the relevant academic literature has focused on the localization of the "Four Commanderies" 18 -probably due to the reluctance to acknowledge the presence of foreign rule in Korean territory-or, alternatively, on the identity of the people buried in excavated tombs. 19 19 Song, "Historical Character," 37. By denying the existence of Han Nangnang, North Korean archaeologists and Yun Nae-hyŏn attempted not only to eradicate undesirable "foreign" (Chinese) influence on Korea's ancient past, but also to dis-was a significant hindrance to the legitimacy of the historical narrative, a continuous, linear, and monoethnic historical development rooted in the Tan'gun myth that was strongly asserted in those years. But these studies, from the point of view of the Japanese rulers, served to demonstrate Korean culture's high level of dependence on the Chinese. 20 In addition to the obvious sensitivity of the subject, one insurmountable obstacle has been the paucity of archaeological data produced by North Korean academics, both for lack of funding and because the misuse of this kind of research could lead to a radically altered perspective on this chapter of the Korean history than what is traditionally maintained, based on a nationalistic theory of "land pureness" (yŏngt'o sun'gyŏljui) 21 whose very center was P'yŏngyang. 
Historiographical sources, too, fail to transmit a detailed account of Nangnang's past and compel us to rely on a few passages, reported in nonobjective written records, that have only accidentally survived under exceptional circumstances. The most difficult aspect of this reconstruction is that all existing sources have been written from the point of view of the "rulers"-i.e., of the "winners"-and are reported mainly in the official dynastic histories. Moreover, the archaeological data has been recovered from the tombs of the elite; thus, no voice is allowed to Indigenous or local people, which further distorts our reconstruction of local society and identity. Consequently, we completely lack an alternative narrative by which to understand this long phase of Korean history from the point of view of the local people-likely the majority-and our reconstruction is limited to the standard perspectives. These "local people" were also a complex ethnic amalgam, having settled over centuries of immigration, struggle, and alliance formation. A reference to the waves of migration directed at the Korean peninsula is found in a passage on King Ŭija in volume 28 of the Samguk Sagi: "After the turbulences of the Qin [period], many Chinese people left the Han [territory], fleeing to the Haedong territories."
秦亂漢離之時中國人多竄海東.
Further archaeological data can hopefully help provide more information on other aspects of the culture and society of this territory in the future, but most of the excavations will still come from necropolises, which ipso facto means a near-total lack of data from sites inhabited by living people. The number of tombs excavated is also very limited if we consider that only three thousand tombs in the P'yŏngyang area and only a hundred others around the earthen walls of Unsŏng-ni have been discovered until now. They are especially few in comparison to Koguryŏ's tombs, found in the Tonggou, Ji'an area (around 10,782 in number), and also in comparison to the population of Nangnang, which reached a peak of about 400 thousand inhabitants; if we assume an average life expectancy of fifty years-probably a little optimistic for ancient and preantibiotic times-the overall number of people who lived in Nangnang over those four centuries could have been more than three million. Thus, we may assume that the number of remains is astonishingly disproportionate to the actual population of Nangnang. Moreover, these relics belonged only to the material culture of a specific stratum of the local elite; this makes it almost impossible to understand and reconstruct the life circumstances and identity of the Indigenous people. Though we are well aware of these limitations, in this article, we attempt an analysis of Nangnang, trying to discover some elements that may demonstrate-even narrowly or incompletely-some expressions of cultural resistance from the local people of the frontier settlement, so that we may be better aware of the traditional, and distorted, notions of this part of Korean history.
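The arithmetic behind this estimate can be made explicit. On the rough assumption (as above) that the population held near its peak of 400,000 across the whole 423-year span and that an average lifetime lasted fifty years, the cumulative number of inhabitants is on the order of

$$N \approx 400{,}000 \times \frac{423}{50} \approx 3.4\ \text{million},$$

which is how "more than three million" follows; against the roughly 3,100 excavated tombs, this would amount to fewer than one burial per thousand inhabitants.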
Inadequacy of the expression "four Han commanderies"
At the beginning of the twentieth century, Korean scholars began to seriously question the reliability of those Book of Han passages related to the foundation of "commanderies" in the old territories of Chosŏn-above all, those referring to Nangnang and Hyŏnt'o-to the point that Sin Ch'ae-ho considered them "a false interpolation by scholars of later periods." 22 However, the seals discovered in the area of P'yŏngyang and the Stele of Chŏmjehyŏn 秥蟬縣 (c. Nianti district)-the oldest inscription found in Korea, discovered by Sekino Tadashi in Yonggang-gun in 1913 23 and named after one of the twenty-five districts of Nangnang-have proved the historical existence of a district named "Nangnang-gun." However, the expression "the four [Han] commanderies" that we often encounter in articles and books does not stem from any archaeological relic but appears for the first time in the Records of the Grand Historian, where it is reported that Emperor Wu installed them after a long military campaign against Wiman. The expression "four commanderies" also appears in the Samguk Sagi. 24 For just a short period, in fact, there did exist four distinct commanderies-those of Nangnang, Imdun (c. Lintun), Chinbŏn (c. Zhenfan), and Hyŏndo (c. Xuantu). However, some of the four were dismissed shortly after their creation; several other "Han commanderies" also existed in parallel, in Liaoning province or other regions. 26 The area's cyclical enjoyment/restoration of a degree of autonomy is also attested by a passage in the Samguk Yusa, where it is reported: "[Nangnang and North Taebang] are the names of two counties established under the Western Han; later, North Taebang impudently assumed the status of a country, but then surrendered." 27 Further, when the Shuowen Jiezi, compiled in AD 121, refers to Nangnang, it sometimes calls it "county" (kunhyŏn, 郡縣), other times "vassal reign" (出樂浪藩國). 28 This expression can be found only in passages related to the territory originally belonging to Ancient Chosŏn. 30 Moreover, due to rebellions and turmoil in China, rulers must have periodically lost their influence over these frontier territories, and the interruption of transportation links with inland China probably often led to a shortage or even a complete lack of Han personnel and of military and political control over Nangnang. 31 The relation of these commanderies to the Han dynasty, though recognized by the Samguk Yusa (Wonder 1, Old Chosŏn), is nonetheless expressed in a very imprecise passage. This confirms that already by the time of the Samguk Yusa's compilation, there was a degree of uncertainty as to the origins, evolution, and identity of these "Han commanderies." Moreover, as seen in the previous passage, these four kun (郡) were later converted or integrated into two pu (府). This suggests that the status of Nangnang changed significantly over time. In any case, the word "commandery" is problematic, as it already implies the complete military character of this institution. The term pu or "external administrations" (外府) 32 may to some extent be linked to a sort of autonomous and external administration (都督府) with jurisdiction over a wider territory. However, the governors (or magistrates) who administered the territory, while surely a military power, 33 were not necessarily soldiers or generals, but more probably high-ranking civil functionaries or even local chiefs.
If we look at the images of the governors (or magistrates) depicted some decades after the fall of Nangnang in North Korean mural paintings, specifically those from Anak tomb no. 3 (AD 357) and the Tŏkhŭng-ni tomb (AD 408), we can see that they don't wear armor, preferring civil garments to military ones. In addition, archaeological surveys show that the amount of weapons and military equipment progressively decreases in the tombs, especially after Wang Diao's revolt. 34 Naturally, the use of military power was essential for the control and maintenance of these frontier and lucrative lands. The prevalence of weapons as burial goods in several tombs undoubtedly suggests that, to some extent, the status of warrior was a particularly relevant social factor at this time, but the same assumption is also true for previous and later periods and is generally applicable to ancient periods on the whole. Moreover, this territory was a frontier and so inevitably required a degree of militarization and defense systems more sophisticated than other inland territories. Based on the meaning of its components, the Chinese character jun 郡 on its own refers to "a village under a lord," and its translation as "commandery" seems inappropriate or forced, as we find no explicit reference to a uniquely military administration. A more suitable translation would probably be the more neutral "frontier settlement," as the term jun may be a contraction of bianjun (邊郡), or "frontier settlement." We don't know much about the degree of autonomy a "frontier settlement" would have had at that time. The concept of "colonial installation" or "colony" may fit somewhat better than that of "commandery," though the word "colonial" itself is problematic, as it contains many more political implications and also seems to imply a sense of complete and irreversible control. Though we normally refer to ancient Phoenician, Greek, and Roman communities abroad as "colonies" without reservations, in the case of Korea, the Japanese "colonial" experience and its own uncomfortable geopolitical position in Northeast Asia has made some scholars particularly sensitive to the potential use of this word. 35 In the ancient world, a "colony" (from the Latin verb colere, "to cultivate") originally referred to a group of people or community who settled in a distant land to inhabit it, cultivate it, and impose the rules and habits of their motherland, with which they maintained legal and economic ties (e.g., the Greek colonies of Sicily). From the point of view of the empire, veritable "control" of a noncontiguous frontier territory, such as Nangnang, probably appeared precarious and onerous; to some extent, influence over it was more cultural or psychological than strictly military or economical. The empire's influence was generally exerted from a distance (Luoyang, the capital of the Han Empire, was 1,400 km away, or 5,000 li as reported in the Book of the Later Han) and the empire's interference was frequently contested. Wang Diao's rebellion in AD 25 is clear proof of the level of local dissatisfaction with this external management: on this occasion, Wang Diao, a t'oin (土人)-that is, an Indigenous person-killed Liu Xian, the governor of Nangnang, and established his own regime. 36 Later records also show a reduction of the empire's capacity to cast its influence or even to collect information from those territories.
Perhaps, even though it is geographically and chronologically distant from the case at hand, we might try to consider the term oppidum, used also to refer to some Roman colonies in French territory. In some cases, this designated the capital of allied people, 37 but under a different statute. The people of ancient Mediterranean France developed important trading relationships with Etruscan merchants, importing wine in ceramic amphoras, bronze vessels, and fine drinking bottles from them. These colonial encounters intensified after 600 BC. At that time, the Indigenous populations of Iron Age Mediterranean France began to concentrate in highly fortified, densely settled sites, often situated on commanding hills or at strategic points. Archaeologists generally refer to these Iron Age settlements as oppida, of which Lattara, Nages, and Ambrussum are typical examples. 38 Among the hypotheses, the word may derive from the archaic Latin ob-pedum, "enclosed space," probably a fortified one. Thus it is a fairly fluctuating category of protohistoric agglomerations, generally defined according to the presence of such an enclosure, 39 within which the entire economy and political control of the territory was concentrated. In a passage from the Book of the Later Han dating back to the reign of Guangwu, Nangnang is, interestingly, called "fortress" (夫餘犯樂浪塞). 40
Alternatively, if we consider the economic relevance of this strategic installation, we may find some elements of a trade center in which continental communities joined local elites to integrate local commerce into a wider international network. As happened similarly in the archaic Mediterranean, the Greeks created "political communities, mostly independent, with no contiguous territory and no single political center coordinating them, which functioned as a decentralized network." 42 Ultimately, Nangnang was probably the result of the intensification of trade and contact with the Central Plains, but at the same time, it was built on compromises and alliances between external, multiethnic powers and communities and local, native groups. A relatively small number of imperial servants dispatched to such a far territory could hardly control the entire territory military and command a much larger population, imposing taxes and corvée labor on Indigenous people. Moreover, the territory was hardly accessible by land, and even if it was originally and formally under the control of the emperor, direct management of the territory was unrealistic. Several functionaries were sent from neighboring territories, such as Liaodong, but ultimately, it was necessary to recognize the authority of local chiefs, to delegate to them, and to cooperate with them in maintaining public security. 43 As already underlined by Kim, 44 the Han generals who led the attack against Wiman Chosŏn were punished after the war, and paradoxically, only subjects from the defeated Wiman Chosŏn were rewarded at the end of the hostilities. 45 We cannot know with certainty how the Indigenous people felt emotionally after the fall of Wiman, and whether they started thinking of belonging to the empire. As Kim asserts, 46 if it is true that Wiman Chosŏn was not a centralized kingdom, but rather a sort of confederation whose leading power was that of Wiman Chosŏn, it is likely that, after its fall, the Indigenous people may have noted only a formal shift in the leadership of the confederation, while the local order was generally preserved. It was probably an opportunity for some of them to start new lucrative ventures along the new routes of the Han Empire. 47 Surely. they had to start quickly to adapt to the new standards of administration and new bureaucratic systems. An augmented foreign presence in the area incontrovertibly activated new channels for the transmission of continental trends in the peninsula, and progressively strengthened commercial routes between Northeast Asia and other regions of the empire. It was also likely that local people were officially sent to the Chinese court or to Han cities to learn the law, morals, and habits of the empire, and, once there, they were fascinated by the prestige of the Central Plains civilization.
Increase in the bureaucratization of the Chosŏn territory
Historical sources generally tend to explain military actions toward subjugated peoples as principally defensive, but a military campaign as risky and far away as that of the Han against Chosŏn was unlikely to have been truly "defensive." 48 Moreover, maintaining control of these territories was extremely onerous and difficult, as we can assume from the following: "When the Han arose, finding it difficult to control this remote territory, they restored Liaodong fortress and established the frontier along the P'aesu river." 漢興爲遠難守, 復修遼東故塞, 至浿水爲界. 49
It is interesting to note that, even in Roman sources, texts repeatedly report that the Roman emperor decided to launch an offensive "in order to defeat tribes that invaded Roman territory." Isaac argues that this set of actions is described "so frequently and in so similar a fashion" 50 that it seems to be the standard explanation for any military venture the emperor decided to undertake; he characterizes Roman expansionary policy as "opportunistic." 51 The territory of Nangnang must have seemed very strategic and lucrative in the Hans' eyes. As we have already seen, from the prehistoric period onward, many people from China had already moved to Northeast Asia, flowing into the Korean peninsula, and even before 108 BC, many waves of immigrants had cyclically arrived in the territory in search of safety or better living conditions. 52 Nangnang represented a crucial hub in the middle of East Asia, between the Han Empire and the rest of the region, and held a privileged position that allowed it to control commercial activities down to the Japanese archipelago. Especially after the diffusion of iron weapons and utensils, the need for iron made it strategic to maintain commercial routes with southern Korea. The importation of other goods, such as salt and hemp 53 or fishery products, was likely profitable as well. As Yi notes, Nangnang was situated at a latitude near which many of Eurasia's ancient cities were built, in the very middle of a territory that harbored different cultures based on different subsistence economies, such as agriculture, hunting, fishing, pastoralism, and nomadism. 54 It thus became a crossroads between the routes leading to Northeast Asia, linking the Korean peninsula, the Japanese archipelago, and the Chinese mainland, but was also connected with the northern regions of Siberia.
The conquest and subsequent control of that territory was very demanding and required a considerable amount of resources, as confirmed by the records of the ruinous campaigns against Wiman reported in the Records of the Grand Historian. Fortunately for the Han Empire, from its foundation to the era of Emperor Wu, the empire had enjoyed a period of seventy years free from wars and calamities, during which it saved enough resources to invest in military operations. The Han Empire adopted a policy of expansionism aimed at conquering more territories, but at the same time, it also had to prepare official internal propaganda to secure domestic support and avoid undermining its image. As Brent Shaw has noted, there was an "ideological necessity for a negative image of the barbarians," 55 whom the authors portrayed in stereotypical ways. This helped create the premise of a iusta causa that could legitimate aggression against these peoples. First of all, as already seen in the quotation from the Records of the Three Kingdoms, the sources tended to emphasize the binary oppositions of Han and Hu, Han and non-Han, civilized and uncivilized, locals and nonlocals. This rhetorical process itself implicitly placed "non-Han people" in a potentially dangerous group that did not adhere to common rules or principles or that enjoyed fewer rights. In volume 73 of the Han Book, the people of Chosŏn were expressly depicted as a menace. It is difficult to estimate how much such military campaigns could cost, but we could refer to the Roman case for a conjecture: according to Duncan, the cost of the army and salaries for officials alone probably made up over 85 percent of the estimated imperial expenses of the empire. 58 The Han, replicating their juridical and fiscal standards locally, probably tried to create a system that would allow Nangnang to be economically independent of the empire's aid. This required that a strict and detailed code of rules be enforced on the local people. The "Treatise on Geography" in the Book of Han, volume 28, reports that under Chosŏn only eight distinct laws existed, a number that increased to more than sixty after the installation of Nangnang. 59 We do not know the extent to which Nangnang law differed from that of the empire, 60 but it at least supplies proof of the maturation of local administrative standards. At the same time, the presence of seals recovered from archaeological sites demonstrates an enhanced level of bureaucratization of tax management within the territory. The empire also undertook a systematic ethnographic survey of these territories, trying to form a better idea of the languages, customs, and economy of frontier regions, as shown by the Shuowen jiezi. 61 In any case, the most credible proof of the hybridization of local bureaucratic standards was the discovery of a census register dating back to 45 BC, called "The [two characters are missing] of the Increase and Decrease in the Number of Households in Lelang Commandery by County in the Fourth Year of Chuyuan" (Nangnang Ch'owŏn sanyŏn hyŏnbyŏl hogu taso, 樂浪初元四年 縣別 戶口多少□□). It was found in the early 1990s in the wood-frame tomb of Chŏngbaek-tong no. 364, though it was officially published by North Korean scholars only in 2006. 62 It represents an important source of information on the governing system of the Han dynasty in this frontier region.
As Park underlines, this discovery has allowed scholars to learn that the central government had a detailed understanding of the number of households in Nangnang, and that such detailed records could only have been created within an established administrative system based on official written documents and laws. 63 The tomb probably belonged to a songni who was in charge of the document's redaction, as we can deduce from the traces of a knife used to delete mistakes from documents, as well as from the ring of a belt. 64 The registered population is divided into two categories: pŏmho (凡戶), which probably refers to the entire population or, as Park claims, denotes the sum of all households and residents in Lelang Commandery (as also shown in the household registers of other "commanderies" during the Han dynasty), 65 and kiho (其戶), which, according to Son Yŏng-jong, 66 who first introduced the register to the academic world, are the locals, as they represent 86% of the entire population. It is generally accepted by the academic community that kiho corresponds to the Indigenous population of ancient Chosŏn, also called t'oho (土戶). They were most probably people without rights, comparable to Roman peregrini: in ancient Rome, a peregrinus was a free person who was subject to Roman rule without having Roman citizenship, and therefore lacked many rights reserved for Roman and Latin cives; in the first and second centuries, peregrini made up about 80 to 90% of the Roman Empire's population. 67 The census also shows that the population was concentrated in the two main districts, Chaoxian in the north (approximately 55,000 people in 10,000 households) and Taebang in the south (29,941 people in 4,346 households). 68 There is no doubt that the census was not infallible, but by premodern standards, it must have been a useful instrument for the government to guarantee the financial basis of this frontier territory. Perhaps the amount of wealth collectible from the people increased over time, as we can deduce from the growing complexity of the tombs: a single two-chamber brick tomb required more than ten thousand bricks, based on a calculation made on the walls of the Yangdong-ni tombs. 69 The register presumably also helped the administration to keep track of the new continental communities, and probably of the locals as well, and to assure that taxation rules would also be respected in rural areas. 70
Hybridization of Nangnang culture
At the time of the fall of Wiman, there existed no standardized Han culture, but rather an assortment of different local traditions, constantly changing over the centuries. Nangnang culture itself was extremely complex, inasmuch as it developed on a frontier where identities were in a state of continuous change and competition with one another. As Di Cosmo asserts, "The imperial frontier system established by the Han was a living administrative organism that changed over time in response to both local and central pressures, and the experimentation carried out by the Han along the frontiers impinged upon later theories, debates, policies, and practices." 71 Furthermore, neither its ruling class nor its population was uniform in ethnic composition over time; each was an amalgamation of multiple ethnic and regional groups. We might apply to the Han Empire what Wells asserts of Roman tradition: "Past approaches have tended to overemphasize standardization in the Roman provinces and to neglect the important evidence of local and even individual variation." 72 Though Han culture influenced the local one and probably fascinated the local elite, we should avoid the word "Hanization" ("Lelang-ization" 73 or, even more inappropriately, "Sinization"), as this kind of expression is "based on a now outmoded concept of acculturation, whereby the representatives of the larger, more complex culture brought the obvious benefits of their lifestyle to more primitive peoples, who eagerly adopted it." 74 Changes in Nangnang were very complex and cannot be reduced to any single formula. Such changes had started even before the Wudi era, and they were the result of the deliberate strategy of local elites. A tendency to draw on cultural elements from abroad was not unique to this period or earlier ones; we find a very telling later passage on King Hŭngdŏk of Silla in the Samguk Yusa: "But the customs have deteriorated and people quarrel as they vie for luxury and extravagance, and they only treat precious objects from abroad with respect, and consider the local ones vulgar."
只尙異物之珍貴却嫌土産之鄙野. 75
The archaeological data show a complex pattern in which Indigenous traditions were combined with elements introduced by Han culture; however, the phenomenon of hybridization probably started well before the installation of Nangnang and also indirectly involved territories not under the control of the empire, such as southern Korea and Japan. 76 Moreover, its influence persisted even after the fall of Nangnang and Taebang, as we see, for example, from the fact that brick-chambered tombs did not fall out of fashion after the fall of these installations, but continued to be built in the territory until at least AD 407. 77 Also, as Pai underlines, "The geographical distribution exhibits a decrease in Han materials and influence with increasing distance from the center at P'yŏngyang." 78 We should be aware that by labeling this phenomenon "Hanization," we risk creating the artificial impression of continuity and homogeneity in the culture of Nangnang and completely erasing the role of the local people and culture in the construction of its identities. We should instead try to ascertain how the local population and the ruling newcomers mutually participated in the construction of Nangnang society, or better, the "societies" that emerged in this frontier territory. As the archaeology of Roman colonies demonstrates, "The people inhabiting the frontier provinces became increasingly heterogeneous as they responded to the changes resulting from the Roman presence"; 79 results from archaeological surveys confirm this trend for the area under the rule of Nangnang as well. For instance, the types of buried objects changed significantly over time, showing the progressive contamination of local tastes under the influence of new, continental ones; nevertheless, local artifacts were not banned from burial practice. In the Nangnang tombs, which likely belonged primarily to local elites (the corpses of foreign functionaries were, as a rule, sent back to the empire for burial), 80 we have found bronze vessels, jades, and expensive lacquerware, quite unusual in the field of Korean archaeology, yet these were often buried alongside such locally manufactured objects as bronze daggers.
Especially after the appearance of the log-frame tomb type, there was a significant increase in the number of objects from the Central Plains, which came to predominate in the tombs. This undoubtedly reflects the growing prestige of Chinese culture among the people living in this territory, and it is unlikely that the cultural level of Nangnang was so different from that of Luoyang; Song asserts that such wealth was not frequent even in tombs excavated in China. 81 Some lacquerware found in the tombs of Nangnang was of the same type as that used by the imperial house: 82 these objects were extremely highly valued, to the extent that they could cost 1,200 each, as reported in the "Biographies of Moneymakers" in the Records of the Grand Historian. Further, the presence of molds for making coins and bronze objects 83 proves that Nangnang had an active and independent economy and was a site of advanced production at that time. Although a large quantity of lacquerware has been discovered in China, it is difficult to find precious items comparable to those found in Nangnang; however, this is probably also due to the fact that wood is easily perishable, and many precious objects buried underground have since completely decomposed. 84 As reported in the Lunheng, vol. 19, "Huiguopian" (恢國篇), Lelang was recognized as a highly civilized place where people wore leather crowns (皮弁), a place that in Zhou times had required double interpretation to communicate with, but whose people could now chant the Shijing and the Shujing. 85 As Ko notes, even the calligraphic style of the census register follows continental trends: in the passage in clerical script, some characters in seal style were also used, and this same mixed style can broadly be seen in the latter half of the Eastern Han period. 86 Perhaps Han portable material culture, just like its counterpart in temperate Europe, was used by Indigenous peoples, especially elites, who wanted to display their familiarity with and access to the latest cosmopolitan trends in architecture, pottery, glassware, and personal ornamentation. 87 It was also an instrument for local elites to show off their power and international reputation, and to have that power recognized internally.
In fact, just as in Rome, once the initial trauma of invasion was overcome, 88 the change of circumstances was probably viewed as a positive development by some of the population, as it introduced a higher level of technology to the territory while also allowing native talent access to new opportunities within the empire. If we once again consider the case of colonial Rome, this process of opening up would mostly have involved the elite class, who were the agents most active in promoting first the adoption of Chinese as the official language, and then new styles of dress, architecture, and behavior, "while people lower down the social spectrum experienced a more diluted version of [influence], through emulation of their social betters." 89 Indigenous people continued to use everyday objects made mostly of perishable materials like wood and clay, while those used by their elite counterparts, made of bronze, jade, and lacquer, were much more durable.
Furthermore, we may note that, in spite of the installation of Nangnang in 108 BC and the sudden political changes in the area, the production and use of local slender bronze daggers, made in this territory from at least 300 BC, was not interrupted, as we see from the wood-framed tomb of Chŏngbaek-tong no. 1, where a local chief was buried together with his locally made weapons. Slender bronze daggers were unique to the Korean peninsula and can clearly be distinguished from the Liaoning style. 90 Even in the tomb of Chŏngbaek-tong no. 364, a slender dagger has been discovered, together with the census register and volumes 11 and 12 of the Analects. 91 This eclecticism of funerary goods shows the high level of hybridity reached by local elites.
This further proves that the foundations of the local economy and power had not yet completely eroded, a hypothesis also corroborated by the discovery of other items, such as Korean-style sheaths, bronze spearheads, and harness equipment. 92 In addition, some silver and gold fibulae and objects manufactured with inlaying techniques 93 from northern cultures have been discovered in tombs dated to the first century AD, such as that of Sŏg'am-ni no. 219. Thus, even ties with the Xiongnu or the northern populations were not completely severed after the installation of Nangnang. All these relics may simply be interpreted as a natural continuation of local traditions, as partial cultural autonomy, or as a show of tolerance for local habits. However, they cannot be seen merely as the passive heritage of the past, but rather as an active manifestation of cultural resistance in the face of external influence. These objects mirrored the identity of part of local society and embodied its values either in part or in whole. Another feature to underline is that, based on the analysis of artifacts discovered in the Nangnang tombs, the reception of Chinese culture from the Central Plains was much more intense during phases of instability, such as the end of the Eastern Han or throughout the Xin dynasty, than when the power of the Han Empire was at its height. This again suggests that the transmission of some aspects of Han culture was not a massive, unilateral imposition by the Han Empire, but rather a deliberate and gradual adoption on the part of local elites, probably fascinated by Han culture, which had been introduced by Han immigrants and was already appreciated in other cities of the empire. As we have seen, Nangnang was culturally far from homogeneous. Along with the buried objects, the structure of the tombs itself varied greatly, and these inconsistencies can be understood as an indicator of radical change in local society (more than a simple maturation of architectural technique) or as evidence of change and disparity in the succession of power: the dominant tomb type shifted from an earlier wooden-coffin type to a log-framed one (귀틀묘, first and second centuries AD), and then to a brick-chambered type (third and fourth centuries AD, 전실묘). 94 Despite this considerable diversity, the tendency to label Nangnang as a unique outcome of Han civilization risks transmitting a distorted perception of the development of local society and culture. Just as there was no homogeneous Han culture, there was also no single, standard Nangnang culture. Therefore, we should concentrate more closely on the analysis of these fractures and gaps in Nangnang's history to better reconstruct the evolution of its past. We could take inspiration from the approach of Edward Said and his concept of discrepant experience, which proposes exploring different narratives of colonial pasts, not just those based on the historical accounts of the victors. In fact, such accounts tend to vastly oversimplify some crucial problems and to reduce them to binary oppositions between ruler and ruled (Han and natives, in this case), as we see in the following passage from the Records of the Three Kingdoms: "[The Emperor Wu of Han attacked Chosŏn and defeated it and installed in that territory four commanderies.] Since then, the distinction between Hu and Han has progressively increased."
(漢武帝伐滅朝鮮,分其地為四郡。自是之後,胡漢稍別。) 95 The term "Hu" refers to local people, as we also find explicitly in the expression "the Hu Kingdom of Chosŏn," reported in a passage from the Book of Han's "Treatise on Geography," where it refers to the local people before the installation of Chinbŏn. 96 We should ideally avoid such dichotomies and instead explore the full range of divergences between these two communities, Chinese and "barbarians"; however, the limits of the archaeological finds make this work especially difficult. Even before the installation of Nangnang, there existed a large group of mainland immigrants, probably people exiled from the communities attacked by the Qin and the Han. The presence of exiles from earlier times was so prominent that the native people of Nangnang called themselves "Qin." 97 At a certain point, it is likely that a sizable community from Nangnang moved to Silla, as a passage in the Samguk Yusa reports that the Silla people called themselves "Nangnang people": "The people of Silla called their own country 'Nangnang,' and even to this day call a noble lady a 'lady of Nangnang.'" 98 This exodus from the continent to the peninsula seems to have intensified with the rise to power of Wiman, himself an exile from the state of Yan. 99 The Records of the Grand Historian's "Account of Chosŏn" states that the number of Han Chinese taking refuge in Old Chosŏn had increased considerably and that these people were accepted illegally. The migration of so many Han people risked tarnishing the emperor's image, which is why the situation provoked a reaction from the Han court. Even among these immigrants, there was much heterogeneity, for example in class affiliation: among them we find high-ranking officials, such as governors (taishou), magistrates (ling and zhang), and their aides (shuli), with all their entourages; 100 to these categories, we can also add soldiers, doctors, architects, carpenters, and technicians. Historical records also confirm the presence of merchants, who came in pursuit of economic profit. Over the centuries, a very heterogeneous and multiethnic society arose in Nangnang, which suffused the territory with values, languages, and lifestyles that were very different from one another.
Conclusions
The presence of continental communities was not a new phenomenon on the Korean peninsula, but the temporal and spatial scale, as well as the systematic control, that characterized the settlement of Nangnang were quite unprecedented. Numerous continental communities on the peninsula were part of a decentralized network, bringing in new ideas and cultural elements. Even so, Nangnang's past represents one of the most fragmentary and discontinuous chapters in all of East Asian history, and its reconstruction remains riddled with gaps. Nangnang was a frontier settlement and a political center within an international network: it was mainly a largely independent, commerce-based fortified city and territory, not necessarily under the coordination of a single political center.
In this article, we have approached the history of Nangnang by trying to analyze some discrepancies in the historical development of its society, in an attempt to construct an alternative narrative that could explain some processes in the history of this territory from the point of view of the local people, transcending excessively dichotomizing and atomizing approaches. The interpretation of the external presence in Nangnang territory should depart from conventional models of power, and several developmental processes should be seen not as an external imposition but as the result of processes of negotiation. In fact, the notion of unilateral acculturation risks obscuring the fundamental diversity of our research questions. Behind the external rulers' attempts to impose new values on and indoctrinate the local people, we find evidence of resistance on the part of the local people and elites, who did not passively accept the role of receptors of a foreign culture, but actively tried to find new opportunities in their new geopolitical context while partially maintaining their own original culture and values.
Beyond resisting the tendency to resort to modern colonialist frameworks, we should rely only on archaeological evidence and hope that new discoveries, or the disclosure of materials by North Korean scholars, will allow a more detailed analysis of the reactions of local people in the Nangnang area. In the absence of records written by local, contemporaneous inhabitants of Nangnang, we may for the time being focus on some discrepancies in the course of the local culture's development, trying to find evidence of resistance to external rule or of local people's motivation to take advantage of their new circumstances. Although we are well aware of the limitations of this kind of approach, mostly due to the scarcity of primary sources, the attempt is nonetheless relevant in consideration of the sensitive nature of this chapter in East Asian history, which has often been constrained by restrictive ideological frames. However, in contrast to Wang's delineation of a long phase of Northeast Asian history known as the "Period of the Nangnang Commandery," we cannot consider this chapter of East Asian history as a whole, an era of continuity and linearity, but rather as a discontinuous succession of shifts and ruptures. In our attempt to reconstruct its past, we should focus more closely on discerning the tone of those changes and the signs of resistance that represent the unique voice of the local people.
Andrea de Benedittis
This article analyses some "discrepancies" in the development of Nangnang society with a view to finding an alternative historical narrative for the territory, namely one that can explain some processes in Nangnang's history from the point of view of the local people. In the absence of any records written by contemporaneous inhabitants of Nangnang, we try to examine some discrepancies in the course of the local culture's development, aiming to glean some evidence that could be interpreted as resistance-a form of silent reaction to external rule. Though we are well aware of the limits of this kind of approach, mostly due to the scarcity of primary sources, this attempt is relevant in consideration of the sensitive nature of this chapter of East Asian history, which has often been constrained to narrow identities and ideological frames.