| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
197585256 | pes2o/s2orc | v3-fos-license | Testing Instrument For Water Quality and Drinking Water Using Oxidation and Electromagnetic Methods (Case Study: Local Water Company at Bangka Barat)
Safe clean water, especially for consumption purposes, is needed by humans. Various regulations on water safety standards have been developed and implemented by each country; in Indonesia these include the Regulation of the Minister of Health Number 492/MENKES/PER/IV/2010 concerning Drinking Water Quality Requirements, SNI 01-3553:2006 on Drinking Water in Packaging, and Government Regulation of the Republic of Indonesia Number 82/2001 on the Management of Water Quality and Control of Water Pollution. Several physical, chemical, biological and radioactivity parameters serve as the benchmarks of water safety. Raw water treatment for clean water and drinking water has been carried out using oxidation and electromagnetic methods. Both methods have been tested at the local water company (PDAM) in Bangka Barat, which has problems with high organic content, Fe and Mn heavy metal concentrations exceeding the normal threshold, and a high acidity level. Test results after treatment show physical and chemical parameters that meet the criteria of the water quality and drinking water requirements set by government regulations. Beyond meeting the health requirements, the quality of the measuring instruments used to test the physical and chemical parameters must also be assured. These measurements are traceable to the International System of Units through the Research Centre for Metrology, Indonesian Institute of Sciences (LIPI).
Introduction
Water, as a vital source of life, is a very important and fundamental resource for the survival of living things in this world. Cities, nations and civilizations have grown near rivers. A satisfactory water supply (adequate, safe and accessible) should be available to all living creatures: humans, animals and plants. In human life, water is widely needed in all aspects, such as consumption, crop irrigation, industry, power generation and much more. Water is well known as a "universal solvent", which means that many kinds of materials can dissolve or wash away in it. This character also makes it easy for water to be polluted with things that are dangerous or harmful. The problem of polluted water is very serious and can affect the quality of life of mankind and other living things as well [1][2][3]. Therefore, water must be utilized wisely so that the sustainability of clean water and the ecological balance can be maintained and water can be utilized continuously by future generations. Safe clean water is essential to the realization of basic human rights, so an effective policy with regard to clean water supply is required. Since diseases related to drinking water contamination are a major burden on human health, every effort should be made to achieve the highest quality of drinking water. Interventions to improve the quality of drinking water and access to safe drinking water will give real health benefits to society. Access to safe drinking water is essential to develop programs at the national, regional and local levels. In some areas, it has been shown that investments in water supply and sanitation can result in net economic benefits, as the reduction of adverse health effects and health care costs outweighs the cost of intervening [4][5]. Experience also shows that interventions in improving access to clean water greatly benefit the poor, both in rural and urban areas, and can be an effective part of poverty alleviation strategies.
Experimental Section
Water quality testing is essential for identifying pollution problems, ensuring that water is proper for its intended use, ensuring that the water to be consumed is safe, and evaluating the effectiveness of the water treatment system. In recent years, environmental issues related to pollution have also been the concern of the community in both the national and international scope [6]. Law enforcement is firmly applied to companies that cause environmental pollution; this is intended to protect the environment and preserve the balance of ecosystems. Thus, water quality test data must be valid and acceptable to all interested parties, so that appropriate policies or decisions can be taken, as water is a very important natural resource needed by all living creatures. Metrology, in both physics and chemistry, is expected to play a strategic role in overcoming water problems and to become part of the clean water supply solution for mankind. This can be done by developing measurement techniques and methods and the traceability needed for determining water quality, estimating the water balance and developing methods for water quality monitoring and improvement. The World Health Organization (WHO) has published international norms on water quality and human health in the form of guidelines that have been used as the basis of regulation and standard setting worldwide. In Indonesia, there are several regulations regarding water quality, including those cited above.

In this study, a prototype was developed to improve water quality in a physical-chemical manner using electromagnetic resonance methods. This tool has a resonance tube equipped with a coil and an electric pulse generator to produce a magnetic field that can resonate with water molecules. The prototype works by giving resonance treatment to the hydrogen atoms of the water molecules. The magnetic resonance method in this tool works by utilizing the behavior of protons and the molecular bonds contained in water [7][8]. Protons (including those found in water molecules) under the influence of a certain static magnetic field (this device utilizes the Earth's magnetic field), when disturbed by another magnetic field of a certain frequency whose direction is perpendicular or at least not parallel to the static field, undergo Larmor precession with a specific frequency (the Larmor frequency) and a specific resonance duration [9]. In water molecules mixed with other liquid or solid materials, the admixed material, if it contains protons, will also resonate with its own specific resonance frequency and duration. Such phenomena can be found in the theory of Nuclear Magnetic Resonance or Proton Magnetic Resonance. With this kind of treatment, the molecules contained in the water rearrange into clusters, so that the reactive groups (pure water) become larger and are not blocked by the admixed materials [10].
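As a rough illustration of the physics involved, the sketch below computes the proton Larmor frequency for a field on the order of the Earth's. The field strength of 50 μT is an assumed representative value (the geomagnetic field varies between roughly 25 and 65 μT), not a parameter reported by the authors.

```python
import math

# Proton gyromagnetic ratio (CODATA), in rad s^-1 T^-1
GAMMA_P = 2.675221874e8

def larmor_frequency(b_tesla: float) -> float:
    """Larmor precession frequency f = gamma * B / (2*pi), in Hz."""
    return GAMMA_P * b_tesla / (2.0 * math.pi)

# Assumed representative Earth-field strength of 50 microtesla
b_earth = 50e-6
print(f"Proton Larmor frequency at {b_earth * 1e6:.0f} uT: "
      f"{larmor_frequency(b_earth):.0f} Hz")   # ~2129 Hz
```

At Earth-field strength the resonance thus falls in the low-kilohertz range, which is why an electric pulse generator driving a coil suffices to disturb the protons.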
Result
The results of research conducted using the developed system show a decrease in the values of the measured parameters, which indicates that the quality of the treated water improved. In the treatment process of ex-mine lake water, it was found that all the measured mineral parameters had decreased and were degraded by the oxidation and electromagnetic processes. In order to obtain valid analysis data, it is necessary to verify the traceability of the equipment calibration. Measurement traceability, according to the International Vocabulary of Metrology (JCGM 200:2012), is defined as the property of a measurement result whereby the result can be related to a particular reference through a documented unbroken chain of calibrations, each of which contributes to the measurement uncertainty. Measurement traceability can be considered correct if it can be proven that the measurement results are in accordance with the primary standard value. In reality, however, it is not easy to measure directly against primary standards: primary standards are limited in number and may be stored in places that are difficult to reach, and cost is also a factor. With cost and practicality in mind, a hierarchical system of measurement standards has therefore been created, in which the primary standard sits at the top of the hierarchy. In the concept of measurement traceability, also known as metrological traceability, the correctness of a measurement is obtained by measuring with a calibrated instrument. The measuring instrument must be calibrated against a calibration standard of higher accuracy, and that calibration standard must in turn be calibrated against a higher one, and so on up to the primary standard and the SI unit. This ensures that the value of the measuring instrument in use agrees with the measurement value of the primary standard [1]. Traceability of measurement thus also serves as a guarantee of the correctness of the value of the measuring instrument used.
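The unbroken calibration chain described above implies that each link adds to the measurement uncertainty. A minimal sketch of how those contributions are typically combined (root-sum-of-squares, following the GUM approach) is given below; the numeric values are purely hypothetical.

```python
import math

# Hypothetical standard uncertainties (in pH units) contributed by each
# link of a traceability chain, from primary standard down to the field meter.
chain = {
    "primary reference material": 0.003,
    "calibration laboratory standard": 0.008,
    "working standard": 0.015,
    "field pH meter": 0.030,
}

# Combined standard uncertainty: root sum of squares of independent contributions
u_combined = math.sqrt(sum(u ** 2 for u in chain.values()))
U_expanded = 2.0 * u_combined  # expanded uncertainty, coverage factor k = 2

print(f"combined standard uncertainty: {u_combined:.4f} pH")
print(f"expanded uncertainty (k=2):    {U_expanded:.4f} pH")
```

As the example suggests, the lowest link of the hierarchy usually dominates the combined uncertainty, which is why field instruments must be calibrated against noticeably better standards.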
Discussion
In this study, measurements were taken of the parameters that had been set, and each measurement must go through the measurement hierarchy. In the concept of measurement traceability, also known as metrological traceability, the correctness of a measurement is obtained by measuring with a calibrated instrument. The measuring instrument must be calibrated against a calibration standard of higher accuracy, and that calibration standard must in turn be calibrated against a higher one, and so on up to the primary standard and the SI unit. This ensures that the value of the measuring instrument in use agrees with the measurement value of the primary standard. Traceability of measurement thus also serves as a guarantee of the correctness of the value of the measuring instrument used [11].
For all measuring instruments used, it must be ensured that their traceability chains are unbroken. This is especially important when the measuring instruments concern strategic matters, such as instruments that affect the lives of many people. To guarantee the traceability of a measurement, the instrument used must be calibrated. The calibration process is intended to determine whether the correction associated with the measuring instrument is still within the allowed performance limits. This performance limit is known as the tolerance or maximum permissible error (MPE). Calibration is carried out by comparing the instrument directly to a measurement standard that already has a calibration certificate. The output of a calibration activity is a certificate and may also include a label or sticker attached as a mark to the calibrated device [12].
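To make the tolerance test concrete, the snippet below sketches the MPE decision rule described above; the instrument readings, reference value and MPE figure are invented for illustration.

```python
# Hypothetical calibration of a thermometer against a certified reference.
reference_value = 25.00           # certified standard reading, deg C
instrument_readings = [25.04, 25.06, 25.05]
mpe = 0.10                        # maximum permissible error, deg C

mean_reading = sum(instrument_readings) / len(instrument_readings)
correction = reference_value - mean_reading  # correction to apply to the instrument

# The instrument passes if the magnitude of its error stays within the MPE.
within_tolerance = abs(mean_reading - reference_value) <= mpe
print(f"correction = {correction:+.3f} C, within MPE: {within_tolerance}")
```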
If the process of guaranteeing the traceability of a measuring instrument is flawed, then the values assigned by that standard are certainly also compromised. Therefore, the task of ensuring traceability is very important, especially with regard to primary quantities, whether the standards are owned by a calibration laboratory or, even more so, by a national metrology institute such as the LIPI Metrology Research Center.
Conclusion
All measuring instruments used must be ensured to have unbroken traceability chains. This is especially important when the measuring instruments concern strategic matters, such as instruments that affect the lives of many people. To guarantee the traceability of a measurement, the instrument used must be calibrated. The calibration process is intended to determine whether the correction associated with the measuring instrument is still within the allowed performance limits. | 2019-07-19T20:04:04.150Z | 2019-06-13T00:00:00.000 | {
"year": 2019,
"sha1": "1961f3f5d14d57c7442d8f4b5827dacc82bbba04",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/543/1/012048",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "794f5319a2075b2dbf323b884846f9d5ec238173",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
217068288 | pes2o/s2orc | v3-fos-license | Long-term orbit dynamics viewed through the yellow main component in the parameter space of a family of optimal fourth-order multiple-root finders
An analysis based on an elementary theory of plane curves is presented to locate bifurcation points from a main component in the parameter space of a family of optimal fourth-order multiple-root finders. We explore the basic dynamics of the iterative multiple-root finders under the Möbius conjugacy map on the Riemann sphere. A linear stability theory on local bifurcations is developed from the viewpoint of an arbitrarily small perturbation about the fixed point of the iterative map with a control parameter. Invariant conjugacy properties are established for the fixed point and its multiplier. The parameter spaces and dynamical planes are investigated to analyze the underlying dynamics behind the iterative map. Numerical experiments support the theory of locating bifurcation points of satellite and primitive components in the parameter space.
1. Introduction. We can formulate a dynamical system by means of any fixed rule characterizing the time-dependence of an evolving point along with its position in the relevant state(phase)-space. It is then best described by a function whose domain and codomain respectively consist of time as an independent variable and state(phase)-space as a dependent variable. The independent variable time can be measured in terms of integers, real or complex numbers. Some examples of continuous dynamical systems can be seen in differential equations, whereas other examples of discrete dynamical systems can be found in difference equations. This analysis will be limited to a discrete dynamical system whose governing equation represents a difference equation formed by an iterative method:

$$x_{n+1} = \Phi_f(x_n), \qquad (1)$$

where $\Phi_f$ is an iteration function or fixed point operator [10]. Such an iteration function can be found in finding the numerical roots of the equation $f(x) = 0$ encountered in many fields of applied sciences and engineering. Root-finding iterative numerical methods have been developed by many researchers [6,7,17,27].
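As a concrete instance of such an iteration function (chosen here for illustration only; the paper's family of multiple-root finders is introduced below), Newton's method $\Phi_f(x) = x - f(x)/f'(x)$ generates exactly this kind of discrete dynamical system:

```python
def newton_phi(f, df, x):
    """One step of the fixed-point operator Phi_f(x) = x - f(x)/f'(x)."""
    return x - f(x) / df(x)

# Orbit of x0 = 1 under Phi_f for f(x) = x^2 - 2, converging to sqrt(2)
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
x = 1.0
for n in range(6):
    x = newton_phi(f, df, x)
    print(n + 1, x)
```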
The main task of this iterative method is to ensure its convergence to the desired root by tracing the sequence $\{x_{n+1} = \Phi_f(x_n)\}_{n=0}^{\infty}$ within a prescribed error bound. Regarding the iteration index n as the discrete time variable, we keep track of this sequence to get useful information on the long-term behavior of the discrete dynamical system (1). The root-finding process for the equation $f(x) = 0$ leads us to a sequence of images of the initial guess $x_0$ under the action of $\Phi_f$: $\{x_0, \Phi_f(x_0), \Phi_f^2(x_0), \cdots, \Phi_f^n(x_0), \cdots\}$, which is regarded as a discrete dynamical system. The iteration function $\Phi_f$ is usually given by a meromorphic function whose dynamics requires a rather sophisticated analysis. According to Lemma 2.1 (to be shown later), it suffices to treat the dynamics of rational functions.
An iteration function $\Phi_f$ will be represented by a family of optimal [24] fourth-order modified Newton-like multiple-root finders [17,18,32], and their dynamical behavior [3,4,8,9,10,13,14,16,19,25,30] behind the parameter spaces as well as the dynamical planes will be pursued on the Riemann sphere $\hat{\mathbb{C}}$ [5] via the Möbius map $M(z) = \frac{z-b}{z-a}$, $(a \neq b)$ [9], under which the iteration applied to $f(z) = (z-a)^m (z-b)^m$ is conjugated into the form (2), where $L_f: \mathbb{C} \to \mathbb{C}$ is analytic [1] in a neighborhood of 0 and s is taken on the principal analytic branch of the m-th root. Evidently, $\Phi_f(x_n)$ will be the right side of the last equation of (2). We are interested in the specific form of $L_f$ with a control parameter $\lambda \in \mathbb{C}$:

$$L_f(s) = \frac{1 + (\lambda+1)s + (\lambda+2)s^2}{1 + \lambda s}. \qquad (3)$$

Subsequent analyses and developments will be treated in five additional sections. Described in Section 2 are preliminary studies on the long-term behavior of a dynamical system defined on $\hat{\mathbb{C}}$. Section 3 fully discusses a linear stability theory based on the analysis of a small perturbation about the fixed point of the iterative map (2) under the Möbius conjugacy, and classifies the local bifurcations into three types according to the location of the spectral radius of the conjugated iterative map along the unit circle. In Section 4, we describe some properties of the fixed and critical points related to the dynamics under the Möbius conjugacy map. Section 5 investigates the long-term dynamical behavior of the conjugated iterative map. Parameter spaces and dynamical planes are defined and extensively explored, as well as visually illustrated with a number of self-explanatory figures. Besides, we develop a theory on locating bifurcation points budding from another component (maximal connected set) [1] in the parameter space, with both theoretical and numerical methods, from a viewpoint of plane geometry involving simple 2-dimensional curves. A numerical algorithm is also established to find approximate bifurcation points under consideration. Finally, in Section 6, we draw an overall conclusion and state the future work on the analysis of the long-term dynamical behavior of other parameter-controlled iterative maps.
2. Preliminary studies. Lemma 2.1. Meromorphic functions on the Riemann sphere $\hat{\mathbb{C}}$ can be represented by rational functions.
Proof. Note that a meromorphic function on $\hat{\mathbb{C}}$ is analytic (holomorphic) except at a finite number of poles. Let f be meromorphic on $\hat{\mathbb{C}}$ and $z_1, z_2, \cdots, z_n \in \mathbb{C}$ be its poles with their corresponding integer multiplicities $d_1, d_2, \cdots, d_n$. Then we construct $p(z) = f(z) \prod_{i=1}^{n} (z-z_i)^{d_i}$, which has no poles in $\mathbb{C}$ and perhaps has at most a pole at $z = \infty$. Hence p must reduce to a polynomial, from which we have $f = p \big/ \prod_{i=1}^{n} (z-z_i)^{d_i}$ as a rational function.
To discuss the orbit dynamics between two functions, we introduce the definition of conjugacy and a relevant theorem.
Definition 2.2. Let F : X → X and G : Y → Y be two maps. If there exists an isomorphism h : Y → X satisfying $F \circ h = h \circ G$, then F and G are said to be conjugate, and the isomorphism h is called a conjugacy [5].
Theorem 2.3. Let F and G be conjugate under a conjugacy h as in Definition 2.2, and let ξ ∈ Y be a fixed point of G. Then the following hold: (a) h(ξ) is a fixed point of F; (b) if h is a diffeomorphism, then $F'(h(\xi)) = G'(\xi)$.

Proof. (a) If $G(\xi) = \xi$, then $F(h(\xi)) = h(G(\xi)) = h(\xi)$. The converse is also true. Note that h(ξ) is also a fixed point of F under the topological conjugacy h. (b) Differentiating both sides of $F \circ h = h \circ G$ with respect to any z ∈ Y, we find its evaluation at ξ: $F'(h(\xi)) \cdot h'(\xi) = h'(\xi) \cdot G'(\xi)$, from which $F'(h(\xi)) = G'(\xi)$ holds true due to the fact that $h'(\xi) \neq 0$ in view of the diffeomorphism h.

Remark 1. In view of Theorem 2.3, the conjugacy h preserves the dynamical behavior between F and G, possessing the invariance properties of the fixed point and its multiplier under the conjugacy. Furthermore, we find $F^n \circ h = h \circ G^n$ for any n ∈ N. Hence the topological conjugacy h gives an isomorphism (one-to-one correspondence) between the orbits of F and G, stating that the two orbits behave similarly under h.
The following lemma is useful to the development of linear stability theory to be discussed in the next section.
Lemma 2.4. Let $a_n = (a_n^{(1)}, a_n^{(2)}, \cdots, a_n^{(r)}) \in \mathbb{C}^r$ with r ∈ N and n ∈ N ∪ {0}. Further let $\{a_n\}_0^\infty$ be a sequence satisfying the following recursive relations among its components with a given ω ∈ C:

$$a_{n+1}^{(i)} = \omega\, a_n^{(i)} + \mu\, a_n^{(i+1)} \ (1 \le i \le r-1), \qquad a_{n+1}^{(r)} = \omega\, a_n^{(r)}, \qquad (4)$$

where μ = 0 if r = 1 and μ = 1 if r ≥ 2, and $a_0 \neq 0$ is assumed. Then the following hold:

Proof. (i) For r = 1, (4) reduces to a single scalar equation of the form $a_{n+1}^{(1)} = \omega\, a_n^{(1)}$. After omitting the superscript (1), we find the relation $|a_n| = |\omega|^n |a_0|$, which easily proves the corresponding assertion due to $a_0 \neq 0$.

(ii) For r ≥ 2, (4) can be written as a vector equation of the form $a_{n+1} = B\, a_n$, where B is an r × r matrix given by:

$$B = \begin{pmatrix} \omega & 1 & & \\ & \omega & \ddots & \\ & & \ddots & 1 \\ & & & \omega \end{pmatrix}, \qquad (5)$$

Hence we find the resulting relation $a_n = B^n a_0$, where $B^n$ is expressed as:

$$(B^n)_{ij} = \binom{n}{j-i}\, \omega^{\,n-(j-i)} \ \text{for } j \ge i, \qquad (B^n)_{ij} = 0 \ \text{for } j < i, \qquad (6)$$

whose derivation can be done without difficulty by mathematical induction on n ∈ N. We now proceed with the proof of (ii) as follows. Close inspection of a general upper off-diagonal entry in (6) leads us to the desired assertion.

Remark 2. The result of (i) implies that $\{a_n\}_0^\infty$ is bounded if and only if |ω| = 1. According to the Bolzano-Weierstrass Theorem [1], there exists a convergent subsequence of $\{a_n\}$, which will exhibit a complicated limit behavior including a possible bounded oscillation.
Lemma 2.5. Let Ω ⊂ C and f : Ω → Ω be analytic. Further, let f have a fixed point ξ ∈ Ω with $|f'(\xi)| < 1$. Then f has a unique fixed point ξ and the sequence $\{z_{n+1} = f(z_n)\}_0^\infty$ converges to ξ, provided that any $z_0 \in \Omega$ is given. Proof. The property of being analytic at ξ allows us to write $z_{n+1} - \xi = f(z_n) - f(\xi) = (f'(\xi) + \eta)(z_n - \xi)$, where η → 0 as $z_n \to \xi$. We now choose a sufficiently large positive integer N and a positive constant β such that $|f'(\xi)| + |\eta| = \beta < 1$ whenever n ≥ N. Then for any k ∈ N ∪ {0}, $|z_{N+k+1} - \xi| \le \beta\,|z_{N+k} - \xi| \le \cdots \le \beta^{\,k+1} |z_N - \xi|$, from which we obtain $\lim_{k\to\infty} |z_{N+k+1} - \xi| = 0$, i.e., $\lim_{n\to\infty} z_n = \xi$. Suppose that μ ≠ ξ is another fixed point satisfying $\lim_{n\to\infty} z_n = \mu$. Since a convergent sequence must have a unique limit, μ = ξ, contradicting the hypothesis. Hence the fixed point ξ is unique.
In Lemma 2.5, $z_0$ can be chosen as a critical point $z^*$ of f, due to the fact that $f'(z^*) = 0$ implies $z^* \in \Omega$. As a result, Lemma 2.5 with $z_0 = z^*$ yields the following corollary.
Corollary 1. Let ξ be the fixed point described in Lemma 2.5. Then every critical orbit (which is an orbit of a critical point) of f tends to ξ.
The notions stated in the following definition [22,23,29] are useful to describe the qualitative behavior of a dynamical system. Definition 2.6. Let $z_0$ be in the domain of f. The orbit of $z_0$ is defined to be the sequence $\{f^k(z_0)\}_{k=0}^\infty$ with $f^k$ as the k-fold composite map of f. Then we say the following: (a) If $f^k(z_0) = z_0$ for a smallest integer k ∈ N, then $z_0$ is called a period-k point; in particular, a period-1 point is a fixed point. (b) If $z_0$ is a period-k point (or k-periodic point), then the orbit of $z_0$ is called a k-periodic orbit (or k-cycle). (c) If $z_0$ is a point some iterate of which is periodic, i.e., if there exists an integer $1 \le \ell \le k-2$ satisfying $f^k(z_0) = f^{k-\ell}(z_0)$, then $z_0$ is called an eventually periodic (or a pre-periodic) point. (d) If the orbit of $z_0$ contains a subsequence converging to a stable periodic point, then $z_0$ is called an asymptotically periodic point. (e) If $z_0$ is not of types (a), (c), (d), then $z_0$ is called an aperiodic (or a non-periodic) point. The orbit of a non-periodic point is said to be "non-periodic, stochastic or chaotic".
3. Linear stability theory and local bifurcations. We first let a discrete dynamical system F : $\mathbb{C}^m \times \mathbb{C} \to \mathbb{C}^m$ with m ∈ N be characterized by an iterative scheme:

$$z_{n+1} = F(z_n, \lambda), \qquad (8)$$

where λ ∈ C is a control parameter and $z_0 \in \mathbb{C}^m$ is given. Under the assumption that $\xi \in \mathbb{C}^m$ is a fixed point of F, we excite an arbitrarily small perturbation about ξ as described by

$$z_n = \xi + \delta_n, \qquad (9)$$

where the initial perturbation $\delta_0 \neq 0$ is arbitrary and small. For a fixed λ, Taylor expansion of (9) about ξ up to the first-order term in $\delta_n$ gives us the recurrence relation:

$$\delta_{n+1} = S\, \delta_n, \qquad (10)$$

with S = S(ξ, λ) as an m × m Jacobian matrix evaluated at ξ. Consequently, we are allowed to discuss the linear stability about the fixed point ξ = F(ξ, λ) for a fixed λ by considering the iterative scheme:

$$\delta_{n+1} = S(\xi, \lambda)\, \delta_n. \qquad (12)$$

Transforming (12) by means of an m × m nonsingular matrix P, we obtain the transformed scheme

$$b_{n+1} = J\, b_n, \qquad (13)$$

where $b_n = P^{-1} \delta_n$, and $J = P^{-1} S P$ is the Jordan canonical form [28] of S with k Jordan blocks $J_1, J_2, \cdots, J_k$. A typical $J_i$ is given by an $r_i \times r_i$ upper triangular matrix with $\omega_i$'s as all diagonal entries and 1's as all super-diagonal entries, as shown below:

$$J_i = \begin{pmatrix} \omega_i & 1 & & \\ & \omega_i & \ddots & \\ & & \ddots & 1 \\ & & & \omega_i \end{pmatrix},$$

where $\omega_i$ is an eigenvalue of S. Without loss of generality, let $J_i$ be chosen such that $|\omega_i| = \rho(S)$, which is the spectral radius of S. For simplicity, we denote $J_i$ by $\tilde{J}$, $\omega_i$ by ω and $r_i$ by r. The limit behavior of the perturbation $\delta_n = P b_n$ will be best described by analyzing a subsystem of the form:

$$\tilde{b}_{n+1} = \tilde{J}\, \tilde{b}_n, \qquad (14)$$

where $\tilde{b}_n \in \mathbb{C}^r$ consists of the corresponding r components of $b_n$ related with $\tilde{J}$. After writing out component relations for (14) and identifying $\tilde{b}_n$ and $\tilde{J}$ respectively with $a_n$ and B, we apply Lemma 2.4 in the case $a_n = \delta_n$ to obtain Corollary 2. According to Corollary 2, the fixed point behavior or long-term orbit behavior of the iterative map F is stable if and only if ρ(S) < 1 (the modulus of all eigenvalues of S is less than 1), and unstable if and only if ρ(S) ≥ 1 (the modulus of some eigenvalue of S is at least 1, with the equality holding when some eigenvalues are repeated). When all eigenvalues of S are simple, according to the Bolzano-Weierstrass Theorem, the result of (i) indicates that there exists a convergent subsequence of $\{\delta_n\}$, which will induce k-periodic orbits of F as well as its non-periodic bounded orbits, if and only if ρ(S) = 1. Since ρ(S) around the unit circle plays a significant role in stability analysis, we designate the unit circle as the stability unit circle, shown in Figure 1, for further analysis.
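As a small numerical illustration of the criterion ρ(S) < 1 (the Jacobian below is an arbitrary example, not one drawn from the paper):

```python
import numpy as np

def spectral_radius(S: np.ndarray) -> float:
    """Largest modulus among the eigenvalues of the Jacobian S."""
    return max(abs(w) for w in np.linalg.eigvals(S))

# Arbitrary 2x2 Jacobian of a map evaluated at a fixed point
S = np.array([[0.4, 1.0],
              [0.0, 0.9]])
rho = spectral_radius(S)
print(f"rho(S) = {rho:.3f} ->", "stable" if rho < 1 else "not stable")
```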
The geometrical properties in the parameter space when ρ(S) = 1 strongly influence the long-term orbit behavior of F, which often causes an abrupt qualitative change when an eigenvalue ω (or Floquet multiplier [21]) of S with the maximum modulus crosses a certain location ω* of the stability unit circle. This kind of qualitative change in the behavior of a dynamical system is called a bifurcation. We classify such bifurcations [26] in the space of control parameters into three types according to the location of ω* along the stability unit circle as follows: (1) The (cyclic) fold (saddle-node) bifurcation occurs when ω* = 1. (2) The flip (period-doubling) bifurcation occurs when ω* = −1. (3) The Neimark-Sacker bifurcation occurs when ω* is a non-real point of the stability unit circle.
The location of the control parameter in the parameter space where a qualitative change in dynamical behavior occurs is called a bifurcation point. After solving the relation $|\omega| = \rho(S(\xi, \lambda)) = 1$ for λ in terms of ω, we can trace the control parameter λ as ω varies along the stability unit circle, from which the bifurcation point λ* in the parameter space can be given by λ(ω*) based on the types of bifurcation mentioned above.

4. Fixed and critical points under the Möbius conjugacy. With $L_f$ given by (3), we will discuss the relevant properties behind fixed and critical points as the control parameter λ varies in the finite complex plane, under the Möbius conjugacy map M(z) [9]. For convenience of analysis, after applying M(z) to $f(z) = ((z-a)(z-b))^m$, we find the resulting J:

4.1. Fixed points and stability. Let

$$J(z; \lambda) = \frac{\Theta_N(z)}{\Theta_D(z)}, \qquad (15)$$

with $\Theta_N$ and $\Theta_D$ as polynomials whose coefficients are generally dependent upon the parameters m, a, b, λ. The conjugacy M(z) enables us to find that all the coefficients of both $\Theta_N$ and $\Theta_D$ are free from the parameters m, a, b. Accordingly, we can simplify (15) into the form (16). In $\hat{\mathbb{C}}$, we allow special treatment of the arithmetic operations 1/0 = ∞ and 1/∞ = 0, as well as $\bar{\infty} = \infty$ for its conjugate. We state the following lemma to efficiently treat the fixed and critical points of J(z; λ).
Lemma 4.1. The relation $J\!\left(\frac{1}{z}; \lambda\right) = \frac{1}{J(z; \lambda)}$ holds for any λ ∈ C and any z ∈ $\hat{\mathbb{C}}$.

Proof. We find that J is conjugate to itself via the conjugacy h(z) = 1/z for any z ∈ $\hat{\mathbb{C}}$ and λ ∈ C, completing the proof.
The underlying dynamics behind the iterative map (16) will be initiated by investigating the fixed points of J and their stability. In view of Theorem 2.3-(a) and (b), we express J(z; λ) − z in the factored form (17), whose numerator contains the factors z, (z − 1) and T(z; λ), where $T(z; \lambda) = 1 + z(5+\lambda) + z^2(10+3\lambda) + z^3(5+\lambda) + z^4$ and $q(z) = 1 + z(4+\lambda) + z^2(5+2\lambda)$. The numerator of the last equation of (17) immediately gives us the fact that z = 0 and z = ∞ are two λ-free fixed points, which have little impact on the dynamics since their orbits simply tend to themselves. Fixed points other than {0, ∞} are called strange fixed points [12], which are not related to the roots of p(z). Clearly z = 1 is a strange fixed point which may influence the relevant dynamics. To locate other strange fixed points dependent on λ, we seek the roots of T(z; λ) = 0 in (17) for given values of λ. The following lemma can be easily proved by means of simple computations.
Lemma 4.2. The relation $z^4\, T\!\left(\frac{1}{z}; \lambda\right) = T(z; \lambda)$ holds for any λ ∈ C and any z ∈ C.
Corollary 3. Let J be given by (16). If $\xi \in \hat{\mathbb{C}}$ is any fixed point of J, then so is $1/\xi$. Proof. Let X = Y = $\hat{\mathbb{C}}$, F = G = J in Theorem 2.3 and h(z) = 1/z. Then by Theorem 2.3-(a), the proof is complete.
We pay special attention to additional strange fixed points including z = 1 (corresponding to the original convergence to infinity, in view of the fact that $M^{-1}(1) = \infty$ or M(∞) = 1). Direct computation enables us to locate the strange fixed points from the roots of J(z; λ) = z in terms of λ, whereas their stability is described in the following two Subsections 4.1.1 and 4.1.2. 4.1.1. Fixed points. It would be worthwhile to resolve the inquiries: (i) For what values of λ can T(z; λ) and q(z) possess common divisors? (ii) Is it possible for either T or q to contain a divisor (z − 1)? (iii) What are the strange fixed points? Such inquiries are resolved by Theorem 4.3, whose proof is elementary and hence omitted. (b) Due to the fact that T and q have possible factors of (z − 1) for some special λ-values, we find that (z − 1) divides T(z; λ) if λ = −22/5 and divides q(z) if λ = −10/3 (since T(1; λ) = 22 + 5λ and q(1) = 10 + 3λ).
If z are the four roots of T(z; λ) = 0 for any given λ, then so are 1/z, in view of Corollary 3. Indeed, these four λ-dependent fixed points will recover the special fixed points in (18).
4.1.2. Stability of the fixed points. The derivative of J from (16), which characterizes the stability of the fixed points, is given by (19), where $Q(z; \lambda) = 10 + 4\lambda + z(20 + 14\lambda + 3\lambda^2) + 2z^2(5 + 2\lambda)$. Observe that the fixed points 0 and ∞, being related to the roots a and b of the polynomial $p(z) = (z-a)^m (z-b)^m$, are super-attractive and become critical points of J(z; λ) in view of (19). The multiplier $J'(z; \lambda) = 0$ at z = ∞ is found from the analyticity in a neighborhood of ∞ on the Riemann sphere in view of Corollary 4 (to be shown later). It is not difficult to prove the following lemma by means of a simple algebraic substitution.
Lemma 4.4. The relation $z^2\, Q\!\left(\frac{1}{z}; \lambda\right) = Q(z; \lambda)$ holds for any λ ∈ C and any z ∈ C.
Similar treatment done in Theorem 4.3 successfully leads us to the following theorem. (b) Due to the fact that Q(z; λ) and q(z) have possible factors of $z^3 (z+1)^2$ for some special λ-values, we find the corresponding factorizations for such λ. (c) For any λ ∈ C, the desired stability of a λ-dependent fixed point can be determined graphically or analytically by varying the parameter λ with the aid of Corollary 4. Corollary 4. Let $\xi \in \hat{\mathbb{C}}$ be any fixed point of J defined in (16). Then the following holds: Proof. (a) With the aid of (19), we get $J'(1; \lambda) = \frac{8(4+\lambda)}{10+3\lambda}$ without difficulty. We seek λ satisfying $|J'(1; \lambda)| = 1$ to give the required relation for a parabolic point. It is then not complicated to prove the remaining part. Figure 2 illustrates that S is the circular boundary between M and Γ, with radius 16/55 and center at (−226/55, 0). We appropriately call S the stability circle, taking into account the fact that the fixed point z = 1 with λ inside or outside of S becomes attractive or repulsive, respectively. The yellow-shaded region M is the bounded component inside S, whereas the non-shaded region Γ is the unbounded component outside S.
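A quick numerical check of this boundary, using only the multiplier $J'(1;\lambda) = 8(4+\lambda)/(10+3\lambda)$ quoted above, is sketched below: sampling λ on the circle of radius 16/55 centered at −226/55 should give $|J'(1;\lambda)| = 1$.

```python
import cmath

def multiplier(lam: complex) -> complex:
    """Multiplier J'(1; lambda) of the strange fixed point z = 1."""
    return 8 * (4 + lam) / (10 + 3 * lam)

center, radius = -226 / 55, 16 / 55
# Sample lambda on the stability circle S: |J'(1; lambda)| should equal 1
for k in range(4):
    lam = center + radius * cmath.exp(2j * cmath.pi * k / 4)
    print(f"lambda = {lam:.4f}  |J'(1;lambda)| = {abs(multiplier(lam)):.12f}")
```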
In view of Corollary 4, T (z; λ) can be factored out with four linear factors as described in the following lemma, whose proof is easily made with the help of Mathematica [31] symbolic capability.
To find the super-attractors of the conjugated map J(z; λ) for some values of λ, we need to simultaneously solve T(z; λ) = Q(z; λ) = 0 for z and λ. Eliminating λ in T(z; λ) = Q(z; λ) = 0 easily gives us a degree-8 polynomial in z. The result of Corollary 4 requires only 4 super-attractors for each λ. Indeed, by a numerical technique, we list only 4 pairs of super-attractors in the following lemma.
4.2. Critical points and free critical points. The roots of $J'(z; \lambda) = 0$ are said to be the critical points of the iterative map (16). At first glance at (19), we find that z = −1 is a critical point for any λ ∈ C. Observe that z = 0 and z = ∞ are critical points associated with the roots a and b of the polynomial $(z-a)^m (z-b)^m$, respectively. If critical points are not related to any roots of the polynomial $(z-a)^m (z-b)^m$, then they are called free critical points [12]. Moreover, we are interested in exploring the dynamics of strange fixed points under the iterative map J(z; λ) related to free critical points.
5.1. Parameter spaces and dynamical planes. Our further investigation of some properties of the relevant complex dynamics of the conjugated map J(z; λ) given by (16) requires the notions of parameter spaces and dynamical planes. It is essential to effectively construct parameter spaces and dynamical planes for the analysis of the long-term behavior of J(z; λ). A useful way of constructing parameter spaces is to take a critical point and generate its orbit under the action of J(z; λ), which will induce attracting periodic orbits for each given λ according to Corollary 1. The orbit behavior of the two critical points z and 1/z of J is best described in Corollary 7 and Remark 4.
Corollary 7. Let z ∈ $\hat{\mathbb{C}}$ and q ∈ N. If z is a q-periodic point of J, then so is 1/z. Proof. In view of Lemma 4.1 and Remark 1, we find $J^q\!\left(\frac{1}{z}; \lambda\right) = \frac{1}{J^q(z; \lambda)}$, completing the proof.
Remark 4. Corollary 7 and Eqn. (22) claim the relation $J^q(\zeta_2; \lambda) = \frac{1}{J^q(\zeta_1; \lambda)}$ for any integer q ≥ 1 and λ. Let the orbit of $\zeta_1$ approach a q-periodic point of J. Then the orbit of $\zeta_2$ approaches the q-periodic point $1/\zeta_1$ due to Corollary 7. Therefore, the orbit of $\zeta_2$ behaves quite similarly to that of $\zeta_1$, as follows: (1) If the orbit of $\zeta_1$ converges to a q-cycle, then so does the orbit of $\zeta_2$.
(2) If the orbit of ζ 1 is divergent but bounded, then so is the orbit of ζ 2 .
(3) If the orbit of $\zeta_1$ converges to ∞, then the orbit of $\zeta_2$ converges to 0, and vice versa.
According to Remark 4, we explore only one branch of the critical points, $\zeta_1$, for its typical orbit behavior. We introduce two terms, the parameter space (plane) P and the dynamical plane D, to describe the underlying dynamics of the iterative map J(z; λ): P = {λ ∈ C : an orbit of a free critical point z(λ) under J(z; λ) tends to $\sigma_p \in \hat{\mathbb{C}}$}, D = {z ∈ C : an orbit of z(λ) under J(z; λ) for a given λ ∈ P tends to $\sigma_d \in \hat{\mathbb{C}}$}.

There exists a finite periodic orbit when $\sigma_p$ or $\sigma_d$ reduces to a finite constant. If no such $\sigma_p$ or $\sigma_d$ exists, then the orbit is non-periodic but bounded, or approaches infinity. By definition, D represents a union of attractor basins. Note that z = ∞ is treated as a fixed point on $\hat{\mathbb{C}}$. For any λ selected in some component of P, one can find members of the proposed family (2) exhibiting similar dynamical orbit behavior in D. The values of λ contained in an appropriate component of P play a role in selecting the best members of family (2) to locate the desired roots z = 0 or z = ∞ with acceptable numerical stability. Figure 6(a) illustrates a parameter space P for the critical point $\zeta_1(\lambda)$. A point λ ∈ P is painted by the coloring scheme in Table 1. Any point λ ∈ P which is colored neither cyan (root z = a) nor magenta (root z = b) is not a good choice of λ for convergence behavior. According to a simple computation with λ = 0, we find that the orbit of $J(\zeta_1; 0)$ approaches ∞ and the orbit of $J(\zeta_2; 0)$ approaches 0, since ∞ and 0 are the images of the roots of $f(z) = (z-a)^m (z-b)^m$.
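The construction of P described above amounts to an escape/convergence coloring over a grid of λ values. A bare-bones sketch follows; `J` and `zeta1` are placeholders for the conjugated map (16) and its free critical point, whose closed forms are not reproduced here, and the iteration budget and thresholds are assumed values.

```python
import numpy as np

def classify(J, zeta1, lam, max_iter=200, tol=1e-8, big=1e8):
    """Follow the free critical orbit under J(.; lam) and report its fate."""
    z = zeta1(lam)
    for _ in range(max_iter):
        z = J(z, lam)
        if abs(z) < tol:   # converged to the root mapped to 0 (cyan)
            return 0
        if abs(z) > big:   # converged to the root mapped to infinity (magenta)
            return 1
    return 2               # other behavior: strange fixed point, cycle, chaos

def parameter_space(J, zeta1, re=(-5.0, -3.0), im=(-1.0, 1.0), n=400):
    """Color a rectangular grid of lambda values by the fate of the orbit."""
    grid = np.empty((n, n), dtype=np.uint8)
    for i, y in enumerate(np.linspace(*im, n)):
        for j, x in enumerate(np.linspace(*re, n)):
            grid[i, j] = classify(J, zeta1, complex(x, y))
    return grid  # e.g. render with matplotlib's imshow
```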
A closer examination confirms that the space P for the critical point $\zeta_1$ consists of a union of simply connected components, including the components $A_m$ = {λ : the critical orbit of $z_{cr}$ under the map J($z_{cr}$; λ) tends to a fixed point m} with m ∈ {0, 1, ∞}. (In the coloring scheme of Table 1, black with q = 0 implies a non-periodic but bounded orbit; the 84 colors are explicitly illustrated in Figure 5.) In view of Remark 4, the λ-parameter spaces for both $\zeta_1$ and $\zeta_2$ have the same components and boundaries, in a neighborhood of which different anomalies occur. It is clear that the component associated with fixed point ∞ shares its boundary with the component associated with fixed point 0. The best members of the family can be found with λ selected in proper components of P for the critical point $\zeta_1(\lambda)$. The dynamical plane D is a set of starting points whose orbits under J(z; λ) tend to a value in $\hat{\mathbb{C}}$ for a selected λ ∈ P. While tracing an orbit of a λ-dependent critical point, we may encounter components of P containing a λ-parameter for which the corresponding dynamical plane D exhibits attractors 0 or ∞.
Lemma 5.1. Let F : $\mathbb{C}^2 \to \mathbb{C}$ be defined by $F(\lambda, z) = \lambda \cdot P_1(z) + P_2(z)$, where $P_1(z)$ and $P_2(z)$ are complex polynomials with real coefficients. Suppose F(λ, z) = 0, and let $\bar{z}$ denote the complex conjugate of z. Then the following hold; see the proof of Lemma 3.5 in [20].
Theorem 5.2. Let z(λ) be a free critical point of J(z; λ) given by a root of Q(z; λ) = 0 in (19). Then parameter space P is symmetric about its horizontal axis.
Proof. If $J^q(z; \lambda)$ tends to $\sigma_p \in \hat{\mathbb{C}}$ for a given q ∈ N ∪ {0}, then by induction on q we find $\overline{J^q(z; \lambda)} = J^q(\bar{z}; \bar{\lambda})$, which states that $J^q(\bar{z}; \bar{\lambda})$ tends to $\bar{\sigma}_p$. Hence the two points λ and $\bar{\lambda}$ are painted with the same color $C_q$, proving that P related to J(z; λ) is symmetric about its horizontal axis.
Theorem 5.3. For a given λ ∈ R, let z(λ) be a starting point of iterative map J(z; λ). Then dynamical plane D is symmetric about its horizontal axis.
Judging from Figures 6-8 of P and D, we find members of the λ-dependent family exhibiting delicate dynamical orbit behavior. Figure 6 displays P and its components containing λ-parameters for which q-periodic orbits are generated with q ∈ N. In Figures 6(b)-6(c), we can clearly see period-1 components (one in red, the other in yellow) and period-2 components (in orange). Typical q-periodic components are identified by arrow lines. Period-q components in Figure 6(b) are born along the boundary of the period-1 red component in the manner of the Farey sequence [2] based on Schleicher's algorithm [15]. Figure 7 illustrates some magnified q-periodic components of P with bifurcation points (to be described in Section 5.2). Figure 6(c) shows period-q components emerging from the boundary of the stability circle S.

[Figure 6(a): Parameter space P.]

In Figure 8, we observe D for members of the family (2) with regions of starting values $z_0$ whose orbits under J($z_0$; λ) converge to some of the strange fixed points as well as other k-periodic cycles for k ∈ {2, 3, 4, 5, 6, 7, 8, 9, 14}. In view of the above-mentioned D, there exist regions of starting values $z_0$ whose orbits under J($z_0$; λ) converge to period-q points for q ∈ {2, 3, · · · }. Besides, each complement of such D contains a union of basins of the fixed points in {0, 1, ∞}.
5.2. Bifurcation points in parameter space P. A component H in P is said to be hyperbolic if the orbit of z under J(z; λ) has a finite attracting cycle for each given λ ∈ H; the word 'hyperbolic' comes from the meaning of 'hyperbolicity' [11] of a rational function, not from that of a fixed point of a dynamical system. We find that H is an open set with infinitely many connected components, each of which is characterized by the period of the corresponding cycle. For ease of description, the boundary point at which a new component buds from its parent is called the root point [29] of H. If we view this kind of budding phenomenon from the side of W, then the root point ∈ W can be regarded as a bifurcation point of W where it splits into two components, which is the meaning of the word bifurcation. To locate such bifurcation points λ ∈ P, we define a k-periodic component $H_k$ = {λ ∈ C : J(z; λ) has an attracting k-cycle} as a subset of H. For notational convenience in the subsequent discussion, we write J(z; λ) as $J_\lambda(z)$.

[Figure 9: Typical geometries for primitive and satellite components.]

It is worth noting that $H_1$ plays the role of a main component from which finite-periodic components are born. A choice of λ-dependent critical points ζ(λ) found from Q(ζ; λ) = 0 would produce the parameter space P whose λ-value leads us to a long-term dynamical behavior with possible periodic, non-periodic or chaotic orbits. We further characterize $H_1$ with the fixed point ξ = 1 or a λ-dependent strange fixed point ξ = ξ(λ) found from T(ξ; λ) = 0 in (17). The current analysis considers $H_1$ only in relation to the strange fixed point ξ = 1. We name such $H_1$ the yellow main component, which turns out to be exactly M described in Remark 3. For ease of subsequent analysis, let us call M the main component.
5.2.1. Exact bifurcation points.
Observing the parameter space P in Figure 6, we should pay attention to the various components of finite periods budding from the boundary S of M. We denote the finite boundary of M by ∂M and select a parameter λ ∈ ∂M. Let q ∈ N be given. If an attracting period-q component $H_q$ arises at λ, then the point λ is called the period-q bifurcation point. If M is itself a primitive component, then the bifurcation point λ turns out to be a cusp point. We now consider two components M and $H_q$ (budding from M) osculating at the common boundary point λ, as shown in Figure 9. Then we have the following relations for λ ∈ ∂M ∩ ∂$H_q$ and a fixed point ξ ∈ $\hat{\mathbb{C}}$:

$$J_\lambda(\xi) = \xi, \qquad |\mu| = 1, \qquad (24)$$

with $\mu = J'_\lambda(\xi)$. Let $\xi = |\xi| e^{i\theta}$ be a parametric representation for θ ∈ [0, 2π). We will show that $\mu^q = 1$ holds at the λ where M and $H_q$ share the common tangent line.
We solve for λ from the relation $J_\lambda(\xi(\theta)) = \xi(\theta)$ to obtain $\lambda = F(\xi(\theta))$, where F(z) is a rational function in z in view of (23) and differentiable everywhere except at a finite number of poles. This λ will trace a curve in the complex plane as the parameter θ varies. As a result, differentiating along this curve, we find the derivative relation (25) at the fixed point ξ, from which we have $\mu^q = 1$ using the relation $J^q_\lambda(\xi) = \xi$. Hence, Eqn. (25) reduces to (27). Since dλ/dθ represents a tangent line in the direction of θ, Eqn. (27) describes the common tangent line of M and $H_q$. The preceding discussion thus far enables us to introduce the notion of an ℓ/q-bifurcation point or ℓ/q-root point.
Typical values of λ for 1 ≤ q ≤ 10 are given in Table 2. The type of the 0/1-bifurcation point is fold, while the type of the 1/2-bifurcation point is flip. The type of all other ℓ/q-bifurcation points is Neimark-Sacker. Note that 1/q-bifurcation points are positioned clockwise monotonically along the lower half-circle of S as q ≥ 2 increases. On the other hand, ℓ/q-bifurcation points are positioned counterclockwise monotonically along the full circle of S as ℓ increases for a fixed q ≥ 2.
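A sketch reproducing such ℓ/q-bifurcation points numerically: setting the multiplier $\mu = J'(1;\lambda) = 8(4+\lambda)/(10+3\lambda)$ (Corollary 4) equal to the root of unity $\omega = e^{2\pi i \ell/q}$ and solving for λ gives $\lambda = (10\omega - 32)/(8 - 3\omega)$. This closed form is our own elementary rearrangement of the multiplier equation, not a formula quoted from Table 2.

```python
import cmath

def bifurcation_lambda(ell: int, q: int) -> complex:
    """lambda at which the multiplier of z = 1 equals exp(2*pi*i*ell/q)."""
    w = cmath.exp(2j * cmath.pi * ell / q)
    # Solve 8*(4 + lam)/(10 + 3*lam) = w for lam
    return (10 * w - 32) / (8 - 3 * w)

print(bifurcation_lambda(0, 1))   # fold point:  -22/5 = -4.4
print(bifurcation_lambda(1, 2))   # flip point:  -42/11 ~ -3.8182
print(bifurcation_lambda(1, 3))   # a Neimark-Sacker point on S
```

Note that the first two outputs land on the leftmost and rightmost points of the stability circle S (center −226/55, radius 16/55), consistent with the geometry described above.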
5.2.2. Numerical bifurcation points. When the boundary of M is not exactly known, we resort to a numerical method to find local bifurcation points.
Let q, k ∈ N be given. Consider a period-qk satellite component $H_{qk}$ budding at λ from a period-q component $W_q$. Applying the same argument as in the preceding subsection with λ ∈ ∂$W_q$ ∩ ∂$H_{qk}$ and a q-periodic fixed point ξ(θ) ∈ $\hat{\mathbb{C}}$, we find:

$$J_\lambda^{q}(\xi) = \xi, \qquad \left.\frac{d}{dz} J_\lambda^{qk}(z)\right|_{z=\xi} = 1. \qquad (30)$$
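The system (30) can be attacked with a general-purpose root finder by splitting it into real and imaginary parts, as sketched below. This is one plausible implementation, not the paper's Algorithm 5.6; `J` is a placeholder callable for the conjugated map (16), and the derivative is approximated by a central difference.

```python
from scipy.optimize import fsolve

def iterate(J, z, lam, q):
    """Compute J^q(z; lam) by repeated composition."""
    for _ in range(q):
        z = J(z, lam)
    return z

def d_iterate(J, z, lam, q, h=1e-7):
    """Central-difference approximation of d/dz J^q(z; lam)."""
    return (iterate(J, z + h, lam, q) - iterate(J, z - h, lam, q)) / (2 * h)

def residual(v, J, q, k):
    """Real/imaginary residuals of system (30) in unknowns (lambda, xi)."""
    lam, xi = complex(v[0], v[1]), complex(v[2], v[3])
    r1 = iterate(J, xi, lam, q) - xi           # J^q_lambda(xi) = xi
    r2 = d_iterate(J, xi, lam, q * k) - 1.0    # (J^{qk}_lambda)'(xi) = 1
    return [r1.real, r1.imag, r2.real, r2.imag]

# usage sketch: v = fsolve(residual, v0, args=(J, q, k)) with a starting
# guess v0 near the suspected root point; then lam = v[0] + 1j * v[1].
```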
Since the first relation $J^q_\lambda(\xi) = \xi$ in (30) defines a bivariate rational function in λ and ξ, we can solve for λ as an analytic function of ξ(θ), except at a finite number of poles. Hence, we express $\lambda = F(J^q_\lambda(\xi(\theta)))$ implicitly with F as a bivariate rational function of λ(θ) and ξ(θ). Since $J_\lambda^{q}(\xi) = \sum_{j=0}^{n_q} \lambda^j P_j(\xi) \big/ \sum_{i=0}^{m_q} \lambda^i Q_i(\xi)$, with $P_j(\xi)$ and $Q_i(\xi)$ as polynomials in ξ for q-dependent nonnegative integers $n_q$ and $m_q$, the relation $J^q_\lambda(\xi) = \xi$ defines a polynomial in λ with its coefficients as polynomials of ξ, and hence λ can be solved as an analytic function of ξ except at the corresponding poles.

[Table 2: Typical ℓ/q-bifurcation points λ for 1 ≤ q ≤ 10 and 0 ≤ ℓ ≤ 9.]

6. Conclusion. We have studied the long-term dynamics of the iterative map (2) with a parameter-controlled family of first-order rational weight functions. A study on the dynamical orbit behavior has been treated with the introduction of the parameter spaces and the dynamical planes. The boundary of the yellow main component has been found to be a circle along which infinitely many periodic satellite components are born. We have employed the linear stability theory based on a small perturbation about the fixed point, and an elementary geometrical viewpoint through a parametric representation of the fixed point on the common tangent line of the main component and a budding component, in order to find the desired bifurcation points. The parameter spaces in Figure 6 display components containing λ-parameters for which the orbit under the iterative scheme $\Phi_f$ approaches a periodic orbit or a chaotic [22] orbit. With λ selected in the components colored magenta or cyan, $\Phi_f$ stably converges to the desired root. However, for λ selected in other components, $\Phi_f$ would exhibit unfavorable numerical behavior.
One should note that Theorem 5.5 and Algorithm 5.6 play important roles in locating exact or numerical bifurcation points. For higher values of qk, one would encounter difficulties in finding such bifurcation points, since the composition task of $J^{qk}_\lambda(z)$ increases the degree of complexity. As part of our future study, we will pursue the exact boundary equation of the red main component shown in Figure 6(b) and locate the bifurcation points where period-q components are born along the boundary. | 2020-02-20T09:10:11.419Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "c56fa54b6b70c7e80e3a6bfa03329dcb5a4f13c7",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=dc2319f0-178e-4c14-9300-c4f33727c22b",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9eb0d08ddf0f108540b7cbe49ea13bb646cfe5d3",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
229463004 | pes2o/s2orc | v3-fos-license | The Modern Generation of Teachers in the Context of the Educational Organization Life Cycle
Every day new organizations are established. At the same time, every day hundreds of organizations are liquidated forever. Those who know how to adapt are likely to thrive, but inflexible ones tend to disappear. Some organizations develop faster than others and fulfill their aims better than others. A manager should know at what stage of development the organization is and be able to assess whether the accepted leadership style corresponds to this stage. This is why the universally known concept of the life cycle of organizations, as predictable changes with a certain sequence of states over time, has been adopted. The theoretical basis consists of the results of research by Russian and international scientists in the fields of systems theory, corporate management, theories of the organization's life cycle, and crisis management. Research into human development performed by Vygotsky is also drawn upon in the current study. The aim of our study is to put into practice the approaches proposed in modern management in order to make a preschool organization efficient and effective. The results show that a mutually complementary team will deliver results if its members adhere to different approaches and the tasks of each member are clearly defined.
Introduction
An insight into any problem (a car breakdown, family discord, trouble at work) will show that a failure has occurred, and that it was caused precisely by the fact that something changed. What is required to manage this change? Adizes (2009) proposed his own theory of management: the concept of managing change and solving the problems caused by it. It also addresses the issue of management education.
The challenges, which are manifestations of disintegration as a result of change, need to be addressed. However, any decisions that heads of organizations make to meet these challenges give rise to new changes, and therefore new discrepancies leading to new problems. The purpose of any form of organizational leadership is to solve today's problems and anticipate tomorrow's. And that means change management.
The relevance of this work is due to the need for managers to pay attention to the theoretical foundations of the organization's functioning. This is important so that the development methods introduced by executives correspond to the level of natural development of the organization. The use of graphical, visual models in determining the developmental trends of the company, built on the basis of life cycle theory, makes it more likely that the internal changes of the company can be predicted. In addition, in order to improve the managerial process, the manager needs to monitor aspects of the organization life cycle theory that appear as a result of new research. Researchers have noted that the development of organizations follows a certain algorithm common to almost all types of organizations. The identification of such an algorithm allows one, with a certain degree of probability, to forecast the onset of crisis situations and to develop the tools that will most effectively neutralize their negative consequences. This becomes more relevant in a market economy, when market mechanisms intensify, old economic ties break down, and new relationships with counterparties are built.
Considering the organization as an economic system from the perspective of the stages of its life cycle, it is possible to quite precisely predict the future characteristics of the organization in order to optimize the managerial impact.
Nevertheless, the manifestation of the stages of the life cycle under the conditions of a developing market economy has been poorly studied. In addition, there are some differences in determining the directions for improving the management of organizations according to the stages of their life cycle. The main problem in this case is the difficulty of combining the approaches developed by world economic thought with the traditions of Russian research and with the difficulties of a developing modern economic reality.
Purpose and objectives of the study
The purpose of our study is to put into practice the tools proposed in modern management to make the preschool organization efficient and effective in the short and long term.
Literature review
The review of the literature and management sources reveals the material, social, and economic points of view (Blanchard, 2009; Lavizina, 1996; Piskoppel, 1994; Ros, 2005). Change creates problems, and the greater the speed and scale of change, the larger and more complex the problems. Why do changes cause problems? Because everything in the world is a system, be it a person or the solar system. Any system by definition consists of subsystems. When changes occur, the subsystems do not change synchronously: some transform faster, others more slowly. This leads to the disintegration of the system. The main idea of Adizes (2008a) is that there is no ideal leader or manager, that is, one capable of single-handedly fulfilling all the functions necessary for the effectiveness and efficiency of an organization in the short and long term. When discussing what a leader should do (based on the needs of the organization), modern management literature does not take into account that people who are able to fulfill these recommendations do not exist in nature. All books and textbooks that try to make us perfect managers or leaders proceed from the erroneous idea of the attainability of an ideal.
We are all going the wrong way and wasting millions on the education and training of leaders on the basis of untenable ideas. Classics of management theory, including Koontz (2010) and Drucker (1971), as well as current management academics such as Stephen Covey (1992) and Tom Peters (1988), describe managers as if they all use a single leadership style that can easily be taught to everyone. This overlooks the fact that different people have different approaches to organization, planning and control. Successful management is treated as a model, a template.
All these works are dedicated to what is supposed to happen. However, in reality there are many different management styles, both successful and erroneous. The number of combinations of strengths and weaknesses of a leader is infinite. A lot of books have been written on this subject, but most authors focus on behavioral models, considering them in a psychological aspect. "I am a psychologist. I am the head of an educational organization. I am not a management specialist. I am interested in how different people make decisions, exchange information, select staff and create incentives for their activities, and I am looking for ways to help them do their work more efficiently for the benefit of the organization." Adizes' theory (2008a) does not rely on psychological theories, interviews, or controlled experiments.
Planning the change process through model building provides an advantage to the organization. Firstly, modeling is a practical answer to questions regarding improving the organization's activities and increasing its competitiveness. Secondly, the head or management of an organization that has implemented one or another methodology will have information that will allow them to independently improve their organization and predict the future.
When our organization faced the choice of a development model, it took us a long time to decide which model to focus our attention on, sorting out, comparing and analyzing the options. We chose the "life cycles" model of Adizes (2008b), because his theory of organizational life cycles is similar to the periodization of human mental development.
Dwelling on the criterion for age-related periodization, the Russian researcher Vygotsky (cited in Piskoppel, 1994) considered the mental neoplasms characteristic of each stage of development. He distinguished "stable" and "unstable" periods of development. If we compare the theories of organizational development of different management methodologies using the example of Greiner (1997) and Adizes (2009), the following differences emerge:

| Adizes (2009) | Greiner (1997) |
|---|---|
| The completed model: represents 10 stages from Birth to Death; a company may return from the descending branch of development to the stage of Prosperity. | The incomplete model: there can be an infinite number of stages, but the company can no longer return to the past. |
| Adizes calls the optimal point of development the stage of Prosperity. | Greiner does not name an optimal development point, but notes that since no stage can last more than 15 years, the company cannot remain at any stage permanently. |

In the Adizes model:
- organization problems are divided into "growth diseases" and "organizational pathologies";
- as a rule, organizations cope with growth diseases on their own;
- the treatment of organizational pathologies requires external intervention;
- the transition to a descending branch of development is not predetermined for the organization;
- the main task of the organization is to reach its prime (Prosperity) and not fall onto the downward branch of development.
The advantages of the model of Adizes (2009) include: independence from the industry; non-normative approach; the "Scenario" approach; optional death of the organization.
The disadvantages of the model of Adizes (2009) include: unclear relationship with other areas of management; lack of explicit criteria for the development of the company.
The objective of successful management is to make the organization effective and efficient in the short and long term.
Methodology
In the process of research, theoretical, diagnostic, empirical and experimental methods, as well as methods of mathematical statistics and graphical representation of the results, were applied. It is important to point out the four functions of the management code according to Adizes (2009). He believes that in order to ensure a proper level of management, the organization should perform four functions: (P) producing results, that is, the results for which the organization exists and which determine its effectiveness; (A) administering, which ensures productivity; (E) entrepreneuring, through which change management takes place; and (I) integrating, that is, combining the structural elements of the organization to ensure its viability in the long term.
PAEI is an abbreviation of four English words, denoting the four basic needs of any organization and corresponding to its main activities and roles: producer, administrator, entrepreneur, integrator.
The Adizes' methodology (2008b) is aimed at forming the optimal team composition. In particular, it enables the leader to evaluate how effectively the employee performs each of the four main roles.
Thus, P stands for producing results, where the function is production and the role is the producer. For the producer, a clear task, goal, facts and figures are important. Such employees carry out any work, even routine work, that has a clear, measurable result. They love clear and predictable rules of the game, arrangements and instructions. A "pure" producer is a workhorse: he or she comes first, leaves last and is always busy. However, he or she is wary of any innovations and changes and is not able to work in a team. He or she is competent in his or her field, but when regulations change, he or she is knocked out of the working rhythm and does not tolerate a state of uncertainty. This person loves to engage in work processes that are tuned to the result. He or she is not inclined to follow all instructions and is sometimes ready to deviate from the rules in order to achieve the desired outcome. He or she needs to see the goal, and will figure out the ways to solve the problem independently. Therefore, he or she needs to know the business thoroughly and understand the little things.
A stands for administering, where the function is administration and the role is the administrator. These people tend to stay away from problems and difficulties. Their comfort zone lies beyond fuss and anything that requires an immediate reaction to what is happening. They follow instructions and rules and pay scrupulous attention to detail. Such employees have strong analytical skills. The "pure" administrator is a bureaucrat: they come to the office and leave on schedule, and all papers are kept in their place. The minuses include pettiness, pickiness, emotional detachment, acting only according to instructions, and lack of initiative. His or her functionality is reduced to strict adherence to the rules and compliance with standards and deadlines. He or she is inclined to control everyone and everything and likes to make lists and instructions. An administrator can give shape to any crazy idea and make it completely achievable. Such a person sees all the risks from the outset, before they become a reality. He or she makes sure that all matters are brought to completion and worries about whether the information is saved.
E stands for entrepreneuring, where the function is entrepreneurship and the role is the entrepreneur. An entrepreneur is a creative person, showing courage and willingness to act. Such people have a highly developed need to change the world around them: creating something new, developing ideas, updating, a common understanding of technologies, and assessing opportunities and threats from the surrounding community is their hobby-horse. It is important for them to win, and at any cost.
The entrepreneur in "pure form" is an adventurer. He/she does not have a clear schedule, but many ideas. And this is the main thing for him. It is impossible to isolate from the team, often missing important details. The hate routine and they do not tolerate small details. It is difficult for such a person to concentrate on one thing and methodically follow to achieve a result. But he/she has no rival in finding a new line of development or come up with a unique product. He/she is not interested in short-term results and interested in only great ideas.
I stands for integrating, which denotes the function of integration and the role of the integrator. The integrator is able to maintain an atmosphere of mutual trust and respect and is able to listen and hear. Integrators can resolve conflicts and reduce tension. Sociable and friendly, they have the ability to empathize and are always the party people and informal leaders in the team. The cons are a quick loss of interest in a task, so that much of the work remains unfinished, a sharp perception of criticism, and proneness to grievances and gossip. The integrator creates a team spirit, follows the rules of the corporation and motivates colleagues to work. The integrator can be passive (participating in the group himself or herself) or active (rallying the group while remaining aloof). In addition, their activities can be vertical (aimed at top management), horizontal (within his or her professional circle) or downward (with subordinates).
The four management functions of P, A, E and I are something like a set of "vitamins", all of which are necessary for the organization's maintenance in the short and long term. If at least one of them is missing, the organization is threatened with a disease with certain symptoms: • the production function is unsatisfactorily fulfilled, and the main consumers of the services of our organization (parents) are dissatisfied.
• the administration function is poorly executed and the organization suffers unjustified losses.
• the organization does not cope with the function of entrepreneurship; there are no new ideas or they are not in demand.
• the integration function is not implemented and the company begins collapsing.
However, by skillfully nourishing the organization with the missing "vitamin", it is possible to improve its work in the short and long term.
The implementation of each function provides an answer to the corresponding question, as the sketch after the list below also illustrates.
P -What needs to be done?
A -How to do this?
E -When and why should this be done?
I -Who should do this?
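To make the PAEI mapping above concrete, here is a minimal illustrative sketch in Python (our own construction, not part of Adizes' methodology): it represents each employee's PAEI profile as a small data structure and reports which "vitamins" a team still lacks. The scoring scale and the threshold are assumptions.

```python
# Illustrative sketch (not part of Adizes' methodology): represent each
# person's PAEI profile and report which "vitamins" a team still lacks.
from dataclasses import dataclass

QUESTIONS = {
    "P": "What needs to be done?",
    "A": "How to do this?",
    "E": "When and why should this be done?",
    "I": "Who should do this?",
}

@dataclass
class Profile:
    name: str
    scores: dict  # role letter -> 0..10 self-assessment (assumed scale)

def missing_functions(team, threshold=6):
    """A function counts as covered if at least one member scores at or
    above the (assumed) threshold on it."""
    covered = {r for p in team for r, s in p.scores.items() if s >= threshold}
    return [r for r in "PAEI" if r not in covered]

team = [
    Profile("educator", {"P": 9, "A": 3, "E": 2, "I": 5}),
    Profile("senior educator", {"P": 4, "A": 5, "E": 8, "I": 7}),
]
for role in missing_functions(team):
    print(f"Missing {role}: no one answers '{QUESTIONS[role]}'")
# -> Missing A: no one answers 'How to do this?'
```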
If one function is satisfactorily performed while the other three do not meet even the minimum requirements necessary to complete the task, a certain style of mismanagement arises, the variety of which depends on which functions remain unfulfilled. People who can perform all four functions at once do not exist in nature. A normal person can cope with one or two; occasionally, there are those who are able to perform three functions. A manager can successfully cope with each of the four functions individually in solving specific problems, but no one can perform all four at the same time in any situation. If one does not mistake wishes for reality, it will inevitably be found that each person has a unique combination of advantages and disadvantages. The stages of our organization's development described below follow Adizes' approach (2009).
The Birth Stage
At this stage, the organization existed exclusively in the form of an idea formed by the founders, the authors of this article. In the process of considering the idea and refining it to a "ready-made" state, it took on ever more clear-cut outlines, and it was decided to establish a kindergarten with the emphasis placed on the aesthetic and intellectual development of preschool children through a high-quality educational environment.
In 2014, as part of the social program for expanding preschools in the city of Kazan, the city administration issued a decision to construct a kindergarten to house 140 children at 20 "b", Glushko Street.
At the initial stage, the managers may encounter the problems presented in Table 2, such as fuzzy tasks and the mixing of functions.
The Infancy stage began with the opening of the preschool on October 29, 2014, when the first preschoolers and employees were admitted.
The efficiency was low, as the organization was only learning to function effectively. At that point, high attention from the leader and a strong managerial hand were required.
The leader, by example, had to show involvement in the work and a focus on results, and to act as the provider of transparency, confidence and sustainability for the staff.
During this period, the efforts of the administrative team, consisting of the head, the senior educator and the supply manager, were focused on creating quality education and improving work with children and their parents. Employees did their best (sometimes there were not enough educators for normal functioning) in order to compensate for the lack of experience and achieve the required results. The details are set out in Table 3.
Table 3. The expected and abnormal problems and errors at the Infancy stage.
Expected problems and errors | Abnormal problems and errors
Problems associated with the first experience of end customers using the product or service | Inability of the company to establish feedback with consumers and solve problems with the product
Difficulties with the release of the finished product (violation of the terms of release and difficulties at the stages of development) | Continuous postponement of the product release due to minor changes
The desire to receive additional cash resources from the sale of non-core goods and services | Launching a "raw" product to the market
Lack of management skills and delegation | Excessive control that paralyzes work
Lack of rules and procedures for effective management | Lack of tolerance for errors and the repetition of errors of the same type
Frequent mistakes made by employees and the company manager | The head does not listen to feedback and does not make contact
It was during this period that the regulatory framework of the preschool educational institution was created: the charter and local acts were developed, and licenses for educational and medical activities were obtained. The educational development program was compiled and approved.
According to Adizes (2009), the managers are likely to face the following problems.
The problems encountered by the authors of the article at the Infancy stage were related to the first experience in providing additional paid educational services:
- the desire to receive additional non-governmental funds;
- frequent errors by employees and the head of the organization;
- lack of management skills and delegation.
To meet the challenge, we introduced a rigid, centralized decision-making system because there was no time for decentralization and the search for compromise solutions.
As the organization established itself in the market of educational services, the growing demand for the preschool's educational services contributed to the transition of the organization to the next stage of its development - the Rapid Growth stage.
At the Rapid Growth stage, the educational products of the organization became popular and enjoyed high loyalty, which allowed the organization to strengthen its position and flourish.
This was expressed in the project activity, in which teachers, preschoolers and their parents alike took part (Sadretdinova, 2017).
The year 2017 was a real breakthrough for us, as we were able to successfully implement another project, "A Preschool Educational Institution is a Health-Care Institution...". In the summer of 2017, our kindergarten took first prize in the citywide specialized contest. This result was achieved through the intensive work of the entire team to create projects for various centers in the kindergarten.
We tried to ensure that, throughout the kindergarten, the pupils had an opportunity to find something they enjoy, for example, an interesting lesson. We took into account the age characteristics of the children and their cognitive interests. All game manuals are located along the eco-friendly path in our kindergarten, and there are also centers for the development of the aesthetic, musical and visual qualities of children.
The year 2018 was also marked by the well-coordinated and focused work of the team. We established a museum in the kindergarten. The museum is part of the kindergarten's subject-developing environment aimed at ensuring understanding and interaction among children of different nationalities in the process of cognitive, play, cultural and leisure activities, focused on the priorities of spiritual values and on ideas for developing the personality and individuality of the child.
An important feature of the museum is the participation of children, parents and teachers in compiling it. Preschoolers feel their involvement in the museum: they can participate in discussions of its subjects and bring exhibits from home, and children from the senior groups conduct excursions for younger children and replenish the museum with their creative work.
In real museums you cannot touch anything, but in our Preschool Local Lore Museum it is not only possible but also essential. The preschoolers can visit the Preschool Museum every day, change it themselves, rearrange the exhibits, pick them up and examine them. In an ordinary museum, a child is only a passive observer, whereas here the children are the co-founders and creators of the exposition - and not only the preschoolers themselves but also their parents, relatives and friends.
According to Adizes' theory (2009), the problems that the manager faces at the Rapid Growth stage are the most dangerous for the long-term prospects of the organization; they can be seen in Table 4. At the Rapid Growth stage, there is the likelihood of the "Founder's Trap" scenario. This term, introduced by Adizes (2009), means that the organization is highly dependent on its leader, who manages all the processes himself or herself but, unfortunately, is not able to allocate time for analyzing and solving all the problems.
To avoid this situation, the authors organized the work of temporary creative teams in the preschool to implement the project that was relevant at the time. Building temporary creative teams and appointing different members of our teaching staff as their leaders allowed us to update the organizational structure and monitor the quality of the internal processes of information exchange and control. Delegation of authority became an especially important skill in the organization. In addition, the promotion of teamwork helped strengthen the corporate spirit and create a cohesive team.
It is noted that if the delegation process is successful, the leader can gradually proceed to the decentralization of management (the transfer of responsibility for decision-making). Until then, the organization had not been mature enough for this.
At the stage of the life cycle called Youth, the preschool was reborn, but in a slightly different form.
We decided to try our hand at innovation.
One of our team's decisions was to participate in innovation at different levels, both urban and federal.
Orders were signed establishing an innovation platform on improving the quality of education at the Institute for the Study of... As part of our innovative projects, the preschool was engaged in the transformation of the developing environment in its premises. Parents and teachers created vertical multifunctional manuals that allowed play tasks to be changed and were available to the children at any convenient time.
As part of the office project this year, the manager of the organization took a Coaching Training Course that helps in solving managerial problems.
Participation in innovative activities allowed us not only to improve the quality of the educational services provided under the main program but also to expand the range of additional educational services. Children now have the opportunity to engage in figure skating on artificial ice. The objectives of this direction are to strengthen the physical health of preschoolers and to popularize sports.
Classes in vocal art teach children to relax and to stay on stage confidently, have a healing effect and promote the development of the creative potential of preschool children. Through these classes, the preschoolers win awards in various contests.
In our hectic time it is very important to create a favorable emotional atmosphere in the preschool. It is important to teach children how to relieve stress and relax in time and they successfully achieve it in the sensory room.
At the Youth stage, the management culture must transform from an absolute monarchy into a constitutional monarchy. In fact, the decentralization of power should occur, since the organization already has employees who are able to take on the solution of certain issues. Currently, the senior educator and the supply manager have become such employees in the preschool under study.
If the manager and the staff do not solve the problems that arise at this stage of the organization's development, there is a risk of returning to the previous stage of development and, as a result, of failure. To achieve sustainable development at the Youth stage, an organization must limit the flexibility that was encouraged in the previous stages of development. At this stage, the mission and values of the organization are of high importance.
The staff has something to work on; they are ready for change, and they accept their mistakes and correct them. To this end, managers educate their teaching staff through seminars, trainings and coaching sessions within the team on team building, goal setting and more.
They are striving for the Dawn stage, which represents the golden age. This is the stage at which the organization occupies an optimal position on the life cycle curve and reaches a certain balance between flexibility and tight control in management. It happens when the organization has clear goals, everyone has clear priorities, and all employees consistently, persistently and precisely perform their tasks. The organization is aligned in its mission, strategy, structure, information management processes, resource allocation and reward systems. The organization works smoothly, as a single mechanism.
Any leader wants his or her team to be efficient, competitive and harmonious. How can one create a team whose members complement each other, strive for development and do not get stuck in a swamp? The simplest and most understandable solution for a manager at any level can be the methodology for selecting personnel according to Yitzhak Adizes (2009), because it describes the four basic types of personality through the prism of human behavior in the work environment.
Results
The authors believe that an interesting feature of the Adizes methodology (2008a) is that, along with the strengths, the weaknesses of employees are considered, while other techniques describe only the strengths. Knowing the weaknesses of employees allows us to predict possible difficulties for them and to think through ways to overcome these difficulties.
Why did the authors decide to implement this method?
1. The four roles of PAEI do not have cumbersome and scientific-clinical descriptions that impede perception and cast doubt on the interpretation of the results.
2. In order to understand the method no specialized education is necessary.
3. The principles of PAEI do not contradict, and can be combined with, any other methodologies that we are used to. In addition, the method is universal: it can be used in the selection of personnel as well as in training, internal communications, teamwork and even personal life.
As can be seen, there are many advantages of this strategy, which made the authors dare to apply it. We invited our staff to take an on-line test. After reviewing the strengths and weaknesses of each of the roles and analyzing the results of the testing of the teaching staff, the researchers received the following data:
• participants with the most pronounced producer function - 40%;
• administrators - 13%;
• entrepreneurs - 17%;
• integrators - 30%.
After passing the test, the teaching staff received the following recommendations: 1) What to do if you are a producer: • Before you complete a task, consider whether it is really necessary.
• Make a list of tasks that only you can do.
• Consider long-term results.
• Pay attention not only to the result but also to the process.
2) What to do if you are an administrator: • Depart from the plan, take risks. In an era of change, this is necessary.
• Do not judge colleagues for stepping back from the initial plan.
• Consider each specific situation separately. Sometimes, in order to maintain a good relationship, you have to deviate from the plan.
3) What to do if you are an entrepreneur: • Do not rush to voice each idea.
• Set aside the decision for a day: you eliminate emotions and have time to think things through.
• Let other colleagues speak out.
4) What to do if you are an integrator: • If you need to resolve an issue, imagine that you are on a desert island. You have no advisers. The decision is yours.
• Do not be afraid of criticism. If someone does not share your point of view, listen to them and share yours.
• Allow an hour for communicating with colleagues; devote the rest of the time to strategic issues.
Discussions
For each group, the authors of the article tried to choose techniques that would help our teachers develop the ability to act in other roles as well.
The group of producers faced problems with generating ideas: they were uncomfortable with anything new or unconventional. The colleagues and the head worked with the Wheel of Balance method, whose purpose is the analysis and planning of life, as well as with the 100 Wishes technique, which helps to set tasks for the long run.
The administration function in our team is at the lowest level. Therefore, it was decided to put into operation a management technique known as the SCRUM board. Its purpose is that, amid everyday worries, one keeps in sight only those things that need to be focused on, without bothering with what is yet to be done or what has already been done. The board has four columns onto which stickers with the names of tasks are glued, according to the instructions.
As the work progresses, the employee re-sticks each sticker into the next column.
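As an aside, the four-column sticker board just described can be modeled in a few lines of code. The sketch below is ours and purely illustrative; the column names are assumed, since the article does not list them.

```python
# Minimal sketch of the four-column sticker board described above.
# Column names are assumed; the article only says the board has 4 columns.
COLUMNS = ["To do", "In progress", "Review", "Done"]

board = {col: [] for col in COLUMNS}
board["To do"] = ["prepare open lesson", "update parent corner"]

def advance(task):
    """Move a sticker one column to the right, as the employee would."""
    for i, col in enumerate(COLUMNS[:-1]):
        if task in board[col]:
            board[col].remove(task)
            board[COLUMNS[i + 1]].append(task)
            return
    raise ValueError(f"{task!r} not found or already done")

advance("prepare open lesson")   # "To do" -> "In progress"
print(board)
```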
Colleagues who are inclined to the functions of entrepreneurship and integration were offered the SMART Analysis technique.
Together we examined how to set a goal, what results we want to achieve and what actions need to be taken to achieve the goal. And, of course, it is important in this technique to determine the time frame for achieving the goal.
In order to assemble a team of suitable types and learn to work together for the effectiveness of the entire kindergarten, the staff applies one of the effective methods: the analysis of functions and powers according to "the vitamin complex" of Adizes (2009). The purpose of this technique is that everyone is invited to take on a role in order to explore the behavior of a given code. Thus, each time taking on different roles, playing as a team and solving different tasks, the colleagues build up the functions according to "the vitamin code" (Adizes, 2009).
The desired results can be achieved if functions P and E are brought to the forefront. The work efficiency can be gained by using functions A and I.
A mutually complementary team will provide the result if the members adhere to different approaches and the tasks of each member are clearly defined.
Conclusion
The main idea of the methodology is to understand that all people are different and need each other to achieve common goals. There are no perfect team members. The staff must be able to cooperate efficiently with the actual team members. It is necessary to create conditions under which the employees will be able to reveal their best qualities. The modern leader, no matter what organization he or she manages, must determine at what developmental stage the organization currently is. | 2020-11-26T09:04:36.594Z | 2020-11-25T00:00:00.000 | {
"year": 2020,
"sha1": "ce9d3f069a83886674a22eb7dd6972ff7b281b6b",
"oa_license": "CCBY",
"oa_url": "https://ap.pensoft.net/article/22487/download/pdf/",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "effd95f5395f6585db395680d11edbd7c6bd62ca",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Sociology"
]
} |
248383652 | pes2o/s2orc | v3-fos-license | Expression of Growth Hormone-Releasing Hormone and Its Receptor Splice Variants in Primary Human Endometrial Carcinomas: Novel Therapeutic Approaches
Antagonists of growth hormone-releasing hormone (GHRH) inhibit the growth of various tumors, including endometrial carcinomas (EC). However, the tumoral receptors that mediate the antiproliferative effects of GHRH antagonists in human ECs have not been fully characterized. In this study, we investigated the expression of mRNA for GHRH and splice variants (SVs) of GHRH receptors (GHRH-R) in 39 human ECs and in 7 normal endometrial tissue samples using RT-PCR. Primers designed for the PCR amplification of mRNA for the full-length GHRH-R and the SVs were utilized. The PCR products were sequenced, and their specificity was confirmed. Nine EC specimens (23%) expressed mRNA for SV1, three (7.7%) showed SV2 and eight (20.5%) revealed mRNA for SV4. The presence of SVs of GHRH-Rs could not be detected in any of the normal endometrial tissue specimens. The presence of specific, high-affinity GHRH-Rs was also demonstrated in EC specimens using radioligand binding studies. Twenty-four of the thirty-nine investigated tumor samples (61.5%) and three of the seven corresponding normal endometrial tissues (42.9%) expressed mRNA for the GHRH ligand. Our findings suggest the possible existence of an autocrine loop in EC based on GHRH and its tumoral SV receptors. The antiproliferative effects of GHRH antagonists on EC are likely to be exerted in part through the local GHRH/SV receptor system.
Introduction
The expression of splice variants (SVs) of the GHRH receptor (GHRH-R) has been found not only in the pituitary but also in extrapituitary tissues, including human neoplasms [1][2][3][4]. cDNAs encoding the four SVs of GHRH receptors were isolated and sequenced [5]. Based on these findings, the cDNA sequence of SV1 was found to be similar to that of the full-length GHRH-R [5]. The first three exons were replaced in SV1 by a fragment of retained intron 3 possessing a new putative in-frame start codon; thus, the encoded N-terminal extracellular domain of SV1 differs from the pituitary-type GHRH-R protein [5]. SV1 appears to be the most functional isoform, since SV2 encodes a receptor isoform truncated after the second transmembrane domain, while SV3 and SV4 lack any transmembrane domains [5]. In support of this hypothesis, SV1 has been demonstrated to bind GHRH and GHRH antagonists with high affinity and to mediate responses to GHRH in ligand-dependent and ligand-independent ways [6][7][8].
Endometrial cancer (EC) is the sixth most commonly diagnosed malignancy in women [38]. Based on the estimates of the American Cancer Society, nearly 67,000 new cases of cancer of the uterus will be diagnosed, and approximately 13,000 women will die from cancers of the uterine body, in the USA in 2021. Cancer of the uterine corpus is often referred to as endometrial cancer because more than 90% of cases occur in the endometrium (the lining of the uterus) [39]. Based on Global Cancer Statistics 2020, about 417,000 new cases of and 97,000 deaths from EC were recorded worldwide [38].
In the last decade, a wide variety of treatment options were proposed as adjuvant therapies for EC. Chemotherapy, irradiation, the use of immune checkpoint inhibitors and drugs aimed at molecular targets provide options for fighting EC.
In earlier studies, the expression and role of the GHRH ligand were already investigated in some benign and malignant gynecologic conditions, including EC [32,33,40,41]. However, information on the splice variants of GHRH-Rs is rather limited. Fu et al. [42] demonstrated the expression of SV1 in endometriosis. The aim of the present study was to investigate the expression of GHRH and its tumoral receptors, including the presence of GHRH-R SVs, in primary human endometrial carcinoma samples and in corresponding benign endometrial tissues.
Molecular Biology Analysis
New primers were designed for the PCR amplification of GHRH-R and SV1. PCR products were sequenced in both directions, and the specificity of the primers was confirmed. For GHRH-R, a 121 base-pair-long product was amplified from exon 1 to exon 2, which is present only in the full length receptor mRNA and absent in the splice variants. This product could be detected in none of the endometrial tumor specimens or normal endometrial tissues. However, as expected, the expression of mRNA for the full length GHRH-R was found in all five pituitary samples used as positive controls (data not shown). Accordingly, only the GHRH-R PCR product obtained from these samples was used for sequence analysis. In the case of the SV1 receptor variant, the 415-bp long PCR products (from intron 3, absent in the full length receptor; to exon 7, present only in SV1 and the full length receptor but not in the other variants) of the endometrial tumor samples were identical to that of the pituitaries.
The SV2, SV3 and SV4 splice variants were detected as 523-, 245- and 120-bp long PCR products, respectively [5]. Figure 1 shows a representative RT-PCR analysis of the splice variants. As a positive control, we investigated five human pituitary tissues, all of which expressed the four splice variants and the full-length GHRH-R. Twenty-four of the thirty-nine investigated tumor samples (61.5%) and three of the seven corresponding normal endometrial tissues (42.9%) expressed mRNA for the GHRH ligand (Table 1, Figure 2). The expression of mRNA for GHRH was also detected in the five human pituitary tissues investigated (Figure 2).
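For illustration, the product sizes quoted above are enough to score a gel lane. The hypothetical helper below (ours, not the study's software) maps each reaction's observed band to a presence/absence call using the sizes from the text (121 bp for the full-length GHRH-R, 415 bp for SV1 and 523, 245 and 120 bp for SV2-SV4); the 5-bp tolerance is our assumption, not part of the study's protocol.

```python
# Hypothetical helper: score one sample's reactions against the expected
# amplicon sizes quoted in the text. Each transcript has its own primer
# pair, so the full-length GHRH-R (121 bp) and SV4 (120 bp) are told apart
# by reaction, not by size. The 5-bp tolerance is assumed.
EXPECTED_BP = {"GHRH-R": 121, "SV1": 415, "SV2": 523, "SV3": 245, "SV4": 120}

def score_sample(observed):
    """observed: reaction name -> band size in bp (omit reactions with no band)."""
    return {t: t in observed and abs(observed[t] - bp) <= 5
            for t, bp in EXPECTED_BP.items()}

# e.g. a tumor expressing SV1 and SV4 but not the pituitary-type receptor:
print(score_sample({"SV1": 414, "SV4": 120}))
# {'GHRH-R': False, 'SV1': True, 'SV2': False, 'SV3': False, 'SV4': True}
```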
Altogether, we were able to detect splice variants of GHRH-Rs in 14 of the 39 EC specimens (35.9%). The co-expression of mRNA for GHRH ligand and splice variants for GHRH-Rs was also found in 14 of 39 (35.9%) patients (Table 2). Our results show that all GHRH-R splice variant positive specimens expressed mRNA for the GHRH ligand. Ten of thirty-nine endometrial cancer specimens exhibited mRNA expression for GHRH but not for splice variants for GHRH-Rs. In five cases, only SV1or SV4 were expressed among the four splice variants of GHRH-Rs. In one case, SV1 and SV2 or SV1 and SV4 co-expression, and in other two cases, SV1, SV2 and SV4 co-expressions were observed. (Table 2). Table 2. Clinicopathological features and mRNA expression pattern of receptors for GHRH, and GHRH ligand in endometrial cancer specimens positive for any of the GHRH receptor splice variants.
[Table 2 columns: Patient No; Age at Diagnosis; Histology *; Grade; Stage; GHRH-R; GHRH; SV1; SV2; SV3; SV4. * P-S: papillary serous adenocarcinoma; E: endometrioid endometrial carcinoma.]
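As a quick arithmetic check, the percentages reported above follow directly from the counts; the short snippet below reproduces them (the denominators, 39 tumors and 7 normal tissues, are taken from the text).

```python
# Reproduce the reported expression frequencies from the raw counts.
n_tumors, n_normals = 39, 7
counts = {"GHRH (tumors)": 24, "SV1": 9, "SV2": 3, "SV3": 0, "SV4": 8,
          "any SV": 14}
for label, k in counts.items():
    print(f"{label}: {k}/{n_tumors} = {100 * k / n_tumors:.1f}%")
print(f"GHRH (normals): 3/{n_normals} = {100 * 3 / n_normals:.1f}%")
# SV1 9/39 = 23.1%, SV4 8/39 = 20.5%, SV2 3/39 = 7.7%,
# any SV 14/39 = 35.9%, GHRH 24/39 = 61.5% and 3/7 = 42.9%
```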
Radioligand Binding Studies
The presence and binding characteristics of GHRH-Rs and the specific binding of the radioiodinated GHRH analog JV-1-42 to membrane homogenates of human EC samples were determined using radioreceptor assays. Of the eleven tumor specimens examined by ligand competition assays, nine samples (81.8%) showed GHRH binding (Table 3). The concentrations and binding affinities of GHRH-Rs in EC membranes were also investigated. The analyses of the displacement curves of [125I]JV-1-42 and the Scatchard plots of the specific binding data in the 9 receptor-positive cancer specimens revealed that the GHRH-Rs had a mean dissociation constant (Kd) of 5.28 nM (range, 1.63 to 8.81 nM). The mean concentration of GHRH-Rs (maximal binding capacity, Bmax) was 385.0 fmol/mg membrane protein in crude membranes derived from human EC cells (range, 249.5 to 509.5 fmol/mg protein). Based on our receptor binding results, the one-site model provided the best fit, representing a single class of high-affinity GHRH-Rs in human EC specimens. Biochemical specifications and parameters crucial to characterizing specific binding sites were also defined. Thus, the in vitro receptor binding of [125I]JV-1-42 was found to be specific, reversible, temperature and time dependent, and linear with protein concentration in the human endometrial tumor specimens examined (data not shown). The binding of radiolabeled JV-1-42 was displaced completely by increasing concentrations (10^-12 to 10^-6 M) of hGHRH or hGHRH(1-29)NH2, whereas none of the structurally and functionally unrelated peptides analyzed, such as somatostatin, luteinizing hormone-releasing hormone (LHRH), epidermal growth factor (EGF), [Tyr4]bombesin and insulin-like growth factor I (IGF-I), inhibited the binding of radioiodinated JV-1-42 at concentrations as high as 1 µM (data not shown). Our results also showed that ligand binding was accompanied by the expression of mRNA for the SV1 subtype of GHRH-Rs in all endometrial cancer specimens examined. A comparative analysis of the results of the radioreceptor assays and the SV1 mRNA studies demonstrated that the expression of the SV1 subtype was 100% consistent with the presence of specific binding sites for the GHRH antagonist [125I]JV-1-42 (Table 3). In our study, no correlation was found between clinicopathological features and receptor findings.
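For readers who wish to reproduce this kind of analysis, the one-site model B = Bmax·L/(Kd + L) can also be fitted directly by nonlinear least squares rather than through a Scatchard linearization. The sketch below fits synthetic data generated around the mean parameters reported above (Kd = 5.28 nM, Bmax = 385 fmol/mg); it is not the LIGAND-PC analysis used in the study, and the data points are invented.

```python
# Hedged sketch: fit a one-site binding model to synthetic data around the
# mean parameters reported in the text (Kd = 5.28 nM, Bmax = 385 fmol/mg).
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, bmax, kd):
    """Specific binding for a single class of sites: B = Bmax*L/(Kd+L)."""
    return bmax * L / (kd + L)

rng = np.random.default_rng(0)
L = np.array([0.5, 1, 2, 5, 10, 20, 50])                  # free ligand, nM
B = one_site(L, 385.0, 5.28) + rng.normal(0, 8, L.size)   # fmol/mg + noise

(bmax_hat, kd_hat), cov = curve_fit(one_site, L, B, p0=(300.0, 3.0))
print(f"Bmax = {bmax_hat:.1f} fmol/mg, Kd = {kd_hat:.2f} nM")
```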
Discussion
Endometrial cancer is a major cause of morbidity and mortality for women worldwide, and it is the sixth most common malignancy among women [38,39]. Early-stage EC has a favorable prognosis in general, but some women have aggressive malignancy because their tumors are high-grade, deeply invasive or consist of non-endometrioid cells (clear or papillary serous cells) and carry a strong likelihood of recurrence and death. Cases of EC are usually classified into two subtypes.
Based on Bokhman's publication, we distinguish two main types of EC: Type I and Type II [43]. Type I endometrioid cancers are estrogen-dependent and arise from atypical endometrial hyperplasia; thus, the excess of exogenous and endogenous estrogens has an important role in the pathogenesis of Type I endometrial adenocarcinoma. Type II cancers are less common, consist of more aggressive histological variants (i.e., clear-cell and serous carcinoma and uterine adenocarcinosarcoma), commonly occur at a postmenopausal age and are associated with excessively high mortality [44]. In contrast to Type I, Type II lesions are not related to long-lasting unopposed estrogen exposure. On the other hand, the molecular biology of EC has become clearer in the past decade, leading to less morbid and minimally invasive surgical approaches and the more routine utilization of chemotherapy, all of which have improved the outcomes of women with EC. More efficient treatment modalities further improving survival and quality of life are strongly needed.
Clinical trials of immune checkpoint inhibitors are in progress for advanced and recurrent endometrial cancer [45][46][47]. If a relationship with the genetic background of the administered population can be found and a good response rate is obtained, new treatments options can be introduced to replace standard treatment approaches [45][46][47].
Since 2018, the FDA has approved the use of the immune checkpoint inhibitor pembrolizumab (an antibody against programmed cell death protein 1, PD-1) for all solid tumors with defective DNA mismatch repair. About 20-30% of patients with advanced EC can potentially benefit from its application [48,49]. Several studies suggested that chemotherapy may not only activate the immune system but also induce PD-L1 expression on cancer cells, which may result in more successful immunotherapy [45,49]. Ongoing observational studies are trying to improve immunotherapy (avelumab, atezolizumab and durvalumab) strategies with or without a combination with classic chemotherapy [49]. Other genomic changes and molecular markers in EC, such as hormone receptor status, could lead to more tailored therapy in the future. Preclinical and clinical investigations of targeted therapies suggest that some agents have efficacy for the treatment of EC [45].
It is widely accepted that GHRH acts as an autocrine/paracrine regulator of cancer cell proliferation [37,50]. Several splice variants (SVs) of the GHRH receptor have been isolated not only from the pituitary but also from extrapituitary tissues, including human neoplasms [1,3,4,26,29,50]. Rekasi et al. found that the sequence of the main splice variant, SV1, is almost identical to that of the full-length (pituitary-type) GHRH-R [5]. As opposed to the pituitary-type GHRH-R, the first three exons in SV1 are replaced by a fragment of retained intron 3 possessing a new putative in-frame start codon, resulting in only a partial loss of the extracellular part of the pituitary-type GHRH-R protein [5]. Based on the putative protein structure of the SVs, SV1 appears to be the most probable functional receptor. Moreover, it has been demonstrated that SV1 binds GHRH and its antagonists with high affinity and mediates responses to GHRH [5]. In the present study, using RT-PCR, we demonstrated that mRNAs for GHRH and the SVs, but not the pituitary-type GHRH-R, are expressed in human EC tissues, suggesting the existence of an autocrine/paracrine GHRH loop.
In our work, we found that about one-third (35.9%) of EC specimens, but none of the normal endometrial tissues, were positive for one or more splice variants (SV1-4) of GHRH-R, and 23% showed positivity for mRNA expression of SV1. In an earlier study, 43% of endometrial cancer tissues were found to be positive for SV1 protein expression by immunohistochemistry [33]. This slight discrepancy could be explained by the fact that the antiserum used for the detection of the SV1 protein in that study was directed against the first 25 amino acids at the N terminus of the SV1 protein, which are also present in the SV2 and SV4 subtypes. While SV1, SV2 and SV4 can be distinguished by size based on Western blotting, immunohistochemistry provides positive signals for all three GHRH-R isoforms. In addition, positive immunohistochemical signals were detected only in the cytoplasm of the epithelial cells of the glands of the endometrial adenocarcinomas but not on the cell surface. We found that the second most frequently expressed splice variant in our tissue series was SV4 (20.5%). The presence of the remaining two splice variants, SV2 and SV3, could be detected in only three samples or none, respectively. The putative GHRH-R isoforms derived from SV3 and SV4 probably do not represent mature receptor proteins manifested on the cell surface. SV2, possessing the truncated N-terminal extracellular domain of SV1 but containing only two transmembrane domains, might be transported to the cell surface [5].
We could not detect mRNAs for pituitary type GHRH-R either in endometrium carcinoma or in normal endometrial tissues. In previous studies, the expression of pituitary GHRH-R was shown by real-time quantitative PCR in different cancer cell lines, including non-Hodgkin's lymphoma, pancreatic cancer, glioblastoma and small-cell lung carcinoma, but the level of expression was low in extrapituitary normal tissues [3]. Our results are in agreement with previous findings, where the expression of classic pituitary type GHRH-R on different human tumor tissues could not be detected or was found to be less frequently present than SV1 [7,26,33,51].
In eleven cases, we were able to prepare crude membrane protein fractions for radioligand binding studies to demonstrate the presence of specific GHRH binding sites. Using ligand competition assays, we demonstrated the presence of specific, high-affinity receptors for GHRH. Molecular biology analyses and radioligand binding studies clearly demonstrated that the expression of mRNA for the SV1 subtype of GHRH-Rs was 100% consistent with the presence of specific receptors for the radiolabeled GHRH analog JV-1-42, whereas the expression of mRNA for the pituitary type of GHRH-Rs was not detected. It is also important to note that all receptor-positive human EC specimens examined by ligand competition assay expressed a well-detectable amount of the SV1 GHRH-R gene. Furthermore, the PCR products for the GHRH ligand were found in 24 of 39 (61.5%) human EC specimens. In 14 samples (35.9%), mRNA for both GHRH and GHRH-R splice variants was detected. While the most probable functional receptor splice variant, SV1, was present in only 23% of the EC specimens investigated, the GHRH ligand could be detected in more than 60% of tumoral and about 40% of normal endometrial tissues. In an earlier study, GHRH mRNA was detectable in normal endometrium and EC; however, no changes in endometrial GHRH mRNA were shown between normal and neoplastic tissues obtained from the same patient, although the levels were higher than those found in myometrial tissues obtained from other patients with benign gynecologic diseases [40]. Thus, it was suggested that GHRH may promote endometrial proliferation and be involved in the pathogenesis of EC and endometriosis [40].
In another study investigating the presence of GHRH and SV1 in normal mouse tissues, a group of tissues, including the endometrium, was examined and expressed GHRH but not its receptor SV1 [52]. The authors assumed that the presence of GHRH in these tissues is not coincidental but physiologically important and may be consistent with the paracrine/endocrine action of neurohormones, with the extrapituitary actions of GHRH being mediated not only by SV1 but by other receptor(s) as well [52].
Previous studies have shown that GHRH antagonists, such as MZ-J-7-118, MZ-5-156 and JMR-132, inhibited the growth of human experimental ECs both in vitro and in vivo [31,53,54]. The beneficial oncological effects of these antagonists in experimental cancer treatment can be attributed to the suppression of the pituitary-hepatic IGF-I axis and to direct inhibition through the binding of GHRH antagonists to the pituitary GHRH-R and/or its splice variants present on tumors [36,50,55]. A recent study also demonstrated a mechanism by which GHRH-R antagonists such as MIA-602 target SV1 and inhibit the SV1-mediated tumor growth of esophageal squamous cell carcinoma [29]. These findings suggest that SV1 is a hypoxia-induced oncogenic promoter that can be a potential target of GHRH-R antagonists [29].
Based on the evidence that GHRH antagonists were able to suppress experimental tumor growth and that a subset of ECs expressed receptors for GHRH, the application of powerful new GHRH antagonists could be useful for the treatment of this type of malignancy. However, further studies are needed to validate this assumption.
In the future, we would like to expand our investigation and try to collect a reasonable number of human EC specimens to further study and analyze the expression of GHRH-Rs in such human tissues. These studies may provide novel quantitative data on the mRNA and protein levels of GHRH-Rs and their splice variants. From these results, we would be able to predict the potential response of the patients to GHRH-R-based therapy.
Tissue Samples
Human endometrial carcinoma specimens from 39 patients (mean age 62 years; range 28-82 years) who underwent surgical removal of the uterus at the Department of Obstetrics and Gynecology, Faculty of Medicine, University of Debrecen, were investigated. Approximately 5-20 mm³ of tissue samples of the uterus removed during staging surgery were used. Histopathological examination of each specimen was undertaken to confirm the presence of endometrial carcinoma before the molecular biology studies. There were 28 endometrioid (71.8%) and 11 papillary serous (28.2%) adenocarcinomas. Among patients with endometrioid adenocarcinoma, five had grade 1, twenty had grade 2 and three had grade 3 disease. Among patients with the papillary serous subtype, three had grade 1, five had grade 2 and three had grade 3 cancers (Table 4). Normal endometrial tissues were available in seven cases. Tissue samples were frozen and stored at −80 °C until total RNA isolation and membrane preparations were performed. The collection and use of these specimens and of normal human pituitary samples in our studies was conducted in accordance with the Declaration of Helsinki and approved by the local institutional ethics committee, the Regional Institutional Ethics Committee, Clinical Center, University of Debrecen (DERKEB/IKEB 2284-004). Informed consent was obtained from all patients. Five normal human pituitary reference samples used as positive controls were collected in an anonymous fashion from the paraffin tissue archives of autopsy cases at the Department of Pathology, Faculty of Medicine, University of Debrecen.
Table 4. Summary of the clinical data for the 39 patients with endometrial cancer (endometrioid, n = 28; papillary serous, n = 11). Specimens from patients with a diagnosis of endometrioid adenocarcinoma were graded as well differentiated (grade 1), moderately differentiated (grade 2) or poorly differentiated (grade 3).
For the β-actin, GHRH-R, GHRH and SV1 genes, the PCR reaction mix contained 1× PCR buffer and 1 U Taq polymerase, among other components. After denaturation and enzyme activation (3 min at 95 °C), cDNA was amplified for 45 cycles (30 s at 95 °C, 30 s at 63 °C and 60 s at 72 °C). A final elongation step of 5 min at 72 °C was then applied, and finally, the samples were cooled down to 4 °C. The PCR products were separated by electrophoresis on a 2% agarose gel stained with ethidium bromide.
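For clarity, the cycling conditions above can be restated as a small program table; this is only a re-encoding of the protocol as data, not vendor-specific thermocycler code.

```python
# The PCR cycling conditions above, restated as a simple program table.
PCR_PROGRAM = [
    ("initial denaturation / enzyme activation", 95, 180),  # degC, seconds
    ("45 cycles of:", None, None),
    ("  denaturation", 95, 30),
    ("  annealing", 63, 30),
    ("  extension", 72, 60),
    ("final elongation", 72, 300),
    ("hold", 4, None),
]
for step, temp, secs in PCR_PROGRAM:
    t = f"{temp} C" if temp is not None else ""
    d = f"{secs} s" if secs is not None else ""
    print(f"{step:45s} {t:8s} {d}")
```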
Preparation of Membranes and Radioligand Binding Studies
Radioiodinated derivatives of the GHRH antagonist JV-1-42 were prepared by the chloramine-T method, as previously described [1], with some minor modifications. The preparation of tumor cell membranes from human EC samples for the receptor binding studies was performed as reported previously [1]. Briefly, the human cancer specimens were homogenized in 50 mmol/L Tris-HCl buffer (pH 7.4) supplemented with protease inhibitors (0.25 mmol/L phenylmethylsulfonyl fluoride, 2 µg/mL pepstatin A and 0.4% aprotinin) using an Ultra-Turrax tissue homogenizer (IKA Works, Wilmington, NC, USA); then, the crude membrane fraction was prepared as described [1] and stored at −70 °C until investigated in vitro. Protein concentrations were determined by the method of Bradford. GHRH-R binding assays were carried out, as reported in detail, using in vitro ligand competition assays based on the binding of [125I]JV-1-42 as the radioligand to membrane fractions of human EC specimens [1]. The GHRH antagonist JV-1-42 and the radioligand [125I]JV-1-42 were well characterized previously and showed high-affinity binding to rat and human pituitaries and to human renal, prostate, breast and other cancers [1,10,17,50]. The high-affinity binding of radioiodinated JV-1-42 to SV1 was also demonstrated and reported previously [1]. In brief, membrane homogenates containing 50-160 µg protein were incubated in duplicate or triplicate with 60,000-80,000 cpm [125I]JV-1-42 and increasing concentrations (10^-12 to 10^-6 mol/L) of nonradioactive peptides as competitors in a total volume of 300 µL binding buffer (50 mmol/L Tris-HCl, 5 mmol/L EDTA, 5 mmol/L MgCl2, 1% BSA and 30 µg/mL bacitracin, pH 7.4) supplemented with protease inhibitors, as mentioned above. After 1 h of incubation and separation, the final pellet containing the receptor-bound fraction was counted in a γ-counter [1]. The LIGAND-PC computerized curve-fitting software of Munson and Rodbard was used to determine the type of receptor binding, the dissociation constant (Kd) and the maximal binding capacity of the receptors (Bmax). Due to the limited amounts of membrane protein fractions, the receptor binding of GHRH was examined in only 11 specimens.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the local institutional ethics committee, the Regional Institutional Ethics Committee, Clinical Center, University of Debrecen (DERKEB/IKEB 2284-004).
Informed Consent Statement:
Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2022-04-26T15:15:15.304Z | 2022-04-21T00:00:00.000 | {
"year": 2022,
"sha1": "2b4e60d8bb401f5f4243b83a1aac027a264d892e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/9/2671/pdf?version=1650590478",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "805e2da750d2692ebc7faf0dfa918c5bddd3abe0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
212715567 | pes2o/s2orc | v3-fos-license | Hydrogen-rich saline ameliorates hippocampal neuron apoptosis through up-regulating the expression of cystathionine β-synthase (CBS) after cerebral ischemia- reperfusion in rats
Objective(s): This study aimed to evaluate the potential role of hydrogen in rats after cerebral ischemia/reperfusion (I/R) injury. Materials and Methods: The experimental samples were composed of a sham group, a model group of rats that received middle cerebral artery occlusion (MCAO) for 2 hr followed by reperfusion for 24 hr, and a hydrogen saline group treated with hydrogen-rich saline (1 ml/kg) after MCAO. Hydrogen sulfide (H2S), S100-β protein (S100-β) and neuron-specific enolase (NSE) levels were measured; the levels of malondialdehyde (MDA), reactive oxygen species (ROS) and superoxide dismutase (SOD) were detected; the histologic structure and apoptotic cells of the hippocampus were observed; and the expressions of cystathionine β-synthase (CBS), nuclear factor erythroid 2-related factor 2 (Nrf2) and heme oxygenase-1 (HO-1) were measured. Statistical analyses were performed using one-way analysis of variance (ANOVA) followed by Fisher's least significant difference (LSD) test. Results: Our results showed that hydrogen up-regulated H2S levels via promoting the expression of CBS in the hippocampus, and hydrogen treatment alleviated oxidative stress via activating the expression of Nrf2 and HO-1; cell apoptosis was thereby reduced, and brain function improved, as reflected by the down-regulated levels of S100-β and NSE. Conclusion: This study showed that hydrogen-rich saline ameliorates cell injury through up-regulating the expression of CBS in the hippocampus after cerebral I/R in rats, which provides new experimental evidence for the treatment of stroke with hydrogen saline.
Introduction
It is known that ischemic stroke, characterized by the sudden loss of blood circulation, is a major type of stroke (1) and has become the second leading cause of death globally (2,3). Moreover, its high recurrence rate imposes economic burdens on society. Studies have confirmed that cerebral ischemia has a complicated pathology closely related to oxidative stress (4)(5)(6)(7). Accumulating evidence demonstrates that the main cause of neuron damage is not ischemia itself but the overproduction of reactive oxygen species (ROS), which attack cells, resulting in ischemia/reperfusion (I/R) injury and neuronal cell death (8)(9)(10). Thus, a better understanding of the molecular and cellular mechanisms underlying oxidative stress injury after I/R may provide novel treatments for ischemic stroke. The close relationship between cerebral ischemia and oxidative stress has generated considerable interest in developing antioxidant therapies to combat ischemia-induced damage (11).
Hydrogen sulfide (H2S), known as a toxic gas in nature (12) with an odorous smell, is synthesized from L-cysteine by enzymes such as cystathionine γ-lyase (CSE), cystathionine β-synthase (CBS) and 3-mercaptopyruvate sulfurtransferase (3MST) (13). CBS is predominantly responsible for the production of H2S in the central nervous system (14,15). Recent studies have shown that small amounts of H2S are produced in the brain (14), and it has been proven to be an endogenous factor that regulates cellular function (16). Previous research has shown that H2S has a protective effect against cerebral injury in rodent models (17). Thus, up-regulating the expression of CBS to promote H2S synthesis may be a therapeutic strategy for stroke.
Hydrogen is a gas that can be used for diving (18). However, studies have confirmed its anti-oxidative capabilities in animal models since Ohsawa et al. reported that hydrogen inhalation could protect the brain against cerebral I/R injury (19). In our previous research, we also found that hydrogen has a neuroprotective effect in ischemia-reperfusion rats (20,21). The novelty of this study lies in investigating whether hydrogen protects the brain against I/R injury through up-regulating the expression of CBS, whether it changes the levels of neurological function indices such as neuron-specific enolase (NSE) and S100-β protein (S100-β) in the hippocampus, and the related mechanism.
Preparation of hydrogen-rich saline
The hydrogen-rich saline was purchased from the Second Military Medical University (Shanghai, China) and stored under atmospheric pressure at 4 °C in an aluminum bag. Hydrogen-rich saline was freshly prepared within one week to ensure a constant concentration of no less than 0.6 mmol/L (22).
Experiment design
In this study, 36 adult male Sprague-Dawley Rats (weighing 280-320 g) were obtained from the Experimental Animal Center of Shandong University (Jinan, China). The experiment protocols were approved by the Ethics Committee of Bin Zhou Medical University and performed in accordance with the guidelines of National Institutes of Health Guide for Laboratory Animals.
After one week of acclimation, the rats were randomly divided into three groups, 12 rats in each group. The animals were anesthetized with 3.5% chloral hydrate solution (1 ml/100 g, IP). Middle cerebral artery occlusion (MCAO) was induced as described in our previous research (21). In brief, the left common carotid artery (CCA) and the external carotid artery (ECA) were exposed, and then a 3-0 surgical monofilament nylon suture was carefully inserted from the external carotid artery into the internal carotid artery (ICA) and advanced to occlude the origin of the left middle cerebral artery (MCA) until a light resistance was felt (18-20 mm from the CCA bifurcation). After 2 hr of MCAO, the nylon suture was withdrawn, followed by 24 hr of reperfusion. Twelve rats were used as the sham group (suture only after exposure of the carotid artery), 12 rats were used as the I/R group after MCAO, and the I/R + hydrogen group comprised 12 MCAO rats treated with hydrogen-rich saline (1 ml/kg) after the beginning of reperfusion. After 24 hr, all rats were euthanized with chloral hydrate (7%, 5 ml/kg). The blood and brain tissues from each animal were collected for analysis.
Measurement of S100-β and neuron-specific enolase (NSE) levels
The blood samples were allowed to clot and then centrifuged at 3,000 rpm for 10 min; the sera were separated and used to determine S100-β protein and NSE by enzyme-linked immunosorbent assay (ELISA) (Yuchen, Shanghai) according to the manufacturer's protocol.
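ELISA readouts are typically converted to concentrations through a standard curve; the sketch below shows a generic four-parameter logistic (4PL) fit and its inversion. The standard points are invented for illustration, and the kit's own protocol and software take precedence.

```python
# Generic ELISA calibration sketch: fit a 4-parameter logistic standard
# curve and invert it to read sample concentrations. Standard-point values
# are invented; the kit's own protocol defines the real procedure.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """OD as a function of concentration x: a = min OD, d = max OD,
    c = inflection point (EC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([15.6, 31.2, 62.5, 125, 250, 500, 1000])  # pg/mL
std_od = np.array([0.12, 0.20, 0.35, 0.60, 1.00, 1.50, 1.95])

popt, _ = curve_fit(four_pl, std_conc, std_od, p0=(0.1, 1.0, 200.0, 2.2),
                    maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL curve for ODs strictly between a and d."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(f"sample OD 0.80 -> {od_to_conc(0.80, *popt):.0f} pg/mL")
```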
Measurement of H2S levels
The biosynthesis of H2S in the brain was measured as described previously (24). The optical absorbance was measured at 655 nm with a microplate reader (iMark; Bio-Rad). The H2S concentration of each sample was calculated.
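Methylene blue-type H2S assays are usually calibrated against a linear standard curve of known sulfide concentrations; the sketch below illustrates that least-squares step with invented calibration points, since the actual procedure is defined by the cited protocol (24).

```python
# Linear calibration sketch for the H2S assay: absorbance at 655 nm vs
# known sulfide standards. Calibration values are invented; the cited
# protocol (24) defines the real procedure.
import numpy as np

std_umol = np.array([0.0, 12.5, 25.0, 50.0, 100.0])   # sulfide standards, uM
std_a655 = np.array([0.02, 0.10, 0.19, 0.37, 0.74])   # absorbance at 655 nm

slope, intercept = np.polyfit(std_umol, std_a655, 1)

def a655_to_h2s(a):
    """Convert a sample's A655 to H2S concentration via the standard line."""
    return (a - intercept) / slope

print(f"A655 = 0.28 -> {a655_to_h2s(0.28):.1f} uM H2S")
```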
Measurement of ROS, malondialdehyde (MDA) and superoxide dismutase (SOD)
ROS and SOD are indicators of oxidative stress. The homogenates of brain tissue were centrifuged at 3,000 rpm for 20 min at 4 °C. The supernatant was separated, and the levels of ROS and the activity of SOD were determined using detection kits (Jiancheng, China) according to the manufacturer's protocol. Optical density was determined using a spectrometer, both at 550 nm.
The concentration of MDA, a marker of lipid peroxidation, was measured using a detection kit (Jiancheng, China) following the manufacturer's protocol. Optical density was determined by a spectrometer at 532 nm.
Histopathological examination
Isolated ischemic cerebral tissues were fixed with 10% methanol and embedded in paraffin; tissues were sectioned at a thickness of 5 μm, stained with hematoxylin and eosin (HE) and observed under a light microscope (Olympus X71-F22PH, Japan) at 400× magnification. The total and injured neurons were counted in 12 different microscope fields per sample; six samples in each group were counted.
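The injured-cell percentage per sample then follows from the counts pooled over the 12 fields, summarized per group as mean ± SEM; the snippet below shows that computation with invented counts.

```python
# Per-sample injured-neuron percentage pooled over 12 fields, then
# group mean +/- SEM over 6 samples. Counts are invented for illustration.
import numpy as np

# one group: each entry = (injured, total) summed over 12 fields per sample
group = [(42, 180), (55, 190), (38, 175), (61, 200), (47, 185), (52, 170)]

pct = np.array([100.0 * inj / tot for inj, tot in group])
mean = pct.mean()
sem = pct.std(ddof=1) / np.sqrt(pct.size)
print(f"injured cells: {mean:.1f} +/- {sem:.1f} % (mean +/- SEM, n=6)")
```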
Immunohistochemical staining
The immunohistochemical staining method follows our previous studies (21). The ischemic cerebral tissues were fixed in 10% formalin, embedded in paraffin, cut into 5-μm-thick sections and stained with a CBS antibody (diluted 1:500, Cell Signaling Technology (CST), USA), followed by a secondary IgG antibody. Immunostaining was performed with diaminobenzidine (DAB). The DAB staining density was assessed with a microscopic image analysis system (GX51, Olympus, Japan).
TdT-mediated dUTP nick end labeling (TUNEL)
The ischemic cerebral tissue was fixed in 10% formalin, embedded in paraffin, and sectioned at a thickness of 5 μm; TUNEL staining was performed with an in situ cell death detection kit (Jiancheng, China) according to the manufacturer's protocol.
First, the sections were rinsed with PBS and treated with 1% Triton X-100 for 3 min. Terminal deoxynucleotidyl transferase (TdT) was subsequently used to catalyze the addition of biotin-conjugated dUTP to the 3′-OH ends of the DNA fragments. Streptavidin-HRP solution was then added and reacted at 37 °C for 30 min. Finally, the slides were placed in DAB for 3 min and counterstained with Harris hematoxylin. These analyses were performed at 100× magnification under a light microscope in 12 different fields using computer-aided software (Olympus X71-F22PH, Japan). The apoptotic cells were quantified using computer-assisted image analysis (Leica LAS Image Analysis V4.0).
Statistical analysis
Data were analyzed using SPSS 21.0 and are expressed as mean ± standard error of the mean (SEM). When data were normally distributed, statistical analyses were performed using one-way analysis of variance (ANOVA) followed by Fisher's least significant difference (LSD) test; a value of P<0.05 was considered statistically significant.
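For readers who want to reproduce this analysis outside SPSS, below is a minimal Python sketch of a one-way ANOVA followed by Fisher's LSD test, implemented as unadjusted pairwise t-tests on the pooled within-group variance; the group values are invented placeholders, not data from this study, and LSD comparisons are only meaningful when the overall ANOVA is significant.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Invented measurements for one readout, three groups of n = 6.
groups = {
    "sham":         np.array([1.01, 0.95, 1.10, 0.98, 1.05, 0.92]),
    "I/R":          np.array([1.80, 1.95, 1.70, 1.88, 2.02, 1.76]),
    "I/R+hydrogen": np.array([1.30, 1.25, 1.42, 1.35, 1.28, 1.38]),
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, P={p_anova:.4g}")

# Pooled within-group mean square (MSE), the variance used by LSD.
n_total = sum(len(v) for v in groups.values())
k = len(groups)
mse = sum(((v - v.mean()) ** 2).sum() for v in groups.values()) / (n_total - k)

# Fisher's LSD: pairwise t-tests with df = N - k, no multiplicity correction.
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
    t = (a.mean() - b.mean()) / se
    p = 2 * stats.t.sf(abs(t), df=n_total - k)
    print(f"{name_a} vs {name_b}: t={t:.2f}, P={p:.4g}")
```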
Results

Effect of hydrogen on H₂S levels
The levels of H₂S in the brain tissue decreased in the I/R group compared with the sham group (P<0.05), but hydrogen increased the endogenous H₂S levels in the brain compared with the I/R group (P<0.05) (Figure 1).
Effect of hydrogen on S100-β and NSE levels
The levels of S100-β and NSE showed similar trends: both increased in the I/R group compared with the sham group (P<0.05), but hydrogen treatment decreased their levels in the brain compared with the I/R group (P<0.05) (Figure 2). This indicates that hydrogen played a protective role against ischemic injury.
Changes of histopathological structure
The neurons in the CA1 area of the hippocampus were observed. Normal neurons had round nuclei, whereas the nuclei of necrotic neurons were irregular, shrunken, and deeply stained. Figure 3A shows obvious morphological changes in the I/R group, in which the bodies and nuclei of neurons were reduced and shrunken; the percentage of injured cells detected in the I/R group was significantly higher than that in the sham group (P<0.05), but it decreased after hydrogen treatment (P<0.05) (Figure 3B).
Effect of hydrogen on ROS, MDA, and SOD levels
As shown in Figures 4B and 4C, ROS and MDA levels were much higher in the I/R group than in the sham group (P<0.05). Treatment with hydrogen decreased ROS and MDA levels compared with the I/R group (P<0.05). In contrast, SOD levels in the brain tissue decreased in the I/R group compared with the sham group (P<0.05), but hydrogen increased SOD levels in the brain compared with the I/R group (P<0.05) (Figure 4A).

Figure 2. Effects of hydrogen on S100-β protein and neuron-specific enolase levels in middle cerebral artery occlusion rats (mean±SEM). S100-β protein level in brain (A), neuron-specific enolase level in brain (B). * indicates a significant difference compared with the sham group (P<0.05); # indicates a significant difference compared with the I/R group (P<0.05). I/R: ischemia/reperfusion.
Changes of apoptosis
TUNEL staining is shown in Figure 5A. The gray value of TUNEL-positive cells in the I/R group was significantly higher than that of the sham group (P<0.05), but it decreased after hydrogen treatment (P<0.05) (Figure 5B).
Changes of CBS expression
CBS expression appeared as brown staining in cells (Figure 6A). Its mean density in the I/R group was increased compared with the sham group (P<0.05). Moreover, it was significantly higher in the hydrogen treatment group than in the I/R group (P<0.05) (Figure 6B). This indicates that hydrogen up-regulated CBS expression in the ischemic hippocampus.
Changes of Nrf-2 and HO-1 expressions
The expression levels of Nrf-2 and HO-1 increased in the I/R group compared with the sham group (P<0.05), and hydrogen treatment further up-regulated these protein levels compared with the I/R group (P<0.05) (Figure 7). This indicates that the endogenous expression of Nrf-2 and HO-1 was activated after I/R, and that hydrogen could further up-regulate their expression.
Discussion
In the present study, we investigated the protective role of hydrogen in rats after I/R injury through up-regulation of H₂S levels, which was confirmed by H₂S measurement and CBS expression in the hippocampus. Meanwhile, hydrogen treatment reduced oxidative stress and decreased the percentage of apoptotic neurons in the ischemic hippocampus, thereby ameliorating neuronal injury. These results indicate a protective effect of hydrogen on injured neurons and are consistent with our previous research showing that hydrogen protected neurons against I/R injury (20,21). Furthermore, the novelty of this study is that hydrogen may achieve its protective effect by up-regulating CBS expression and H₂S levels in the hippocampus.
H₂S, the third gaseous signaling molecule (24), has been recognized to play crucial physiological roles in the central nervous system (25,26). Our results are in accordance with previous reports that the expression of CBS and the concentration of H₂S increase in the hippocampus of rats after brain ischemia (27); this indicates that hydrogen up-regulated H₂S levels in the ischemic brain, and H₂S can reduce cerebral I/R injury in animal models (28-31). To further elucidate the mechanism by which hydrogen alleviates MCAO-induced cerebral ischemic injury, the present study investigated its effects on antioxidants. Oxidative stress is a core pathological component closely related to reperfusion injury and is accompanied by excessive ROS production (32,33). Our results demonstrated that induction of I/R leads to elevated levels of ROS and MDA and a decrease in SOD. Increasing numbers of studies have shown that treatment with H₂S can inhibit apoptosis by blocking an ROS-activated Ca²⁺ signaling pathway in hypoxia-induced hippocampal neurons (30) and can improve ischemic damage and apoptosis in cerebral ischemia through its antioxidant effects (34-36). As expected, we found that treatment with hydrogen reduced ROS and MDA levels in the ischemic brain of I/R rats and increased SOD activity as well as the expression of CBS, nuclear factor erythroid 2-related factor 2 (Nrf2), and heme oxygenase-1 (HO-1). All of this implies that treatment with hydrogen-rich saline significantly suppressed oxidative stress in the ischemic brain. Nrf2 is an endogenous cytoprotective factor that activates the transcription of antioxidant stress genes, including HO-1, against oxidative stress (37,38). Activation of the Nrf2/HO-1 antioxidant pathway has been shown to play an important neuroprotective role after ischemia/reperfusion-induced brain injury (39-41). Our results showed that hydrogen significantly regulated Nrf2/HO-1 levels in the ischemic model.
Conclusion
The present study demonstrated that hydrogen can protect neurons in the hippocampus against ischemic injury through up-regulation of CBS expression and activation of the Nrf2/HO-1 antioxidant pathway. This provides a new experimental basis for the clinical application of hydrogen in the treatment of cerebral ischemic injury. However, the specific mechanism by which hydrogen up-regulates CBS expression remains to be explored; this is a subsequent target for us and other researchers.
"year": 2020,
"sha1": "b084b8b9afda22aedad8911462c6632d17cf27c0",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b084b8b9afda22aedad8911462c6632d17cf27c0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Characterisation of chaos in meteoroid streams. Application to the Geminids
Dynamically linking a meteor shower with its parent body can be challenging. This is in part due to the limits of the tools available today (such as D-criteria) but is also due to the complex dynamics of meteoroid streams. We choose a method to study chaos in meteoroid streams and apply it to the Geminid meteoroid stream. We decided to draw chaos maps. Amongst the chaos indicators we studied, we show that the orthogonal fast Lyapunov indicator is particularly well suited to our problem. The maps are drawn for three bin sizes, ranging from $10^{-1}$ to $10^{-4}$ m. We show the influence of mean-motion resonances with the Earth and with Venus, which tend to trap the largest particles. The chaos maps present three distinct regimes in eccentricity, reflecting close encounters with the planets. We also study the effect of non-gravitational forces. We determine a first approximation of the particle size $r_{lim}$ needed to counterbalance the resonances with the diffusion due to the non-gravitational forces. We find that, for the Geminids, $r_{lim}$ lies in the range $[3;8]\times 10^{-4}$ m. However, $r_{lim}$ depends on the orbital phase space.
Introduction
The Meteor Data Center of the International Astronomical Union (IAU) currently lists 921 meteor showers⋆⋆, most of which are in the working list and are awaiting confirmation or more data. According to this number, on average, 2.6 near-Earth objects per day were active enough in the past 10³−10⁴ years to produce such showers. If this is confirmed, it would greatly impact our current understanding of the Solar System. This prompted us to examine how IAU meteor showers are determined. Most methods used to find new meteor showers involve computing the radiant and an orbit dissimilarity criterion (D-criterion). D-criteria quantify the proximity of the orbits of two objects. Many D-criteria have been developed (e.g. Valsecchi et al. 1999; Jenniskens 2008; Rudawska et al. 2015), but the D_SH from Southworth & Hawkins (1963) is still largely used today. However, this criterion has long been criticised for its mathematical, physical, and statistical shortcomings (see e.g. Drummond 1981; Valsecchi et al. 1999). Rudawska & Jopek (2010) compared two criteria (the D_SH and the criterion described by Jopek et al. 2008), providing some preliminary indications as to the validity of the two. However, to our knowledge, no such study has been conducted for the entire set of D-criteria: it is not known which should be avoided and which are most suited to specific cases.
While D-criteria are not sufficient to identify a meteor shower with absolute certainty, they can help us to identify candidates. We call those candidates meteor groups: a meteor group is a set of meteors sharing a radiant and showing similar orbits.
⋆ ariane.courtot@obspm.fr
⋆⋆ https://www.ta3.sk/IAUC22DB/MDC2007/ (visited in September 2022)
In contrast, meteor showers are defined, according to the IAU, as a set of meteors coming from a single parent body, through a meteoroid stream ⋆⋆⋆. To prove that a given meteor group is in fact a meteor shower, a statistical or dynamical analysis is sometimes performed (see e.g. Guennoun et al. 2019), but these are not always conclusive.
Most dynamical analyses model meteoroids from a suspected parent body and follow these particles through time until they meet with the Earth (see e.g. Egal et al. 2021). However, dynamical chaos has not been studied extensively, perhaps because of the specificity of meteoroid dynamics (mainly the non-gravitational forces that make for a non-conservative problem).
Chaos could explain the difficulty in understanding the dynamical evolution of the meteoroids and provide insights into the formation of the meteoroid streams. The study of chaos is often carried out using chaos maps. These maps can be found as far back as 1990 (Markus 1990), and they have become a standard means to describe chaotic and stable regions of a phase space as a function of initial orbital elements. They are generally drawn using a chaos indicator derived from the theory of Lyapunov characteristic exponents (Benettin et al. 1980), and usually for a specific type of body (e.g. moons or asteroids). Here, we use this technique to explore meteoroids, and this contributes to a new field of study.
Combined with other tools (radiant, D-criteria, statistical and dynamical analysis), chaos maps could help us to prove whether or not a meteor group is a meteor shower. For example, if a meteor group is shown to come from a part of the map that never intersects the Earth, it cannot be a meteor shower, as no parent body would explain that dynamic. Another example would be a meteor group with a very small D-criterion in a very chaotic region. If this hypothetical meteor group were a meteor shower, its meteoroids would be scattered quickly because of the chaos. Such a meteor shower would not survive for a long time, and so if observations of this meteor group date back sufficiently far in time, it is very unlikely to be a meteor shower.
Our aim is not to develop a new method to study chaos, but to use well-known tools in a new field of study. We can expect most meteoroids to be chaotic, as they are subjected to many close encounters and are under the influence of non-gravitational forces; our aim is to precisely quantify this chaos and to investigate what drives the dynamics of meteoroids.
In Sect. 2, we outline the method used to draw meteoroid stream chaos maps, and we explain our choice of chaos indicator. In Sect. 3, we present an application of this method to the Geminid meteoroid stream. More precisely, the maps drawn for the Geminids show which resonances constrain the evolution of the meteoroids. We also investigate the impact of non-gravitational forces and study the role of eccentricity in the definition of the Geminids.
Chaos indicator
Chaos maps are drawn using an indicator that measures the chaoticity of a given orbit. Various indicators and methods (such as the frequency analysis of Laskar 1990) are available to study chaos. Here, the indicator has to be suitable for meteoroid analyses: meteoroid evolution is characterised by the effects of non-gravitational forces. These forces greatly reduce the timescale of meteoroid evolution, meaning that these objects survive for a few thousand years at most (Liou & Zook 1997); they also make the problem non-conservative.
The most common chaos indicators are based on the divergence of two initially nearby orbits: a chaotic behaviour is characterised by an exponential divergence, in contrast to the linear divergence of a stable behaviour. The Lyapunov characteristic exponents are based on this idea, but they use a tangent vector and variational equations instead of nearby orbits, as described in Subsect. 2.1.2. These Lyapunov characteristic exponents are the basis of the indicators we study here.
These chaos indicators are either relative or absolute. The former are generally used to draw a map over a wide area of the phase space, while the latter are usually used to study a specific object. Most chaos indicators could potentially be suitable for our problem. For example, we could have considered indicators such as the one proposed by Barrio (2005), but we restricted ourselves to only a couple of Lyapunov-based indicators, as a complete study is beyond the scope of this paper.
All indicators have strong arguments in favour of their study, and they have all been used successfully. However, none of them were initially developed for short timescales (of the order of 10³ years) or for objects under the influence of non-gravitational forces. It is therefore necessary to verify whether or not they are suitable for our problem.
The mFLI describes close encounters, which play an important role in the dynamics of meteoroids. However, this indicator is designed to study a specific encounter, and not the general effect of these encounters on a relatively large region of the phase space. A discussion with the authors led us to realise that this indicator was not adapted to our problem.
The FLI, mFLI, and OFLI are relative: they measure the chaoticity of an orbit, but only relative to others. The lower their value, the more stable the orbit studied. The MEGNO is the only absolute indicator presented here, with a value of two being the threshold between chaos (> 2) and stability (≤ 2). However, its oscillations make it ill-suited to a map, as its value at a time t might not represent its general behaviour. Nevertheless, the mMEGNO corrects for this problem and seems to be generally preferred over the MEGNO for drawing maps. Comparing the FLI and OFLI, the latter filters out an artificial effect: the growth of the FLI due to differential rotation.
We compared the FLI, OFLI, and mMEGNO by examining the evolution of the indicators during the integration of 12 particles from the Geminid meteoroid stream. This integration lasts 500 years. We only took into account the gravitational forces from all the planets. Figure 1 shows the comparison of the indicators for two particles: one that experienced a close encounter and therefore became chaotic, and another that remained stable and did not encounter a planet. We would like to point out that this second particle is a rare case and is only presented here for comparison, as the large majority of our particles are unstable.
The blue rectangles in Fig. 1 represent the initialisation phase of the FLI (light blue) and of the OFLI (darker blue). The initialisation phase ends when the value of the indicator first levels off. The OFLI has a very short initialisation phase: it reaches its first plateau in 0.6 years, whereas that of the FLI takes between 37 and 58 years depending on the particle (between 7.4% and 11.6% of the total integration time). The definition of this initialisation phase is less clear for the mMEGNO.
For the first particle, a black dashed line marks a close encounter with the Earth (the particle's distance to the Earth was smaller than its Hill radius). We wanted to test the impact of a close encounter because meteoroids are characterised by their numerous encounters with planets. This close encounter happened just after the initialisation phase of the FLI had ended.
We measured the effect of this close encounter on each indicator. The three arrows illustrate the difference between the value of the indicator prior to the encounter and its value at the end of the integration. These deltas show the effect of a close encounter on the final value of the indicators. While the FLI and OFLI react similarly (∆1 = 5.22 for the FLI and ∆2 = 4.76 for the OFLI), the mMEGNO reacts less to this close encounter (∆3 = 1.65). It also appears to react more slowly. Table 1 summarises the features of each indicator.
All three indicators seem to respond correctly to the evolution of our particles. However, a choice has to be made as to which indicator we use in our analysis. The OFLI is both quicker than the FLI to reach a first value after the initialisation phase and quicker than the mMEGNO to react to a close encounter. As we work on short timescales with particles heavily influenced by close encounters, we feel these arguments are sufficient to favour the use of the OFLI.
Formula
The vector X represents the state vector (position and velocity) of a particle. We name f the so-called force function that describes the evolution of X. The tangent vector w plays a crucial role in the computation of FLIs. Its evolution is described by:

$\dot{w} = \frac{\partial f}{\partial X}\, w .$ (1)

Specifically, the OFLI rests on $w_2$, the orthogonal part of w with respect to the variational flux:

$w_2 = w - \frac{w \cdot f(X)}{\|f(X)\|^2}\, f(X) .$ (2)

And finally, we have

$\mathrm{OFLI}(t) = \sup_{t_0 < \tau \leq t} \ln \|w_2(\tau)\| .$ (3)

The evolution of the vector w was computed alongside the evolution of the particles. The initial vector $w_0$ was chosen perpendicular to the flux, as Lega & Froeschlé (2001) recommend, and was derived from the gradient g of the two-body problem Hamiltonian and the initial state vector $X_0$ of the particle studied:

$w_0 = g - \frac{g \cdot f(X_0)}{\|f(X_0)\|^2}\, f(X_0) .$
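To make equations (1)–(3) concrete, the sketch below evaluates the OFLI numerically on a toy planar two-body problem (GM = 1) rather than the full planetary-plus-NGF model integrated with RADAU in this work; the system, initial conditions, initial tangent vector, and tolerances are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy force function: planar two-body problem, state X = (x, y, vx, vy).
def f(X):
    x, y, vx, vy = X
    r3 = (x**2 + y**2) ** 1.5
    return np.array([vx, vy, -x / r3, -y / r3])

def jac(X):
    """Jacobian ∂f/∂X used in the variational equation (1)."""
    x, y, vx, vy = X
    r2 = x**2 + y**2
    r5 = r2 ** 2.5
    J = np.zeros((4, 4))
    J[0, 2] = J[1, 3] = 1.0
    J[2, 0] = (3 * x**2 - r2) / r5      # d(-x/r^3)/dx
    J[2, 1] = J[3, 0] = 3 * x * y / r5
    J[3, 1] = (3 * y**2 - r2) / r5      # d(-y/r^3)/dy
    return J

def rhs(t, Z):
    """Integrate the state and the tangent vector w side by side."""
    X, w = Z[:4], Z[4:]
    return np.concatenate([f(X), jac(X) @ w])

def perp_to_flux(v, X):
    """Component of v orthogonal to the flux f(X), as in equation (2)."""
    fX = f(X)
    return v - (v @ fX) / (fX @ fX) * fX

X0 = np.array([1.0, 0.0, 0.0, 1.1])                    # mildly eccentric orbit
w0 = perp_to_flux(np.array([1.0, 0.0, 0.0, 0.0]), X0)  # start w ⊥ to the flux
ts = np.linspace(0.0, 200.0, 2000)
sol = solve_ivp(rhs, (0.0, 200.0), np.concatenate([X0, w0]),
                t_eval=ts, rtol=1e-10, atol=1e-12)

# OFLI(t): running supremum of ln ||w2||, equation (3).
w2_norms = [np.linalg.norm(perp_to_flux(Z[4:], Z[:4])) for Z in sol.y.T]
ofli = np.maximum.accumulate(np.log(w2_norms))
print(ofli[-1])
```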
Computational method
The integrator chosen was the RADAU of order 15 from Everhart (1985). The RADAU is characterised in part by its automated computation of the length of each time increment. We used the INPOP ephemeris from IMCCE (Fienga et al. 2009) and added non-gravitational forces (Poynting-Robertson drag and solar radiation pressure; see e.g. Vaubaillon et al. 2005).
The Geminid meteor shower is a well-known shower, dynamically stable compared to streams from Jupiter-family comets. This allows us to test our reasoning. We generated a high (>10³) number of particles with orbits similar to the Geminids. The particles were described by the initial time t₀, their state vector X₀ at t₀, and their radius r. We assumed a density of ρ = 1000 kg/m³ to compute the mass of each particle. We chose the year 2000 A.D. as the initial time, which corresponds to the Geminids' current orbit.
Two sets of initial conditions were processed. In both cases, the mean anomaly was chosen randomly between 0° and 360° in order to evaluate the impact of this parameter on the chaos map. The first set (IC1) was composed of 100080 particles, chosen so as to be relatively close to the ejection conditions of the Geminids (see Table 2). The goal was to simulate a set of initial conditions just large enough to encompass the usual orbits of Geminids. The second set (IC2, Table 3) mapped a larger part of the phase space and was composed of 99720 particles. Thanks to this second set, we obtained a broader view of the model and investigated what happens on the borders of the Geminid stream.
For both sets, each heliocentric orbital element of each particle was picked randomly in a chosen interval, as described in Tables 2 and 3. This means that no orbital element is fixed: they are all randomly chosen according to a uniform distribution. Such an approach was taken, for example, by Todorović & Novaković (2015). Contrary to a uniform cartesian mesh, this method avoids the introduction of a parameter, the step of the mesh. However, as seen in Gkolias et al. (2016), this non-uniform distribution might blur some details, and so we also performed an integration with mesh-like initial conditions. We did not find any improvement in the maps drawn from such initial conditions, and so they are not presented here.
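A minimal sketch of this uniform random sampling is given below; the element intervals are hypothetical stand-ins for the published ranges in Tables 2 and 3, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(seed=2000)

# Hypothetical heliocentric element ranges (a in AU, angles in degrees),
# standing in for Tables 2 and 3; every element is drawn uniformly.
ranges = {
    "a": (1.2, 1.4), "e": (0.80, 0.94), "i": (22.0, 25.0),
    "Omega": (264.0, 268.0), "omega": (324.0, 326.0), "M": (0.0, 360.0),
}

n_particles = 100080  # size of the IC1 set
elements = {name: rng.uniform(lo, hi, n_particles)
            for name, (lo, hi) in ranges.items()}

# BIN10100 radii: 10 to 100 mm, expressed in metres.
radius = rng.uniform(1e-2, 1e-1, n_particles)
```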
The particles were integrated for 500 years for IC1 and for 1000 years for IC2. We did not need to integrate them further, because we were not simulating the entire lifespan of the Geminids. The evolution of the position, speed, and OFLI of the particles was recorded, as well as their close encounters with planets (here, mainly the Earth). The encounters were detected when the distance between the particle and the planet was smaller than its Hill radius.
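The encounter criterion can be sketched as follows; the Hill radius here uses the standard circular-orbit approximation r_H = a_p (m_p / 3 M_⊙)^(1/3), which we assume is close to what the detection routine applies at each output step.

```python
import numpy as np

M_SUN = 1.989e30      # kg
AU = 1.495978707e11   # m

def hill_radius(a_planet_m, m_planet_kg):
    """Hill radius in metres, circular-orbit approximation."""
    return a_planet_m * (m_planet_kg / (3.0 * M_SUN)) ** (1.0 / 3.0)

R_HILL_EARTH = hill_radius(1.0 * AU, 5.972e24)   # about 0.01 AU

def is_close_encounter(particle_pos, planet_pos, r_hill):
    """Flag an encounter when the separation drops below the Hill radius."""
    sep = np.linalg.norm(np.asarray(particle_pos) - np.asarray(planet_pos))
    return sep < r_hill

# Example: a particle 0.005 AU from the Earth counts as an encounter.
print(is_close_encounter([1.005 * AU, 0.0, 0.0], [1.0 * AU, 0.0, 0.0],
                         R_HILL_EARTH))
```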
First, we worked with large particles (radius chosen randomly between 10 and 100 mm), on which non-gravitational forces (NGFs) have a negligible effect. We named these data sets IC1 BIN10100 and IC2 BIN10100, depending on the initial conditions used. Then, we also investigated the effect of the NGFs. For this purpose, we replicated IC1 and IC2, changing only the radius: it was picked randomly between 1 and 10 mm (BIN110) and then between 0.1 and 1 mm (BIN011). We obtained six sets of particles: IC1 BIN10100, IC2 BIN10100, IC1 BIN110, IC2 BIN110, IC1 BIN011, and IC2 BIN011, which are summarised in Table 4.
Results
Maps are usually drawn as a function of initial orbital elements. In our case, maps drawn as a function of the initial semi-major axis and the initial eccentricity (a, e) of the particles are the only ones presenting distinctive features, and these are therefore the only ones discussed. In these maps, we only plot the initial semi-major axis and eccentricity of each particle; the value chosen randomly for each of the other elements is not presented. As explained in the previous section, we tested the whole range of possible values for the mean anomaly. Maps drawn from this element are completely uniform, and so the mean anomaly does not seem to impact the chaoticity of the Geminids.
Resonances
The first maps we drew (Fig. 2) are from IC1 BIN10100 and IC2 BIN10100.We note the difference in colour scale between the two data sets: the chaos keeps rising after 500 years.The smallest values are similar, showing the long-term stability of some particles.
The maps present several dark vertical lines, where the chaoticity is much lower.Those lines are a perfect match to the mean-motion resonances (MMRs) listed in Table 5.Three of them are mostly present in the BIN10100 IC1 map, and are slightly visible in BIN10100 IC2.Two more (the 2:3 and 3:4 with the Earth) only appear in the BIN10100 IC2 map.
Two different MMRs fit the dark line at around 1.27 AU: the 1:6 resonance with Mercury and the 9:13 resonance with the Earth. As the Earth plays a determinant role in the evolution of the Geminids, the resonance with the Earth seems more likely. The line is also very thin, which matches the low order of the resonance with the Earth. Ryabova (2022) studied resonances that play a role in the evolution of the Geminids; the resonances she found are listed in Table 6. As noted in Table 6, some of them are outside the bounds of our study, but the 2:3 and 5:7 MMRs with the Earth are detected in the chaos maps. As for the 2:5, 3:7, and 4:9 MMRs with Venus found by Ryabova (2022), they are also visible in our chaos maps, although less clearly: they appear only at high eccentricity, as can be seen in Fig. 3.
Particles initially inside those MMRs are much less chaotic than others. To understand why, in Fig. 3 (respectively Fig. 4), we plot only particles meeting with Venus (respectively with the Earth). These plots reveal the mechanism of stabilisation: the particles initially inside the resonance do not meet with the planet considered. A particle inside an MMR with the Earth, for example, will be trapped there and kept from meeting with the Earth. This will maintain its chaoticity at a relatively low level compared to those that do meet with the Earth. In the same way, a particle trapped in an MMR with Venus cannot meet with this planet.
Using our method, and in the specific case where the effect of NGFs is negligible, we are able to replicate the findings of Ryabova (2022), but we also find additional information on the effect of MMRs. This validates our approach. In Figs. 3 and 4, we can also see the effect of eccentricity. At low eccentricity, the particles cannot meet with the Earth or Venus. At higher eccentricity, they are able to meet with the Earth and their chaoticity rises. At even higher eccentricity, the particles can also meet with Venus, and this is where the OFLI reaches its highest values. We interpret this as finding the lower bound in eccentricity for the Geminid meteor shower: meteoroids with too low an initial eccentricity will never meet with the Earth and are therefore not part of the shower. This allows us to check the data we already have on the Geminids: particles whose eccentricity is too low are probably contaminants from another dynamical origin.
Impact of non-gravitational forces
To investigate the impact of NGFs, we chose the data sets with smaller radii (BIN110 and BIN011). Maps from BIN110 produce results very similar to those of the previous section. However, maps from BIN011 (see Fig. 5) lack the dark vertical lines related to the MMRs we analysed; they only present a uniform background for IC1 and the gradient in eccentricity for IC2 (see the previous section for explanation).
Analyses of the interaction between MMRs and NGFs have been conducted before (Liou & Zook 1997). Here, to better understand this effect, we studied the evolution of a few particles. We chose five particles from BIN10100 IC2, characterised by their initial position with respect to the largest MMRs (2:3 and 3:4 with the Earth). Particles n°1 and 2 are well outside these MMRs, particle n°3 is close to them but not inside, and particles n°4 and 5 are initially inside the MMRs. We then selected the clones of all of these particles in the BIN110 IC2 and BIN011 IC2 data sets. This allowed us to compare the evolution of particles that differ only in their radius, and thus to measure the influence of MMRs and NGFs for each size. We plotted the evolution of the orbital elements and added grey lines that mark the MMRs with the Earth (their respective sizes correspond to the sizes visible in the BIN10100 IC2 map).
The resulting Fig. 6 shows the strong diffusion of the small particles from the NGFs, which prevents them from being captured by the MMR.This explains why they do not appear on the BIN011 maps.
Particles of intermediate radius are influenced by the MMRs, but their evolution becomes slightly blurred. This blurry aspect is even more pronounced in the evolution of small particles, with some exotic points that do not seem to be aligned with the general evolution of the orbital elements. This is due to the method of computation of the orbital elements: the integration computes the state of each particle as a function of time, and the orbital elements are then computed from these data. In doing so, slight changes in the velocity (due to the effect of NGFs) translate into relatively significant changes in semi-major axis a; it is well known that a changes drastically when performing such a rough conversion. In summary, smaller particles lose energy and start to plummet towards the Sun, disregarding any resonances, while larger ones might be locked out of close encounters with the Earth. This probably has a great impact on the distribution of objects in the meteor shower: from Earth, we might see fewer large objects than originally ejected, because they get captured. The semi-major axes of small objects may also tend to be smaller than those of large objects, even though they originally came from the same parent body.
We wanted to know which value of the radius marks the transition between those two behaviours. To this end, we used histograms to find the limit radius: the distribution in semi-major axis of the least chaotic particles (final OFLI smaller than 7) presents peaks at the MMR semi-major axes for the large particles, while it stays uniform for small particles. We compared the distribution of particles whose radius is smaller than a radius r_lim with the distribution of particles whose radius is greater than r_lim to check whether or not r_lim is indeed the limit radius we are looking for.
In Fig. 7, with a limit radius of 8×10⁻⁴ m, we obtain the expected result. The second panel in the same figure shows a histogram for IC2 particles with a radius inferior to the chosen limit. No peak should be visible in this histogram, but one does exist for the strongest MMR in our maps (2:3 with the Earth). This MMR does not appear in the IC1 set, which explains the discrepancy. To make sure the diffusion is stronger than this resonance, r_lim must be decreased. In the last panel, we draw a new histogram from IC2 with a limit radius of 3×10⁻⁴ m. This time, the peak on the last MMR disappears for particles whose radius is inferior to this new limit.
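A sketch of this histogram comparison is given below; the OFLI threshold of 7 is taken from the text, while the MMR locations, bin count, and peak-strength measure are illustrative assumptions (the input arrays are filled with synthetic values just to make the sketch runnable).

```python
import numpy as np

# Illustrative MMR locations with the Earth (AU): a = (q/p)**(2/3) for p:q.
MMR_A = {"2:3 Earth": (3 / 2) ** (2 / 3), "3:4 Earth": (4 / 3) ** (2 / 3)}

def mmr_peak_strength(a_values, bins=200, a_range=(1.1, 1.4)):
    """Ratio of the histogram count at each MMR to the median background."""
    counts, edges = np.histogram(a_values, bins=bins, range=a_range)
    centres = 0.5 * (edges[:-1] + edges[1:])
    background = max(float(np.median(counts)), 1.0)
    return {name: counts[np.argmin(np.abs(centres - a))] / background
            for name, a in MMR_A.items()}

def compare_at_limit(a_final, radius, ofli, r_lim):
    stable = ofli < 7                     # least chaotic particles
    small = stable & (radius < r_lim)
    large = stable & (radius >= r_lim)
    return mmr_peak_strength(a_final[small]), mmr_peak_strength(a_final[large])

# Synthetic stand-ins for the integration output.
rng = np.random.default_rng(0)
n = 50000
a_final = rng.uniform(1.1, 1.4, n)     # final semi-major axes (AU)
radius = rng.uniform(1e-4, 1e-1, n)    # particle radii (m)
ofli = rng.uniform(0.0, 15.0, n)       # final OFLI values

small_peaks, large_peaks = compare_at_limit(a_final, radius, ofli, 8e-4)
# r_lim is acceptable when the small-particle histogram shows no MMR peaks
# while the large-particle histogram still does.
print(small_peaks, large_peaks)
```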
The limit radius could therefore be different for other showers, depending on which resonances play a role in their evolution. We point out that this radius limit is an estimation that could be refined by a detailed theoretical analysis, but such an analysis is beyond the scope of this paper.
Conclusion
We chose to study chaos in meteoroid streams by drawing chaos maps. Meteoroid dynamics are characterised by a short integration time (of the order of 10³ years), many close encounters, and a potentially strong effect from NGFs, contrary to many previous applications of chaos maps. After analysing some chaos indicators, we find the OFLI to be well suited to our problem. We validated this choice by applying it to the Geminid meteoroid stream. We are able to see the effect of some MMRs on the Geminid meteor shower, obtaining results very similar to those of Ryabova (2022). The maps also provided us with interesting insights into how MMRs can trap large particles and prevent them from meeting with planets.
We also show that Geminids are defined by an initial eccentricity higher than approximately 0.84, because no close encounters with the Earth are found under this value.For even higher values of eccentricity, the particles might meet with Venus in addition to being able to meet with the Earth, adding a new element of chaos.
The effect of NGFs on small meteoroids is also very visible in the chaos maps. We clearly see the effect of diffusion, which completely overpowers the MMRs for those small particles. Finally, we computed a first approximation of the radius limit that quantifies the boundary between high diffusion and the effect of MMRs. This radius depends on the strength of the MMR, and we found 8×10⁻⁴ m and 3×10⁻⁴ m as first approximations. In future works, we may refine them with an analytical study of this phenomenon.
We also note that the number of large particles in the Geminid meteor shower is probably underestimated, given the capture of many of these particles in the MMRs. Small particles seem to have a much smaller semi-major axis than large ones when they encounter the Earth, which should be taken into account when looking for parent bodies.
In future works, we will apply our method to other meteor showers, as well as to some meteor groups.
Fig. 1. Comparison between the evolution of the FLI, OFLI, and mMEGNO for two particles: a chaotic particle (mMEGNO > 2) with a close encounter with the Earth, and a stable particle (mMEGNO < 2) without close encounters. The rectangles show the initialisation phase for the FLI (larger, very light blue) and for the OFLI (smaller, light blue). The vertical black dashed line marks the close encounter of the chaotic particle with the Earth. The horizontal yellow dashed lines mark the values of the indicators immediately before the encounter. These values are compared with the final value of each indicator thanks to the arrows. The precise values of ∆1, ∆2, and ∆3 are given in the text, with the overall interpretation.
Fig. 2. Maps from BIN10100. The arrows point to the dark lines visible in the maps. The colours of the arrows correspond to the mean-motion resonances responsible for each line. One map is drawn from the IC1 data set and the other from the IC2 data set (see titles).
Fig. 3. Maps from BIN10100 IC2. In the top panel, we point to some resonances with Venus found by Ryabova (2022). In the bottom panel, we only plot particles from BIN10100 IC2 that met with Venus.
Fig. 4. Map from BIN10100 IC2. We only plot particles that met with the Earth.
Fig. 5. Maps from BIN011 (small particles). One map is drawn from the IC1 data set and one from the IC2 data set (see titles).
Fig. 6. Evolution of five particles with various radii. We plot the evolution of five particles from BIN10100 IC2 in the first panel and the evolution of their clones from BIN110 IC2 and from BIN011 IC2 in the other two panels, respectively (see titles). The grey lines represent the MMRs with the Earth.
Fig. 7. Histograms for the search for the limit radius. On each graph, black lines mark the MMRs. The histogram titled 'IC1, r_lim = 0.8 mm' counts particles from IC1 and compares those whose radius is smaller than r_lim with those whose radius is larger than r_lim. The second histogram counts particles from IC2 with a radius smaller than r_lim. The last histogram also counts particles from IC2 but changes the value of the radius limit to r_lim = 3×10⁻⁴ m and compares particles whose radius is smaller or larger than r_lim.
Table 1. Specificity of each indicator of chaos. Here 'All' signifies that the indicator does not focus on any particular source of chaos.
Table 2. Range of each heliocentric element of IC1 (500 years of integration).
Table 4. Description of the six different sets of particles. See Tables 2 and 3 for an explanation of IC1 and IC2.
Table 5. MMRs that fit the structures observed.
Table 6. MMRs found by Ryabova (2022) and how they relate to our chaos maps. 'Out' means the resonances cannot be found in our chaos maps, as their semi-major axis exceeds the bounds of our study. The other resonances ('In') can be seen either only at high eccentricity ('high e') or in the middle of the map ('middle e').
"year": 2023,
"sha1": "309e4dd2dd14b2b53bf85045c6970a5a5c79dd60",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1051/0004-6361/202245256",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "309e4dd2dd14b2b53bf85045c6970a5a5c79dd60",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Simultaneous determination of four Sudan dyes in rat blood by UFLC–MS/MS and its application to a pharmacokinetic study in rats
A rapid and sensitive method based on ultrafast liquid chromatography–tandem mass spectrometry was developed and validated for simultaneous determination of Sudan I, Sudan II, Sudan III, and Sudan IV levels in rat whole blood. Cleanert C18 mixed-mode polymeric sorbent was used for effective solid-phase extraction cleanup. Separation was carried out on a reversed-phase C18 column (100 mm×2.1 mm, 1.8 μm) using 0.1% (v/v) formic acid in water/0.1% (v/v) formic acid in acetonitrile as the mobile phase in gradient elution. Quantification was performed by an electrospray ionization source in the positive multiple reaction monitoring mode using D5-Sudan I as the internal standard. Calibration curves showed good linearity between 0.2 and 20.0 μg/L, with correlation coefficients higher than 0.9990. The average recovery rates were between 93.05% and 114.98%. The intra- and inter-day relative standard deviations were within 6.2%. The lower limit of quantification was 0.2 μg/L. All the analytes were found to be stable in a series of stability studies. The proposed method was successfully applied to a pharmacokinetic study of four Sudan dyes after oral administration to rats.
Introduction
Sudan dyes (Sudan I, Sudan II, Sudan III, and Sudan IV; Fig. 1), a group of synthetic fat-soluble colorants that contain an azo group (–N=N–) as part of the structure, are mainly used in textile, rubber, plastic, paint, and other coloring applications. Sudan dyes are frequently used as a food-coloring agent by unscrupulous merchants because of their bright red color, colorfastness, and low price. However, these synthetic colorants have been classified as possible human carcinogens [1][2][3][4], and pose a potential risk to consumer health if their daily intake exceeds the maximally permitted levels established by the World Health Organization [5]. Thus, their extensive use in food products has been prohibited by the European Commission [6].
Animals
Certified commercial SD rats, weighing 240±20 g, were purchased from the Experimental Animal Center of the Zhejiang Academy of Medical Sciences (Hangzhou, Zhejiang, China). Thirty healthy and drug-free SD rats (15 males and 15 females) were acclimatized in 10 rat cages in a unidirectional airflow room with controlled temperature (22-25°C), relative humidity (45%-55%), and a 12 h light/dark cycle for one week before the start of the experiments. The rats were given free access to filtered tap water and commercial rat food.
Apparatus and conditions
UFLC-MS/MS was performed using a Shimadzu Prominence UFLC XR system coupled with an Applied Biosystems SCIEX Triple Quad 5500 mass spectrometer and ESI source. The UFLC-MS/MS system was controlled, and data were analyzed, on a computer equipped with Analyst 1.5.1 (Applied Biosystems, MA, USA). Chromatographic separation was performed on an Agilent Eclipse Plus C18 column (column size: 2.1 mm × 100 mm, particle size: 1.8 μm) using solution A (water with 0.1% (v/v) formic acid) and solution B (acetonitrile with 0.1% (v/v) formic acid) as the mobile phase. The flow rate was 0.45 mL/min. The gradient program used for elution was as follows: solvent B was increased from 10% B to 95% B in 2.0 min, held constant at 95% B for 2.5 min, then returned to initial conditions and maintained for 2.0 min for equilibration. The total run time was 6.5 min. The column temperature was set at 40°C. A total of 5.0 μL of solution was injected into the UFLC-MS/MS system for analysis.
MS was performed using the positive ESI mode and multiple reaction monitoring (MRM) mode for quantification. The optimized instrument operating parameters for mass spectral acquisition were as follows: ion spray voltage, 5500 V; curtain gas, 40 psi; interface heater, on; nebulizer gas (gas 1) and heater gas (gas 2), 30 psi each; turbo spray temperature, 500°C; entrance potential, 10 V; and collision cell exit potential, 10 V. Nitrogen was used in all cases. Retention times, precursor ions (Q1), product ions (Q3), declustering potential (DP), and collision energies (CEs) of each Sudan dye are shown in Table 1. The dwell time was set at 0.05 s for all the compounds.
Calibration standards and quality control (QC) samples
Calibration standard solutions at concentrations of 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, and 20.0 μg/L were freshly prepared by adding the appropriate mixed standard stock solution (100.0 μg/L) to 20.0 μL of blank rat whole blood, and stored at 4°C until the initiation of UFLC-MS/MS analysis (IS concentration: 2.0 μg/L). Four concentration levels (0.2, 0.5, 5.0, and 16.0 μg/L; lower limit of quantitation (LLOQ), low, medium, and high levels) in blank rat whole blood were used as QC samples. These samples were stored in a freezer compartment at −20°C and brought to room temperature before use.
Sample preparation
Approximately 20.0 μL of whole blood sample and 10.0 μL of IS solution (100.0 μg/L) were mixed in a 2.0 mL Eppendorf tube, in which 0.5 mL of acetonitrile was added prior to extraction. The mixture was extracted by vortex-mixing for 3 min at ambient temperature. The extract was centrifuged at 20,000g for 5 min. The supernatant solution was then subjected to SPE for further purification.
The Cleanert C18 cartridges were preconditioned and equilibrated with 2 mL of methanol and 2 mL of water. Then, the aforementioned mixed solution was slowly loaded into the column. Sampling was followed by a cleanup step. The cartridges were rinsed with 2 mL of acetonitrile/water (50:50, v/v) at a flow rate of 1 mL/min and pump-dried. Elution was performed with two aliquots of 3 mL of n-hexane. The eluate was evaporated to dryness by a gentle nitrogen flow and reconstituted with 0.5 mL of acetonitrile. Finally, the reconstituted eluate was filtered through a 0.2 μm polytetrafluoroethene (PTFE) microporous film, and 5.0 μL was injected into the UFLC-MS/MS system.
Selectivity
Six individual rat whole blood samples obtained from different sources were used to assess selectivity. Blood from each rat subject was extracted and checked for peaks that could interfere with the detection of the four Sudan dyes and the IS in the rat whole blood samples. The chromatographic peaks were confirmed by comparing the retention times (t_R) and mass fragment ions with those of reference standards.
Matrix effect (ME)
As suggested by Matuszewski et al. [26], a quantitative approach to assess absolute MEs was performed using a post-extraction addition study, in which the percent ME (ME%) is calculated. In this study, the absolute ME was determined by comparing the mean peak areas of QC samples spiked post-extraction (B) with those of the standard solutions (A). ME% was determined using the following equation: absolute ME (ME%) = (B/A) × 100. The absolute ME was evaluated at four concentration levels (LLOQ, 0.2 μg/L; low, 0.5 μg/L; medium, 5.0 μg/L; and high, 16.0 μg/L) with three parallels.
Precision and accuracy
The four QC samples (LLOQ, 0.2 μg/L; low, 0.5 μg/L; medium, 5.0 μg/L; and high, 16.0 μg/L) were used to assess intra- and inter-day precision and accuracy, and were analyzed in at least three separate runs. For intraday precision, six aliquots of each QC sample were thawed to room temperature and analyzed within a day. For interday precision, the experiments were performed in triplicate for each QC sample on eight separate days within a two-week period. The relative standard deviations (RSD%) of the four Sudan dyes at each QC concentration were then calculated.
Accuracy was expressed by the recovery rates, which were assessed by comparing the mean peak area ratios from the rat blood samples for standards spiked before extraction (C) with those for standards spiked after extraction into the blood extracts (B). The recovery rates were calculated using the following formula: recovery (%) = (C/B) × 100 [26]. Experiments on the three QC samples were performed in five replicates.
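Both quantities are simple peak-area ratios; below is a sketch with invented mean peak areas at one QC level.

```python
def matrix_effect_pct(peak_post_extraction, peak_neat_standard):
    """ME% = (B / A) * 100: post-extraction spike vs. neat standard."""
    return peak_post_extraction / peak_neat_standard * 100.0

def recovery_pct(peak_pre_extraction, peak_post_extraction):
    """Recovery% = (C / B) * 100: pre- vs. post-extraction spikes."""
    return peak_pre_extraction / peak_post_extraction * 100.0

# Invented mean peak areas: A (neat standard), B (spiked post-extraction),
# C (spiked before extraction).
A, B, C = 1.00e6, 0.95e6, 0.90e6
print(f"ME%       = {matrix_effect_pct(B, A):.1f}")  # ~95 -> mild suppression
print(f"Recovery% = {recovery_pct(C, B):.1f}")       # ~94.7
```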
Stability
Stability experiments were performed to evaluate analyte stability in extracted rat whole blood samples under different conditions. The LLOQ, low, medium, and high-concentration QC samples were analyzed in triplicate. Short-term stability was tested by placing the QC samples at room temperature in an autosampler for 2, 4, 8, and 12 h. Freeze-thaw stability was determined by analyzing the samples over three cycles (from −20°C to room temperature). Long-term stability of the QC samples stored at −20°C was tested for 30 days. The stability experiments were then evaluated by comparing the concentrations obtained with the nominal values (the quantitative data of freshly prepared QC samples obtained using the same instrument method), followed by calculation of the deviations.
Pharmacokinetic study
The study was conducted in accordance with the Ethical Guidelines for Investigations in Laboratory Animals and approved by the Ethics Review Committee for Animal Experimentation of Ningbo University. Certified commercial SD rats were used after one week of growth. Before the day of administration, 30 rats were randomly assigned to five groups (blank edible oil group, Sudan I group, Sudan II group, Sudan III group, and Sudan IV group). Each group comprised three males and three females. Rats fasted overnight but were allowed access to water ad libitum. Sudan dyes (Sudan I, Sudan II, Sudan III, and Sudan IV) were separately dissolved in edible oil at 10 mg/mL and administered via a single oral dose of 50 mg/kg. Blood was collected at predetermined time points (with later sampling points for Sudan III and Sudan IV) from the rat tail vein, poured into a 2 mL heparinized centrifuge tube containing 0.5 mL of acetonitrile, and immediately vortex-mixed for 3 min. These samples were immediately centrifuged and cleaned up according to Section 2.6. The extracts were stored in the dark at 4°C prior to analysis.
Sample preparation
In this study, an efficient extraction solvent was selected using a recovery test (at 1.0 μg/L) in a rat whole blood sample. Based on their molecular structures and fat-soluble properties, various extraction solvents, such as methanol, acetonitrile, dichloromethane, and n-hexane, were chosen. After the extraction efficiency and total processing time were comprehensively evaluated, a simple protein precipitation method was proposed for extraction. Acetonitrile was selected as the optimal protein precipitation solvent because of its good protein precipitation efficiency and good extraction efficiency.
Given that Sudan dyes are fat-soluble compounds, normal neutral alumina SPE cartridges are recommended for use in cleanup. The activity of alumina N was adjusted according to the recoveries of standard solutions through the column because of variations in the quality of different batches of alumina N. During SPE cleanup, trace water in blood extracts significantly affected the recovery and precision. Thus, extracts should be dried with anhydrous sodium sulfate prior to sample loading. However, applying this step to water-enriched rat whole blood samples prior to preparation is highly difficult. Therefore, we selected a stable and simple C18 SPE cartridge for cleanup and achieved good recovery rates (>95%) for all four dyes, which were unaffected by trace water. Fig. 2 shows the MRM chromatograms with and without SPE cleanup. Interference significantly decreased with the use of SPE cleanup compared with that without SPE cleanup.
LC and MS
The analytes have hydrophobic and lipophilic characteristics. Thus, organic solvent-rich mobile phases are typically used for their rapid elution and strong retention in reversed-phase chromatography. C18 columns from different manufacturers have been used in previous studies, in which the most common column length was 150 mm, but diameters and particle sizes differed. In our preliminary experiments, mixtures of deionized water with two common HPLC organic modifiers (acetonitrile and methanol) were used as the mobile phases. The flow rate was 0.45 mL/min, and the sample injection volume was 5 μL. Acetonitrile offered better peak symmetry and was selected for subsequent studies. The studied azo dyes are weak acids (pK_a (Sudan I & II) = 11.65) because an intermolecular hydrogen bond could be formed with the phenolic hydroxyl groups. Thus, the effect of the mobile phase pH on the chromatographic behavior of the analytes was investigated by acidifying the aqueous portion through the addition of different formic acid concentrations.
Photostability and "fast peaks" of Sudan III and Sudan IV
Previous studies reported the appearance of "fast-eluting" peaks in the chromatograms of Sudan III and Sudan IV [10,11,19]. Given that these peaks eluted a few minutes before the main peak of the compounds, they were called "fast peaks", which were proven by Mölder et al. [10] to be caused by photochemical isomers via MS. This phenomenon could lead to quantitative errors of about 10%-35% for Sudan III and Sudan IV if lighting conditions were not controlled. However, the photo-induced isomerization was reversible when the compounds were stored in the dark or wrapped in aluminum foil for a sufficient time.
A considerable increase in the peak area of the isomers was observed upon standing on the tray of the autosampler without protection from light, which confirmed the findings of Mölder et al. (Fig. 4). The appearance of "fast peaks" could not be avoided under routine conditions. Thus, they were integrated together with the normal peaks for analysis. Our results (see Section 3.4) showed that the "fast peaks" did not affect the accuracy of the analysis when all the peaks of one compound were integrated together, and the RSD and accuracy were satisfactory.
Selectivity
Selectivity is defined as the ability of the bioanalytical method to measure a substance unequivocally and to discriminate between the analyte(s) and other components that may be present [32]. Under optimized UFLC-MS/MS conditions, the rapid method yielded excellent selectivity and sensitivity for the analysis of Sudan I, Sudan II, Sudan III, Sudan IV, and the IS in the blank whole blood samples (Fig. 4). To demonstrate the selectivity of the method and to screen for interfering substances, three replicate analyses of six blank whole blood samples and blood spiked at the LLOQ were extracted and injected for analysis using the developed UFLC-MS/MS method. The representative chromatograms of a blank rat blood sample and a blank rat blood sample spiked with the four Sudan dyes (0.2 μg/L) are shown in Fig. 4. No significant interfering endogenous peaks were observed at the retention times of the four Sudan dyes. Thus, according to the guidelines for industrial bioanalytical method validation (2001) [33], the method that we developed was selective.
ME
The details of the ME experiment are summarized in Table 2. The absolute MEs of the four compounds and the IS ranged from 89.72% to 101.06%. These results confirmed the absence of a significant ME across the different sources, in agreement with the requirements of the guidelines for industry and bioanalytical method validation by the Food and Drug Administration [33].
Precision and accuracy
The accuracy and precision results are shown in Tables 2 and 3, respectively. The mean recovery rates for the four compounds (n = 6) at the three concentrations ranged from 93.05% to 114.98%. The intra-day RSDs for 0.5, 5.0, and 16.0 μg/L ranged from 1.6% to 6.2%, and the inter-day RSDs for 0.5, 5.0, and 16.0 μg/L ranged from 1.3% to 4.8%. Thus, the repeatability and recovery of the assay were within the acceptance limits of ±15% at the tested concentration levels.
Linearity, limit of detection (LOD), and LLOQ
The calibration model was selected based on analysis of the data by linear regression with intercepts and a 1/x² weighting factor, using the least-squares method. The linear calibration curves were plotted as the peak area ratio of the analyte to the IS (Y) versus the mass-concentration ratio of the analyte to the IS (X). Representative linear equations of the four compounds in rat whole blood are listed in Table 4. Each standard point in every calibration curve was back-calculated using its own equation. The calibration curves were linear between 0.2 and 20.0 μg/L, and the correlation coefficients (r) of the four compounds were ≥0.999.
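The 1/x² weighting can be reproduced with NumPy's polyfit, whose weight argument multiplies the residuals before squaring, so passing w = 1/x yields a 1/x² weighting of the squared residuals; the calibration points below are invented stand-ins for the concentration and peak-area ratios.

```python
import numpy as np

# Invented calibration data: concentration ratios x and peak-area ratios y.
x = np.array([0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])
y = np.array([0.021, 0.052, 0.103, 0.208, 0.515, 1.040, 2.060])

# polyfit minimizes sum((w * (y - p(x)))**2); w = 1/x gives 1/x**2 weighting.
slope, intercept = np.polyfit(x, y, 1, w=1.0 / x)

def back_calculate(peak_area_ratio):
    """Back-calculate a concentration ratio from a measured response."""
    return (peak_area_ratio - intercept) / slope

r = np.corrcoef(x, y)[0, 1]
print(f"slope={slope:.4f}, intercept={intercept:.5f}, r={r:.4f}")
print(back_calculate(0.4))
```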
LODs were established based on a signal-to-noise (S/N) ratio of 3, whereas LLOQs were established based on an S/N ratio of 10. The LODs for the four compounds in the injection solutions were estimated to be 0.06 μg/L. Based on acceptable precision and accuracy, the LLOQ was 0.2 μg/L in the injection solutions, as shown in Table 4.
Stability
The results for freeze-thaw cycle stability and short-term stability at room temperature at the 0.2, 0.5, 5.0, and 16.0 μg/L levels of the four compounds in rat whole blood are shown in Table 5. No significant degradation (<14.5% for short-term stability at room temperature, <13.5% for freeze-thaw cycle stability, and <8.5% for long-term stability) was observed for the four compounds. The results show that the four compounds were stable under the investigated conditions because the measured concentrations were within the acceptable limits (≤15% of the nominal concentrations).
Pharmacokinetic study
The method that we developed was successfully applied to a pharmacokinetic study after a single oral administration of 50 mg/kg of Sudan I, Sudan II, Sudan III, and Sudan IV to SD rats. Mean blood concentration-time profiles of Sudan I, Sudan II, Sudan III, and Sudan IV are shown in Fig. 5. The major pharmacokinetic parameters of the four analytes were calculated with a non-compartmental model using DAS 2.0 statistical software (Pharmacology Institute of China); the results for Sudan I, Sudan II, Sudan III, and Sudan IV are shown in Table 6. The absorption rate constants (ka) of Sudan I, Sudan II, Sudan III, and Sudan IV were 3.996, 1.393, 0.4368, and 0.4395 h⁻¹, which were significantly greater than their elimination rate constants (ke) of 0.8277, 0.3829, 0.2899, and 0.2932 h⁻¹, respectively. The four Sudan dyes were easily absorbed and rapidly entered the systemic circulation of the SD rats. This rapid absorption possibly reflects the specific lipophilic chemical properties of the four Sudan dyes: given the phospholipid bilayer composition of rat cell membranes, the lipophilic Sudan dyes readily and rapidly entered the body. These results were in accordance with the maximum blood concentrations of 630.91, 696.26, 349.39, and 1304.61 μg/L for Sudan I, Sudan II, Sudan III, and Sudan IV, respectively. The areas under the concentration-time curve (AUC0–∞) of Sudan I, Sudan II, Sudan III, and Sudan IV were 1150.03, 2966.66, 2706.37, and 10,013.91 μg·h/L, respectively, which indicated the high bioavailability of the four analytes. The successful application of the UFLC-MS/MS method to the pharmacokinetic study of the four Sudan dyes suggests its suitability for use in pharmacokinetic studies.
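As a rough illustration of how such parameters fall out of a concentration–time profile, here is a minimal non-compartmental sketch standing in for the DAS 2.0 computation; the terminal-phase point selection is simplified to the last few samples, ka estimation is omitted (it requires a compartmental fit), and the profile values are invented.

```python
import numpy as np

def nca(t, c, n_terminal=3):
    """Basic non-compartmental parameters from times t (h) and
    concentrations c (µg/L)."""
    cmax = float(c.max())
    tmax = float(t[np.argmax(c)])
    auc_0_t = float(np.trapz(c, t))          # linear trapezoidal rule
    # Terminal elimination rate constant from a log-linear fit of the tail.
    slope = np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)[0]
    ke = -float(slope)
    auc_0_inf = auc_0_t + float(c[-1]) / ke  # extrapolation to infinity
    return dict(Cmax=cmax, Tmax=tmax, ke=ke, t_half=np.log(2) / ke,
                AUC0_t=auc_0_t, AUC0_inf=auc_0_inf)

# Invented mean blood profile, loosely shaped like an oral-dose curve.
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])
c = np.array([150.0, 420.0, 630.0, 540.0, 360.0, 160.0, 70.0, 12.0])
print(nca(t, c))
```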
Conclusions
In this study, a sensitive and rapid UFLC-MS/MS method was developed and validated for the determination of Sudan I, Sudan II, Sudan III, and Sudan IV levels in rat whole blood. The established method was fast, precise, accurate, specific, and reproducible, and is suitable for pharmacokinetic studies of Sudan I, Sudan II, Sudan III, and Sudan IV. The validated method was sufficiently sensitive, with an LLOQ of 0.2 μg/L, and successfully determined Sudan I, Sudan II, Sudan III, and Sudan IV levels over the range of 0.2 μg/L to 20.0 μg/L. UFLC-MS/MS is suitable for detailed assessment in pharmacokinetic, bioequivalence, and bioavailability studies.
"year": 2015,
"sha1": "2745889843dce4848d6c8b40c2696327b2689738",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jpha.2015.03.001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2745889843dce4848d6c8b40c2696327b2689738",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Histological and radiographic evaluation of three common tendon transfer techniques in an un-ossified bone porcine model: implications for early anterior tibialis tendon transfers in children with clubfeet
Abstract

Purpose: To compare the histological healing and radiographic effects of tendons transferred to ossified or unossified bone using different tendon fixation techniques.

Methods: Nine newborn piglets underwent bilateral tendon transfers to either the ossified bony calcaneal body or the unossified apophysis. The tendons were fixed using metallic suture anchors, sutures alone, or a bone tunnel. At six weeks of age, the calcanei were harvested, radiologically imaged, and then prepared for histology. A semi-quantitative aggregated scoring system, with values ranging from 0 (poor) to 15 (excellent), was used to grade healing at the surgical enthesis, and apophyseal ossification was graded by five independent reviewers in triplicate using a modified (1 to 4) validated scoring system.

Results: Histologically, the cartilaginous transfers utilizing the tunnel and suture techniques demonstrated the best average aggregated enthesis healing scores, rivalling those measured in transfers using the classic bone tunnel technique (the clinical benchmark), whereas suture anchor fixation demonstrated the worst healing in both the ossified and unossified samples. All three transfer techniques caused at least minor alterations in apophyseal ossification, with the most significant changes observed in the metallic suture anchor cohort. The tunnel and suture techniques demonstrated similar and milder abnormalities in ossification.

Conclusion: Tendon transfers to unossified bone heal histologically as well as transfers classically performed through tunnels in bone. Suture fixation or tunnel techniques appear radiographically and histologically superior to suture anchors in our newborn porcine model.

Level of evidence
Introduction
Tendon transfers are common soft-tissue procedures utilized to improve function and/or dynamic anatomical alignment due to congenital, developmental or post-traumatic muscle imbalances, most commonly in paediatric hand and foot surgery. Talipes equinovarus and vertical talus are two congenital foot deformities in which tendon transfers have been reported. [1][2][3][4][5][6][7][8] In both deformities, the tibialis anterior tendon is transferred; however, the tendon is transferred to different tarsal bones in order to optimize the mechanical pull and correct residual deformity. 2,9 In addition to being transferred into different bones, different surgical fixation techniques are used in each setting. [1][2][3][4][5][6][7][8] In clubfeet, the tendon is transferred through a tunnel placed in the lateral cuneiform; thus it is recommended to wait until the cuneiform is ossified (three to four years of age) to ensure bone-to-tendon healing. 1,10,11 However, in the procedure described by Dobbs et al 2 for vertical talus, the tendon is sutured to the cartilaginous surface of the talus and appears to heal without incident. This has led to the following questions: 1) how does the healing of a tendon transferred to ossified or unossified bone (cartilage) compare, and do different tendon fixation techniques affect healing?; 2) what effect does each of the early tendon transfer techniques have on subsequent ossification?
To answer these questions, we utilized a previously described porcine model 12 to compare the histological and radiographic effects of tendons transferred using metallic suture anchors (new technique), suture fixation (similar to that described for vertical talus) and the classic bone tunnel (as described for clubfoot) to both ossified and unossified bone. Our null hypothesis was that no differences in histological healing or radiographic appearances would be found between fixation techniques. The purpose of this work was to increase our current understanding of the science behind these commonly performed paediatric transfers, in the hope of helping to guide the surgeon's choice of tendon fixation technique(s) in various clinical scenarios involving very young patients.
Materials and methods
Nine newborn (< 48 hours of age) mixed breed piglets 12 were anesthetized, placed in a prone position and the lower extremities prepped and draped for surgery. A midline posterior incision was performed extending proximal to the os calcis distal towards the midfoot. Posterior dissection was carried down until the flexor digitorum superficialis (FDS) was identified crossing superficial to the tendo-achilles. The FDS was freed from the underlying tendo-achilles then followed and dissected free from the surrounding soft tissues distally to the level of the metatarsals. The individual tendon slips were then transected. Single tendon slips were then transferred to either the unossified calcaneal apophysis (left) and/or through the ossified calcaneal body (right) as indicated by the specific procedure. All tendons were fixed under direct vision of the cartilaginous apophysis and boney calcaneus without radiographic imaging. Following all surgeries, piglets were recovered, and were returned to their sow, without any weight bearing restrictions or immobilization. Animals were euthanized and hind limbs collected for analysis at six weeks.
Suture anchors
The FDS tendon was harvested as described above. A size 1.3-mm suture anchor (Micro Quickanchor, Titanium, 4-0 Orthocord; Mitek/DePuy Synthes, Raynham, MA, USA) was used to fix one slip of the FDS tendon to the medial surface of either the cartilaginous apophysis or the boney calcaneal body via a Krackow-type stitch run ~2 cm up the tendon (Fig. 1).
Suture fixation
The same sutures used in the suture anchors (4-0 Orthocord, DePuy Synthes, Raynham, MA, USA) were used to fix tendons in the same manner with a Krackow stitch. The attached needle was then used to pass the suture arms through either the apophysis (left side) or the ossified bone (right side) without creating an expanded tunnel within the calcaneus. The tendon was then brought into apposition with the medial surface of the calcaneus, without pulling the tendon into or below the medial surface of the calcaneus. The suture arms were then tied over the cartilaginous or boney block on the lateral side of the calcaneus (Fig. 1).
Classic tunnel
The non-absorbable suture (4-0 Orthocord, DePuy Synthes, Raynham, MA, USA) was also placed using a Krackow stitch in this cohort. However, an 18-gauge syringe needle was then used to create a tunnel, either through the cartilaginous apophysis (left side) or through the ossified body of the calcaneus (right side). The free ends of the running suture were then placed through the syringe needle in a medial-to-lateral fashion. The tendon was then pulled through the tunnel (created by the 18-gauge needle), exiting the lateral side of the calcaneus, and held in place by tying a custom stainless steel button to the free end of the tendon slip, resting on the lateral side of the calcaneus. No excessive tension was applied to any of the tendon transfers. Wounds were closed in a non-layered horizontal mattress fashion using large gauge monofilament suture (#2 Prolene; Ethicon, Raritan, NJ, USA), due to the rapid animal growth and the tensile strength needed to maintain closure; animals recovered as described above (Fig. 1).
Calcaneal processing
The calcanei of all nine animals were harvested at six weeks of age. To evaluate any disturbances in the ossification of the calcaneal apophyses, lateral images of the calcanei were taken utilizing high resolution Faxitron imaging (UltraFocus Digital Radiography System with DXA; Faxitron, Tucson, Arizona). The calcanei samples were then sectioned in the sagittal plane and fixed in neutral buffered formalin for three days, followed by decalcification in 15% EDTA for two to three weeks. The anchors, buttons and pin tips were then removed. Finally, the samples were processed through to 70% ethanol and embedded in paraffin, and 5-µm sections were taken. Sections were stained with hematoxylin-eosin and Masson's trichrome and viewed using both transmitted and polarized light.
Histological analysis
A semi-quantitative grading system 12 was used to grade healing at the enthesis. This is an aggregated scoring system with values ranging from 0 (poor) to 15 (excellent) for enthesis healing (Table 1). Scoring was carried out by an experienced musculoskeletal histopathologist (M.D.) and another co-author (S.B.). Because the type of fixation was clear on histology, the authors could not be blinded when assessing the samples. Due to the low sample size (n = 3) in each cohort, only descriptive statistics (mean scores and ranges) were computed.
Scoring the radiographic appearance of the calcanei
The radiographic images were deidentified and provided to five independent readers who graded the apophyseal appearance using a modified one to four grading system previously described 12 (Fig. 2; supplementary material). Each reader scored each image in triplicate at three separate settings.
Statistical analysis
A descriptive analysis reporting the overall mode of modes (the most common reported score for each sample) and the percentage agreement between the five raters' scores for each sample was performed.
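As an illustration of this descriptive analysis, the sketch below computes each rater's modal score across triplicate readings, the overall mode of modes, and the percentage agreement for a single sample; the scores are hypothetical, not the study's data.

```python
from statistics import mode

# scores[rater] = three repeated readings of one radiograph (grades 1-4)
scores = {
    "rater1": [2, 2, 3],
    "rater2": [2, 2, 2],
    "rater3": [3, 3, 3],
    "rater4": [2, 3, 2],
    "rater5": [2, 2, 2],
}

rater_modes = {r: mode(v) for r, v in scores.items()}   # each rater's mode
overall = mode(rater_modes.values())                    # mode of modes
agreement = 100 * sum(m == overall for m in rater_modes.values()) / len(rater_modes)

print(rater_modes)
print(f"overall mode = {overall}; {agreement:.0f}% rater agreement")
```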
Results

Suture anchors
In tendons suture-anchored to bone and cartilage (mean overall score 5.7 out of 15 for both; range 5-7 for bone, 3-8 for cartilage), there was poor inter-digitation (< 50%) of the tendon fibres into the bone and cartilage interfaces, respectively. In both groups, the tendon fibres appeared disorganized, with low collagen fibre density.
Suture fixation
When tendon was sutured to bone (mean overall score 7.3 (range 7-9) out of 15), the samples exhibited medium-good (25% to 100%) inter-digitation at the bone-tendon interface; however, collagen fibres were disorganized with only medium collagen fibre density. Interestingly, when tendon was sutured to cartilage, healing was improved (mean overall score 10.3 (range 8-12) out of 15), and this group exhibited medium-good (25% to 100%) inter-digitation at the cartilage-tendon interface with medium collagen and moderate orientation scores in the tendon.
Tunnel fixation
When tendon was passed through a bone tunnel (mean overall score 9.3 (range 8-11) out of 15), the samples exhibited medium (50% to 74%) inter-digitation at the bone-tendon interface with moderate alignment and medium collagen density. A periosteal reaction and periosteal thickening was observed around the site of the bone tunnel. Similarly, when tendon was passed through a cartilage tunnel (mean overall score 10.3 (range 10-11) out of 15), these samples exhibited medium (50% to 74%) inter-digitation at the cartilage-tendon interface with moderate alignment and medium-high collagen density. A periosteal reaction and periosteal thickening was observed around the site of the cartilage tunnel. Examples of poor and good histological healing can be found in Table 2. Examples and a summary of the individual histological scoring are found in Figure 3 and Table 3. Representative histological findings for each cohort are shown in Figure 4.
Radiographic results
In all, 16 of 18 samples were available for radiographic review. The digital images of one animal (suture anchor) were unfortunately erased from the institution's data system prior to being scored. An overall rater agreement of ~90% was found using the four-point scoring system, with 100% agreement in nine samples, 80% in six samples and 60% in one. The mode of the readers' individual scores, overall mode and percentage agreement are shown (Table 4). Ossification of the calcaneal apophysis was minimally affected following transfers to the boney calcaneal body regardless of technique, while apophyseal sutures and tunnels had similar varying effects, and apophyseal anchors demonstrated the most severe disruption (Table 4).
Discussion
The findings in this study provide histological evidence that a tendon transferred to an ossified bone (as has been recommended classically in clubfeet) or to an unossified bone (cartilage) can heal quite well. However, the tendon fixation technique can make a difference in enthesis healing. In both the ossified and unossified bones, the use of suture anchors demonstrated the poorest histological healing. In agreement with the previously described clinical procedures in clubfoot and vertical talus, the classic tunnel technique appeared best in ossified bone (as is commonly used in the standard tibialis anterior transfer to the cuneiform), whereas suture fixation (described for vertical talus) appeared as good as the tunnel technique in unossified bone. Our data suggest that each fixation technique affects subsequent ossification in the unossified bone, but that the suture anchor had the most severe effect.
In addition to answering a scientific curiosity, the results of this study provide valuable information that may help guide earlier surgical interventions, as earlier correction of muscle imbalances might prevent structural deformities from developing. While the Ponseti method has become the treatment of choice for idiopathic clubfoot, [13][14][15][16][17] its success in maintaining deformity correction is dependent on long-term brace compliance. 18 Even in Ponseti's own hands, up to 40% of children treated required tibialis anterior tendon transfer (TATT). 11,19 In our previous clinical work, we have found brace tolerance and compliance to be difficult in our non-Iowa, non-American population. 20,21 As a result, we have considered exploring novel surgical approaches after casting to prevent recurrences that might provide families with an alternative to bracing. Similarly, while the Ponseti method can obtain correction in non-idiopathic clubfeet, most clinicians struggle to maintain correction despite bracing. [22][23][24][25][26][27][28][29] Potentially, earlier and better balancing of forces may provide longer-lasting corrections. While TATT is clearly not a panacea that corrects every relapse, 30 understanding how to safely perform such transfers at a young age is the first step in exploring any alternative treatment protocol. Outside of clubfeet, these findings may have similar implications in other pathologies, i.e. brachial plexopathy, in which abnormal muscle imbalances cause underlying skeletal deformities that might be avoided with earlier intervention; however, further investigation is needed.
Limitations of this study should be pointed out. Firstly, the sample sizes for each technique are too small (n = 3) to clearly define which method of tendon transfer results in the 'best' healing with the 'least' apophyseal injury, and do not allow more than a descriptive statistical analysis. However, even with these limited numbers, our findings suggest that suture anchors should not be used for fixation, as they demonstrated the poorest histological healing of all the techniques and the lowest radiographic score (despite the loss of one image). Apophyseal tunnel and suture techniques were found to alter subsequent apophyseal ossification to varying degrees, with less disruption of ossification in the tunnel cohort than would have been expected from our previous report. 12 This could be the result of examining the bisected calcanei (including both histological and radiographic imaging of the samples) in the previous study and imaging of intact calcanei in the current study. Despite assessing histological healing in the current study, we did not assess the mechanical properties of the transfers, which will be important clinically. Future work will focus on the technique (tunnel versus suture) that produces the best mechanical result and has the least impact on final apophyseal ossification.
FUNDING STATEMENT
No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article.
OA LICENCE TEXT
This article is distributed under the terms of the Creative Commons Attribution-Non Commercial 4.0 International (CC BY-NC 4.0) licence (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed.
"year": 2021,
"sha1": "beb15fe7d8a54cf4a4c81664ebe4fcb66b5f49db",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1302/1863-2548.15.210076",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ec3cc5e3e46a201d9c6b8e3d4b787fec97a13d49",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Ambient Temperature and Cerebrovascular Hemodynamics in the Elderly
Background and Purpose: Some prior studies have linked ambient temperature with the risk of cerebrovascular events. If causal, the pathophysiologic mechanisms underlying this putative association remain unknown. Temperature-related changes in cerebral vascular function may play a role, but this hypothesis has not been previously evaluated.

Methods: We evaluated the association between ambient temperature and cerebral vascular function among 423 participants ≥65 years old from the MOBILIZE Boston Study with data on cerebrovascular blood flow, cerebrovascular resistance, and cerebrovascular reactivity in the middle cerebral artery. We used linear regression models to assess the association of mean ambient temperature in the previous 1 to 28 days with cerebrovascular hemodynamics, adjusting for potential confounding factors.

Results: A 10°C increase in the 21-day moving average of ambient temperature was associated with a 10.1% (95% confidence interval [CI], 2.2%, 17.3%) lower blood flow velocity, a 20.3% (95% CI, 7.1%, 35.1%) higher cerebrovascular resistance, and a 15.3% (95% CI, 2.7%, 26.4%) lower cerebral vasoreactivity. Further adjustment for ozone and fine particulate matter (PM2.5) did not materially alter the results. However, we found statistically significant interactions between ambient temperature and PM2.5 such that the association between temperature and blood flow velocity was attenuated at higher levels of PM2.5.

Conclusions: In this elderly population, we found that ambient temperature was negatively associated with cerebral blood flow velocity and cerebral vasoreactivity and positively associated with cerebrovascular resistance. Changes in vascular function may partly underlie the observed associations between ambient temperature and risk of cerebrovascular events.
The potential effects of ambient temperature on cerebrovascular hemodynamics must be considered in the context of ambient air pollution, which has been repeatedly linked to changes in peripheral vascular function [12][13][14][15]. We have previously reported an association between ambient fine particulate matter air pollution (PM2.5) and resting cerebrovascular flow and resistance [16]. While that analysis adjusted for potential confounding by ambient temperature, we did not consider the potential interactions between temperature and PM2.5 or the potential associations with O3.
Understanding the relationships between ambient temperature and cerebrovascular function may yield insights into the mechanisms of weather-related cerebrovascular events and inform future prevention or treatment strategies. Accordingly, we evaluated the association between ambient temperature and cerebrovascular flow, resistance, and vasoreactivity in a prospective cohort of community-dwelling elderly in the Boston metropolitan area. A secondary goal was to evaluate the joint effects on cerebral hemodynamics of temperature with either PM2.5 or O3.
Study Design
We evaluated the association between short-term changes in ambient temperature and markers of cerebrovascular hemodynamics among 423 participants in the MOBILIZE Boston Study, a prospective, community-based cohort study [17]. Briefly, between 2005 and 2008 we recruited 765 non-institutionalized men and women aged ≥65 years who were able to communicate in English, resided within 5 miles (8.0 km) of the study clinic, and were able to walk 20 feet (6.1 m) without personal assistance. Individuals with a Mini-Mental State Examination score <18 were not eligible to participate. On enrollment, subjects participated in an in-home interview followed within 4 weeks by a clinic examination. We assessed participant characteristics, medical history, medication inventory, smoking history, blood pressure, height, and weight, as previously described [18]. A second in-home interview and clinic examination (follow-up visit) were performed a median of 16.5 months after the baseline visit. All participants provided written informed consent upon enrollment. The study was approved by the Institutional Review Boards at Hebrew Senior Life and Brown University.

Participants were classified as normotensive if blood pressure was <140/90 mmHg and there was no history of hypertension or receipt of antihypertensive medications; controlled hypertensive if blood pressure was <140/90 mmHg and there was a history of hypertension or receipt of antihypertensive medication; and uncontrolled hypertensive if blood pressure was ≥140/90 mmHg. A blood sample was collected during the clinic visit, and participants were classified as having diabetes mellitus if they reported a past diagnosis of diabetes, reported using any diabetes medications, were found to have a hemoglobin A1c ≥6.5%, or had a random glucose measurement ≥200 mg/dl. Height and weight were measured during the clinic visit according to a standard protocol, and body mass index was calculated.
Cerebrovascular Hemodynamics
At each clinic examination we evaluated participants' cerebrovascular hemodynamics at rest and during provocative stimulation, as previously described [17]. Briefly, we used transcranial Doppler ultrasound (TCD) to continuously measure cerebral blood flow velocity in the middle cerebral artery (MCA) while participants sat in a chair. A 2-MHz TCD probe (MultiDop X4, DWL-Transcranial Doppler Systems Inc., Sterling, VA) was placed over the right or left temporal bone with the best signal and held in place during recordings using a Velcro headband. TCD data could not be obtained in some participants because of the absence of a suitable acoustic window to insonate the MCA. We measured arterial blood pressure using a Finometer photoplethysmographic system (Finapres Medical Systems, Arnhem, the Netherlands) placed on a finger and held at heart level with a sling. The envelope of the velocity waveform was digitized at 500 Hz, displayed simultaneously with the blood pressure, ECG, and end-tidal CO2 signals, and stored for later offline analysis. Cerebrovascular resistance was calculated as the ratio of mean arterial pressure to blood flow velocity [19].
After a 5-minute resting period, we assessed cerebral vasoreactivity by asking participants to breathe room air normally for 2 minutes, inspire a gas mixture of 8% CO2, 21% O2, and balance N2 for 2 minutes, and then mildly hyperventilate to an end-tidal CO2 of 25 mm Hg for 2 minutes. Cerebral vasoreactivity was calculated as the slope of the linear regression of mean MCA blood flow velocity versus end-tidal CO2 during the maneuver.
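To make the two derived measures concrete, the sketch below computes cerebrovascular resistance as the ratio of mean arterial pressure to blood flow velocity, and vasoreactivity as the regression slope of MCA velocity on end-tidal CO2; all values are hypothetical illustrations, not study data.

```python
import numpy as np

map_mmHg = 92.0            # mean arterial pressure (mmHg), hypothetical
bfv_cm_s = 46.0            # mean MCA blood flow velocity (cm/s), hypothetical
cvr = map_mmHg / bfv_cm_s  # cerebrovascular resistance (mmHg per cm/s)

# Vasoreactivity: regress mean velocity on end-tidal CO2 measured across the
# room-air, CO2-inhalation, and hyperventilation phases of the maneuver
etco2 = np.array([25, 30, 35, 40, 45, 50.0])   # end-tidal CO2 (mmHg)
bfv   = np.array([32, 38, 44, 50, 57, 63.0])   # MCA velocity (cm/s)
slope, intercept = np.polyfit(etco2, bfv, 1)

print(f"CVR = {cvr:.2f} mmHg/(cm/s); vasoreactivity = {slope:.2f} cm/s per mmHg CO2")
```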
Meteorological and Air Pollution Data
We obtained hourly ambient temperature and other meteorological data from the National Weather Service station at Boston's Logan Airport. PM2.5 was measured continuously at the Boston/Harvard ambient monitoring station, as previously described [20]. This monitor was within 10 km of the clinic site and within 20 km of participants' residential addresses. Hourly O3 measurements were obtained from the Massachusetts Department of Environmental Protection's Greater Boston monitoring sites and averaged. For each participant we estimated average ambient temperature, dew point temperature, PM2.5, and O3 levels in the 1, 2, 3, 5, 7, 14, 21, and 28 days prior to the clinical visit at which cerebrovascular hemodynamics were measured.
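A minimal sketch of this exposure assignment might look as follows; the hourly temperature series below is a synthetic stand-in for the monitoring data, and the function name is our own.

```python
import numpy as np
import pandas as pd

# Synthetic hourly temperature series standing in for the weather-station data
idx = pd.date_range("2007-01-01", "2007-12-31 23:00", freq="h")
temp_c = pd.Series(11 + 9 * np.sin(np.arange(len(idx)) / 1400.0), index=idx)

def visit_averages(visit_time, series, windows=(1, 2, 3, 5, 7, 14, 21, 28)):
    """Mean exposure over each w-day window ending at the clinic visit."""
    return {w: series.loc[visit_time - pd.Timedelta(days=w):visit_time].mean()
            for w in windows}

print(visit_averages(pd.Timestamp("2007-06-15 10:00"), temp_c))
```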
Statistical Methods
Of the 765 participants in the MOBILIZE Boston Study, we excluded 76 participants with a history of stroke, 1 participant with a clinical visit on a weekend (making it difficult to statistically adjust for potential day of week effects), and 265 subjects due to the absence of a suitable acoustic window to insonate the MCA, leaving 423 participants for this analysis. Among these 423 participants, TCD data on cerebrovascular hemodynamics were available at two visits in 258 (61%) participants and at one visit in 165 participants.
We used linear mixed effects models with random subject intercepts to assess the association between ambient temperature and cerebrovascular hemodynamics while accounting for repeated measures within individuals. Measures of cerebral hemodynamics (resting blood flow velocity, cerebrovascular resistance, mean arterial pressure, and cerebrovascular reactivity) were all natural log-transformed, and results are expressed as the percent difference (and 95% confidence interval [CI]) in each outcome per 10°C change in ambient temperature, 10 μg/m3 change in PM2.5, or 10 ppb change in O3. All models were adjusted for age (natural cubic spline with 3 degrees of freedom), sex, race (white versus others), smoking status (never versus ever), hypertension status (normotension, controlled hypertension, versus uncontrolled hypertension), diabetes mellitus, body mass index (natural cubic spline with 3 degrees of freedom), visit number (baseline versus follow-up visit), day of week (indicator variables), seasonal pattern (sine and cosine of calendar time with period of 1 year), and long-term temporal trends (centered time as linear and quadratic functions). Where indicated, we used natural cubic splines with 3 degrees of freedom, resulting in a function with 2 internal knots placed at the upper and lower tertiles of the distribution of the relevant variable. In sensitivity analyses we further adjusted for dew point temperature, PM2.5, or O3 in separate models. The exposure-response function of each outcome with temperature, PM2.5, and O3 was initially modeled as a linear function and, subsequently, as a natural cubic spline with 3 degrees of freedom.
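A minimal sketch of this modeling step (analyses were done in R; this Python analogue uses synthetic data and a deliberately simplified covariate set rather than the full adjustment described above) shows the random-intercept specification and the conversion of a log-scale coefficient into a percent difference per 10°C.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n = 200, 400
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), 2),   # two visits per subject
    "temp21": rng.normal(11, 9, n),               # 21-day mean temperature (C)
    "sex": rng.integers(0, 2, n),
})
subj_effect = rng.normal(0, 0.15, n_subj)         # random subject intercepts
df["log_bfv"] = (np.log(45) - 0.0106 * df["temp21"] + 0.05 * df["sex"]
                 + subj_effect[df["subject"]] + rng.normal(0, 0.10, n))

# Random-intercept linear mixed model on the log-transformed outcome
fit = smf.mixedlm("log_bfv ~ temp21 + sex", data=df, groups=df["subject"]).fit()

beta = fit.params["temp21"]                       # per 1 C on the log scale
print(f"{100 * (np.exp(10 * beta) - 1):.1f}% difference per 10 C")  # ~ -10%
```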
In a set of secondary analyses, we evaluated potential interactions between ambient temperature and PM2.5 and between ambient temperature and O3 by assuming linear exposure-response functions and including interaction terms in our main models. In additional sensitivity analyses, we verified these results using a low-rank tensor product smoothing which allows more flexible exposure-response functions for the main effects and 2-way interactions [21]. Analyses were performed using R statistical software (version 3.1.0). A 2-sided P-value of <0.05 was considered statistically significant.
Results
At baseline, the 423 participants with at least one assessment of cerebrovascular hemodynamics were predominantly female (56.0%) and white (86.1%), with a mean age of 77.7 (standard deviation [SD], 5.2) years (Table 1). Over the course of the study period, the mean ambient temperature was 11.0°C (SD: 9.2°C), mean levels of PM2.5 were 9.0 μg/m3 (SD: 5.3 μg/m3), and mean O3 levels were 22.9 ppb (SD: 10.7 ppb). Average pollutant levels over the 1 to 28 days prior to each clinical assessment are shown in Table A in S1 Tables. The intraclass correlation coefficients for resting cerebrovascular resistance, mean arterial pressure, and blood flow velocity ranged from 0.81 to 0.84, indicating high within-person reproducibility of these measures over time.
Measures of cerebral hemodynamics were associated with ambient temperature averaged over longer periods (Fig 1). For example, a 10°C increase in ambient temperature averaged over the prior 21 days (i.e., the 21-day moving average) was associated with a 10.1% (95% CI: 2.2%, 17.3%) decrease in resting blood flow velocity, a 20.3% (95% CI: 7.1%, 35.1%) increase in resting cerebrovascular resistance, a 9.0% (95% CI: 0.7%, 18.0%) increase in resting mean arterial pressure, and a 15.3% (95% CI: 2.7%, 26.4%) decrease in cerebral vasoreactivity (Table B in S1 Tables, Main Model). We used natural cubic splines and confirmed that the exposure-response functions underlying these observed associations were approximately linear (Fig 2 and S1 Fig). Ambient temperature was not associated with any outcome at shorter moving averages.
In sensitivity analyses, we assessed the robustness of the relationship between ambient temperature and cerebrovascular hemodynamics by further adjusting for PM2.5, O3, or dew point temperature in separate models (Table B in S1 Tables). Adjustment for PM2.5 modestly attenuated the results, although the overall pattern of the results remained unchanged. Adjustment for O3 had little or no impact on the results. Adjustment for dew point temperature had an inconsistent effect on the results, but substantially increased the width of the confidence intervals, suggesting strong colinearity between ambient temperature and dew point (Spearman's rank correlation coefficients 0.93 to 0.98).
We evaluated the potential interaction between temperature and PM2.5 on cerebrovascular hemodynamics and found that PM2.5 modified the relationship between temperature and resting blood flow velocity, reaching statistical significance for the 5-, 14-, 21-, and 28-day moving averages (Table 2). Specifically, the association of ambient temperature with blood flow velocity was significantly attenuated at higher PM2.5 concentrations and vice versa. For example, a 10°C increase in the 21-day moving average of ambient temperature was associated with a 9.0% lower resting blood flow velocity at the 25th percentile of PM2.5 (6.76 μg/m3), but a 6.4% lower resting blood flow velocity at the 75th percentile of PM2.5 (9.85 μg/m3) (P for interaction = 0.02). We found no statistically significant interactions between ambient temperature and PM2.5 on other outcomes. We confirmed these findings using a more flexible 2-way interaction model that showed a similar pattern of results (data not shown). We found no evidence of interaction between ambient temperature and O3 with any measure of cerebrovascular function (Table C in S1 Tables).
Discussion
In this cohort of elderly participants, we evaluated the association between cerebral hemodynamics and mean ambient temperature in the prior 1 to 28 days and found that, at longer averaging times, higher ambient temperatures were associated with higher mean arterial pressure, higher resting cerebrovascular resistance, lower resting blood flow velocity, and lower cerebrovascular reactivity in response to changing end-tidal CO2 levels. Additionally, at longer averaging times we found evidence of an interaction between ambient temperature and PM2.5 in association with blood flow velocity.
To our knowledge, this is the first published study designed to evaluate the association between ambient temperature and cerebrovascular hemodynamics. Thus, direct comparison to prior studies is not possible. However, rewarming patients with moderate hypothermia to above 37°C decreases the cerebrovascular pressure reactivity index, an indicator of cerebrovascular reactivity [22], suggesting elevated body temperature may perturb cerebrovascular regulation and potentially lead to increased risk of stroke [23].

Fig 1. The y-axis denotes the % change (and 95% confidence interval) in each outcome per 10°C increase in temperature, adjusted for age, sex, race, smoking, hypertension, diabetes, BMI, visit number, day of week, season, and long-term time trends. The x-axis denotes the averaging period for ambient temperature (in days) prior to the TCD assessment.
More is known about the effects of ambient temperature on the peripheral circulation. Nawrot et al. [10] found that higher ambient temperatures in the prior 1 to 21 days were associated with reduced brachial artery flow-mediated vasodilatation, indicative of reduced endothelial function. Similarly, in the Framingham Heart Study, Widlansky et al. [11] found that higher same-day temperatures were associated with reduced hyperemic flow velocity, a marker of peripheral microvascular vasodilator function. These results may be considered analogous to and qualitatively consistent with our findings of reduced cerebrovascular reactivity in the MCA territory in association with ambient temperature, albeit over different time scales. On the other hand, Widlansky et al. [11] found no association between same-day temperature and either resting brachial artery diameter or flow-mediated dilation. In a repeated-measures study among patients with type 2 diabetes, Zanobetti et al. found that same-day temperature was positively associated with brachial artery diameter but not with either flow-mediated dilation or nitroglycerin-mediated dilation [24]. These studies differ from ours, in part, in the characteristics of participants and time periods considered. Importantly, only the study by Nawrot et al. [10] considered the effects of averages of temperature longer than 5 days on markers of endothelial function.
We have previously reported that in this cohort PM2.5 is associated with higher resting cerebrovascular resistance and lower resting blood flow velocity at longer moving averages [16]. In the current study we found evidence of an interaction between PM2.5 and ambient temperature for blood flow velocity, but not other outcomes. The physiologic basis for this interaction is unclear, but since the observed associations with temperature and PM2.5 are in the same direction (i.e., to decrease blood flow velocity), it is plausible that as blood flow velocity decreases other compensatory mechanisms are activated to preserve cerebral blood flow. Nonetheless, if our findings on the cerebral circulation can be extrapolated to the peripheral circulation, our results suggest that there may be important interactions between ambient temperature and at least some ambient air pollutants in eliciting endothelial dysfunction, but these interactions might only be observed at longer averaging times.

Arterial blood pressure has been shown to be negatively associated with same-day ambient temperature [25][26][27][28][29]. We similarly observed a modest negative association between mean arterial pressure and ambient temperature averaged over the prior 1-3 days, although these results did not reach statistical significance. However, at longer averaging periods, we found that ambient temperature was positively and significantly associated with mean arterial pressure. Few prior studies have considered comparably longer averaging periods. However, previous authors report that nighttime temperature is positively associated with blood pressure, potentially via reduced sleep quality or duration [30,31], suggesting that temperature may affect blood pressure via different physiologic mechanisms depending on both the time frame and the context of exposure. Our finding that the association between temperature and arterial blood pressure may be in opposite directions depending on the time frame considered raises potentially interesting questions about the differential effects of temperature on the risk of cardiovascular events. Indeed, time-series studies have often found that excess heat is associated with increased risk of cardiovascular events over the next 1-3 days, while excess cold increases the risk of cardiovascular events over the next month or so [32,33]. Additional studies confirming or refuting our observations are clearly needed. Impaired vasoreactivity assessed by TCD is an established predictor of subsequent stroke, especially among patients with high-grade carotid artery disease [34]. If higher temperatures do indeed increase the risk of cerebrovascular events (although, as mentioned above, the evidence for this remains equivocal), our results implicate impaired vasoreactivity as one potential mechanism.
We found no association between O3 and any measure of cerebrovascular hemodynamics. This finding is consistent with previous experimental and observational studies failing to find associations between recent O3 levels and peripheral vascular function [29,35,36]. Additionally, our findings indicated that the association between ambient temperature and blood flow velocity was modified by PM2.5 levels. Although we are not aware of any studies examining how air pollutants interact with temperature on cerebrovascular function, several observational studies suggest an interplay of meteorological variables and air pollutants on peripheral vascular function. For example, Hampel et al. [37] found a significant interaction between PM2.5 and ambient temperature on blood pressure among pregnant women. Further experimental studies may be needed to uncover the biological mechanisms underlying the potentially complex interactions between pollutants and meteorological variables in eliciting vascular responses.
This study has a number of potential limitations. First, the high correlation between ambient temperature and dew point in this dataset makes it difficult to separate the effects of these exposures. Second, the 423 subjects included in this analysis tended to be leaner, younger, less likely to have hypertension, and more likely to be white compared to the overall cohort population [19]. Therefore, our findings may not be generalizable to the entire MOBILIZE Boston Study cohort or the general elderly population. Moreover, our results may not be generalizable to populations in other geographic areas with different climatic conditions as well as different levels and sources of air pollution. Third, as has been done in past studies, we used ambient temperature data from a single nearby weather station, potentially leading to some exposure misclassification. More importantly, ambient temperature may not accurately reflect actual temperature exposures during times when participants are indoors. Indoor or personal temperature exposure may have different effects on cerebrovascular hemodynamics, but we were unable to explore this possibility due to the lack of information on participants' indoor environment. Fourth, our results reflect a combination of the associations observed among repeated measures within the same individual and cross-sectional associations observed across individuals. On the other hand, our study has several strengths including a novel hypothesis, a relatively large sample size, a well-characterized study population, and a detailed assessment of cerebrovascular hemodynamics.
Conclusions
In conclusion, in this cohort of elderly participants, we found that ambient temperature was associated with lower cerebral blood flow velocity, higher cerebrovascular resistance, higher mean arterial blood pressure, and lower cerebral vasoreactivity. The association between ambient temperature and blood flow velocity was attenuated at high levels of PM2.5, but unaffected by ambient O3 levels. These findings build upon and extend the growing literature on the peripheral vascular effects of temperature and air pollution and provide insights into the potential mechanisms of weather-related cardiovascular morbidity and mortality.
Supporting Information

S1 Fig. Exposure-response functions between ambient temperature and markers of cerebral hemodynamics among 423 participants in the MOBILIZE Boston Study. Natural cubic splines with 3 degrees of freedom were applied to model the association between ambient temperature and each outcome (blood flow velocity, cerebrovascular resistance, mean arterial pressure, and cerebral vasoreactivity), averaging temperature over different periods (1-, 7-, 14-, 21-, or 28-day) prior to the clinic visit. The dashed lines denote the 95% confidence intervals. The carpet plots along the x-axis denote the density of temperature values. All models were adjusted for age, sex, race, smoking status, hypertension status, diabetes, body mass index, visit number, day of week, season, and long-term temporal trends. The concentration-response plot for the 1-day moving average was similar to the 2- and 3-day plots, and the 7-day moving average plot was similar to the 5-day plot. (TIFF)

S1 Tables. Supplemental tables. (DOCX)
"year": 2015,
"sha1": "75256278e85494d1f1da4f69d69964ccf5acccef",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0134034&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75256278e85494d1f1da4f69d69964ccf5acccef",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Dialogic analysis vs. discourse analysis of dialogic pedagogy: Social science research in the era of positivism and post-truth
The goal of this article is to compare and contrast dialogic analysis versus discourse analysis of dialogic pedagogy to address Bakhtin's quest for "human sciences" and avoid the modern traps of positivism and of post-truth. We argue that dialogic analysis belongs to dialogic science, which focuses on studying "the surplus of humanness" (Bakhtin, 1991, p. 37). "The surplus of humanness" is "a leftover" from the biologically, socially, culturally, and psychologically given – the typical and general – in human nature. It is about the human authorship of the ever-unique meaning-making. Dialogic analysis involves the heart and mind of the researchers, who try to reveal and deepen the meanings of the studied phenomena by addressing and replying to diverse research participants, other scholars, and anticipated readers (Matusov, Marjanovic-Shane, & Gradovski, 2019, in press). We argue that dialogic science is concerned with meta-inquiries such as, "What does something in question mean to diverse people, including the researchers, and why? How do diverse people address and reply to diverse meanings?" In contrast, traditional, positivistic science is concerned with meta-inquiries such as, "How are things really? What is the evidence for that? How can any researcher subjectivity be eliminated from the research?" (Matusov, 2019, submitted). Positivist (and monologic) science focuses on revealing patterns of actions, behaviors, and relationships. We argue that in the study of dialogic pedagogy, it is structural and/or functional discourse analysis that focuses on studying the given and objective aspects of dialogic pedagogy. In the paper, we consider, describe, interpret, and dialogically re-analyze a case of dialogic analysis involving science education coming from David Hammer's and Emily van Zee's (2006) book. We also discuss structural and functional discourse analysis of two pedagogical cases, a monologic and a dialogic one, provided by David Skidmore (2000). We dialogically re-analyze these two cases and Skidmore's research. We conclude that in research on dialogic pedagogy (and beyond, in social sciences in general), both dialogic science (involving dialogic analysis) and positivist science (involving discourse analysis) are unavoidable and needed, while providing overall different foci for the research. We discuss the appropriateness and the limitations of discourse analysis as predominantly searching for structural-functional patterns in classroom discourses. We discuss dialogic tensions in the reported dialogues that cannot be captured by discourse analysis' search for patterns. Finally, we discuss two emerging issues among ourselves: 1) whether discourse analysis is always positivist and 2) how these two analytic approaches complement each other while doing research on dialogic pedagogy (and beyond).

Eugene Matusov is a Professor of Education at the University of Delaware. He studied developmental psychology with Soviet researchers working in the Vygotskian paradigm and worked as a schoolteacher before immigrating to the United States. He uses sociocultural and Bakhtinian dialogic approaches to education. His recent books are: Matusov, E. (2017). Nikolai N.
Konstantinov’s authorial math pedagogy for people with wings, Matusov, E. & Brobst, J. (2013). Radical experiment in dialogic pedagogy in higher education and its Centauric failure: Chronotopic analysis, and Matusov, E. (2009). Journey into dialogic pedagogy. Ana Marjanovic-Shane is an Independent scholar in Philadelphia, USA. She studies meaning making in human development, dialogic educational relationships and events, democracy in education, dialogic teacher orientation, the role of imagination, drama, play and critical dialogue in education. In her studies, she is developing a dialogic sociocultural paradigm, inspired by a Bakhtinian dialogic orientation. Her articles were published by "Mind, Culture, Activity Journal", "Learning, Culture and Social Interaction", and as book chapters in books on play, education, and democracy. Her most recent publication is: MarjanovicShane et al, (2019). Idea-dying in critical ontological democratic dialogue in classrooms. Learning, Culture and Social Interaction. Tina Kullenberg holds a Doctor of Philosophy in Education, currently working at Kristianstad University (Sweden) as a lecturer with teacher students from various programs, and the Master's program in Educational Science. Her research focuses on pedagogical communication, applying dialogic and sociocultural perspectives on teaching and learning. Lately she has been especially engaged in Bakhtininspired approaches to education. She also has a special interest in addressing democratic issues with a relational lens, for example, exploring the intricate dynamics of power-relations in educational dialogues between teachers and students or peers, premises for student agency, and other institutionally embedded dilemmas or opportunities in schooling of different types. Moreover, she has a background from the area of music education, in theory and practice. Kelly Curtis is currently a PhD student in mathematics education at the University of Delaware with a Bachelor's degree in mathematics and a Master's degree in mathematics education from Brigham Young University. She has taught mathematics and mathematics content courses for pre-service teachers at the secondary and college level for six years (in Utah, Colorado, and now Delaware). She is currently working on her dissertation which has to do with how cognitive demand of mathematical tasks affects the way that teachers and students interact around mathematics. She wants to help teachers learn how to improve their teaching skills. She is especially interested in mathematical discourse and how teachers can implicitly send messages to students about what it means to do mathematics and be good at mathematics.
Introduction
In our observation, broadly defined "dialogic pedagogy," which emphasizes the importance of dialogue for education, has mainly, but not exclusively, been studied discursively. Since the mid-1970s, discourse analysis – i.e., finding structural and/or functional patterns of classroom interactions – has been extremely helpful for problematizing conventional monologic pedagogy. A well-known example is the discovery and critique of "the triadic exchange" (Sinclair & Coulthard, 1975), in which the teacher initiates a discourse with a known-answer question, a student responds, and the teacher evaluates the response.

In contrast, dialogue involves a special quality of human relations, based on the principle of "a plurality of consciousnesses, with equal rights and each with its own world, [that] combine but are not merged in the unity of the event" (Bakhtin, 1999, p. 6). At the core of this (ontological) dialogue, argued Bakhtin, is authorship of meaning. Meaning is defined as a dialogic relationship between an interested question asked by one person and a serious reply by another person in a broader sense (Bakhtin, 1986). In genuine education, the questioning person is primarily the student, while the responding person may or may not be the teacher. In educational research guided by dialogic analysis, the questioning person is the researcher while the responding person is the researched, although in a critical dialogue of dialogic analysis they can easily switch their roles because, in dialogic analysis, any authored meaning can become a subject of further questioning. In our view, this is what might be behind "a plurality of consciousness, with equal rights" – a right to be questioned by the research participants and beyond.
We argue that all types of Discourse Analysis are mainly about recognizing structural and/or functional patterns of discourse – patterns that have some relevancy for the researchers (and, at times, beyond the research community). By "structural patterns," we mean a pattern of organization of some elements of the discourse, like conversational turns, roles, power relations, mediational tools, rules, norms, semiotics, narrative structures, voices, linguistic elements (e.g., grammar, genres, registers), and so on.
In our judgment, Discourse Analysis in all its variations focuses on abstracting structural-functional patterns, while considering the very notions of "structural and functional patterns" broadly, beyond narrow definitions of structuralism and functionalism.
We claim that the birthmark of Discourse Analysis, as such, is to treat dialogue as "it," as a thing among other things – as an object of analysis existing independently and outside of the researchers studying it, which can be viewed by well-informed and well-trained scholars in the same way (cf. "intercoder reliability"). Discourse Analysis objectifies the subjectivities of the participants by trying to eliminate the subjectivities of the researchers, and especially their authorial subjective judgments (of course, never successfully). Thus, Trappes-Lomax sees differences in researchers' subjectivities as a problem of validity to be solved using diverse methods:

One way of dealing with subjectivity is through multiplicity of approach. This is usually referred to as triangulation and is especially characteristic of ethnographic approaches. Triangulation is generally understood to refer to the use of different types or sources of data (for example a participant's account in addition to the analyst's account) as a means of cross-checking the validity of findings, but may also refer to multiple investigators, multiple theories, or multiple methods (Denzin, 1978). (Trappes-Lomax, 2004, p. 141)
In positivist-minded research, disagreements among researchers about perceived patterns of discourse have to be reconciled into agreements.
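For readers unfamiliar with how such agreement is quantified, the sketch below computes Cohen's kappa, a common intercoder reliability statistic, for two hypothetical coders assigning discourse categories to the same utterances; it illustrates the kind of pattern-agreement computation at stake here, not any particular study's coding.

```python
from collections import Counter

# Hypothetical codes assigned by two coders to the same eight utterances
coder_a = ["IRE", "open", "IRE", "uptake", "IRE", "open", "IRE", "IRE"]
coder_b = ["IRE", "open", "IRE", "open",   "IRE", "open", "IRE", "uptake"]

n = len(coder_a)
p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n   # observed agreement
pa, pb = Counter(coder_a), Counter(coder_b)
p_e = sum(pa[c] * pb[c] for c in set(coder_a) | set(coder_b)) / n**2  # chance
kappa = (p_o - p_e) / (1 - p_e)

print(f"observed agreement = {p_o:.2f}, kappa = {kappa:.2f}")
```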
These attempts to take out the researchers' subjectivity are, in our view, attempts to take a bird's-eye perspective on the given reality and to make it as pure as possible – even and especially when Discourse Analysis focuses on various issues of human relationships, perspectives, and identities. Thus, for example, in his classical book on Discourse Analysis, James Gee defines it in the following way: "Discourse analysis considers how language, both spoken and written, enacts social and cultural perspectives and identities" (Gee, 2011, p. i). He claims that Discourse Analysis focuses on meaning: "The approach [to Discourse Analysis] in this book looks at meaning as an integration of ways of saying (informing), doing (action), and being (identity), and grammar as a set of tools to bring about this integration" (Gee, 2011, p. 8). In our judgment, Gee defines "meaning" monologically, as a particular pattern that has the function of an integration of informing, doing, being, and a "grammar of tools." Monologic "meaning" is always a pattern, a thing, which is often located in mediation, gestures, intonations, behaviors, actions, signs, words, sentences, utterances, networks, and systems. This pattern seeking is often related to a structuralist and/or functionalist approach to language use in a situated interaction, focusing on studying either the structures of a discourse or the functions it serves for some activities, or both. Linell (1998) describes it in the following way, although within the framework of "dialogism," which, in our view, can be more accurately called "discursivism":

Discourse and discursive practices are themselves highly structured. It is possible to generalize across singular situations to define patterns, sequential structures, routines, recurrent strategies and situation definitions (framings), activity types and communicative genres, as well as more traditional linguistic units and rules. But this is largely structure within discursive practices, rather than structure apart from, above and before discourse. Moreover, it is first and foremost an organization of social actions, and not a structure pertaining exclusively to language and linguistic forms. This is of course not to deny that there is a formal structure of e.g. the syntax of spoken language as it appears in actual discourse. Within a comprehensive dialogism, structuralist and functionalist perspectives could penetrate and complement each other (Linell, 1998, p. 5, italics in original).
In Discourse Analysis, monologic "meaning" is a semiotic thing, a semiotic pattern. Thus, Gee (2011, pp. 8-10) exemplifies the discursive "meaning" with two sentences about hornworms, showing that one sentence belongs to an everyday discourse, shaped by certain cultural practices, while the other belongs to a scientific biological discourse, shaped by (positivist) science practice. Gee convincingly argues that these two discourses generate different functional patterns of integration of informing, doing, and being – i.e., different discursive "meanings" (or, better to say, functions). Essentially, espoused Discourse Analysis equates "meaning" with a particular pattern recognition. Of course, in practice Discourse Analysis also usually involves dialogic meaning-making of the recognized structural-functional patterns. It often has genuinely interested inquiries and questions addressing, at least, the academic community in which this Discourse Analysis is situated. It is often guided by the researchers' own genuine emerging interest, redefining their initial inquiries and questions. Discourse Analysis researchers often try to make sense of the new patterns they find and consider their implications. However, Discourse Analysis researchers often feel uneasy with their own dialogic meaning making because it undermines the objectivity, generalizability, and validity of their research by positioning the researchers as subjective, unique authors of their dialogic meaning making rather than as objective scholars of a given reality (Denzin & Lincoln, 2005).
When Discourse Analysis is used to study dialogic pedagogy, it focuses on looking for, identifying, and coding certain structural and/or functional patterns that are attributed to dialogic pedagogy by the researchers. These (mostly) functional patterns might involve "dialogic instruction," "dialogic enquiry," "dialogic teaching," "heteroglossia," "intertextuality" (or "heterodiscursia"), "high student-teacher talk ratio," "asking open-ended questions," "the teacher's uptake of student ideas," "cycles of critique," and so on:

Drawing mainly on the theoretical ideas of Bakhtin on the dialogic nature of language, a number of authors have stressed the educative potential of teacher-student interaction which enables students to play an active part in shaping the agenda of classroom discourse. Examples include: dialogic instruction, characterised by the teacher's uptake of student ideas, authentic questions and the opportunity for students to modify the topic (Nystrand, 1997); dialogic enquiry, which stresses the potential of collaborative group work and peer assistance to promote mutually responsive learning in the zone of proximal development (Wells, 1999); dialogical pedagogy, in which students are invited to retell stories in their own words, using paraphrase, speculation and counter-fictional utterances (Skidmore, 2000); and dialogic teaching, which is collective, reciprocal, supportive, cumulative and purposeful (Alexander, 2004). (Skidmore, 2016, p. 98, italics in original)
These structural-functional patterns of the studied student-teacher classroom discourse are both conceptualized and operationalized to be coded for the presence or absence of the listed markers of dialogic pedagogy. What is important for our discussion here is that researchers doing Discourse Analysis do not need to participate with their mind and heart in the ideas discussed by the research participants beyond the coding, interpretation of findings, and drawing of implications, which often involves a minimum of ontological engagement with the ideas raised in the studied discourse and a minimum of dialogic contact with the research participants. Researchers' participation with their heart and mind in the studied phenomenon (i.e., ontological engagement) means reporting the researchers' personal feelings, authorial thoughts, personal experiences, personal connections, authorial descriptions, and authorial judgments in response to the observed phenomenon and the research participants' contributions (Matusov et al., 2019, in press). Discourse Analysis researchers do not usually engage in addressing or replying to the research participants, at least not in an explicit and legitimate way. They are often not in dialogic ontological contact with them, neither intellectually nor emotionally.
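A toy sketch of the kind of structural-functional coding described above might look as follows: each transcript turn is tagged with a speaker and pre-assigned codes, and the analysis reduces the discourse to pattern counts and a talk ratio. The transcript, code labels, and numbers are hypothetical, not any published coding scheme.

```python
# Hypothetical coded transcript turns
turns = [
    {"speaker": "teacher", "words": 12, "codes": {"open_question"}},
    {"speaker": "student", "words": 40, "codes": set()},
    {"speaker": "teacher", "words": 8,  "codes": {"uptake"}},
    {"speaker": "student", "words": 55, "codes": set()},
    {"speaker": "teacher", "words": 15, "codes": {"known_answer_question"}},
]

student_words = sum(t["words"] for t in turns if t["speaker"] == "student")
teacher_words = sum(t["words"] for t in turns if t["speaker"] == "teacher")

counts = {}
for t in turns:
    for code in t["codes"]:
        counts[code] = counts.get(code, 0) + 1

print(f"student/teacher talk ratio = {student_words / teacher_words:.2f}")
print(counts)  # frequency of each coded pattern in the discourse
```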
In contrast, in Dialogic Analysis, researchers are active co-participants in the dialogue with the research participants and their ideas. The data collection process adopts a dialogical approach (Sullivan, 2011), which attempts to form dialogues among the participants as well as between the participants and researchers to explore the research question. Sullivan wrote that the dialogical approach is concerned with the subjectivity of data analysis. This approach takes Bakhtin's (1986, 1999) perspective that "ideas are exchanged but ideas are actually lived rather than abstract and are full of personal values and judgements" (Sullivan, 2011, p. 2).
In this article we present, analyze, and compare dialogic analysis and discourse analysis. We first focus on an existing dialogic analysis study of a dialogic lesson in science with first-grade children discussing gravity by Hammer and van Zee (2006), which we define as the case of "tired gravity" (Case #1). In the process of our dialogic re-analysis of this pedagogical event, we enter into agreements and disagreements with the original researchers, David Hammer and Emily van Zee. We express puzzlements and have many questions for them and for the first-grade teacher whose science education class was recorded and described by them.
Next, we present and dialogically re-analyze two cases presented by David Skidmore (2000) as cases of Discourse Analysis (Cases #2 and #3). One of the cases (Case #2) represents Skidmore's discourse analysis of monologic pedagogy in a language art lesson in a multicultural primary school, where the teacher quizzed the students to determine how well they comprehended a story, "Rocky's Fox." The other case (Case #3) represents Skidmore's discourse analysis of dialogic pedagogy with 5-6th grade students discussing their views about characters in a story, "Blue Riding Hood," a parody of "Red Riding Hood" that makes all the characters behave in rather questionable ways. In our subsequent dialogic re-analysis of these cases, we discuss the appropriateness and the limits of discourse analysis as predominantly searching for structural-functional patterns in classroom discourses. We discuss dialogic tensions in the reported dialogues that cannot be captured by discourse analysis's search for patterns. We question David Skidmore's conclusions about the dialogicity of Case #3, based solely on the discursive patterns that are coded as dialogic, without entering into the critical dialogue about authorial meaning-making between the teacher and the students and about the discussed story.
Dialogic analysis: Case #1 of "tired gravity"
As an example of dialogic analysis, we chose a case of "tired gravity" from the terrific book "Seeing Science" by David Hammer and Emily van Zee (2006) describing, analyzing, and promoting dialogic pedagogy in science education. The book includes many lessons of teachers of diverse grades involving their students in deep, more or less free, conversations about diverse physical phenomena. The lessons were then presented to teachers during professional development workshops to "see science" in the students' free conversations. The researchers also interviewed the author teachers about their lessons and their decision making. The main goal of both the research and the pedagogical practices described in the book was to promote professional dialogues among teachers and to recognize and imagine new teaching-learning opportunities, rather than to criticize the author teachers. Hammer and van Zee argued in the book that the major problem of traditional science education (and probably beyond) is failing to recognize science in students' everyday thinking and conversations about natural phenomena. With a reference to Albert Einstein, they define science as refined discursive thinking on possible mechanisms - "tangible causes and effects" (p. 6) - behind natural phenomena.
The entire case of "tired gravity" in the first-grade classroom took exactly the first 9 minutes of the Day #2 lesson (21 minutes total on the video accompanying the book). Paradoxically, it was the most prominent "case" in Ana's and Eugene's reading of the book. We wrote "paradoxically" because, arguably, this "case" was the least developed by Hammer and van Zee, who skipped it in most of their workshop seminars: "In almost all seminars we skip this segment, in order to get to what comes later. It is mostly made up of the teacher trying the experiment herself in front of the children. At the end there's some joking about how 'maybe the gravity's tired' (lines 212-25), which could be interesting to think about. What's the joke? What understanding goes into making it or finding it amusing?" (p. 90).
However, David Hammer and Emily van Zee seem to be clearly intrigued with this case, since they actually keep referring to it in their introductory chapters. The "tired gravity" and what the teacher was doing to guide the children attracted a lot of attention, but at the same time it seems that the case was rather confusing and left David and Emily unsure what to do with it. David and Emily, we wonder what the teachers said about it when you presented it in a few workshop-seminars. This episode initially became a case of interest for me (Matusov, 2018b) and then for the other authors of this article. We respectfully disagree with David Hammer and Emily van Zee's interpretation that the episode can be reduced to the teacher's demonstration followed by the children's joke. We were also attracted and even fascinated by the author teacher's immediate reflection on her decision making during the episode (p. 78)
and then her postponed, arguably contradictory, profound reflection promoted by her later conversations with other teachers (p. 78) (see below). A question for David, Emily, and the teacher: what were these conversations about? Do you remember? As a former schoolteacher of physics, a dialogic pedagogy educator, and a scholar of dialogic pedagogy, I (Eugene Matusov) immediately recognized the importance and richness of this dramatic dialogic event, which I conceptualized as the teacher first inviting her students into a genuine dialogue, then silencing them, and then critically rethinking her guidance. The other three authors of this paper agreed. Elsewhere, my second author here (Ana Marjanovic-Shane), some other colleagues, practitioners and scholars of dialogic pedagogy, and I developed research on "idea-dying out" in dialogic pedagogy, in which we studied cases when students' ideas were extinguished in a classroom discussion (Marjanovic-Shane, Meacham, Choi, Lopez, & Matusov, 2019). We (the first and second authors) think that the case of "tired gravity" belongs to this phenomenon of the teacher's silencing of the children's ongoing dialogue.
So, why did we choose this particular case to illustrate and engage our readers in dialogic analysis of dialogic pedagogy? We have the following three main reasons. First, the case generates strong excitement in all of us. Usually, it is very difficult to find a teaching case where a teacher allows students to speak freely and listens to them very carefully and mindfully. We also had a strong desire to discuss it among ourselves and with David Hammer and Emily van Zee, dialogic researchers of dialogic pedagogy, with whom we both agree and disagree, and with the author teacher, whose contradictory and creative reflections we found very thought-provoking. In other words, this case powerfully "sucks" us into a dialogic analysis of dialogic pedagogy, and hopefully we will be able to promote our readers' engagement in it as well. Second, in contrast to more developed cases of dialogic analysis such as, for example, by Joe Tobin and his colleagues in their project "Preschool in three cultures: Japan, China, and the US" (Tobin, Davidson, & Wu, 1989; Tobin, Hsueh, & Karasawa, 2009), Hammer and van Zee focused on a) dialogic pedagogy (in their version) and b) academic curriculum and instruction. Arguably, the most attractive cases (for us) of dialogic analysis by Tobin focused on a) conventional pedagogy and b) classroom management. Third, the case of "tired gravity" is on par with David Skidmore's Cases #2 and #3 (see below) of discourse analysis involving elementary school students. Now we turn to providing a description and our interpretation and critical meaning making of the case.
On Day #2 of the "Falling Objects" lesson in the first grade, the teacher started her lesson by asking her students to report on their experiments of letting a sheet of paper and a book fall from the same height at the same time. The students were very enthusiastic to reply, providing all possible answers about the outcome of the experiment.
For a while, the students and the teacher continued to discuss the different results the children got, until the teacher asked (73-77), "How could it be that we all-we got different results-[…]-when we did the same thing?" Analyzing this event, we (the authors of this article) had different interpretations of what was going on at this point. We decided to preserve our authorial discussion, in which we questioned and challenged each other's interpretations. Below we present it and extend an open invitation to the reader as well.
Tina Kullenberg: It is interesting that these key words "How could this be" by the teacher are to be repeatedly found in lines 77 & 86. I am not sure how dialogic this particular question really is: whether it is a teacher-like rhetorical question or not; whether the teacher is searching for a correct, pregiven answer (according to the scientific laws in this case), or whether it could be conceived of as a genuine, explorative question.
Eugene Matusov: Dear Tina, my reading of this is that the teacher sincerely believed in the moment that getting different results from "the same experiment" is impossible. I think her question and her search for possible plausible explanations of the phenomenon experienced by the children are genuine. I do not think that she tried to make the students arrive at a preset curricular endpoint.
Ana Marjanovic-Shane: I am unsure whether to agree with you, Tina, or with you, Eugene, that in the case of the "How could this be" question the teacher is trying to swing the children toward a preset scientific understanding of gravity, thus not being genuinely interested in their answers, or if she is genuinely puzzled by the different results of "the same experiment." I kind of think that the teacher was fluctuating between being genuinely dialogic (like in 41-44, where she is asking Ebony to explain what he meant by "fell first," and when she is carefully mirroring the students' reports and summarizing differences among them) and being sure of her own understanding of gravity, rather than actually trying to conceptualize what the students' concepts of gravity might be. This is why it sometimes looks to me as if she is genuinely interested in the children's dialogic opinions, and sometimes she seems to be just sure that her understanding of the science of gravity is the only possible, correct way of conceptualizing these forces.
I think that we see these two different aspects of this teacher in her genuine struggle to be dialogic, and in her approach to science.
Tina: I see what you mean, Eugene. I am not quite sure about my own opinion yet but will take a closer look at it. However, Ana, I really liked your added reflections here. The teacher perhaps was wondering this genuinely, so to speak, but what is even more important is her approach to the voices of the others, namely the children's own understandings of the phenomenon. To mirror their ideas and clarify different viewpoints among them without sincerely letting these viewpoints affect your own interpretations is not completely dialogic, is it? (Also, in monologic scaffolding you are mirroring and answering prompting questions like that, no?)

Kelly Curtis: I think these are all interesting points. I like Tina's point about getting at students' understanding of the phenomenon, in their own words. When I read "how could this be," I was wondering if she was pressing students to reflect on the conditions of the experiment. For example, perhaps the students did not all drop it in the same way. She summarizes what the students said "fell first" meant by reiterating, "at the same place, at the same time." Perhaps she was trying to prompt students to talk about someone not letting go of the objects at exactly the same time. In this way, I agree with Eugene that the teacher believed that students should have had similar results.

<<… interpretation: On the one hand, there is a preset/given understanding of gravity. In that sense, ultimately, the teacher's intention might be to lead them towards that (here, I tend to agree with Tina's point about monologic scaffolding). On the other hand, if you look at the classroom interaction per se, it can be seen as an opportune opening to furthering an understanding of a phenomenon in the dialogue of differences, probing the students' subjective experiences further.>>

It seemed that the teacher decided to focus the children on searching for a physical mechanism that could explain the strikingly different outcomes of their experiment, conducted in the same way (as the teacher claimed). The students seemed to struggle for an explanation, first trying to retell their experiences: for some of them, in some turns, the book fell first, but in other turns the paper fell first. The teacher continued to solicit their opinions on what happened and why, until one child, Rachel, suggested that the reason is in the "forces of gravity." Another child, Diamond, asked for an explanation of what "forces of gravity" are. Various children gave different answers: that gravity is what "keeps us down on the ground" when we jump, like "ground magnets"; that there is no gravity in space, where one can "never fall down" but just "float in the air"; and so on. Some children started demonstrating it by jumping and showing how gravity pulls them down. When more children got up to jump and laugh, the teacher said (turn 121), "Oh, we're not all going to [jump?]," and directed the children to sit down. The teacher then restarted the dialogue about the force of gravity: "Gravity's pulling the book down before the paper."
The teacher then carefully guided the children's attention to the puzzle of getting different results in their experiments, asking (139), "Why would gravity-why would gravity? [Three-second pause] How do I phrase this. Why, why would gravity sometimes make the book come down first, and then the paper. But the, the same gravity at other times make things come down at the same time. Or you're saying that gravity sometimes makes the paper come down before the book. … Why does-why does gravity do all the-I'm trying to think. [Two-second pause] How could gravity make-how could the same force of gravity… give us three different results?" The children again seemed puzzled and started to describe what happened in their experiments. The teacher guided them with a lot of supporting questions to remember whether they dropped the paper or the book first, or together. One child, Henry, was deeply involved in testing his ideas about what happens if you drop the book and the paper together. At one point his idea was that (167), "… if you, if you drop 'em at the same time, maybe they might fall on the same time-on the floor." The teacher announced that she was going to demonstrate what happens by dropping the paper and the book together three times and asked the children to watch and see what happens. When Diamond offered a hypothesis, the teacher dismissed it and said (174), "Well, we'll see. Some people said that that's
what they found out, other people that they saw that they hit at the same time, and then we had two people that said that 'well, the paper hit first.' Let's see what happens, OK? You guys watchin'?" The teacher started dropping the paper and the book together. Every time, when the book dropped first, some children exclaimed in confirmation that they "knew it." After the third time, there was some laughter, and a boy, Henok, turned to Ebony saying, "See, Ebony?" Then the teacher looked at Ebony and said… The children were still puzzled. Autumn alleged that Ebony possibly dropped the paper first, but Ebony denied it. Then Allison offered another hypothesis (207): "-the gravity probably had, uh, [turns to Ebony] the gravity probably just pulled the paper a little more than it pulled the, the, um, book." Ebony responded to that with a laugh (209): "Maybe it's tired." This made almost all students laugh and repeat "maybe the book is tired." The teacher joined the children's laughter for a while, but then she stopped and said (216), "Let's come back to Allison's idea. Allison said, 'Well, maybe the gravity-the time when Ebony did the gravity pulled the paper down more than the book that time.'" However, instead of following the teacher, Ebony laughingly repeated his joke that the gravity is maybe tired. Many children exuberantly joined him. Autumn leaned towards the teacher, very close to her, eagerly waving and smiling to her, and repeated Ebony's joke about "tired gravity." But the teacher announced a change of the activity (223): "We're going to do something different now with the piece of paper; watch this." The students continued for a while to make comments about gravity being tired. Allison continued a discussion with Ebony about gravity pulling something faster than something else, using hand gestures to represent each object. The teacher continued with the change of the activity (228): "When you go back to your seat, this is what you're, you are going to do with your piece of paper." She crumples up a piece of paper…

The teacher now chose not to be responsive to Autumn, who was explicitly echoing the joke that the gravity was tired. Instead, the teacher demonstratively turned her face and body away from Autumn and introduced the next activity on the agenda, shifting the conversation to another phenomenon: the crumpled piece of paper falling down at the same time as the book. No more discussion of why the children reached different outcomes by letting a straight piece of paper and a book fall down. No more discussion of variable and unstable gravity. No more discussion of tired gravity. No more joking. Back to serious exploration of the physical mechanisms and the instructional agenda.
Tina: To me it feels that the teacher somehow speaks in terms of "discovery" and "exploration." But I think she does it for the sake of persuasion, with the aim of giving evidence for the universal physical law?
Eugene: Again, I respectfully disagree with you, Tina. I think that the teacher was sincere here - in my view, she sincerely exhausted her attempts to engage the students in exploration of "mechanisms" and
she did not know what to do with the diverse outcomes of the experiments and with the "tired gravity" joke. She got stuck and wanted to move on. What do you think?

Tina: Eugene, then I would say sincere persuasion. After viewing the video and reading the transcriptions, I do not have the impression that the teacher is teaching and talking in an open-ended, non-finalizing way to the children. In my eyes, she orients to a consensual "agreement-discourse," seeking to unify the voices in the end, according to her scientific and pedagogic beliefs. In doing so, she mirrors and clarifies different student viewpoints and experimental outcomes, no?
Ana: I am listening to the teacher in turn 216: Teacher: [Laughs with the students] Let's come back to Allison's idea. Allison said, "Well, maybe the gravity-the time when Ebony did the gravity pulled the paper down more than the book that time." To me (sic!), it sounds like the teacher is sincerely trying to "hunt" for children's ideas and mirror them back to the children, in order to provoke more discussion. And yet, she still cannot figure out what to do with some of their dialogic turns: I think she struggles to interpret what the expression "tired gravity" was meant to do and what it really did. I think that she herself is wondering whether it was a joke, or somehow a genuine idea about how gravity might work. I think that she was not insincere or manipulating, but because she herself was somewhat confused about the intent of the "tired gravity" that the children themselves laughed about, she decided to retreat to the safer ground of "scientific experiments" (see her subsequent explanation in the quote below). What do you think? Especially you, teacher, and you, David and Emily, what do you think?

Tina: Ana, I also think she is somehow seeking the children's ideas and making them explicit in order to provoke further discussion. However, what I was referring to more specifically is the ultimate purpose of this kind of classroom discussion (as I interpret her; of course, I would like to hear her own opinion here). In this particular case, the teacher seems to refer to Allison's suggestion of a scientific principle that she (Allison) herself was after. Therefore, the teacher had a plausible reason to re-contextualize Allison's claim again. In doing so, she argues for her own opinion via support from Allison's "correct" reasoning. Is utilizing Allison's comment as a resource for herself at least partially finalizing teaching? What do you think about such an interpretation?
Kelly: Tina, this makes me wonder what the teacher's opinion of gravity is, then. When Galileo dropped two spheres of different masses off the Leaning Tower of Pisa, he demonstrated that the objects would fall for the same amount of time. Therefore, from my perspective, gravity "pulls" on objects the same amount. The issue with the paper versus the textbook has more to do with air resistance than with gravity, then. But perhaps the teacher was allowing students to explore the idea of gravity pulling on the book "more" in preparation for the next experiment, when they crumple the paper into a ball and perform the experiment again. This experiment would contradict their conclusion that gravity pulls more on heavier objects. I wonder if the teacher made the connection between the two activities, where in the first activity the book would drop first but in the second activity they would drop at the same time. This connection would influence how the teacher talked about the activity with the children.
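A note on the physics behind Kelly's point, added here by way of clarification (ours, with our own notation; it is not part of the original exchange): in idealized free fall, the drop time is independent of mass, so any systematic difference between the paper and the book must come from air resistance. For an object of mass $m$ dropped from height $h$, Newton's second law without drag gives

$$m\ddot{y} = -mg \quad\Rightarrow\quad \ddot{y} = -g, \qquad t_{\mathrm{fall}} = \sqrt{\frac{2h}{g}},$$

a fall time containing no $m$ at all: the book and the flat sheet should land together. Adding a drag force changes the picture,

$$m\ddot{y} = -mg + F_{\mathrm{drag}}(v),$$

and the flat sheet, with a small mass relative to its large drag, falls visibly more slowly. Crumpling the paper shrinks $F_{\mathrm{drag}}$ while leaving $m$ unchanged, which is why the teacher's next activity (dropping a crumpled ball of paper together with the book) restores a near-simultaneous landing.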
We judge[9] this event as "idea-dying out" (actually "ideas-dying out") (Marjanovic-Shane et al., 2019) through the teacher's silencing of the students. The teacher "killed" the dialogue about the inconsistency of the experiment's results and the children's explanation through "tired gravity" by moving the discussion on to another topic, another physical phenomenon, and another inquiry.
<<Tara Ratnam, feedback 2019-02-06: I tend to agree here [with your judgment characterizing the event as "idea-dying out"]. I don't see the point of the opening to dialogue that the teacher initiated through the experiment she set for the students, who came up with diverse results. She didn't follow it through by allowing students to experiment and explore further to reach new levels of understanding. She also let go of her own contribution that could have provided another perspective for students to juggle with in advancing their thinking. The potential to learn from differences was given an abrupt stop! However, this might not stop students from continuing to think/argue about it further beyond the class. In this sense, there is still some value in the exercise the teacher set for students and the puzzling differences that it brought out.>>

The teacher seemed to disagree with us, at least immediately after her lesson (she might agree with us later, when she started seeing a possible physical mechanism behind the children's "tired gravity" joke). This is how the author teacher explained her pedagogical and organizational decision to switch the topic of the classroom discussion:

At this point, the students were out of ideas - perhaps out of steam. Or maybe they were confused by my asking them about an inconsistency that they did not see. In any case, I did not feel many of the students were comfortable with this explanation. The mechanisms they had talked about during their predictions, that the book would be pulled harder to the ground and that the air would push up on the paper, still seemed the most tangible explanations they had, and I wanted to get back to them. Since it looked like the class was tiring out, I decided to drop the book and flat piece of paper in front of the whole group, hoping to generate more discussion. It did not work. Instead, the students became a little silly, talking and giggling about how the book and the gravity were getting "tired." … It was time to move on. (Hammer & van Zee, 2006, p. 78)
Tina: I think this utterance, "At this point, the students were out of ideas - perhaps out of steam," is perhaps one of the most remarkable (and central) pieces of reasoning, worth problematizing, because I cannot find evidence for it at all in the transcription. On the contrary, to me it seems like the children have a flow of ideas, although not the expected ideas, and they are using joking/humor as a childish, playful way of engaging in learning (I see it in a study of my own right now!). At least in my eyes. Do you think it looks like they are running out of ideas, tired and out of steam?
[9] Tina: Or "interpret"? It could sound too harsh to the reader with the word "judge"?? Eugene: Sorry, Tina. This is pure judgment and not just interpretation. I know that middle-class folks do not like strong words and prefer euphemisms. But feel free to explain why you think that "judgment" is an inaccurate descriptor of our (or mine only?) action here. Ana: I agree with Eugene that what we are doing is not just interpretation, but our judgment is based on our interpretation. This may be a dialogic finalizing - a provocation to the addressed teacher to respond to our judgment. Tina: OK, it could be my lack of language skill as well. In Swedish it is a strong value-based term. I was thinking about general readers. What does the term "judge" mean to you then? Eugene: I think your Swedish understanding is probably correct. For me, "judgment," or, better to say, authorial judgment, is making a subjective value-laden evaluative opinion about a phenomenon or a deed, for which the author of the judgment takes responsibility. I am against the Christian call not to judge other people. I think authorial judgments are unavoidable and important to make, especially when they are highly dialogic. In itself, an authorial judgment is a deed, putting its author on record and on the line for judgments by others. What do you think?
Eugene: I think this was the teacher's interpretation at the moment of the lesson. Later, she seemed to disagree with her initial judgment, but it was not stated explicitly in the book.
Ana: Tina, I agree that the children were having a lively flow of ideas - but the teacher interpreted it differently!! See her quote. On another note - about our own disagreements and misunderstandings: it seems to me that it is sometimes hard to follow whose voice is saying what, since we all constantly refer to someone else's voice! I am not sure if it is possible to untangle this - or is it just the really complex nature of our analysis?
Tina: Ana, your first point: yes, exactly! This is intriguing and worth noticing: the children in fact have a vital web of ideas, but the teacher did not seem to value it during the lesson.
Kelly: I agree, and find it interesting that humor and laughter are, for some teachers, a sign that the lesson is getting off track when, in this case, as Tina noted, the children are having a flow of ideas. I think this reflects the issue of the teacher dealing with classroom management that was mentioned previously in this paper. Generally, students are seen as "on task" when they are serious in how they are discussing ideas and seen as "off task" when they are acting "silly" and using humor.

For the teacher, the students were simply tired or got stuck with the inquiry: "It was time to move on." On the one hand, we agree with the author teacher that it may be OK "to move on" when students' ability to dialogue on an inquiry or a phenomenon is exhausted because of their intellectual, emotional, or attention ability. We also agree that the discussion might have been stuck here.
However, on the other hand, we suspect that it was the teacher who got stuck and not the students. For the teacher, variable and unstable gravity did not make sense during the lesson, as she admitted later. Also, it seemed to us that at that moment she did not know how to guide the children with their phenomenon of having three different outcomes of the same experiment. After having conversations with other teachers during seminars organized by David Hammer and Emily van Zee, the teacher seemed to have second thoughts:

It sounded as though Rachel [lines 128-137] thought the force of gravity was a variable that acted according to its own whims, and my sense at the time was that this did not seem like a reasonable explanation.
But conversations later with other teachers gave me more to think about. Maybe Rachel was being more reasonable than I thought. She might have been thinking about how other things in nature have a varying effect, such as the wind: If you threw a ball from the same spot, in the same direction, at the same speed over and over again, the ball would not land in the same exact place every time. The wind might change speed or direction, causing the ball to take a slightly different course each time. Of course, the ball would probably land in the same place a few times, just like the book and the paper hit the ground at the same time more than once. Thinking about her idea in relation to this example made sense to me, so maybe it could be a reasonable explanation for these first graders. Gravity changes just like wind changes. Earlier, when I had asked them directly to resolve gravity's inconsistent effect, it is a possibility that they did not answer because they did not see the inconsistency (Hammer & van Zee, 2006, p. 78, bold is added by us).
In our view, the teacher's proposed "mechanism" explaining the observed three different outcomes of the same experiment with falling paper and book is brilliantly creative. We applaud the teacher's intellectual move of finding a natural phenomenon that behaves similarly to Rachel's (and Allison's, and some other children's) suggestion that gravity may be whimsically unstable. We also love the teacher's new pedagogical and, even more important, dialogical approach of taking children's suggestions seriously rather than dismissing them as the children being stuck, silly, and/or tired, exhausting their intellectual and attention abilities. The Russian philosopher of dialogism Mikhail Bakhtin argued that genuine dialogue starts when the participants assume that all of them have consciousnesses with equal rights and begin to take each other seriously (Bakhtin, 1999, p. 6).
Here we want to add a note on the role of humor and laughter in classrooms. For instance, turn 226 from the transcript leads us to think that at least some, if not all, of the students were neither running out of steam nor out of ideas at all. On the contrary, they (Allison and Ebony) continued to engage in a genuine dialogue, trying to explore the gravity mystery in depth. In doing so, they communicated vividly both with words and with bodily gestures when seeking to illustrate and creatively explain the issue of that day's lesson. When considering the video-documented sequence, it is quite obvious that they did not look bored or stuck as they talked it through. It is also notable that this turn was preceded by a peer discussion and not by a teacher-led question or comment about the joke of the tired gravity. That means that the humorous, perhaps "childish," talk did not distract them from engaging in the intended school science at the moment. Humor could be seen as the learners' way to deal with knowledge playfully and dialogically, rather than a break from productive learning. Thus, it does not exclude relevant learning but rather disguises it in another language. Consequently, it could be argued that it may be problematic to dismiss young students' way of exploring by means of dialogic jokes as non-serious or inappropriate knowledge building. However, we argue that even if the students had purposefully desired to totally drop the topic of the gravity experiment, thereby switching to another topic, they should have the academic right to do so, at least for a while (Matusov & Marjanovic-Shane, 2019).
It is interesting for us that in Hammer and van Zee's book, the two quotes of the teacher we brought above followed one after another in the reverse order to the order we gave them, and that they were left without any explicit dialogic responses to each other. We wonder if the teacher still thinks that her students became exhausted and confused, even after she found a possible new intellectual and pedagogical approach of how she could have moved the inquiry forward. Knowing what the teacher learned afterwards about wind as a model for unstable gravity, would she have switched the class discussion to another phenomenon or not? If so, why? If not, how might she proceed? How would she address the joke in the context of the wind mechanism? Would the joke about tired gravity emerge at all, in her view? We do not know.
Four puzzling concerns
After reading the case of tired gravity in Hammer and van Zee's book, we have developed four major related puzzles for them:

1. Why had the teacher interrupted the class discussion on the inconsistency of their experiments' outcomes, on unstable gravity as a possible explanation of this inconsistency, and on the children's joke of "tired gravity" metaphorically capturing the essence of their "mechanism" for whimsical gravity (and even modeling a whimsical behavior)? Prompted by the teacher's own later reflection (see her second big quote above), we rejected her initial explanation that the children were tired and confused (see her first big quote above). Rather, we suspected that the teacher became epistemologically, pedagogically, and dialogically paralyzed and frustrated. But, if so, why? Teacher, what do you think?
2. Why didn't the teacher come up with the idea of wind modeling the whimsical gravity proposed by Rachel? Why didn't the teacher recognize the science behind it? Of course, there could be a zillion reasons for a missed teaching-learning opportunity, as David Hammer and Emily van Zee discussed in the book: the teacher's lack of creativity in the moment, getting tired, being distracted by diverse multiple demands at the moment, and so on. However, we suspected that a systemic problem was looming, even if other explanations were also applicable. We sensed that something in the teacher's pedagogical orientation trapped her and prevented her from seeing other possibilities at the moment. If so, what was it and why did it prevail? Teacher, David, and Emily, what do you think?
3. Why didn't the teacher's then-reflections (immediately after teaching) and her now-reflections (when writing the case and reflecting on it) explicitly address each other, when strong implicit contradictions existed between them, especially since they closely followed each other on the same page of the book (p. 78)? We see contradictions between the then-teacher claiming that she abruptly switched the topic of the classroom discussion because the children were tired and confused and the now-teacher taking Rachel's idea of unstable gravity seriously. This question is for both the authors of the book - David Hammer and Emily van Zee - and for the author teacher. It is possible that they either did not notice these disagreements, or they would disagree with us that the teacher's reflections contradicted each other. However, again, we suspect a deeper problem behind the possible neglect or disagreement.
4. Finally, we wonder what we might do in this teacher's shoes as dialogic pedagogy educators (and researchers), being outside of the urgency of the moment, in contrast to the teacher's situation.
Here we want to propose ideas addressing all four of our puzzles for the readers' judgments and further discussions.
We suspect that a major meta-problem of the case of "tired gravity" is rooted in the authors' understanding of science as "refined thinking about mechanisms" behind natural phenomena (or human phenomena in social sciences). By "authors," we mean David Hammer, Emily van Zee, and the teacher-as-author of this case. This view of science can be better understood in the light of the concepts of ready-made science and science-in-action, developed by the French sociologist of science Bruno Latour (Latour, 1987, 1993, 1996, 1999; Latour & Woolgar, 1979). Latour described ready-made science as a science without any, or with only minimal, human subjectivity. It is about the world as it is, independently of its observers and researchers. In contrast, science-in-action is a practice of cleaning researchers' statements about the studied phenomenon of the researchers' subjectivity through a special discursive practice in a scientific community. Some readers may comment that Latour described a positivist science (positivist
ready-made science and positivist science-in-action), which is a good point, in our view. However, school science is often, if not always, about teaching positivist science as well - positivist ready-made science, to be exact. Positivist ready-made science is about learning "facts" - statements of consensual truth, cleaned of any subjectivity, that Latour called "high modality statements" (e.g., "The Earth rotates around the Sun") (Latour, 1987). Traditional school with its monologic pedagogy focuses on imposing facts on students through monologic instruction and exams. The authors of the book, however, tried to get away from this approach by shifting from teaching ready-made science to engaging students in science-in-action guided by the teacher.
We see the problem in how Hammer and van Zee, the authors of the book, defined the science practice as refined thinking about mechanisms. Namely, we see the problem as rooted in the authors' peculiar mixing of science-in-action with ready-made science practices. From the science-in-action approach, the authors have taken the discursive nature of science as a discourse of making mechanistic explanations through refining thinking about mechanisms in a dialogue. For instance, the teacher guides Brianna and others to refine and deepen their observations and reporting, as in line 41, when the teacher replies to Ebony's statement, "The paper fell first," with, "Now do you mean it hit the ground first, or it just started to fall first?" The teacher creates two alternatives for Ebony's (and some other children's) discourse, introducing the possibility that they might have performed their experiments differently, which resulted in different outcomes. Also, along the same lines of evidence of dialogic pedagogy, the authors recognize, value, and promote the authorial nature of students' creating mechanistic explanations. In contrast to conventional monologic pedagogy, the teacher and the researchers are not focused on making the students produce the correct explanations, but rather on the children's authorship of diverse explanations and on testing them against each other. The teacher makes it important to acknowledge each child's observation and contribution. For instance, the teacher replies to Ebony, "So, you're saying the paper hit the ground first, and then the book hit the ground. [Ebony nods his head in affirmation.] Then we have two other friends who are saying that the book and the paper hit the ground at the same time," and then acknowledges Rachel's and other children's remark "Twice!" by replying, "Twice. You did it twice and that's what you noticed." This prompts Allison to voice her authorship, "Yeah, me too!" The teacher did not hunt for the correct answer, as many conventional teachers often do, but rather she hunted for students' creative authorial conceptualizations of mechanisms of the discussed physical phenomena. In our judgment, these two powerful aspects of science-in-action, discursivity and authorship, are embedded in the authors' defining of science as a refinement of thinking about mechanisms. Importantly, they constitute both dialogic pedagogy and the authors' dialogic analysis of this dialogic pedagogy.
However, we also argue that the authors' definition of science involves two aspects of ready-made science and that exactly these two aspects created the problems in the case of tired gravity. The first aspect of ready-made science is the insistence on the exclusion of human subjectivity from the science practice. The authors defined science as refining the students' thinking about mechanisms. They "forgot" Latour's discovery that the science-in-action practice primarily focuses on the elimination of scientists' subjectivity from their explanations and facts. In the case of tired gravity, the children-scientists focused on emphasizing their scientist subjectivity rather than eliminating it. Thus, Ebony proclaimed, "To me, first, the paper fell first" (line 4). Ebony emphasized his scientist subjectivity via his intonation and then via his repetition (lines 8, 11, 13, 17), echoed by other scientists-children (lines 18-20, 22, 25, 34, 51). To us (sic!), Ebony and the other scientists-children seemed to imply that a phenomenon can reveal itself differently to different people. Of course, this goes against the positivist science-in-action studied by Latour.
Scientists have to be disciplined to eliminate any subjectivity from their observations of a phenomenon. Correctly trained scientists must be mutually replaceable, experiencing and seeing the same thing from the same experiment. Thus, a part of (positivist) science-in-action is not only about creatively generating, arguing, and testing mechanisms but also about eliminating the scientists' own subjectivity from their own observations and experiments. Interestingly, at one point the teacher apparently got involved in discussing her scientists-children's subjectivity in order to eliminate it through a consensus. Thus, on lines 36-37, 39, 41-42, the teacher seemed to be checking whether the problem of different outcomes of the experiment was rooted in a miscommunication: "So, what I'm hearing is that we have one person that said when he did it, that-…that when he dropped at the same time, from the same height-…-that the paper fell first. Now do you mean it hit the ground first, or it just started to fall first?" "It hit the ground first," replied Ebony. There was no miscommunication, and the teacher moved on to focus her scientists-students on finding a mechanism behind the discrepancy of the experiment outcomes.
In our meaning-making interpretation and analysis, the teacher's focus on generating and considering mechanisms as the definition of science, insisted upon by the researchers David Hammer and Emily van Zee, partially blinded her from guiding her scientists-students sensitively and dialogically to explore their differences further and deeper. The question of how her scientists-students conducted their experiments seemed to be a technical one for the teacher, standing in the way of the more important science actions of generating, arguing, and testing mechanisms.
In contrast, in our view, exploring differences is central for the science making. It is interesting for us that the teacher did not ask the students to show how exactly they conducted their experiments but throughout the lesson mostly did the experiments herself, with the children occasionally repeating what she did. On lines 86, 88, the teacher firmly closed the topic of the scientists' subjectivity by insisting that in their experiments they all did the same thing: "How could it be that we all-we got different results-… -when we did the same thing?" The question for the class became what natural mechanism could be responsible for the diverse outcomes of their experiments, rather than considering how and what they might have done differently while doing their experiments with the falling paper and book. With their major focus on discourse about mechanisms, experimental science-in-action, full of human subjectivity that will eventually be eliminated to become ready-made science, was apparently not important, either to the teacher or to the researchers. In fact, we argue, their focus on discourse about mechanisms contributed to the emergence of an epistemological and pedagogical trap for the teacher: to guide her scientists-students to imagine a natural mechanism behind the inconsistent outcomes of their experiments and away from the issue of how differently the scientists-students might have conducted their experiments.
The second ready-made science aspect, hidden in the authors' definition of science, is even more consequential and problematic, in our view, than the first one discussed above. The authors' definition of science as a refinement of everyday thinking about mechanisms of natural phenomena implies that the definition of the science practice pre-exists the science practice itself. This is exactly the position of the ready-made science described by Latour (1987). Aristotle labeled activities whose definition and goal pre-exist the activities themselves as "poïesis" (Aristotle, 2000). In contrast, in the science-in-action practice, its goal and definition emerge in the discourse of the community of relevant scientists, according to Latour's study. Aristotle called this type of activity "praxis." The teacher promotes and hunts for a ready-made definition of science - namely, mechanisms - while missing the whole discussion of the scientist subjectivity - "to me" - initiated by Ebony, as a scientific practice. Einstein's definition of science as a refinement of everyday thinking, Hammer's and van Zee's definition of science as thinking about mechanisms, and Latour's definition of science(-in-action) as the elimination of subjectivity are very interesting but still problematic and contested insights, in our view. For example, Latour's definition of science(-in-action), which reflects the praxis of the elimination of researchers' subjectivity, has been contested by quantum mechanics in general and by the famous Copenhagen interpretation, formulated by the Danish physicist Niels Bohr and the German physicist Werner Heisenberg, in specific, arguing that the elimination of the observer's subjectivity from a quantum phenomenon is impossible in principle (see Heisenberg's uncertainty principle): "There is no quantum world. There is only an abstract quantum mechanical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature" (Niels Bohr, quoted in Kumar, 2008, electronic version). Thus, deciding what is science and what is "scientific" belongs to the scientific practice itself and to debates in a community of scientists while scientists engage in science, and does not pre-exist, or at least does not fully pre-exist, the science practice. Science, science-in-action, is not poïesis but praxis. Or, in other words, using Bakhtin's neo-Kantian terminology, it is possible to say that in science-in-action the definition of science is unfinalizable (see Nikulin, 2010).
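For readers who want the referenced uncertainty principle in symbols, here is its standard textbook statement (our gloss, not part of the original argument):

$$\Delta x \,\Delta p \ge \frac{\hbar}{2},$$

where $\Delta x$ and $\Delta p$ are the standard deviations of a particle's position and momentum and $\hbar$ is the reduced Planck constant. No preparation or measurement can make both spreads arbitrarily small at once, so the observer's choice of what to measure is built into the quantum phenomenon itself - the point the Copenhagen interpretation presses against the ideal of subjectivity-free observation.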
We argue that the author-teacher's acceptance of the final definition of science, promoted by the educational researchers, severely limited her epistemological horizon, thus preventing her from recognizing emerging teaching-learning opportunities in the scientists-children's discourse. Even more, it robbed the students of their own teaching-learning opportunities, not even mentioning their teacher's guidance. Thus, the "to me" discussion was abruptly put to an end when the teacher introduced her new inquiry: "How could this be that we all did the same thing-we dropped the paper and the book-… -at the same place, at the same time. How could it be that we got all these different results? One person found out the paper hit the ground first, and then the book. We have two other friends that found out the book and paper fell down at the same time" (lines 73, 77-80). By implicitly making the diverse and personalized outcomes of the experiments strange and intellectually unacceptable, the teacher effectively closed a possibility for an inquiry: "Should we get the same results of the experiment or is it OK to have different results?" In everyday life, people are often faced with legitimately different perceptions, experiences, interpretations, meanings, and judgments while observing or being involved in a shared event. Probably the best example of that is the 1950 Japanese movie "Rashomon" by the director Akira Kurosawa, based on Ryūnosuke Akutagawa's short stories (Akutagawa & Lippit, 1999). In the movie, the audience sees four different versions of a murderous crime as perceived, experienced, and told by three participants in the crime and one hidden observer. The audience is left to think for themselves what "really" happened and whom to blame for the crime. Of course, in art, a diversity of subjective perceptions, outcomes, meanings, and judgments is very often legitimate and expected. In the natural sciences, it is problematic, although not completely impossible in the Theory of Relativity or in Quantum Mechanics, while in social science it is open to heated debate (Creswell, 2007; Denzin & Lincoln, 2005). The author teacher could have engaged herself in, and promoted further, the children's "to me" discourse and the inquiry behind it.
A few lines further down the lesson, the teacher closed another possible inquiry for the same reason: her epistemological horizon was severely limited by the preset definition of what science is about. As we already discussed above, on lines 86-88 the teacher shut down a possible inquiry into whether the scientists-children really did the same thing in their experiments or not. The author teacher apparently wanted to focus the children's attention on finding natural mechanisms that might be responsible for the differences in the outcomes of their experiments with the falling paper and book, while all of them "did the same thing" (line 88). Effectively, an inquiry into whether they actually did the same thing or not became blocked.
We wonder whether, when Rachel finally proposed a mechanism for the different outcomes of the experiment on lines 128-137 by suggesting that gravity may act differently on paper and book at different times, the author teacher took this suggestion seriously and pushed it forward by asking her scientists-students, "Why, why would gravity sometimes make the book come down first, and then the paper" (lines 139-143). However, in our judgment, the teacher struggled to accept Rachel's mechanism as legitimate. The teacher apparently undermined Rachel's explanation by emphasizing "the same gravity" (lines 139-143) and "the same force of gravity" (lines 145-147), tacitly rejecting Rachel's proposal. We suspect that the teacher struggled because she might have sensed the anthropomorphic nature of Rachel's mechanism. In our interpretation, the teacher struggled to formulate her question in response to Rachel's proposal, "Why would gravity-why would gravity? [Three-second pause] How do I phrase this…," because she might have been trying to avoid anthropomorphism in her question, and still "why would gravity sometimes make the book…" might have sounded as if gravity were a person. Her final formulation on line 147 was cleaned of any apparent anthropomorphism, but at the expense of firmly rejecting Rachel's mechanism: "how could the same force of gravity… give us three different results?" Only after conversations with other teachers could the teacher see a possibility for a non-anthropomorphic mechanism behind Rachel's proposal: the unstable and apparently "whimsical" behavior of wind (see Hammer & van Zee, 2006, p. 78, the quote is above). Again, according to our interpretation and analysis, the teacher's preconceived notions of what science is and of what kinds of mechanisms are legitimate severely limited her epistemological horizon and thus obstructed her recognition of emerging teaching-learning opportunities, injured her guidance, and, finally, inhibited the students' critical dialogue.
Of course, we do not mean that if the teacher had had no preconceived notions of science and mechanisms, she would have been able to recognize all the teaching-learning opportunities emerging in the students' classroom discourse. We agree with David Hammer and Emily van Zee (2006) that this is impossible in principle. Teaching and learning are authorial and creative processes (Matusov, 2011). Our authorial approach to education is based on Bakhtin's authorial ontological and polyphonic approach to dialogic meaning making: an existential point of view that stands in sharp contrast to instrumental education (and instrumental educational research). On the instructional level, instrumental teaching designs imply a standardized teaching "technology" in which the participants' unique and creative voices tend to be neglected. Moreover, authorial teaching and learning might be considered a "performance art" in which both the teachers and the students develop critical voices which transcend the culturally given, such as pre-given norms, rules, conventions and fixed educational goals (cf. Matusov, ibid.). We don't think that this teacher's approach to teaching was instrumental. On the contrary, we think that she had a creative ontological, polyphonic and authorial approach to teaching. However, we argue that her apparent treatment of science as poïesis severely and systematically limited her ability to recognize certain teaching-learning opportunities.
This time, however, the teacher was not effective in blocking the students' discourse that was undesirable for her (if our interpretation is correct, of course). Instead of refining Rachel's mechanism from its anthropomorphism, sensed by the teacher, Ebony developed the joke of "tired gravity" (line 209), essentially and effectively torpedoing the teacher's desire, as other peers enthusiastically joined the joke. We wonder if the teacher perceived this joke, their goofing off, as carnivalesque resistance (cf. Bakhtin, 1984) to her pedagogical regime of scientific mechanisms. It would have been interesting to talk with Ebony and the other children participating in the joke to check if they sensed that the teacher made anthropomorphic mechanisms a taboo in the classroom discussions and they rebelled against this taboo through the joke.
Of course, Ebony's joke might have been just a joke and not a sign of resistance in the children's own eyes (or the children might have made diverse senses of the joke among each other), but we argue that what makes Ebony's utterance, "Maybe it's tired. [Laughs]," a joke is its explicit anthropomorphism for gravity, contrasted with the official classroom discourse focusing on naturalism for any phenomena they discussed, actively promoted by the teacher. In another classroom context, like, for example, writing stories or poetry, tired gravity might not generate any laugh (or it still might, but maybe less likely - more research is needed). So, our reply to David Hammer's and Emily van Zee's important questions, "What's the joke? What understanding goes into making it or finding it amusing?" (Hammer & van Zee, 2006, p. 90), is that we suspect the children's carnivalesque resistance to the anthropomorphism taboo as the meaning of their "gravity getting tired" joke.
In our judgment, the teacher's response to the students' joke of "tired gravity" was a very deliberate silencing of the children through the teacher's switching of the legitimate topic of the classroom discussion: "We're going to do something different now with the piece of paper; watch this" (line 223 and then see line 228). Arguably, on the four previous occasions, the teacher's silencing of the students' discourses and potential inquiries was not deliberate, but this time it was. In our view, it was not only a suppression of the children's carnivalesque resistance to her pedagogical regime but also her admission of helplessness. She did not know what to do. She had exhausted herself epistemologically and pedagogically. Her epistemological, pedagogical, and dialogic horizons collapsed. She wanted to move on.
We argue that the teacher's epistemological poïesis, limited as it might be, monologized her pedagogy and the entire classroom discourse. Granted, the teacher's epistemological poïesis of predefined science and mechanisms was much, much more open than the epistemological poïesis of conventional teachers hunting for one or a few correct answers from their students. This teacher viewed science and its learning as a process of discourse and authorship. That fact made her pedagogy essentially dialogic. Still, her "seeing science" was predefined at a meta-level of what science is and what kinds of mechanisms are legitimate in science. We argue that she still had preset curricular endpoints, although very different from those of conventional pedagogy. Her preset curricular endpoints were at a meta-level. Arguably, the teacher wanted her students, by the end of her lessons, to learn that science is a refinement of their thinking about natural mechanisms behind natural phenomena. Elsewhere, I (the first author) argued that preset curricular endpoints are birthmarks of pedagogical excessive monologism (Matusov, 2009, 2018a).
For a teacher, seeing science-as-praxis in a classroom discourse is a peculiar thing. Why is it peculiar? How can a teacher recognize science in a classroom discourse among students, if science is an elusive thing in itself, emerging in and from the practice of the scientists-students?! It reminds us of the paradoxes of the Russian fairytale "Go I Know Not Whither and Fetch I Know Not What" or of Socrates in the "Meno" dialogue (Plato & Bluck, 1961). Paraphrasing Socrates' paradox about research making, one can ask a researcher preparing for a research project, "If you know what you are searching for, why do you search for it?! [Implying that the researcher has already found what she wants to find] But if you don't know what you are searching for, what are you searching for?! [Implying that the researcher is clueless and disoriented]" Both the Russian fairytale and Socrates' research paradox describe praxis. A teacher's seeing science-as-praxis in a classroom discourse with her students is based on taking all the previous definitions of science - by Albert Einstein about a refinement of everyday thinking, by David Hammer and Emily van Zee about mechanisms, by Bruno Latour about elevating the modalities of scientists' statements, by Niels Bohr about what scientists can say about our experiences and relationships with the world, and so on - as helpful but always problematic and always limited insights rather than as final definitions. A teacher should expect that her students and she will develop their own helpful, problematic, limited, and elusive - unfinalized -
E42
definitions of science through their own science practice in the classroom.This teacher orientation can prepare a science teacher to see science in the "to me" discourse, in an anthropomorphic mechanism, in a joke of "tired gravity" 15 , in a metaphor, in a poetry "The tired wind" by Alvin Willis 16 , and so on.
Yet, we characterize the teacher's pedagogy as deeply dialogic. She accepted and seriously considered any mechanism of the discussed natural phenomena proposed by her students. Her teaching was authorial because she engaged her own mind and heart in trying to understand her students and in engaging them in developing, clarifying, and testing their own mechanism proposals. In this, the teacher treated her students as "a plurality of [opaque, non-transparent] consciousnesses, with equal rights and each with its own world, [that] combine but are not merged in the unity of the event" (Bakhtin, 1999, p. 6, italics in the original). Even when she rejected the students' "tired gravity" as tiresome goofing around, she was willing to revisit the students' contribution and to imagine a serious proposal for a natural mechanism of capricious wind. Arguably, this revisiting is further evidence of the teacher's dialogic pedagogical orientation. For us, dialogic pedagogy is not "ideal" or "model" teaching but rather a dramatic pedagogy that is full of problematic moments to be considered, analyzed, praised, and, yes, criticized. This makes dialogic pedagogy risky and always contested. In our view, dialogic pedagogy ALWAYS involves missed teaching-learning opportunities and excessive monologism - both required for dialogic critical reflection among professionals (and even students). Our judgment of this teacher's dialogic pedagogy is based on her genuine interest in and commitment to her students' subjectivities and authorship. Her lapses in this commitment and interest, and her struggles that we noticed, are manifestations of the challenges of dialogic pedagogy and not betrayals of it. In our view, it is much healthier to see (and expect) that dialogically minded educators are monologically corrupt than to expect incorruptible "model dialogic teachers" and "model dialogic teaching".
Finally, however limited it might be, in our interpretation and judgment, we argue that the teacher-author, David Hammer, and Emily van Zee were involved in a dialogic analysis of the case and of this and other teachers' dialogic pedagogy in their book. The main goal of dialogic analysis is to deepen the meaning of the studied phenomenon, imagine new possibilities, abstract and problematize values, and so on (Matusov et al., 2019, in press). Such a dialogic approach also means provoking responsive questions by addressing and testing alternative interpretations. This is exactly what the authors and the teachers did in their book. They engaged their minds and hearts in recognizing events, interpreting them, making meaning, imagining new possibilities and alternatives and testing them, and making their authorial judgments rather than just focusing on recognizing patterns and their relationships. In our view, these processes constitute dialogic analysis and distinguish it from other traditional types of analysis, such as discourse analysis. We will turn to describing and discussing discourse analysis in the following sections.
Postscriptum
We emailed an earlier draft of our manuscript to David Hammer, Emily van Zee, and the Teacher 17 for their feedback, hoping to spark their public discussion around our dialogic analysis. To our big surprise, David and Emily became very upset with our critique of the Teacher's teaching practice. The Teacher herself did not respond, although she was probably attending to our intense email communication. At the end of the day, David did not give his permission to publish his objections to our paper. As to Emily, she

5. Suma: Because when she [i.e., Blue Riding Hood] was wandering around in the forest and he [i.e., the woodcutter] met her and the he told her that he's going to show her grandmother how to behave (.) and he had an axe and (.) the the (.) he took the skin off the wolf and he killed grandma.
6. Ian: No they didn't know there was bears in the forest and erm there they thought she would just get lost in the woods.
In turn 19, Penda seemed to introduce yet another basis for the assignment of guilt by considering who started the mess: "Yeah she [granny] started everything it was all her fault (.) if she hadn't thrown Red I mean Blue Riding Hood none of this would have happened." It seems to us that although the involved children sensed the disagreement among each other, neither they nor the teacher apparently recognized and defined the tension, if our interpretation is correct 21 . We would definitely ask the children and the teacher a question about this tension and about how they defined guilt and assigned it to the characters.

21 Tina: See a similar phrase below: I don't think we, ourselves, should join the traditional monologic paradigm by suggesting "accurate" or "correct" interpretations. But I understand the intention behind this phrase, I think. However, I suggest that we delete it and instead proudly show that we don't care about the correctness of our interpretations at all? We rather represent creative, situated, and embodied voices in their relativity? Eugene: An interesting issue. In my view, we should be concerned about the correctness of our interpretation because otherwise we would talk on behalf of people, which is rather monologic to me. The teacher and even the kids might recognize the disagreements but might strategically ignore them for a while for whatever reason. I think we should be humble in our interpretations when we do not have access to the participants to ask them directly (we could still disagree with them, but at least we would have asked them
In turn 11, the teacher asked the students (again?) to rank the characters by their guilt of bad behavior. In turns 12-16, some children provided diverse answers. We wonder why the teacher did not ask the children to provide the reasons behind their rankings.
Ana:
However, to give both the teacher and the students credit, after reading the two-page story "Blue Riding Hood" (Hunt, 1995, pp. 133-134), I conclude that the story itself is a rather bad, even senseless quasi-parody of the classic fairytale "Little Red Riding Hood" - where the task of the story is to make everyone "bad" - but without any motivation for why such a change would be meaningful. I feel very sad and disappointed with the story in the first place: there is no moral dilemma - the message is just that everyone is bad. The question about "degrees" of moral corruption seems a false question to me. I don't see it as actually educational, since it does not lead to any insights that can be revealing about ethics, in my view. It is probably not easy for the teacher and the students to engage in genuine dialogue about it - I mean, about the degree of guilt - except perhaps by developing a genuine critique of it, which would go against the didactic material the teacher used. But even with this story a dialogue could be developed about the children's questions about what is good/bad. And they had just started - when the 30 minutes were over and the teacher cut them off. *sigh!*

Eugene: Dear Ana, I respectfully disagree with you. I found the "Blue Riding Hood" story interesting, full of working-class (and peasant) humor and sensibilities similar to the classical medieval French fairytales (Darnton, 1984). Let me retell the gist of the story (i.e., its plot). Blue Riding Hood heads to her old grandmother, who lives in the woods, to give the grandmother cakes in exchange for dinner. However, the girl gets lost in the woods. The Wolf volunteers to help the girl and brings her to the grandmother's house. For his work, the Wolf demands the girl's cakes. When the girl hesitates, the Wolf threatens the girl with a bite. The clever girl offers a cake covered with poison. The Wolf eats the poisoned cake and dies. The girl comes to the grandmother, who gets upset that the girl did not bring all the cakes for her and also that the girl is late. The grandmother expels the girl to the woods late at night. The girl is frightened but meets the Woodcutter. The Woodcutter and the girl violently force themselves into the grandmother's house and send her to the woods, where she gets eaten by a bear. The Woodcutter marries Blue Riding Hood and they live happily ever after.
In my view, the story is full of motivations - sometimes rather selfish and violent, but also mundane and at times even noble and generous - of common people of low means, struggling to make ends meet. It could have been fun to consider and judge these motivations: their possible rights and wrongs. For example, the Wolf was apparently nice, volunteering to help the Blue Riding Hood girl, although he did not tell her in advance that he wanted to be paid with her cakes. The Wolf has a family that he supports, making him apparently a good parent - half of the cakes would go to his family. Knowing that in advance, the girl might have rejected his help. But the Wolf threatened the girl with biting, thus setting her mind on a counter-offense of poisoning and killing him. Each character's motivations and actions were understandable and even somewhat justified, but also highly questionable. The carnivalesque humor of plain folks involves a constant flip-flop of power, described by Bakhtin in his analysis of the novels of the French medieval writer Rabelais (Bakhtin, 1984). The powerful and scary Wolf was trumped by Blue Riding Hood, who tricked and poisoned him to death; Blue Riding Hood was bullied by her Grandmother, who sent her Granddaughter into the forest at dark night; finally, the Grandmother was violently de-crowned (Bakhtin's term) by the Woodcutter and Blue Riding Hood, who made the Grandmother taste her own medicine by sending her into the dark night forest. At the same time, I agree with Ana that the characters felt empty, lacking, in my view, a sense of humanity in their relationships with each other. In my judgment, the story is pregnant with dialogic inquiries, both for the students and for the teacher, that unfortunately were not realized in the lesson.
In our brief dialogic analysis, we also took into account the didactic materials the teacher apparently used to organize this teaching unit (Hunt, 1995, pp. 30-31). In those materials we read: "Discuss, with the whole class, the orders of blame arrived at by different groups. Encourage the children to justify their choices by referring to specific parts of the text" (p. 30). Thus, we see the teacher's question to the children in turn 11, "Okay should we now try to put the characters in some sort of order?", not as a dialogic question - a serious question based on the teacher's real interest in what the students think - but as the teacher following a preset didactic strategy aimed at arriving at certain preset curricular endpoints ("To prompt the children to discuss the behaviour of the characters in a story; justifying their judgements of who is most and least blameworthy by referring back to the text" (p. 30)).
Eugene: Dear Ana, as you wrote the text of the paragraph above, I wonder what exactly, in your view, made the teacher monologic. Was it her use of the pre-existing didactic materials? My answer is "yes" and "no." I can envision a teacher using pre-existing didactic materials to promote a genuine dialogue with her/his students. In this case, the pre-existing didactics serves as a dialogic provocation for the students and the teacher, who become genuinely interested, with their minds and hearts, in addressing each other and the provocation - wherever it may lead them. Unfortunately, this did not happen in Case#3. Rather - and I agree with your judgment - it seems that the teacher mechanically followed the didactics, without dialogically engaging in it and with her students. The students tried to address the teacher's disinterested question without much addressing or questioning the story or even each other.
Ana: I agree that a preset monologic teaching instruction can be used as a provocation. But in this case, the teacher does not use the material as a provocation, nor does she enter into a dialogue together with the students. In my view, she seems to use these instructions to get back "on track" with the class as prescribed, without guiding students into a deeper analysis of their positions and ideas.
Accordingly, in her turn 17, the teacher asked the children to focus closely on what happened in the story. This pedagogical move reminds us of the teacher from Case#2. It is interesting that David Skidmore does not seem to notice this parallel: the teachers in both cases tried to focus the students on the text and not on considering the differences among themselves. We also wonder, based on our reading of the didactic materials used by the teacher in Case#3, if she might have tried to guide her students to the preset curricular endpoints, albeit in a much looser, "constructivist," consensus-seeking way than the more traditional teacher of Case#2. Maybe pedagogy in Case#3 is much less dialogic than David Skidmore and we (initially) thought. The teacher's relativistic statement ending the discussion (turn 33) - which we found rather decontextualized and not very thoughtful, but which David apparently likes for its "no uniquely 'correct'" answers - supports our suspicion that this class was not very dialogic. In our view, when faced with a diversity of ideas and opinions, a dialogic approach would focus on putting these diverse ideas in critical contact with each other: comparing and contrasting them, drawing out the assumptions implied by each diverse idea and checking them, etc. Instead, the teacher's conclusion about the relativity of "the right and the wrong answers" remains monologic. If we are accurate in our interpretation, we would like to ask both the teacher and David Skidmore why they are ideologically attracted to this universal relativism.
Speaking in general, we wonder if a researcher's focus on Discourse Analysis - on identifying structural and functional patterns - makes it difficult for the researcher to recognize a dialogic pedagogy, because the latter requires the researcher to focus on deepening the participants' authorial meanings and to engage in dialogue with them. This may suggest that a discourse analysis, focusing on revealing functional-structural patterns, is not sufficient to recognize, analyze, and critically meaningfy dialogic pedagogy. Research on dialogic pedagogy may require a dialogic analysis.
Finally, at the end of our essay, we want to make a "big picture" comment. In our view, at the beginning of the 21st century, people around the world have experienced two types of oppression: 1) oppression by positivism/modernism, which tries to manage people like objects, signified by "big data," "best practices," "research-driven, evidence-based policies," "universal truth," "consensus among rational and informed people," and so on; and 2) oppression by the "post-truth," "alternative facts," "identity politics," "relativism," and "post-fact" of social engineering. We see a trap in fully rejecting or fully accepting either positivism or social engineering as such. In our view, dialogic research preserves the positivism of fact-discovery while curbing it through authorial meaning-making of these facts. At the same time, dialogic analysis recognizes the legitimacy of authorial actions transcending reality - of which social engineering can be a part - and demands responsibility for them from their actors in a critical dialogue. Now we turn to comparing and contrasting dialogic analysis and discourse analysis through a dialogue among us.
Conclusion
Question#1: Is discourse analysis always positivist and monologic?
Tina: I think we should be careful with equating all types of discourse analysis with positivism. Would it be more convincing to use the distinction between monologic and dialogic science? For example, I am thinking of Linell (2009), who distinguishes between monologistic and dialogistic science. Accordingly, he views monologism as a counter-theory to dialogism. As far as language and language use are concerned, there are essentially two authorities to lean on in this contrasting paradigm of science: "the individual speakers and the language system, the latter of course being ultimately (at least partly) based on implicit social contracts among users. These are the sovereign 'monological' meaning-determiners." (p. 35). With such a distinction, the monologic paradigm is understood as containing positivism or, as Bakhtin terms it, "exact science" (Bakhtin, 1986), among other more or less monologistic approaches to science, language, and dialogue. What do you think?
Ana: I think that discourse analysis in its ideological approach is always positivist - because it tries to capture the "objective," the "given," "how things really are," the phenomenon as it is in its essence, independent of anyone's subjectivity. For David Skidmore, it is not important what the students or teachers think as people, why they think so, what it might mean to them, etc. - i.e., all that constitutes a dialogic revelation of their voice in an encounter with the text and with each other. What is important for him are the forms and processes of ANY dialogue as a process in which human subjectivities figure as ephemeral, local input into a much more universal systematic process, conceptualized as something that (positively) exists, i.e., as a positive given that any independent, disinterested researcher (i.e., without subjectivity) could discover, observe, describe, analyze, and interpret, coming to a consensus with others that they are observing the same given phenomenon. In that sense, discourse analysis strives for a consensus about the assumed given reality - i.e., it is positivist.
In fact, Case#3 actually beautifully shows that looking for the formal characteristics of a dialogic pattern can lead one to conclude that there is dialogue where it does not actually exist in the actual pedagogy. In Case#3, we can see how a discourse analysis can completely miss finding a dialogue when it only looks at superficial structural and functional patterns. Thus, in this case we can discern what seems to be just the beginning of a serious dialogue, but only for the children, and NOT for the teacher, who seems to have followed the preset instructions of the didactic material rather than to have engaged in and tried to lead an authentic dialogue! The teacher's remarks in lines #11 and #33 interrupt and stop the actual dialogues that the children are trying to have amongst themselves. What do you think?
Tina: I fully agree with your interpretation of the episodes: how the teacher deliberately chose to kill the dialogicity in her preplanned effort to pursue her teaching, which relied on instructional material about how to teach this story. She therefore seemed to forget to be attuned to the emerging ontological dialogue that took place between the children (as in our Case#1, where some peers continued to engage vividly in a conversation about the mystery of gravity when the teacher decided to suddenly move on). Line #11 in Case#3 is a bit complex to analyze, though, due to the fact that we do not know how the peer discussion (and the discussion between the students and the teacher) really unfolded. There is a long transcriptional break because the researcher had to put in a new audiotape for the following recording. So, I am not sure whether the teacher interrupted the discussion abruptly or not. [Ana: Good point!!] We know nothing about the adjacent preceding conversational episode, unfortunately. However, as far as I can see, I do think that Skidmore also noticed this and was quite concerned about it throughout his analysis of precisely this case. What do you think was missing? What do you mean should be added in order to make it an even more dialogical analysis?
Ana, let me now turn to your proposed take on positivism, as you identify some fundamental features here above. They are thought-provoking, but I think we must discuss how positivism could be, or should be, defined in relation to existing interpretations of the term. In my eyes, there have been a number of definitions throughout history when it comes to the scientific paradigm of positivism. On the one hand, some scholars would agree that just seeking after visible empirical realities, in terms of what is "positively given," is a kind of general positivism (cf. Hall, 1987; Ritzer & Stepnisky, 2018, p. 108). But on the other hand, it is commonly argued that positivism in the strictest methodological sense has more to do with the dominant norm of studying social phenomena with the same methods as the natural sciences. One crucial implication following from such a paradigmatic guideline is the focus on discovering stable, invariable, and universal laws and outcomes which are both possible to generalize and predict in subsequent studies.
However, consider poststructuralist (and postmodern) discourse analyses, for example, critical discourse analysis and discursive psychology (e.g., Potter's version, see Potter, 1996). Are you sure that these analytic approaches also assume a (pre)given essence, seeking mere objectivity? I thought they were highly relativistic perspectives and, thus, anti-essentialist in their postmodern outlook, although I can understand what you are problematizing when stating that they have an ambition to be "objective," i.e., independent of human subjectivity, and so forth. The latter is, interestingly, also discussed in Sullivan's (2011) book on dialogic analyses, in the light of a variety of discourse analyses. He seems to support the idea that dialogical research does not neglect human subjectivities, neither the participants' nor the researcher's (although he does not seem to contrast it with positivism, as far as I can see). For instance, he says:

In other varieties of discourse analysis, the temptation is to uncover the power dynamics, including unconscious, social and historical power dynamics, which are responsible for the organization of truth-claims in discourses (e.g., Fairclough, 1992; Parker, 1992; Walkerdine, 1987). Much of this suspicion of the truth-claims of the talk derives from French philosophy, including Jacques Lacan, Ferdinand de Saussure, Roland Barthes and Michel Foucault (see Kress, 2001). They argue that the author is one who reproduces and adds to social meanings but whose intentions are largely irrelevant to the organization and study of the talk. In both these varieties of discourse analysis, the text is an object of suspicion and the author is ambivalently spoken of as either a strategic agent or ultimately irrelevant to the production of the text (Sullivan, 2011, p. 10).
Furthermore, I agree that it is important to notice how discourse analysis may be based on arriving at consensual facts or truths. I prefer to conceptualize this monological dimension as a finalizing research approach that stands in sharp contrast to a non-finalizing methodology in which the authors, as we try to do here, never claim to have arrived at final truths. I therefore see our suggested conclusions as tentative, highly temporary instances of finalization, left open for validation by the research communities and all the readers and participants involved, including us. Our positions are not fixed, and we may find it reasonable to deconstruct our insights later on, in response to critique or questions addressed to us.
Eugene: I think ALL pure discourse analysis, as such, is monologic and positivist, unless it is deeply embedded in a dialogic analysis. I think discourse analysis as such is positivist because it mainly focuses on how things really are - i.e., on the given - rather than on what the things subjectively and authorially mean for different people, including the researchers, in an unfolding dialogic contact. Of course, discourse analysis cannot escape meaning/sense making, but it is often very uncomfortable with deepening meaning/sense, as it is afraid to lose its objectivity, generalizability, and validity.
I think that there is a confusion of sociocultural and cultural-historical contextual positivism - which fights universal and decontextualized positivism - with dialogism. Sociocultural and cultural-historical contextual positivism (e.g., Vygotsky and neo-Vygotskians) studies how culture, history, institutions, economy, and so on (i.e., diverse forms of the contextually given) shape, mediate, and thus pattern human behavior, activity, and discourse. In contrast, dialogism studies how people transcend, author, address, and reply to their diverse given contexts - i.e., the culturally, socially, biologically, historically, politically, and economically given. I want to contrast contextual positivism and dialogism with the following quotes about child development:
Contextual sociocultural positivism
The following quote from Per Linell's book nicely illustrates the contextual sociocultural positivism that I am talking about:

According to a Vygotskyan sociocultural theory, the individual also partially repeats the sociohistorical evolution. What the child today learns in culture and in school in, say, mathematics took humanity centuries and millennia to develop. Nowadays, individuals and groups are crucially dependent on the support of cultural (cognitive) artifacts, such as pen-and-paper, the abacus, the slide rule, the mini-calculator or personal computer, but with the help of these, they learn to master complex mathematical operations and can move into domains of much more advanced knowledge than was ever possible for earlier generations who belonged to other sociocultures (Linell, 2009, p. 253).
Dialogism
In contrast, dialogism can be illustrated by Alexander Lobok's critique of traditional psychology:

For an 'objective' external onlooker, the childhood of different children is largely indistinguishable. All children play certain games, absorbedly listen to fairytales, react to various events, and so on. In fact, nearly all modern psychology research testifies to these 'childhood uniformities' and their typologies. The reason for this supposed uniformity is a flaw in the main approach of modern psychology. Modern psychology often focuses on universal, generalizable, predictable, and regular principles, which is the standard of the science. Anything else is viewed as nonscientific. How else can it be?!
The problem with this conventional approach to psychology, however, is that the human being is the only 'object' in the Universe that is defined by a subjective cognizing world of her or his own, building above the subjective lived experiences and feelings and redefining them - a world, unique for each person, which cannot possibly be viewed from outside, except for some of the outward objective artifact manifestations of this subjective cognizing world. If so, a question emerges: can a particular human being, his/her particular and unique subjective cognizing world, be a subject of science - a subject of scientific observation and interpretation? Can a particular child with his/her unique subjective world, subjective Cosmos, not overlapping with the subjective cognizing worlds of all other people in principle, be a subject of science?
Thus, for a researcher, it would appear strange to avoid addressing this individually subjective world, since it is exactly the disparities of people's inner subjective experiences that, in all likelihood, make up our essence as humans. It is not what a person has in common with other people that makes her or him become a unique personality. On the contrary, what makes one a genuine person is precisely what he or she by no means shares with the others. I strongly argue that the phenomenon of childhood is not defined by those things that make children of a certain age group category look mostly alike. Childhood, rather, is made of
129 Teacher: So, you're sayin' the force of gravity is pulling the book down at a different time than the paper.
131 Student: Yeah.
132 Rachel: Yeah, probably. And, sometimes it's pulling it down at the same time, or pulling the paper down-
134 Alison or Brianna[?]: Before the book.
135 Brianna[?]: And then the book, and then the paper [?]-
136 Rachel: -before the book and then the book's pulling it down before the paper.
197 Teacher: But, you found out something different.
198 Ebony [nods]: The book-the paper fell first.
199 Teacher: How could that be?
200 Ebony: I don't know. [Ebony shrugs his shoulders.]
As Tina points out, when the children continue laughing about "tired gravity" (lines 209-222), the teacher tries to redirect the children to engage more seriously by making the transition in line 223, "We're going to do something different now with the piece of paper..." This effectively ends the laughter.
<<Tara Ratnam, feedback, 2019-02-06: Here, I don't agree with the teacher that students got silly or stuck. I think it is the teacher who missed seeing its potential for further exploration. I am in the field observing/participating in preservice teachers' 'simulation lessons'. Time after time I've seen student teachers simply letting go of students' diverse ideas. For the most part, they just allow students to say what they think, and where it supports their (student teachers') point, they pick it up. The rest fall by the wayside. In a physics class (in the introductory part of a lesson on Newton's first law), to the question of what happens if the bus you are traveling in suddenly brakes, there were three different answers: "I fall forward", "I fall backwards" and "I fall forward first and then backwards". The teacher went on to the next question she had planned to ask without stopping to problematise why 3 students spoke differently of their experiences of the same phenomenon.>> | 2019-03-29T09:25:17.316Z | 2019-03-06T00:00:00.000 | {
"year": 2019,
"sha1": "53e564b209c030339e1972a488621437ec62288e",
"oa_license": "CCBY",
"oa_url": "http://dpj.pitt.edu/ojs/dpj1/article/download/272/183",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "53e564b209c030339e1972a488621437ec62288e",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
271966609 | pes2o/s2orc | v3-fos-license | Extremely uncommon torsion and communicating ruptured rudimentary horn pregnancy at third trimester: A case report
Torsion and rupture are life-threatening emergencies in rudimentary horn pregnancy, an extremely rare type of ectopic pregnancy. This case report aims to share the diagnosis and treatment of a patient with torsion and a ruptured horn pregnancy in a setting with limited resources. It highlights the challenges faced and the strategies employed to ensure appropriate care. A 38-year-old woman, gravida 2, para 1, presented to the Obstetrics and Gynaecology (OBGYN) Department of Hiwot Fana University Hospital with a diagnosis of uterine rupture after presenting with a complaint of pushing-down pain of 1 hour's duration, decreased fetal movement of 1 day's duration, and sudden, severe lower abdominal pain and distension. Conservative management was chosen, but deteriorating symptoms necessitated an emergency laparotomy, which confirmed a ruptured rudimentary horn pregnancy; the horn was surgically excised. Ruptured rudimentary horn pregnancy with torsion is an extremely uncommon and perilous obstetric emergency that necessitates swift diagnosis and surgical intervention. For advanced rudimentary horn pregnancy, laparotomy combined with horn removal continues to be the gold standard of therapy. Healthcare providers can improve patient outcomes and alleviate the burden of life-threatening conditions by promoting multidisciplinary collaboration and embracing innovative, technologically advanced techniques.
Introduction
A rudimentary horn pregnancy occurs when a fertilized egg implants and develops in an undeveloped area of the uterus. 1 This underdeveloped part of the uterus is called the rudimentary horn of a unicornuate uterus. An incidence of 1 in 75,000-150,000 pregnancies has been observed for rudimentary horn pregnancy, which is an extremely rare kind of ectopic pregnancy. 2 Torsion and rupture of a horn pregnancy are extremely uncommon and can be severe, life-threatening emergencies. This case report aims to share the diagnosis and treatment of a patient with torsion and a ruptured horn pregnancy in a setting with limited resources. It highlights the challenges faced and the strategies employed to ensure appropriate care.

Case presentation

The patient was acutely sick-looking, and on physical examination she was in significant distress, with marked tenderness and guarding in the lower abdomen. Her vital signs were as follows: pulse rate = 138 beats/min, respiratory rate = 26/min, and oxygen saturation off oxygen = 89%. These findings revealed tachycardia and hypotension; although she was not yet in a state of shock, they suggested that she was in a critical state.
After observing the symptoms and suspecting uterine rupture, a prompt ultrasound was conducted. The ultrasound showed free fluid in the peritoneal cavity and a right (Rt) cornual pregnancy sized 32 weeks by femur length (FL); no fetal heartbeat (FHB) was seen, and the uterus was observed to be empty. Unfortunately, due to limited resources, we were unable to use further imaging techniques. Nevertheless, the ultrasound findings strongly indicated a right cornual pregnancy, with uterine rupture to be ruled out (R/O), which required immediate exploration.
The patient was immediately prepared for emergency laparotomy. Intraoperatively, a right-sided ruptured rudimentary communicating horn pregnancy was confirmed, with evidence of torsion of the rudimentary horn, rotated 1080° clockwise at the base, as well as three sites of rupture: anterior, at the fundus, and posterior; the fetus was inside the rudimentary horn (Figure 1(a) and (b)). Due to the complexity of the case and limited resources, the surgical team faced challenges during the procedure. A skilled and experienced surgical team performed the excision of the rudimentary horn (Figure 1(c)), with delivery of a 1.8 kg freshly dead female fetus (Figure 1(d)); right-side salpingo-oophorectomy was then performed (Figure 1(e)), and hemostasis was secured to control the bleeding. Intraoperative blood transfusion was required due to significant blood loss, and 2 units of blood were transfused.
Patient follow-up
Postoperatively, the patient was closely monitored in the intensive care unit. She received intravenous antibiotics and analgesics to prevent infection and manage pain, and she was provided with comprehensive counseling on family planning and contraceptive options. Her recovery after surgery showed improvement, and she was discharged from the hospital in stable condition. She was advised to attend regular follow-up appointments to monitor her recovery and reproductive health.
Discussion
Third-trimester rupture of a rudimentary horn pregnancy with torsion is an extremely rare and potentially fatal condition, particularly in low-resource settings. 1 Prompt diagnosis and surgical intervention are critical for avoiding serious complications. 3 In managing such complex obstetric emergencies in resource-constrained environments, clinical acumen, timely recognition, and multidisciplinary collaboration are critical. 4 Effective management of ruptured rudimentary horn pregnancies relies heavily on collaboration among various medical specialties. Timely diagnosis and appropriate surgical intervention can be ensured through the early involvement of obstetricians, radiologists, and experienced laparoscopic surgeons. 4 Despite significant advancements in diagnosing and treating ruptured rudimentary horn pregnancies, this condition continues to pose a life-threatening risk, demanding immediate intervention to prevent adverse maternal outcomes. 5 An illustrative case reported by Houmaid and Hilali involving rupture at 16 weeks of gestation emphasizes the severity of this condition and highlights the crucial need for enhanced awareness and education among healthcare providers. 6 Ruptured rudimentary horn pregnancy with torsion is an extremely uncommon and perilous obstetric emergency that necessitates swift diagnosis and surgical intervention. 1 The scarcity of reported cases in the medical literature reflects the difficulties in recognizing this condition and the lack of awareness among healthcare professionals. In resource-limited settings, delays in seeking antenatal care (ANC) and limited access to advanced imaging modalities can exacerbate the challenges in diagnosing this condition. 7 Considering the rarity of this condition, it is imperative to enhance awareness among healthcare practitioners, particularly those working in low-resource settings, to facilitate early identification and appropriate management. 1 Successful outcomes rely heavily on multidisciplinary collaboration between obstetricians, radiologists, and experienced laparoscopic surgeons.
Laparoscopic approaches have emerged as effective and less invasive methods for managing ruptured rudimentary horn pregnancies. 4 These techniques have been shown to minimize postoperative complications and accelerate recovery. 8,9 However, the widespread implementation of laparoscopy in resource-constrained environments may be limited by factors such as the availability of skilled surgeons and advanced equipment.
Conclusions
In conclusion, the diagnosis and management of ruptured rudimentary horn pregnancy with torsion pose substantial challenges in both well-equipped healthcare settings and resource-constrained environments.
First-trimester ultrasonography is the only sensitive, noninvasive diagnostic method known for rudimentary horn pregnancy. For advanced rudimentary horn pregnancy, laparotomy combined with horn removal continues to be the gold standard of therapy. In addition, by fostering multidisciplinary collaboration and embracing innovative, technologically advanced techniques, healthcare providers can enhance patient outcomes and alleviate the burden of this potentially life-threatening condition.
Future research endeavors should prioritize increasing awareness and knowledge of this rare condition among healthcare providers, particularly in regions where access to advanced imaging is restricted. In addition, conducting studies comparing various surgical approaches and their respective outcomes could contribute to refining the management of ruptured rudimentary horn pregnancies.
Timeline
On July 1, 2023, the patient was admitted, and at that time management and investigations were started. On the day of her admission, she underwent an emergency laparotomy, and she received postoperative care for 10 days. It took 3 months to prepare this case report, during which the patient's consent was obtained and the case was presented to the ethical committee and the Obstetrics Department at Hiwot Fana Specialized Hospital.
Figure 1. (a) Right rudimentary horn pregnancy with torsion. (b) Rudimentary horn pregnancy with rupture anteriorly. (c) The cavity after extracting the fetus from the ruptured rudimentary horn. (d) The 1.8 kg extracted dead fetus. (e) Uterus after performing right-side salpingo-oophorectomy. | 2024-08-29T05:05:39.751Z | 2024-01-01T00:00:00.000 | {
"year": 2024,
"sha1": "bb1027af45ed2257a08fb7579cee7458e7f4d56f",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bb1027af45ed2257a08fb7579cee7458e7f4d56f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266976657 | pes2o/s2orc | v3-fos-license | ZIP4 upregulation aggravates nucleus pulposus cell degradation by promoting inflammation and oxidative stress by mediating the HDAC4-FoxO3a axis
Background: Extracellular matrix metabolism dysregulation in nucleus pulposus (NP) cells represents a crucial pathophysiological feature of intervertebral disc degeneration (IDD). Our study elucidates the role and mechanism of Testis expressed 11 (TEX11, also called ZIP4) in extracellular matrix degradation in the NP. Materials and methods: Interleukin-1β (IL-1β) and H2O2 were used to treat NP cells to establish an IDD cell model. Normal NP tissues and NP tissues from IDD patients were harvested. ZIP4 mRNA and protein profiles in NP cells and tissues were examined. Enzyme-linked immunosorbent assay (ELISA) confirmed the profiles of TNF-α, IL-6, MDA, and SOD in NP cells. The alterations of reactive oxygen species (ROS), lactate dehydrogenase (LDH), COX2, iNOS, MMP-3, MMP-13, collagen II, aggrecan, FoxO3a, histone deacetylase 4 (HDAC4), Sirt1, and NF-κB levels in NP cells were determined using different assays. Results: The ZIP4 profile increased in the NP tissues of IDD patients and in IL-1β- or H2O2-treated NP cells. ZIP4 upregulation bolstered inflammation and oxidative stress in NP cells undergoing IL-1β treatment and exacerbated their extracellular matrix degradation, whereas ZIP4 knockdown produced the opposite outcome. Mechanistically, ZIP4 upregulated HDAC4 and enhanced NF-κB phosphorylation while repressing Sirt1 and FoxO3a phosphorylation levels. HDAC4 knockdown or Sirt1 promotion attenuated the effects mediated by ZIP4 overexpression in NP cells. Conclusions: ZIP4 upregulation aggravates the extracellular matrix (ECM) degradation of NP cells by mediating inflammation and oxidative stress through the HDAC4-FoxO3a axis.
INTRODUCTION
Low back pain (LBP) is a significant clinical symptom that affects nearly 80% of the population and remains one of the leading causes of disability-adjusted life-years (DALYs) worldwide [1,2]. Notably, intervertebral disc degeneration (IDD) has been regarded as a vital contributor to LBP, accounting for 26-42% of patients with LBP [3]. IDD arises from multiple factors, such as inflammation, oxidative stress, and mechanical stress. These factors facilitate the decline of nucleus pulposus (NP) progenitor cells, thus culminating in intervertebral disc dysfunction and structural destruction [4]. IDD has a sophisticated pathogenesis that has not been completely investigated. Current therapies aimed at reducing or controlling pain do not reverse the process of IDD [2]. Therefore, we need to delve into the pathogenesis of IDD to discover novel potential treatment options.
The disruption of the equilibrium between extracellular matrix (ECM) synthesis and degradation plays a contributory role in instigating the disturbance observed in matrix components. Dysregulated expression and activation of matrix metalloproteinases (MMPs) are significant factors in ECM degradation [5]. MMPs, a protease family that depends on zinc and calcium ions, participate extensively in degrading all kinds of ECM matrices in the body [6]. Zinc ions are important constituents of metalloproteinases, and changes in the concentration of intracellular zinc ions pertain to IDD [7]. Zinc homeostasis is indispensable for sustaining normal cellular functions [8]. The zinc transporter family, one of the molecular mechanisms of zinc homeostasis, exerts a significant function in regulating the dynamic equilibrium of zinc ions [9]. ZIP4 belongs to the SLC39A family of zinc transporters and exhibits altered levels in numerous types of cancer, such as ovarian cancer [10] and non-small cell lung cancer [11]. It facilitates cancer cell proliferation, migration, invasion, and metastasis. Notwithstanding, how ZIP4 functions under IDD circumstances remains poorly understood.
Histone deacetylase 4 (HDAC4), a member of the HDAC family, functions importantly in transcriptional regulation and cell cycle development [12]. Reportedly, HDAC4 expression is upregulated in the intervertebral disc tissues of IDD mice, while HDAC4 overexpression bolsters NP cell apoptosis and exacerbates IDD in mice [13]. FoxO3a, a constituent of the forkhead transcription factor family, engages in a plethora of biological activities, including cell proliferation, apoptosis, oxidative stress, and inflammation [14], and demonstrates multiple functions, such as regulation of proliferation, apoptosis, the cell cycle, and DNA damage [15]. Enhanced FoxO3a fosters NP cell proliferation and suppresses NP cell apoptosis, hence mitigating IDD [16]. FoxO3a is known as a downstream molecule of HDAC4. Its functional mechanism in IDD still needs more investigation.
In this investigation, a noteworthy elevation in the profile of ZIP4 was discovered in both the NP tissues of IDD patients and NP cells following IL-1β treatment. ZIP4 overexpression facilitated oxidative stress, inflammation, apoptosis, and ECM degradation in NP cells treated with IL-1β. Regarding the mechanism, ZIP4 downregulated the HDAC4-FoxO3a pathway; HDAC4 knockdown abated the damage-boosting function of ZIP4 overexpression in the ex vivo IDD model, and FoxO3a knockdown offset the damage-promoting function of ZIP4 in the in vitro IDD model. We hypothesized that ZIP4 upregulation regulates the HDAC4-FoxO3a axis to expedite IDD progression.

MATERIALS AND METHODS
Clinical samples
We harvested NP tissues from 30 patients suffering from IDD (51-71 years of age) and normal NP tissues from 30 patients with spinal cord injury (32-49 years of age) at the Third Hospital of Henan Province. The tissues were snap-frozen in liquid nitrogen and stored at -80° C. Our research, conducted under the approval of the Third Hospital of Henan Province, adhered to the ethical guidelines outlined in the Declaration of Helsinki. Prior to participation, all individuals provided their informed consent by signing the appropriate documentation. The degree of IDD was evaluated by Pfirrmann grade according to T2-weighted section images.
Cell viability detection
The Cell Counting Kit-8 (CCK-8) assay was used to assess cell viability (Cat. No. CC1410-100, G-CLONE, Beijing, China). Briefly, NP cells were seeded into 96-well plates (5,000 cells per well). After 24 hours for cell adhesion and growth, the cells were treated under the different experimental conditions for 24 hours. The CCK-8 reagent was diluted in the appropriate culture medium according to the manufacturer's instructions, and the culture medium was then replaced with the CCK-8 reagent. After incubating the cells for 2 hours, the absorbance of the samples was measured using a microplate reader at a wavelength of 450 nm.
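For illustration only, the normalization behind this readout can be sketched in a few lines of Python; the blank-subtraction scheme and all numeric values below are our own assumptions for demonstration, not data or code from this study.

```python
import numpy as np

def cck8_viability(od_treated, od_control, od_blank):
    """Percent viability from CCK-8 OD450 readings.

    od_treated, od_control: replicate-well absorbances.
    od_blank: mean absorbance of medium-only wells (no cells).
    Viability (%) = (OD_treated - OD_blank) / (mean OD_control - OD_blank) * 100.
    """
    treated = np.asarray(od_treated, dtype=float) - od_blank
    control = np.mean(od_control) - od_blank
    return treated / control * 100.0

# Hypothetical replicate readings (not study data)
blank = 0.08
control_wells = [1.21, 1.18, 1.25]
zip4_oe_wells = [0.84, 0.79, 0.88]  # e.g., ZIP4-overexpressing cells
print(cck8_viability(zip4_oe_wells, control_wells, blank))  # ~ [67.1, 62.6, 70.6] %
```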
Detection of reactive oxygen species (ROS) levels
In accordance with the supplier's guidelines, the measurement of ROS levels in NP cells was performed using 2ʹ,7ʹ-dichlorodihydrofluorescein diacetate (DCFH-DA) staining (Beyotime, Shanghai, China). After the designated treatment, cells in each experimental group were incubated for 30 minutes in the dark at 37° C with 5 μM DCFH-DA (Order No. 35845, Sigma-Aldrich, St. Louis, MO, USA). A fluorescence microscope was used to monitor the fluorescence intensity.
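Fluorescence readouts of this kind are often summarized as a mean DCF intensity per field after background subtraction. The sketch below illustrates one such quantification; the percentile-based background estimate and the synthetic image are assumptions made for demonstration, not the protocol actually used here.

```python
import numpy as np

def mean_dcf_intensity(image, bg_percentile=10.0):
    """Mean DCF fluorescence of a field after crude background subtraction.

    image: 2-D array of pixel intensities from the green (DCF) channel.
    The dimmest pixels (bottom percentile) provide a background estimate.
    """
    img = np.asarray(image, dtype=float)
    background = np.percentile(img, bg_percentile)
    return np.clip(img - background, 0.0, None).mean()

# Synthetic field for demonstration only: dim background plus bright cells
rng = np.random.default_rng(0)
field = rng.normal(loc=40.0, scale=5.0, size=(512, 512))
field[100:200, 100:200] += 120.0  # region of ROS-positive cells
print(round(mean_dcf_intensity(field), 1))
```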
Enzyme-linked immunosorbent assay (ELISA)
The conditioned medium of NP cells in each group was harvested, followed by 15 minutes of centrifugation (5,000 rpm) at 4° C. The supernatant was kept at -80° C for the subsequent experiments. ELISA kits (Westang, Shanghai, China) were used to measure the levels of the inflammatory factors TNF-α (Order No. F02810) and IL-6 (Order No. F01310) in the supernatant of NP cells.
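Cytokine concentrations from such kits are normally read off a standard curve, commonly a four-parameter logistic (4PL) fit. A minimal sketch of that interpolation step is shown below; the standard concentrations, optical densities, and starting parameters are invented for illustration and do not come from the kit's documentation.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50-like), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical TNF-alpha standards (pg/mL) and mean OD450 readings
std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_od = np.array([0.09, 0.15, 0.27, 0.48, 0.83, 1.32, 1.86, 2.25])
params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.2, 180.0, 2.6])

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL to map a sample OD back to concentration."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.65, *params))  # sample OD -> estimated pg/mL
```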
Lactate dehydrogenase (LDH) and oxidative stress mediator detection
The LDH Cytotoxicity Assay Kit (Cat. No. C0016, Beyotime, China) was used to gauge the LDH level in NP cells undergoing IL-1β treatment, in keeping with the manufacturer's instructions. Commercial kits for MDA (Cat. No. A003-1-2) and SOD (Cat. No. A001-3-2), acquired from the Nanjing Jiancheng Bioengineering Institute (Jiangsu, China), were employed to assess the levels of the oxidative stress mediators (MDA, SOD) in NP cells. These steps were carried out meticulously as per the manufacturers' instructions.
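LDH-release assays of this type typically report cytotoxicity relative to spontaneous- and maximum-release control wells. Below is a minimal sketch of that standard calculation; the control-well scheme and the numbers are generic assumptions rather than the Beyotime kit's exact worksheet.

```python
def ldh_cytotoxicity(od_sample, od_spontaneous, od_maximum):
    """Cytotoxicity (%) from LDH-release absorbances.

    od_sample:      treated wells
    od_spontaneous: untreated cells (baseline LDH leakage)
    od_maximum:     fully lysed cells (total releasable LDH)
    """
    return (od_sample - od_spontaneous) / (od_maximum - od_spontaneous) * 100.0

# Hypothetical readings (not study data)
print(ldh_cytotoxicity(od_sample=0.92, od_spontaneous=0.35, od_maximum=1.80))  # ~39.3 %
```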
Statistical analysis
All experimental procedures were replicated three times to ensure the accuracy and reliability of the outcomes. GraphPad Prism 8.0 software (GraphPad Inc., San Diego, CA, USA) was used for data analysis. The data are displayed as the mean ± standard deviation (SD). To compare two groups, an independent-samples t test was implemented, whereas for comparisons among multiple groups, one-way analysis of variance (ANOVA) was utilized. A significance level of P < 0.05 was considered indicative of significant differences.
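The two comparisons described here map directly onto standard routines. As a hedged illustration, the following snippet mirrors the analysis with made-up measurements; the group values are invented, and the actual study used GraphPad Prism rather than this code.

```python
import numpy as np
from scipy import stats

# Hypothetical relative ZIP4 mRNA levels, three replicates per group
control = np.array([1.00, 0.95, 1.05])
il1b_20 = np.array([2.10, 2.35, 2.22])
il1b_50 = np.array([3.05, 2.88, 3.20])

# Two groups: independent-samples t test
t_stat, p_two = stats.ttest_ind(control, il1b_20)
print(f"t = {t_stat:.2f}, p = {p_two:.4f}")

# Three or more groups: one-way ANOVA
f_stat, p_anova = stats.f_oneway(control, il1b_20, il1b_50)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}; significant: {p_anova < 0.05}")
```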
Availability of data
The datasets generated and analyzed during the current study are available from the corresponding author on reasonable request.

RESULTS
The ZIP4 profile increased in IDD NP cell models
In an effort to investigate the expression characteristics of ZIP4 in the ex vivo IDD model, we exposed NP cells to varying concentrations of IL-1β (5, 10, 20, and 50 ng/ml) for 24 hours. RT-PCR and western blot analyses indicated that, compared to the CON group, ZIP4, MMP-3, and MMP-13 levels in NP cells exhibited a concentration-dependent increase in response to IL-1β treatment, while collagen II and aggrecan levels were repressed (Figure 1A, 1B). Following the preceding experiment, NP cells were subjected to various concentrations of H2O2 (10, 25, 50, 100 μM). RT-PCR and western blot analyses revealed that, when compared to the CON group, H2O2 treatment led to an augmentation in the profiles of ZIP4, MMP-3, and MMP-13 in NP cells while concurrently reducing collagen II and aggrecan levels (Figure 1C, 1D). These phenomena demonstrated that the ZIP4 level is increased in NP cells treated with IL-1β or H2O2.
ZIP4 level is increased in the NP tissues of IDD patients
In this investigation, NP tissues from both IDD patients and normal patients were harvested. The mRNA profile of ZIP4 was subsequently analyzed using RT-PCR. Based on the obtained data, a notable increase in the ZIP4 mRNA level was observed in the IDD patient NP tissues compared to the non-IDD patient NP tissues (Figure 2A). In addition, the ZIP4 mRNA level was elevated in grade III-IV IDD patients (vs. grade I-II IDD patients; Figure 2B), and the ZIP4 level had a positive relationship with the Pfirrmann grades of the IDD patients (Figure 2C). Through western blot analysis, it was revealed that the protein expression of ZIP4 exhibited a remarkable augmentation in the NP tissues obtained from patients diagnosed with IDD, in stark contrast to the protein level observed in the NP tissues of normal patients (Figure 2D).
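A positive relationship between an ordinal Pfirrmann grade and an expression level is most naturally tested with a rank correlation. The sketch below shows such a test on invented per-patient values; the paper does not state which correlation statistic was actually computed, so this is an assumption-laden illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient data: Pfirrmann grade (ordinal) and relative ZIP4 mRNA
grades = np.array([1, 1, 2, 2, 2, 3, 3, 3, 4, 4])
zip4 = np.array([1.1, 0.9, 1.4, 1.6, 1.3, 2.2, 2.0, 2.5, 3.1, 2.8])

rho, p = stats.spearmanr(grades, zip4)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # positive rho: level rises with grade
```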
ZIP4 upregulation aggravates ECM degradation in NP cells following IL-1β treatment
To investigate the role of ZIP4 in IDD, a ZIP4 overexpression cell model was constructed (Figure 3A). Cell viability was determined, and the results showed that ZIP4 overexpression reduced cell viability (Figure 3B). In addition, ZIP4 overexpression promoted LDH, TNF-α, and IL-6 production (Figure 3C-3E). Further experiments showed that the levels of COX2, iNOS, MDA, and ROS were elevated after ZIP4 upregulation, while the SOD level was reduced (Figure 3F-3I). Western blotting revealed that ECM degradation in NP cells was enhanced following ZIP4 upregulation (Figure 3J). Next, NP cells were treated with IL-1β (20 ng/ml) for 24 hours following transfection. Compared to the CON group, IL-1β increased LDH release in NP cells, and when matched against the vector+IL-1β group, ZIP4 overexpression further augmented LDH release in NP cells undergoing IL-1β treatment (Figure 4A). ELISA showed that, vis-à-vis CON, an increase was discovered in TNF-α and IL-6 profiles in NP cells subjected to IL-1β treatment; as opposed to the vector+IL-1β group, ZIP4 overexpression boosted the profiles of these inflammatory factors in NP cells subsequent to IL-1β treatment (Figure 4B, 4C). ZIP4 overexpression also increased COX2 and iNOS expression in the cells (Figure 4D). Moreover, ZIP4 overexpression upregulated MDA and ROS levels and downregulated SOD levels in the cells (Figure 4E-4G). Western blotting showed that the levels of MMP-3 and MMP-13 were significantly elevated in NP cells treated with IL-1β and further elevated after ZIP4 overexpression. By contrast, collagen II and aggrecan were noticeably reduced in the NP cells undergoing IL-1β treatment and further repressed after ZIP4 overexpression (Figure 4H). These findings confirmed that ZIP4 upregulation fostered inflammation, oxidative stress, and apoptosis in NP cells and exacerbated their ECM degradation.
ZIP4 knockdown alleviates ECM degradation in NP cells treated with IL-1β
To delve into the function of ZIP4 in IDD, we transfected NP cells with sh-NC and sh-ZIP4 (Figure 5A). ZIP4 knockdown suppressed LDH, TNF-α, and IL-6 profiles in NP cells exposed to IL-1β (Figure 5B-5D). The western blot results indicated that, in relation to the sh-NC+IL-1β group, ZIP4 knockdown decreased COX2 and iNOS levels in the NP cells (Figure 5E). ZIP4 knockdown lowered MDA and ROS levels and increased SOD levels in NP cells treated with IL-1β (versus sh-NC+IL-1β) (Figure 5F-5H). As suggested by western blotting, compared to the sh-NC+IL-1β group, ZIP4 knockdown repressed MMP-3 and MMP-13 expression and boosted collagen II and aggrecan expression in the treated cells (Figure 5I). These phenomena confirmed that ZIP4 knockdown impeded NP cell inflammation, oxidative stress, and apoptosis and ameliorated ECM degradation in NP cells treated with IL-1β.
The influence of ZIP4 on the HDAC4-FoxO3a pathway
To understand the impact of ZIP4 on the HDAC4/FoxO3a pathway, we gauged HDAC4, FoxO3a, Sirt1, and NF-κB levels via western blotting in NP cells treated with IL-1β subsequent to transfection. In relation to CON, IL-1β promoted the profiles of HDAC4 and NF-κB phosphorylation while reducing FoxO3a phosphorylation and Sirt1 levels in NP cells (Figure 6A, 6B). In contrast to the corresponding control group (vector+IL-1β), ZIP4 overexpression aggravated the elevation of HDAC4 and NF-κB phosphorylation and further reduced FoxO3a phosphorylation and Sirt1 levels in NP cells treated with IL-1β (Figure 6A). In contrast, ZIP4 knockdown (versus sh-NC+IL-1β) reduced HDAC4 and NF-κB phosphorylation and significantly enhanced FoxO3a phosphorylation and Sirt1 levels in NP cells treated with IL-1β (Figure 6B). The above outcomes revealed that ZIP4 promoted the profile of HDAC4, repressed Sirt1, and activated the NF-κB pathway.
HDAC4 knockdown weakened the damage-boosting effects mediated by ZIP4 overexpression
For the purpose of probing the influence of HDAC4 on ZIP4-mediated effects, NP cells were transfected with sh-NC and sh-HDAC4, and western blotting was used to examine the transfection efficiency (Figure 7A). Next, NP cells were transfected with sh-HDAC4 and/or ZIP4 overexpression plasmids and then treated with 20 ng/ml IL-1β (Figure 7E). MDA and ROS levels were lowered, while SOD levels were heightened, in the sh-HDAC4+ZIP4+IL-1β group vis-à-vis the ZIP4+IL-1β group (Figure 7F-7H). Western blotting revealed that MMP-3 and MMP-13 expression was downregulated, while collagen II and aggrecan expression was upregulated, in the sh-HDAC4+ZIP4+IL-1β group compared with the ZIP4+IL-1β group (Figure 7I). Western blot analysis suggested that the levels of HDAC4 and NF-κB were reduced, while FoxO3a phosphorylation and Sirt1 levels were enhanced, in the sh-HDAC4+ZIP4+IL-1β group versus the ZIP4+IL-1β group (Figure 7J). These phenomena confirmed that HDAC4 knockdown weakened the damage-promoting function mediated by ZIP4 overexpression in the in vitro IDD model.
FoxO3a knockdown offsets the damage-promoting function mediated by ZIP4 overexpression
To investigate the influence of Sirt1 on ZIP4-mediated effects, we treated NP cells with the Sirt1 activator Resv (30 μM). The cytotoxicity assay revealed that, in contrast with the ZIP4+IL-1β group, Resv treatment reduced LDH release in ZIP4-overexpressing NP cells treated with IL-1β (Figure 8A). Based on the ELISA results, Resv treatment vigorously mitigated the expression levels of TNF-α and IL-6 in these cells (Figure 8B, 8C). As suggested by western blot data, Resv treatment also repressed the profiles of COX2 and iNOS (Figure 8D, 8E). The levels of MDA and ROS were lowered after Resv treatment, and the level of SOD was enhanced (versus the ZIP4+IL-1β group; Figure 8F-8H). Western blotting confirmed that, in contrast to the ZIP4+IL-1β group, Resv addition suppressed the profiles of MMP-3 and MMP-13 but enhanced those of collagen II and aggrecan in NP cells (Figure 8I). Western blot analysis further revealed that Resv failed to influence HDAC4 expression but enhanced FoxO3a phosphorylation and Sirt1 expression in NP cells following IL-1β treatment, while the NF-κB phosphorylation level was significantly reduced by Resv treatment (Figure 8J). Therefore, Sirt1 activation repressed inflammation, oxidative stress, apoptosis, and ECM degradation in NP cells treated with IL-1β and offset the damage-boosting function mediated by ZIP4 overexpression in the ex vivo IDD model.
DISCUSSION
IDD, a leading contributor to low back pain, neck pain, and related dysfunctions, lays the pathological foundation for intervertebral disc herniation, spinal stenosis, and other diseases [20]. The primary pathological alterations in IDD encompass a loss of NP cells and ECM degradation [21]. Here, we discovered that ZIP4 expression was elevated in an in vitro IDD model and in IDD tissues, and that ZIP4 overexpression acted through the HDAC4-FoxO3a pathway to facilitate inflammation and oxidative stress, hence exacerbating ECM degradation in NP cells. NP cells are indispensable for sustaining the function of the intervertebral disc. Early cell senescence and apoptosis are leading contributors to IDD, and the major manifestations include a reduction in the function and number of NP cells in degenerative intervertebral discs [22]. ECM degradation in the intervertebral disc can be attributed to an imbalance between ECM anabolism and catabolism, lessened ECM synthesis, and heightened activity of proteases that degrade ECM, directly culminating in an increase in ECM catabolism [23]. Inflammatory factors exert a significant function in IDD progression. The interplay and aberrant expression of inflammatory factors can disturb the balance of ECM decomposition and metabolism in the intervertebral disc, resulting in inflammatory responses and engaging in or boosting IDD development [24,25]. IL-1β, a significant proinflammatory cytokine in the interleukin-1 (IL-1) family, is the primary cause of proteoglycan degradation in the intervertebral disc and an important driver of high MMP expression. It is also a significant cytokine that induces IDD [26]. Moreover, H2O2 can also lead to oxidative stress, intervertebral disc cell death and ECM degradation [27]. Here, we established an in vitro IDD model by treating NP cells with IL-1β or H2O2. We uncovered that IL-1β bolstered TNF-α, IL-6, COX2, and iNOS expression in NP cells; MDA and ROS levels and MMP-3 and MMP-13 profiles dramatically increased in IL-1β- or H2O2-treated NP cells; and the profiles of SOD, collagen II, and aggrecan were dramatically decreased. Our findings revealed that IL-1β facilitated NP cell inflammation, oxidative stress, and ECM degradation.
Members of the zinc transporter family participate in IDD occurrence and development. For instance, zinc transporter ZIP8 (ZIP8) expression is notably heightened in degenerated NP tissues, while ZIP8 downregulation hampers the profiles of ECM-degrading enzymes and restores those of ECM proteins in NP cells treated with IL-1β, thereby retarding IDD progression [28]. ZIP8 expression is remarkably elevated in NP cells subsequent to IL-1β treatment. ZIP8 knockdown alleviates the ECM degradation of NP cells elicited by inflammatory stimulation [29]. ZIP4 is a member of the zinc transporter SLC39A/ZIP family. Reportedly, ZIP4 overexpression suppresses hepatoma carcinoma cell apoptosis and bolsters their migration and invasion [30]. ZIP4 is highly expressed in pancreatic cancer tissues. ZIP4 knockdown dampens pancreatic cancer cell migration and invasion, slowing pancreatic cancer development [31]. Here, we discovered that, compared to the corresponding control group, the ZIP4 profile was elevated in NP cells following IL-1β treatment and in the NP tissues of IDD patients. ZIP4 overexpression boosted inflammation, oxidative stress, and apoptosis in NP cells treated with IL-1β and exacerbated their ECM degradation, whereas ZIP4 knockdown reversed these effects. These discoveries confirmed that ZIP4, a pathogenic factor for IDD, can speed up IDD development.
Studies in recent years have suggested that HDAC4 exerts a regulatory function in endplate chondrocyte degeneration and NP cell degeneration [32]. For instance, HDAC4 bolsters morphological alterations in endplate chondrocytes and augments ECM degradation and endplate cartilage degeneration [33]. In an IDD mouse model, GSK3β was downregulated in intervertebral disc tissues, and upregulating GSK3β mitigated NP cell apoptosis and disc degeneration in IDD mice by repressing HDAC4 [13].
FoxO3a is a downstream molecule of HDAC4. HDAC4 can promote FoxO3a deacetylation to increase its transcriptional activity [34]. FoxO3a activation promotes superoxide dismutase 2 (SOD2) synthesis to curb oxidative stress, hence effectively retarding IDD [35]. Here, we uncovered that ZIP4 upregulated HDAC4 and reduced FoxO3a phosphorylation. HDAC4 downregulation weakened the damage-promoting function mediated by ZIP4 overexpression in the in vitro IDD model, accompanied by upregulation of FoxO3a phosphorylation. These outcomes demonstrated that ZIP4 modulated the HDAC4-FoxO3a axis to function in the context of IDD.
Sirt1, a class III histone deacetylase, is extensively involved in the regulation of various age-related cellular mechanisms, encompassing autophagy, apoptosis, energy metabolism, and antiaging processes [36]. During the progression of IDD, Sirt1 is downregulated in degenerative discs and exerts a protective effect by regulating cellular senescence and promoting regeneration [37]. Interestingly, HDAC4 inhibits SIRT1 by inducing the loss of histone acetylation on the SIRT1 promoter region under treatment with the proinflammatory cytokine interferon-gamma (IFN-γ) [38]. In nasal epithelial cells, the HDAC4 profile was elevated upon IL-13 treatment, which repressed SIRT1 and initiated NF-κB signaling; HDAC4 knockdown restored SIRT1, dampened NF-κB signaling, and mitigated inflammatory responses and mucus generation after IL-13 treatment [39]. NF-κB, an indispensable transcription factor, plays a pivotal role in the modulation of IL-1β-elicited cell viability, migration, apoptosis, and inflammatory responses in both human chondrocytes and NP cells [40][41][42]. SIRT1 overexpression reversed IL-1β-elicited ECM degradation and cell apoptosis by deacetylating RelA/p65 and inhibiting NF-κB nuclear translocation [43]. Here, we found that ZIP4 overexpression led to reduced Sirt1 expression and enhanced NF-κB phosphorylation. HDAC4 knockdown enhanced Sirt1 while suppressing NF-κB pathway activation, consistent with the reduced inflammation and oxidative stress in NP cells. Moreover, administration of the Sirt1 activator Resv significantly alleviated the apoptosis, inflammation, ECM degradation, and oxidative stress mediated by IL-1β and ZIP4 overexpression in NP cells. These data suggest that the ZIP4-HDAC4-FoxO3a-Sirt1-NF-κB pathway plays a role in NP cell dysfunction.
However, several shortcomings need to be further investigated in the future. First, the functions of ZIP4 in mediating IDD progression should be confirmed in animals. Second, the upstream mechanism of ZIP4 in NP cells requires more experiments for clarification. Third, since cell death has a pivotal role in the dysfunctions of NP cells in IDD, further experiments should be conducted to determine cell apoptosis or autophagy after selectively regulating ZIP4 expression.
CONCLUSIONS
To summarize, our research reveals that ZIP4 overexpression bolsters inflammation and oxidative stress in NP cells following IL-1β treatment and aggravates ECM degradation. Mechanistically, ZIP4 upregulates HDAC4 and reduces FoxO3a phosphorylation. This study proposes that the novel ZIP4/HDAC4/FoxO3a/Sirt1/NF-κB axis exerts an important function in IDD. Our findings may afford new ways of thinking and novel targets for treating and ameliorating IDD.
Figure 2. ZIP4 expression in the NP tissues of IDD patients. The NP tissues of IDD patients and normal patients were collected. (A) RT-PCR was used to ascertain the ZIP4 mRNA profile. (B) ZIP4 mRNA levels in different grades of IDD patients. (C) Correlation between ZIP4 mRNA level and IDD grade. (D) Western blotting was used to detect ZIP4 protein levels in the NP tissues of both IDD and normal patients. N=3. ***P<0.001.
Figure 3. ZIP4 upregulation aggravates ECM degradation in NP cells. (A) NP cells transfected with the vector and ZIP4 overexpression plasmids. Western blot experiments were used to check the transfection efficiency. (B) CCK8 assay was used for evaluating cell viability. (C) The cytotoxicity detection kit evaluated LDH release in NP cells following IL-1β treatment. (D, E) TNF-α and IL-6 profiles in the cells verified by ELISA. (F) Western blot showing the expression levels of COX2 and iNOS in the cells. (G-I) MDA, SOD, and ROS levels in the cells were determined. Scale bar=100 μm. (J) MMP-3, MMP-13, collagen II and aggrecan levels in the cells confirmed by western blot. N=3. NS P>0.05, *P<0.05, **P<0.01, ***P<0.001 (vs. vector).
"year": 2024,
"sha1": "c1f0d4be0988703880520649f8a3a23dc3040f07",
"oa_license": "CCBY",
"oa_url": "https://www.aging-us.com/article/205412/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "aa950619f97e077e6385d997342d3ade8ed223d2",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
CD146 Expression in Human Breast Cancer Cell Lines Induces Phenotypic and Functional Changes Observed in Epithelial to Mesenchymal Transition
Background
Metastasis is an important step in tumor progression leading to a disseminated and often incurable disease. First steps of metastasis include down-regulation of cell adhesion molecules, alteration of cell polarity and reorganization of the cytoskeleton, modifications associated with enhanced migratory properties and resistance of tumor cells to anoikis. Such modifications resemble Epithelial to Mesenchymal Transition (EMT). In breast cancer, CD146 expression is associated with poor prognosis and enhanced motility.
Methodology/Principal Findings
On 4 different human breast cancer cell lines, we modified CD146 expression either with shRNA technology in CD146-positive cells or with stable transfection of CD146 in negative cells. Modifications in morphology, growth and migration were evaluated. Using Q-RT-PCR, we analyzed the expression of different EMT markers. We demonstrate that high levels of CD146 are associated with loss of cell-cell contacts, expression of EMT markers, increased cell motility and increased resistance to doxorubicin or docetaxel. Experimental modulation of CD146 expression induces changes consistent with the above-described characteristics: morphology, motility, growth in anchorage-independent conditions and Slug mRNA variations are strictly correlated with CD146 expression. These changes are associated with modifications of ER (estrogen receptor) and Erb receptors and are enhanced by simultaneous and opposite modulation of JAM-A, or exposure to heregulin, an erbB4 ligand.
Conclusions
CD146 expression is associated with an EMT phenotype. Several molecules are affected by CD146 expression: direct or indirect signaling contributes to EMT by increasing Slug expression. CD146 may also interact with Erb signaling by modifying cell surface expression of ErbB3 and ErbB4, and is associated with increased resistance to chemotherapy. Antagonistic effects of JAM-A, a tight junction-associated protein, on CD146 pro-migratory effects underline the complexity of the adhesion molecule network in tumor cell migration and metastasis.
Introduction
Metastasis is an important step during the natural history of cancers, as it transforms a local disease into a disseminated and often incurable one. A lot remains to be understood regarding the cellular and molecular mechanisms by which tumor cells leave the original site and re-localize to distant sites. First steps of metastasis include down-regulation of cell adhesion molecules, alteration of cell polarity and reorganization of the cytoskeleton. This leads to enhanced migratory properties and resistance of tumor cells to anoikis. Such modifications resemble the Epithelial to Mesenchymal Transition (EMT) that occurs in physiological and pathological situations [1]. EMT has been classified into three different subtypes, type 3 being associated with tumorigenesis [2]. CD146 (or MCAM, Mel-CAM, MUC18, S-endo1) was first described on malignant melanomas as a melanoma progression antigen [3]. In normal tissues, CD146 is expressed by smooth muscle cells, placental trophoblasts [4] and a subset of activated T cells [5]. CD146 is a component of the inter-endothelial junction [6] and is now recognized as a marker of mesenchymal cells [7]. A recent report supports the importance of CD146 as a marker of bone marrow stromal cells with the ability to transfer the hematopoietic microenvironment to heterotopic sites [8].
CD146 is a 113 kDa glycoprotein that belongs to the immunoglobulin superfamily. It contains five immunoglobulin-like domains, one trans-membrane region and a short cytoplasmic tail. The presence of several protein kinase recognition motifs in the cytoplasmic domain suggests the involvement of CD146 in cell signaling [9]. CD146 mediates homotypic and heterotypic cell-cell interactions, although its ligand or counter-receptor is not known [10]. Its role in endothelial development is suggested by studies in zebrafish [11]. Its function in cell migration has been suggested by several observations [12][13][14][15]. Indeed, forced expression of CD146 in a mouse mammary carcinoma cell line increases its metastatic ability in mouse models [16]. In addition, several reports indicate that CD146 is over-expressed on human prostate cancer cells and that CD146 over-expression increases metastasis of prostate cancer cells in nude mice [17,18]. Similarly, CD146 expression has been associated with advanced tumor stages in human ovarian cancers and pulmonary adenocarcinomas, and predicts early tumor relapse and poor prognosis [19,20]. We have previously reported that CD146 expression is associated with high grade and the triple-negative (ER−/PR−/ERBB2−) phenotype in human primary breast tumors and is included in a stromal gene cluster enriched in mesenchymal genes. In addition, we showed that an increased risk of death is associated with CD146 expression in the epithelial compartment of breast tumors [13]. These findings have been recently confirmed and extended in a study of 505 primary breast tumor tissues by Zeng et al. [21]. The authors report that CD146 expression is associated with triple-negative breast cancers, high tumor stage and poor prognosis, suggesting that CD146 expression might be a potential predictive marker of poor response to treatment. Based on these observations, we investigated whether CD146 expression would induce mesenchymal gene expression in breast carcinoma cell lines. Using four carcinoma cell lines, we show that increased expression of CD146 is associated with loss of cell-cell contacts, enhanced cell migration and increased expression of mesenchymal marker mRNAs. Opposite results were found in carcinoma cells with an epithelial-like phenotype. We further show that down-modulation or over-expression of CD146 is associated with opposite changes in JAM-A expression, in heregulin responses and in resistance to chemotherapy, suggesting that the poor prognosis of CD146-positive breast tumors is related to CD146-induced EMT and increased resistance to chemotherapy.
Results
To assess the potential role of CD146 in EMT, different cell lines were used in this study: MCF-7, SKBR3, CAL51 and MDA-MB-231. Levels of CD146 expression and ER, PR and Her2 status are summarized in Table 1. MCF-7 and SKBR3 have been classified in the luminal molecular subtype [22], CAL51 [23] and MDA-MB-231 [22] in the basal subtype. From these wild-type cell lines, we generated MCF-7 and SKBR3 cell lines which expressed high levels of CD146, as well as CAL51 and MDA-MB-231 cells in which expression of CD146 was significantly down-regulated (see Figure S1 for CD146 expression of these modified cell lines).
CD146 expression is associated with loss of cell-cell contacts
The morphology of the different cell lines was analyzed in a colony scattering assay. To this end, cells were plated at very low density and the morphology of the colonies was analyzed after 5 days. In these conditions, 60% of mock-transfected MCF-7 colonies were compact colonies (Fig. 1A), with a majority of cells presenting cell-cell contacts. When CD146 was over-expressed, MCF-7 cells were no longer able to form cell-cell junctions, and 60% of colonies were scattered colonies (Fig. 1B). In agreement with these results, the knockdown of CD146 in CAL51 cells decreased the percentage of scattered colonies from 45% to 10%. Quantification of compact versus scattered colonies indicated that the loss of cell-cell contact, a mesenchymal characteristic, was associated with CD146 expression, whereas the ability to form cell-cell contacts, an epithelial characteristic, correlated with the lack of CD146 expression (Fig. 1B).
Modulation of CD146 expression induces changes in mRNA levels for markers of EMT
In order to identify the pathways involved in morphological changes controlled by CD146, we used Q-RT-PCR to measure variations of different EMT marker genes as a consequence of the modulation of CD146 expression. At least six different RNA preparations for each modified cell line were compared to mock-transfected cells. We found that CD146 expression in MCF-7 cells was associated with an increased expression of vimentin, N-cadherin and Slug mRNAs (Table 2). Changes in vimentin protein levels were confirmed by immuno-fluorescence analyses (data not shown). Similar results were observed in SKBR3 cells with forced expression of CD146, except for N-cadherin mRNA expression. In CAL51 cells, decreased CD146 expression was associated with increased E-cadherin mRNA expression. Looking at different transcription factors with a known role in EMT, only variations of Slug mRNA systematically correlated with CD146 levels in all four cell lines. In SKBR3 and MDA-MB-231 cells, CD146 expression correlated with increased levels of matrix metalloproteinase (MMP)-2 and MMP-9 expression.
CD146 over-expression induces modifications in Estrogen Receptors and response to heregulin
Having previously shown that CD146+ breast tumors are mainly ER−PR− and that there is no correlation between CD146 expression and ErbB2 status [13], we quantified the transcripts encoding ER (α and β), PR and Erb receptors in MCF-7 cells over-expressing CD146 as compared to control MCF-7 cells (Fig. 2A). We found that CD146 over-expression reduced expression of ERα, ERβ and Her3 receptors, whereas PR and Her4 were up-regulated.
Since forced CD146 expression in MCF-7 cells induces changes in Erb receptors, we tested whether CD146 expression would affect the response to heregulin. We observed a twofold increase in vimentin and N-cadherin expression when mock-transfected MCF-7 cells were cultured with heregulin, while E-cadherin was decreased by 20%. When CD146+ MCF-7 cells were similarly exposed to heregulin, variations in vimentin and N-cadherin expression remained unchanged, but E-cadherin expression was decreased by 40%, suggesting that the EMT-promoting effect of CD146 is augmented by heregulin (Fig. 2B). This effect on E-cadherin expression was accompanied by up-regulation of transcripts encoding Slug and MMP-9.
Modulation of CD146 expression is associated with changes in breast tumor cells behavior
To evaluate whether CD146 expression affected cellular proliferation, we evaluated the proliferation rate in anchorage-dependent and anchorage-independent growth assays. CD146 expression did not modify the expansion rate of the four cell lines in anchorage-dependent conditions (Fig. 3A). These results were confirmed by cell cycle studies (data not shown), in which no difference could be detected as a consequence of CD146 modulation. In contrast, we found that reduced CD146 expression in the MDA-MB-231 and CAL51 cell lines inhibited the anchorage-independent growth rate, while over-expression of CD146 in SKBR3 cells increased their anchorage-independent growth ability (Fig. 3B), suggesting that CD146 contributes to tumor transformation in several breast carcinoma cell lines except MCF-7.
To further explore whether CD146 expression also affects tumor dissemination, we tested the migratory properties of the cell lines manipulated for CD146 expression in Boyden chamber assays (Fig. 3C). The fraction of cells that had migrated was evaluated by colorimetric measurement in triplicate experiments, a semi-automated technique providing more reliable and robust results than manual cell counting (data not shown), provided that sufficient numbers of cells have moved through the filter; this can be achieved only with a later readout (30 hours for the MDA-MB-231 cell line and 96 hours for the slowly migrating MCF-7 cell line) than in most published reports. CD146 down-modulation reduced the migration of CAL51 cells by 47% ± 3% (when compared to mock-transfected cells), whereas over-expression of CD146 in SKBR3 cells increased migration by 58% ± 18%. We were unable to demonstrate any change when working with the MCF-7 cell line, suggesting that forced CD146 expression in MCF-7 cells did not recapitulate the full spectrum of EMT-related changes, possibly due either to antagonizing effects of other molecules, or to partners missing for full signaling in a CD146-dependent pathway.
Finally, we studied the chemosensitivity to docetaxel and doxorubicin according to the level of CD146 expression. Cells were exposed for 72 hours to varying concentrations of docetaxel (0 to 100 nM) or doxorubicin (0 to 10 µM), two agents commonly used in breast cancer treatment (Table 3).
Opposite variations in CD146 and Jam-A expression
Since expression of the Ig superfamily adhesion molecule JAM-A has been controversially associated with the pro-migratory properties of breast cancer cells through the regulation of integrin activity [24][25][26][27], we examined whether variations in CD146 and JAM-A expression revealed a consistent pattern. We observed in the four cell lines that JAM-A expression was inversely related to CD146 expression (Fig. 4A). Furthermore, forced expression of CD146 in MCF-7 cells induced a statistically significant decrease in JAM-A expression (23.3% ± 2.5%, p = 0.0313) (Fig. 4B).
As MCF-7 cells express particularly high levels of JAM-A, we hypothesized that the high level of JAM-A in MCF-7 cells may prevent the increase in migration otherwise induced by CD146 expression. We therefore down-modulated JAM-A expression by transient transfection with three different commercial siRNAs. One of these reduced JAM-A expression at the cell surface by 94% ± 1% (see Figure S2 for JAM-A expression in siRNA-transfected cells). Similarly to previous observations with T47D cells [26], we found that JAM-A inhibition alone induced an increase in the migration of MCF-7 cells (Fig. 4C). When the same siRNA was transfected into MCF-7 cells over-expressing CD146, migration was increased by 30% when compared to JAM-A inhibition alone, suggesting that high levels of JAM-A prevent the acquisition of migratory abilities by MCF-7 cells when these cells are forced to express CD146.
Discussion
EMT is a reversible differentiation process by which epithelial cells lose their characteristics to acquire mesenchymal properties. EMT occurs in a physiologic way during development, and contributes to tissue repair. EMT also provides neoplastic cells with migratory and invasive properties, and this may be one mechanism by which they leave the primary epithelial tumor site and establish metastases [1,2]. The first step of EMT involves the loss of intercellular junctions (tight junctions, adherens junctions and desmosomes). Overexpression of transcription factors including Snail, ZEB or members of the bHLH family suppresses epithelial markers and induces the expression of mesenchymal genes leading to cytoskeletal changes and increased motility and migration [28][29][30].
Here, we show that CD146+ cells lose cell-cell contacts. CD146+ cells also have increased anchorage-independent growth and migratory abilities when compared to their CD146− counterparts. These results are consistent with recently published observations [14,15,21], except for MCF-7 cells, which express JAM-A at high levels. We found that forced expression of CD146 in MCF-7 cells did not by itself modify their migration ability, in contrast to other cell lines, although it induced some of the molecular and cellular changes associated with EMT. When combining JAM-A inhibition and CD146 overexpression, migration of MCF-7 cells was increased when compared to JAM-A-positive cells without CD146 expression. Our results suggest that CD146 and JAM-A exert opposite effects, and that inverse modulation of both molecules increases the amplitude of EMT-related changes in MCF-7 cells. A possible mechanism involves the PI3K-Akt pathway, as it has been shown both that CD146 activates the PI3K-Akt pathway in melanoma cells and that reduced expression of JAM-A increases the pool of available PIP3 [31,32]; one could hypothesize that JAM-A blocks the pro-migratory function of CD146 by trapping PIP3 and that the high expression of JAM-A in MCF-7 cells prevents part of CD146 signaling via PI3K-Akt. Our observation supports the hypothesis that a complex network of adhesion molecules contributes to tumor cell migration and metastasis, with variations in different cell contexts, since a similar mechanism was not active in SKBR3 cells.
Together with loss of cell-cell contacts and increased anchorage-independent growth and migratory abilities, CD146 expression is associated with an increase in vimentin and N-cadherin expression. These results support a role for CD146 in inducing EMT in breast cancer cells. Q-RT-PCR analyses of different transcription factors in four breast cancer cell lines showed that some but not all classical EMT markers varied in relation to changes in CD146 expression; notably, the variation of Slug expression was correlated with the level of CD146 expression at the cell surface. It was first described that Slug transfection in NBT-II cells (bladder carcinoma) induces the first step of EMT to an intermediate stage characterized by modulation of cell-cell adhesion [30]. This intermediate level of EMT could explain why the changes in EMT markers observed in this study are not identical for all cell lines; alternatively, modulation of CD146 may be insufficient to recapitulate the full spectrum of EMT, because CD146 acts in the context of a complex signaling network, as suggested by the above-discussed interaction with JAM-A.
In a genome-wide transcriptional profiling of human breast cancer cell lines, Blick et al. [33] showed that Slug is highly expressed in basal B cell lines that also over-express vimentin, N-cadherin and fibronectin, whereas E-cadherin is down-regulated. In primary tumors, Slug expression is also associated with the basal-like phenotype [34]. We thus suggest that CD146 participates in EMT by increasing Slug expression, although we cannot conclude on the direct or indirect relation between these two molecules. Recently, in addition to increased expression of Slug, Zeng et al. [21] suggested that CD146-induced EMT is associated with the activation of the small GTPase RhoA.
Our results also indicate that the induction of CD146 expression in MCF-7 cells induced a slight but significant down-regulation of ERα (30% when compared to the mock-transfected cell line). The knockdown of ERα results in an increase in Slug mRNA [35], while the activation of ERα induces the down-modulation of Slug expression either by a direct association with the Slug promoter or by inhibiting GSK-3β activity [36]; analyses of 500 breast tumors demonstrated a strong inverse correlation between Slug and ERα expression. In primary breast tumors, CD146 expression is associated with ERα-negative tumors [13]. Thus, it is possible that CD146 modulates Slug expression through ERα signaling and not directly. Heregulin, a ligand of HER receptors and more specifically of ErbB3 and ErbB4, which is essential for ErbB2 phosphorylation in response to heregulin [37], is involved in the progression of different types of human cancers [38] and induces the spread of breast cancer cells in vivo [39]. Heregulin has been shown to induce EMT in SKBR3 cells [40]. Our results indicate that CD146 expression enhanced the response to heregulin, in terms of Slug and MMP2 (increased) and E-cadherin (decreased) expression. This response was associated with the modulation of Erb receptors in CD146-overexpressing cells and more specifically with an increase in ErbB4 expression.
Cytotoxicity assays indicate that an increased level of CD146 expression is associated with increased resistance to doxorubicin and docetaxel (except for MDA-MB-231 cells and docetaxel, possibly due to residual expression of CD146 in the shRNA-transfected cells). Mostert et al. [41] demonstrated that in the normal-like breast cancer subtype (10% of breast cancers), circulating tumor cells (CTC) are CD146-positive and thus potentially more resistant to chemotherapy.
We conclude that CD146 expression is associated with a mesenchymal morphology and phenotype, the modulation of EMT markers and an increase in cell motility. The increase in anchorage-independent growth mediated by CD146 suggests an explanation for the higher tumorigenicity of CD146+ cell lines. In previous work [13] we demonstrated that CD146 expression was associated with poor prognosis in primary breast cancers. Here, we show that CD146 directly or indirectly contributes to EMT in vitro, which may explain the higher metastatic potential and poor prognosis observed in patients with tumors over-expressing CD146. CD146 interacts with Erb signaling by modifying cell surface expression of ErbB3 and ErbB4 and enhancing the response to heregulin. In addition to its relation with EMT, CD146 expression is also associated with increased chemoresistance. All these observations are consistent with CD146 expression on malignant breast tumor cells being an adverse prognostic criterion.
Stable modulation of CD146 expression
Cells were plated at 3×10⁵ cells per well in six-well plates. Transfections were performed using Fugene-6 (Roche Diagnostics, Meylan, France) as directed by the manufacturer. 48 hours after transfection, puromycin (0.8 µg/mL, Sigma-Aldrich) was added. Transfected cell lines were grown in the presence of puromycin.
The negative controls for CD146 expression are the cell lines transfected with the pCMV6-XL4 vector (overexpression) or the TR20003 vector (knockdown).
RNA-interference mediated JAM-A silencing
siRNA duplexes directed against JAM-A were obtained from Sigma-Aldrich. A siRNA that recognizes the green fluorescent protein gene was used as control.
10⁵ cells were plated in six-well culture dishes in 2.5 mL of medium without antibiotics. Cells were transfected with a mixture of siRNA (10 nM) and Lipofectamine RNAiMAX (Invitrogen) according to the manufacturer's protocol.
Quantitative real-time polymerase chain reaction (Q-RT-PCR)
Total RNA was isolated from 1–2×10⁶ cells using an RNA extraction kit (Macherey-Nagel), denatured at 65°C for 10 min and reverse transcribed using Superscript II reverse transcriptase (Invitrogen). Q-RT-PCR was performed using SYBR Green reagent on an ABI7700 system (Applied Biosystems, Foster City, CA, USA). Specific primers (Table S1) were designed using the Primer Express software (Applied Biosystems). Gene expression was normalized for RNA concentration with β-actin. The relative level of expression for a particular gene was evaluated using the 2^−ΔΔCt method [43].
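As a concrete illustration of the 2^−ΔΔCt calculation referenced above, the following minimal Python sketch computes a fold change normalized to β-actin and expressed relative to mock-transfected cells. The Ct values shown are hypothetical, not taken from the paper.

```python
def fold_change(ct_gene, ct_actin, ct_gene_mock, ct_actin_mock):
    """Relative mRNA level by the 2^-ΔΔCt method: normalize each sample's
    Ct to beta-actin, then compare to the mock-transfected calibrator."""
    ddct = (ct_gene - ct_actin) - (ct_gene_mock - ct_actin_mock)
    return 2.0 ** (-ddct)

# Hypothetical Ct values for Slug in CD146-overexpressing vs. mock MCF-7 cells
print(fold_change(ct_gene=24.1, ct_actin=16.0,
                  ct_gene_mock=26.3, ct_actin_mock=16.1))  # ~4.3-fold increase
```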
Flow Cytometry
Analyses were conducted with an LSRII flow cytometer (Becton-Dickinson Immunocytometry Systems, San Jose, CA, USA). The antibodies used in this study were as follows: PE-conjugated anti-CD146 (BioCytex, Marseilles, France) and Alexa Fluor 647-conjugated anti-CD321 (JAM-A/F11R, Becton-Dickinson, San Diego, CA). Cells were incubated with antibodies for 30 minutes on ice. Isotype controls were used to set the threshold for positivity. Dead cells were gated out by staining with DAPI (1 µg/mL, Invitrogen). The specific mean fluorescence intensity (sMFI) was defined as the ratio of the mean fluorescence intensity for the considered antibody over the mean fluorescence intensity obtained with the appropriate isotypic control.
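The sMFI defined above is a simple ratio; a short Python helper makes the computation explicit. The MFI readouts below are hypothetical.

```python
def specific_mfi(mfi_antibody, mfi_isotype):
    """Specific mean fluorescence intensity: antibody MFI over isotype MFI."""
    return mfi_antibody / mfi_isotype

# Hypothetical readout: CD146-PE MFI of 1250 vs. an isotype-control MFI of 85
print(round(specific_mfi(1250, 85), 1))  # -> 14.7, i.e. clearly CD146-positive
```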
Colony scattering assay
Five thousand cells were plated in a 100-mm plate. After 5 days, 200 colonies were analyzed for their morphology. In scattered colonies, less than 75% of the cells exhibit cell-cell contacts; other colonies are considered compact colonies. Six independent plates were analyzed [44].
Anchorage dependent and independent growth assay
To evaluate anchorage-dependent growth, 10⁶ cells were plated in 100-mm culture dishes and cells were quantified at day 4.
For the anchorage-independent growth assay, cells were suspended in 1 mL of 0.3% agarose (Caltag Laboratories, Burlingame, CA) in RPMI supplemented with 10% FCS and plated in triplicate in six-well plates on 2 mL of pre-solidified 0.5% agarose in the same medium, with 1 mL of medium covering the cells. For MCF-7 cells, the medium was supplemented with insulin (30 µg/mL). The cells were incubated at 37°C in 5% CO₂. Colonies were counted after 3 weeks. Seven experiments were analyzed (except for MCF-7, 4 experiments).
Cell migration assays
Before migration, cells were starved overnight (6 hours only for MCF-7 cells) in RPMI medium (Lonza) supplemented with 0.1% bovine serum albumin (BSA, Sigma). Migration was assessed in transwell culture inserts of 6.5 mm diameter with 8 µm pore filters (Greiner Bio-One SAS, Courtaboeuf, France). 3×10⁴ cells in 100 µL of RPMI medium with 0.1% BSA were seeded in the upper compartment, and 600 µL of RPMI with 10% FCS were added to the lower chamber. Cells were allowed to migrate at 37°C for 30 h for CAL51 cells and 96 h for MCF-7 or SKBR3 cells. After removing cells on the upper side of the transwell, cells on the underside were stained with 0.1% crystal violet solution (Becton Dickinson) and lysed with 10% acetic acid for quantification by colorimetric measurement at 550 nm. Experiments were done in triplicate.
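The colorimetric readout described above is typically converted into a migration index relative to control wells. The paper does not state its exact formula, so the sketch below assumes a blank-corrected ratio to the mock-transfected control; all OD550 values are hypothetical.

```python
import numpy as np

def relative_migration(od_sample, od_control, od_blank=0.0):
    """Blank-corrected migration of a sample relative to its control (= 1.0)."""
    return (np.asarray(od_sample) - od_blank) / (od_control - od_blank)

# Hypothetical OD550 triplicates for shCD146 CAL51 cells vs. mock-transfected cells
print(relative_migration([0.42, 0.45, 0.40], od_control=0.80, od_blank=0.05))
# -> roughly 0.47-0.53, i.e. about half of the control migration
```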
Cytotoxic Assay
Cells were seeded in 96-well plates (5,000 cells/well) in 100 µL of complete medium and incubated overnight for adhesion. Complete medium (100 µL) containing docetaxel (0 to 100 nM, Sigma-Aldrich) or doxorubicin (0 to 10 µM, Sigma-Aldrich) was added and cells were incubated for 72 additional hours. Each concentration was tested in 3 replicates. Cytotoxic activity was determined using the XTT cell proliferation kit (Roche Diagnostics) according to the manufacturer's recommendations.
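IC50 values such as those summarized for Table 3 are extracted from dose-response data like these XTT readings. The fitting procedure is not specified in the text; the sketch below uses a standard four-parameter logistic model as one plausible approach, with hypothetical viability data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical XTT viability (fraction of untreated wells) vs. docetaxel dose (nM)
dose = np.array([0.4, 1.1, 3.3, 10.0, 30.0, 100.0])
viability = np.array([0.97, 0.90, 0.72, 0.48, 0.27, 0.15])

(top, bottom, ic50, hill), _ = curve_fit(four_pl, dose, viability,
                                         p0=[1.0, 0.1, 8.0, 1.0])
print(f"estimated IC50 = {ic50:.1f} nM")
```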
Statistical analyses
Data are presented as the mean ± standard error of the mean. Data analysis was performed using the GraphPad Prism 5 software. P values below 0.05 were considered for the detection of statistically significant differences.
"year": 2012,
"sha1": "a7d0bc57d0f3b5b9140f3cf1aa9772cf1bc236de",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0043752&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a7d0bc57d0f3b5b9140f3cf1aa9772cf1bc236de",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Cathepsin D serum and urine concentration in superficial and invasive transitional bladder cancer as determined by surface plasmon resonance imaging
Determination of cathepsin D (Cat D) concentration in serum and urine may be useful in the diagnosis of bladder cancer. The present study included 54 healthy patients and 68 patients with bladder cancer, confirmed by transurethral resection or cystectomy. Cat D concentration was determined using a surface plasmon resonance imaging biosensor. Cat D concentration in the serum of bladder cancer patients was within the range of 1.3–5.59 ng/ml, while for healthy donors it was within the range of 0.28–0.52 ng/ml. In urine, the Cat D concentration of bladder cancer patients was within the range of 1.35–7.14 ng/ml, while for healthy donors it was within the range of 0.32–0.68 ng/ml. Cat D concentration may represent an efficient tumor marker, as its concentration in the serum and urine of transitional cell carcinoma patients is extremely high when compared with healthy subjects.
Introduction
Cathepsin D (Cat D) is a ubiquitous aspartyl-family endoproteinase synthesized as a 52-kDa glycosylated preprotein, which is subsequently converted into an active two-chained (34 and 14 kDa) enzyme (1). It is distributed in lysosomes where it is involved in protein degradation and generation. Therefore, it is important for the maintenance of normal cell metabolism (2).
Previous studies have demonstrated that Cat D is involved in tumor progression. Cat D was studied in human primary breast cancer, and enzyme overexpression was found to be associated with an increased risk of metastasis and shorter survival (3,4). A similar association was identified in thyroid (5) and skin (6) cancer.
Urinary bladder cancer (UBC) is the ninth most common cancer worldwide. It is the seventh most common malignancy in males and seventeenth in females and the global standardized incidence rate is 9/100,000 in males and 2/100,000 in females (7). Annually, ~110,500 new cases in males and 70,000 new cases in females are diagnosed, and 38,200 patients in the European Union and 17,000 patients in the USA succumb to UBC (8).
Transitional cell carcinoma (TCC) biology is not completely understood. Surgical removal of the tumor mass remains the most effective treatment method. Understanding the mechanisms affecting tumor origin and progression may provide a novel theoretical basis for therapeutic methods and contribute to treatment that results in disease amelioration.
Approximately 75% of bladder cancer carcinomas are diagnosed as superficial (confined to mucosa and submucosa) and ~25% exhibit muscle-invasive disease (8).
In the present study, Cat D concentration in the serum and urine was investigated using the surface plasmon resonance imaging (SPRI) biosensor. The SPRI technique, in combination with the development of sensitive biosensors, is a promising tool for the determination of biologically active species. This method is label-free, easy to perform and does not require the use of radioisotopes or special substrates. The SPRI method uses an extremely specific interaction between enzymes and inhibitors (9) or antibodies and antigens (10). Methods for the SPRI determination of several diagnostically significant species, including cathepsins B, D (11,12) and G (13), proteasome S20 (14), podoplanin (15) and cystatin C (16), have been developed. The SPR signal reacts to an increase in mass by changing wavelength and polarization angle. This signal is then converted to an image. Coupling the biosensor with the SPRI instrument ensures the selectivity of the analytical signal. The biosensor contains an immobilized antibody (15) or inhibitor (9), which specifically reacts with the species to be determined. Therefore, only the species that have specifically bonded contribute to the analytical signal.
Few studies have investigated the role of Cat D in TCC. The majority of studies have focused on the evaluation of Cat D expression in TCC (17,18), and all of these studies have identified high Cat D expression in TCC tissues. Few studies have determined the concentration of various cathepsins in the serum and urine (19); however, a single study (20) reported Cat D activity in serum. The aim of this study was to determine the Cat D concentration in the blood serum and urine of patients with bladder cancer. The effects of various parameters of the urothelial cancer on the Cat D concentration were compared.
Materials and methods
Preparation of biological samples. Urine and serum samples of patients with bladder cancer were obtained prior to surgery or on admission to the J. Sniadecki Provincial Hospital of Bialystok (Bialystok, Poland). The urine and serum samples were frozen immediately and maintained at −70°C until Cat D was analyzed. Individuals with additional malignant or inflammatory disease were excluded. Blood samples were obtained from the median cubital vein. The cancer diagnosis was confirmed by histological examination of tumor specimens obtained from transurethral resection or cystectomy.
Prepared serum samples were diluted two-fold with phosphate-buffered saline and transferred onto the sensor surface for 10 min. The volume of the sample applied on each measuring field was 2 µl.
Urine was centrifuged at 1,850 x g for 15 min and the supernatant was separated. Finally, the sample was filtered once through a paper filter of medium density. The prepared urine samples were then transferred onto the sensor surface for 10 min. The volume of the sample applied on each measuring field was 2 µl.
The total protein concentration was determined using Lowry's method and creatinine (CREA) concentration was determined using Jaffe's method.
The urine and serum concentrations of Cat D were measured in 68 patients (48 males and 20 females; mean age, 66 years) with TCC of the bladder and 54 healthy patients. Approval for this study was obtained from the Bioethics Committee of the Medical University of Bialystok (Bialystok, Poland) and written informed consent was obtained from all the patients and donors.
Procedure of Cathepsin D determination. Cat D obtained from human liver was purchased from Sigma-Aldrich (Steinheim, Germany) and its concentration was determined using the SPRI biosensor. The SPRI technique allows sensitive determination of proteins using highly specific enzyme-inhibitor interactions. Immobilized pepstatin A (inhibitor), purchased from Sigma-Aldrich, was used for Cat D entrapment on the biosensor surface. The biosensor construction and the optimization of the measurement conditions used were previously described (12). Briefly, plasma or urine samples were placed directly on the prepared biosensor for ~10 min to allow interaction with the inhibitor (pepstatin A). The biosensor was washed with water and HBS-ES buffer solution, pH 7.4 (0.01 M 4-(2-hydroxyethyl)piperazine-1-ethanesulfonic acid, 0.15 M sodium chloride, 0.005% Tween 20, 3 mM EDTA) (all Biomed-Lublin, Lublin, Poland), to remove unbound molecules from the surface. The SPRI signal was measured twice on the basis of registered images, following the immobilization of pepstatin A and then following the interaction with Cat D from the samples. The signal, which is proportional to the amount of coupled biomolecules, was obtained by calculating the difference between the signal prior to and following the interaction with biomolecules. The concentration was determined using calibration curves of the SPRI signal as a function of Cat D concentration.
Table I. Diagnostic characteristics of serum Cat D/protein and Cat D/CREA concentration ratios compared with various parameters of urothelial cancer.
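The quantification step described above amounts to a difference signal read against a calibration curve. The Python sketch below illustrates this under the assumption of a linear calibration range; the standard concentrations and signal values are hypothetical, not the laboratory's actual calibration data.

```python
import numpy as np

# Hypothetical calibration standards (ng/ml) and their SPRI signals (a.u.)
std_conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
std_signal = np.array([11.8, 23.1, 45.7, 92.4, 183.0, 369.5])
slope, intercept = np.polyfit(std_conc, std_signal, 1)  # linear range assumed

def cat_d_concentration(signal_after, signal_before):
    """Cat D concentration from the difference between the SPRI signal after
    sample incubation and the signal of the pepstatin A layer alone."""
    delta = signal_after - signal_before  # only specifically bound Cat D remains
    return (delta - intercept) / slope

print(round(cat_d_concentration(signal_after=210.0, signal_before=60.0), 2))  # ng/ml
```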
Statistical analysis. The results are presented as the median ± standard deviation. Statistical analyses were performed using Student's t-test, and P<0.05 and P<0.01 were considered to indicate a statistically significant difference.
Table II. Diagnostic characteristics of urine protein, Cat D and Cat D/protein ratio compared with various parameters of urothelial cancer.
Results
Changes in Cat D concentration. The Cat D concentration in serum (Table I) and urine (Table II) samples was investigated with regard to the bladder cancer parameters. Cat D/total protein and Cat D/CREA ratios are also shown in Tables I and II. A summary of the results is presented in Fig. 1. A significant difference in serum and urine Cat D concentration levels was observed between bladder cancer patients and healthy subjects (Fig. 1). This indicates the potential of Cat D as a cancer marker. No significant differences in CREA concentration were identified between bladder cancer patients and healthy subjects. To further investigate the results of the present study, the Cat D concentration was corrected by the serum CREA concentration to eliminate the impact of renal impairment on the observed results. Furthermore, the Cat D/protein ratio was introduced as a novel parameter; in this way, the confounding effect of inflammatory proteinuria was reduced.
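A minimal sketch of the two derived parameters introduced above; the function and its unit conventions are illustrative and not taken from the study's actual analysis.

```python
def derived_ratios(cat_d_ng_ml, protein_mg_ml, crea_mg_dl):
    """Cat D normalized to total protein (proteinuria correction) and to
    creatinine (renal-function correction); units are illustrative."""
    return {
        "cat_d_per_protein": cat_d_ng_ml / protein_mg_ml,
        "cat_d_per_crea": cat_d_ng_ml / crea_mg_dl,
    }

# Hypothetical urine measurements for one patient
print(derived_ratios(cat_d_ng_ml=3.2, protein_mg_ml=0.15, crea_mg_dl=110.0))
```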
Blood serum analysis. In terms of different cancer parameters, few parameters in the serum were statistically significant. When comparing invasive and superficial tumors, values were almost identical; however, the Cat D/CREA ratio was found to be significantly higher in superficial tumors when compared with invasive tumors (P<0.05; Table I). This was due to the significantly higher CREA concentrations (data not shown) identified in invasive tumors when compared with superficial tumors (P<0.05).
In recurrent, multifocal, high-grade and smaller (<30 mm) tumors, serum Cat D levels were elevated; however, no significant differences were identified. This pattern was confirmed by the Cat D/CREA ratio in all the aforementioned groups. Males and older individuals were characterized by higher levels of Cat D; however, this difference was not statistically significant and was not confirmed by the Cat D/CREA ratio (Table I).
Urine analysis. In urine, a significantly higher Cat D/protein ratio was demonstrated in the primary, single, smaller and low-grade groups of cancer. Notably, in the case of low-grade tumors, the Cat D/protein ratio was significantly higher than that of high-grade tumors, while Cat D concentration alone was marginally elevated in high-grade tumors. This may be explained by the significant difference in protein concentration (data not shown) identified between low- and high-grade tumors (P<0.05).
Discussion
The majority of UBCs are TCCs. The effects of various parameters of TCCs on Cat D concentrations were analyzed in this study. The most significant result of the present study is that all bladder tumor cases exhibited significantly higher serum (eight-fold) and urine (seven-fold) Cat D concentrations when compared with healthy control subjects. This shows the potential of Cat D concentration as a tumor marker. In the serum, the lowest Cat D concentration for TCC was 1.3 ng/ml, whereas the highest Cat D concentration for healthy donors was 0.52 ng/ml. In the case of urine, the lowest Cat D concentration for TCC was 1.35 ng/ml, whereas the highest Cat D concentration for healthy donors was 0.68 ng/ml. This comparison shows that the Cat D concentration may have diagnostic value for excluding TCC, and Cat D may be used as a tumor marker to reduce the number of cystoscopies.
TCC patients were found to exhibit extremely high, but relatively stable, levels of serum and urine Cat D, which were independent of tumor parameters. Serum Cat D concentrations were found to range between 1.30 and 5.59 ng/ml with the majority of the results at ~3.3 ng/ml and, in the case of urine, the concentration was found to range between 1.35 and 7.14 ng/ml with the majority of the results at ~3.2 ng/ml.
A high recurrence rate is characteristic of TCC of the bladder. In the superficial stages, Ta and T1, as well as in particular cases of T2a, it may be effectively cured by bladder-sparing treatment (21). Effectively controlling bladder TCC prolongs survival; however, this requires strict follow-up procedures to guarantee early detection. The European Association of Urology (22) and the American Urological Association (23) consistently recommend performing cystoscopy according to established schedules. Previous studies have attempted to identify a tumor marker in the blood or urine to facilitate diagnosis and eliminate invasive procedures (24,25). Urine cytology, which is recognized as the traditional test, has low sensitivity; therefore, a negative result does not exempt the patient from obligatory cystoscopy (26). Novel substances are being evaluated as potentially highly sensitive markers to reduce the number of cystoscopies (27).
Further studies using larger numbers of patients are required, which investigate the association between Cat D and the individual parameters that characterize bladder cancer, in particular the recurrence and prediction of progression.
"year": 2014,
"sha1": "21d543688a10b940a7684494166511bcbc4e693a",
"oa_license": "CCBY",
"oa_url": "https://www.spandidos-publications.com/ol/8/3/1323/download",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "21d543688a10b940a7684494166511bcbc4e693a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Perforation Rates of Cervical Pedicle Screws Inserted from C3 to C6 - A Retrospective Analysis of 78 Patients over a Period of 14 Years -
Cervical spine fixation using cervical pedicle screws (CPS) was first reported by Abumi [1] and Jeanneret [2] in 1994. Both reports described cases of cervical instability caused by cervical trauma. Cervical spine fixation by CPS was introduced as a procedure for instability of the middle and/or lower cervical spine caused by trauma, and the importance of fixation by CPS for posterior cervical decompression and reconstruction was later reported [3,4]. Cervical pedicle screws can achieve rigid fixation compared to other cervical fixation methods [5,6] and enable posterior cervical cord decompression. However, cervical pedicle screw insertion is technically demanding because of the narrow pedicle diameter and the risk of serious neurovascular complications, including vertebral artery tear, spinal cord injury, and nerve root injury [7]. To achieve more accurate and safe pedicle screw insertion, navigation by two-dimensional imaging systems or CT has been employed in recent years [9-12]. However, CPS insertion from C3 to C6 is technically demanding. The purpose of this study was to evaluate the perforation rates and directions of screw perforations in these insertions using a CT-based navigation system.
Introduction
Cervical spine fixation using cervical pedicle screws (CPS) was first reported by Abumi [1] and Jeanneret [2] in 1994. Both reports described cases of cervical instability caused by cervical trauma. Cervical spine fixation by CPS was introduced as a procedure for instability of the middle and/or lower cervical spine caused by trauma, and the importance of fixation by CPS for posterior cervical decompression and reconstruction was later reported [3,4]. Cervical pedicle screws can achieve rigid fixation compared to other cervical fixation methods [5,6] and enable posterior cervical cord decompression. However, cervical pedicle screw insertion is technically demanding because of the narrow pedicle diameter and the risk of serious neurovascular complications, including vertebral artery tear, spinal cord injury, and nerve root injury [7]. To achieve more accurate and safe pedicle screw insertion, navigation by two-dimensional imaging systems or CT has been employed in recent years [9][10][11][12]. However, CPS insertion from C3 to C6 is technically demanding. The purpose of this study was to evaluate the perforation rates and directions of screw perforations in these insertions using a CT-based navigation system.
Pedicle screw insertion technique assisted with navigation system
The basic data used for navigation were preoperative CT imaging data, consisting of consecutive axial slices 1 mm in thickness for each patient. The data were transferred to the system computer and reconstructed into two-dimensional (2-D) and three-dimensional (3-D) images on a video monitor. The other mechanical components consisted of a computer workstation, a surgical reference frame, a probe rod to indicate positions in the surgical field, infrared light-emitting diodes (LEDs) attached to the probe rod, an electro-optical camera as a position sensor connected to the computer, and a drill guide. Infrared beams were tracked by the electro-optical camera system, and the positions of the respective LEDs were identified in real time in the surgical field.
Registration was performed in order to accurately match the computer-reconstructed 3-D surgical space with the real surgical space, by identifying four or more points on the vertebrae and the corresponding points of the vertebrae on the 3-D CT image on the monitor (matched-pair point registration). Although more precise matching of the two spaces is usually obtained by repeated registration with 30 or more randomized points indicated by the probe on the surface of the vertebral body (surface registration), our procedure employs only 5 to 6 registration points for two consecutive laminae, to shorten the surgical time. More accurate positioning is possible by using the tip of the spinous process and the caudal tips of the bilateral inferior facets as landmark points.
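At its core, matched-pair point registration estimates the rigid transform that maps the landmarks picked on the preoperative CT model onto the same anatomical points probed in the surgical field. The Python sketch below implements the standard least-squares (Kabsch) solution as an illustration; it is not the vendor's algorithm, and the reported RMS residual stands in for a registration-error estimate.

```python
import numpy as np

def rigid_register(model_pts, field_pts):
    """Least-squares rigid transform (R, t) such that R @ p + t ~ q for
    paired landmarks p (CT model) and q (surgical field), shape (N, 3)."""
    pm, qm = model_pts.mean(axis=0), field_pts.mean(axis=0)
    p, q = model_pts - pm, field_pts - qm
    u, _, vt = np.linalg.svd(p.T @ q)            # SVD of the cross-covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against a reflection
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    trans = qm - rot @ pm
    rms = np.sqrt(np.mean(np.sum((model_pts @ rot.T + trans - field_pts) ** 2,
                                 axis=1)))       # residual registration error
    return rot, trans, rms
```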
We established a surgical plan the day before surgery, confirming the insertion point of each screw, the applicability of 3.5 mm screws, the points for matched-pair registration, and the screw position in relation to the vertebral artery. This planning procedure took 20 to 40 min. Evaluation during surgical planning for navigation provides the further benefit of identifying pedicles with insertion risks and excluding such pedicles from operation (about 10% of all pedicles were excluded). The entrance holes, direction, diameter, and depth of the screws were then depicted with a cursor on the monitor, and the surgery was initiated. After exposure of the posterior bony elements of the spine, the reference frame was fixed to the spinous processes and the registration procedures described above were performed. After completion of the matched-pair point and surface registration, the screws were inserted under the guidance of the navigation system. The position of the probe or drill guide was superimposed in real time on the CT images on the monitor, and the screws were introduced into the pedicles at the planned positions indicated on the monitor. The time required between fixation of the reference frame to the spinous process and insertion of the pedicle screws into each segment (1 or 2 vertebrae) was 10 to 15 min. After all screws were set, the reference frame for registration was removed, and additional surgical procedures including decompression or bone grafting followed. If pedicle screw insertion was not feasible, sublaminar cable fixation with SecureStrand was performed. Using postoperative axial CT, the screw insertion status was classified as follows: grade 1 (no perforation), the screw is accurately contained within the pedicle; grade 2 (minor perforation), perforation of less than 50% of the screw diameter; grade 3 (major perforation), perforation of 50% of the screw diameter or more. The directions of the perforations were evaluated as well.
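The grading scheme just described maps directly onto a small classification rule. The Python helper below is an illustrative restatement; the 3.5 mm default reflects the screw size mentioned in the planning step, and the perforated width is assumed to have been measured on the axial CT beforehand.

```python
def perforation_grade(perforated_width_mm, screw_diameter_mm=3.5):
    """Grade screw position on postoperative axial CT per the scheme above."""
    if perforated_width_mm <= 0:
        return 1  # grade 1: screw entirely within the pedicle
    if perforated_width_mm / screw_diameter_mm < 0.5:
        return 2  # grade 2: minor perforation (<50% of screw diameter)
    return 3      # grade 3: major perforation (>=50% of screw diameter)

print([perforation_grade(w) for w in (0.0, 1.2, 2.5)])  # -> [1, 2, 3]
```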
The data were analyzed by a paired-sample Student t test using SPSS (SPSS Japan Inc., an IBM company, Tokyo, Japan), with p<0.05 defined as significant.
Case report
A 67-year-old male with a rheumatoid cervical spine presented with spinal cord compression and instability at the C3-C4 and C4-C5 levels and showed myelopathy (Fig. 4). Laminoplasty and posterior fusion with CPS from C3 to C5 were performed (Fig. 5). Postoperative axial CT showed the following screw insertion status: bilateral C3, grade 2; right side of C4, grade 3; left side of C4, grade 2; and bilateral C5, grade 1 (Fig. 6).
Discussion
Cervical pedicle screws can achieve rigid fixation compared to other cervical fixation methods [13,14] and enable posterior cervical cord decompression. However, cervical pedicle screw insertion is technically demanding because of the narrow pedicle diameter and the risk of serious neurovascular complications, including vertebral artery tear, spinal cord injury, and nerve root injury [15]. The indications for the cervical pedicle screw technique are as follows: destructive lesions, including RA, DSA, and spinal tumors, and procedures that combine spinal cord decompression with posterior fusion. For the rheumatoid cervical spine, this technique is especially useful because the strong initial fixation eliminates the need for postoperative external fixation such as a halo vest or collar.
To achieve more accurate and safe pedicle screw insertion, navigation by two-dimensional imaging systems or CT has been employed in recent years. In the meta-analysis reported by Tian et al., the accuracy of pedicle screw insertion with CT navigation (90.76%) was significantly improved compared to two-dimensional imaging systems (85.48%) [16]. Our institution employs a CT-based navigation system for cervical pedicle screw insertion [9][10][11][12]. In the present study, the rate of major perforations was 4.4% and the total perforation rate was 17.9% for all cervical pedicle screws. Richter et al. [17] reported a comparative study of cervical pedicle screw fixation with conventional versus computer-assisted surgery (CAS). In their results, the pedicle perforation rate was 8.6% in the conventional group and 3.0% in the CAS group. Richter et al. indeed reported an excellent surgical outcome. However, their study involved screw insertion into the upper thoracic vertebrae, which have a larger pedicle width compared to the C3-C6 vertebrae that we studied, and vertebral sizes may be larger in German individuals than in Japanese individuals. The larger perforation rate in our study could therefore have resulted from the smaller pedicles of Japanese patients and the narrow pedicle sizes of the C3-C6 vertebrae.
Higher perforation rates for grade 3 (major perforation) were observed for C4 and C3. Furthermore, higher perforation rates for grades 2 and 3 (including minor perforation) were observed for C5, C4, and C3, in this order. For C6, the number of both major and minor perforations was small. Rheinhold et al. [18] measured the mean outer pedicle width of C3-C6 in human cadavers (mean age, 85 years), and the values for C3, C4, C5, and C6 were found to be 5.7 ± 0.4 mm, 5.6 ± 0.6 mm, 6.2 ± 0.6 mm, and 6.7 ± 0.6 mm, respectively. Yusof et al. [19] studied the transverse pedicle diameter of C2-C7 of the cervical spine in a Malaysian population using computed tomography (CT) measurements. The mean transverse diameters of the cervical pedicles of C3, C4, C5, and C6 in males were 5.2, 5.1, 5.2, and 5.5 mm, respectively; in females, they were 4.6, 4.7, 4.9, and 5.2 mm, respectively. Our data on CPS perforation support the results of the abovementioned studies in that pedicles with smaller diameters were found to have a larger number of perforations. The reason for the large number of minor perforations at C5 is unclear. The C3 and C4 pedicles are generally narrow and, hence, screw insertion is performed carefully. However, C5 is wider than C3 and C4; thus, the surgeon might be less attentive during screw insertion at C5, which could contribute to this finding. Therefore, careful attention should be paid with respect to major perforations in the case of the C3 and C4 pedicles.
We found that a larger number of minor perforations occurred in the lateral direction than in the medial direction and that many major perforations were observed in the medial direction at the C3 level. During screw insertion in the cervical pedicle, the cortex is thicker in the medial direction and thinner in the lateral direction; therefore, the CPS is likely to cause perforation in the lateral direction. In this study, perforation occurred in the lateral direction in 76% of the cases.
The possible causes of pedicle screw perforation are as follows: deviation of the CT-based navigation system caused by unintentional movement of the reference frame during the operation; lateral perforation caused by pressure from the paravertebral muscles on the probe, tap, or screw; and a narrow osteosclerotic pedicle that has no cancellous bone. To avoid perforation under such conditions, the following countermeasures are required. If the practitioner judges that the insertion point or screw direction shown by the navigation system is incorrect, intraoperative x-ray imaging or fluoroscopy should be used. If the screw direction cannot be set sufficiently in the medial orientation, an additional skin incision should be prepared externally and the probe, tap, and screw inserted through it. In the case of a narrow or osteosclerotic pedicle, the pedicle should be skipped or the fixation method changed to a lateral mass screw, sublaminar cable, or another technique.
Conclusions
Major perforations were mostly observed in the C4 and C3 pedicles. However, the number of C5 pedicle perforations was as large as that of C4 or C3 pedicle perforations when total perforations, i.e., both major and minor perforations, were considered. The perforation rate of the C6 pedicle was lower than that of the C3-C5 pedicles. The major perforation rates in the lateral and medial directions were comparable. CPS insertion from C3 to C5 should be performed with extreme caution even under a CT-based navigation system.
Fig. 1. Position of the cervical pedicle screw at different vertebral levels, observed by postoperative CT. Grade 1 (no perforation): screw accurately inserted in the pedicle; grade 2 (minor perforation): perforation of less than 50% of the screw diameter; grade 3 (major perforation): perforation of 50% of the screw diameter or more. | 2016-01-11T18:29:14.669Z | 2012-03-28T00:00:00.000 | {
"year": 2012,
"sha1": "7521b3f9de9bce51770d8cbdfa91e41f680fba15",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5772/37506",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "0d3e0cafbd050cdf312a2b6fb1cc10c2f1217cea",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
23455920 | pes2o/s2orc | v3-fos-license | Radiation-Induced Intratumoral Necrosis and Peritumoral Edema after Gamma Knife Radiosurgery for Intracranial Meningiomas
Objective To study the clinical significance and relevant factors of radiation-induced intratumoral necrosis (RIN) and peritumoral edema (PTE) after Gamma knife radiosurgery (GKRS) for intracranial meningiomas. Methods We retrospectively analyzed the data of 64 patients who underwent GKRS for intracranial meningioma. The mean lesion volume was 4.9 cc (range, 0.3-20), and the mean prescription dose of 13.4 Gy (range, 11-18) was delivered to the mean 49.9% (range, 45-50) isodose line. RIN was defined as newly developed or enlarged intratumoral necrosis after GKRS. Results RIN and new development or aggravation of PTE were observed in 21 (32.8%) and 18 (28.1%) cases of meningioma, respectively, during the median follow-up duration of 19.9±1.0 months. Among various factors, maximum dose (>25 Gy) and target volume (>4.5 cc) were significantly related to RIN, and RIN and maximum dose (>24 Gy) were significantly related to the development or aggravation of PTE. In the 21 meningiomas that developed RIN after GKRS, there was no significant change in the tumor volume itself between the times of GKRS and RIN. However, the PTE volume increased significantly compared to that at the time of GKRS (p=0.013). The median interval to RIN after GKRS was 6.5±0.4 months and the median interval to new or aggravated PTE was 7.0±0.7 months. Conclusion Close observation is required for meningiomas treated with a maximum dose >24 Gy and showing RIN after GKRS, since subsequent or accompanying PTE may deteriorate neurological conditions, especially when the location involves adjacent critical structures.
INTRODUCTION
Surgical resection has been the treatment of choice for symptomatic meningiomas 1,20,24). Complete resection is often curative for benign meningiomas, and progression-free survival of approximately 95% at 5 years and 90% at 10 years has been reported 22,23). However, meningiomas that envelop critical neural or vascular structures cannot be completely resected without causing postresective neurological deterioration. Over the past two decades, radiosurgery has been conducted as an adjuvant or alternative procedure to resective surgery 1). Gamma knife radiosurgery (GKRS) can be considered for a tumor that is not suitable for resection because GKRS has shown effective local tumor control.

Local tumor control was assessed according to the Macdonald criteria 9,17). Complete response (CR) was defined as a complete disappearance of all enhancing tumors, partial response (PR) as a ≥50% decrease in enhancing tumor volume, progressive disease (PD) as a ≥25% increase in enhancing tumor volume, and stable disease (SD) as a <50% decrease or <25% increase in enhancing tumor volume. We defined local tumor control as CR, PR, and SD. Intratumoral necrosis was defined as a low signal intensity area on contrast-enhanced T1-weighted images 7,8). Radiation-induced intratumoral necrosis (RIN) was defined as newly developed intratumoral necrosis, or aggravation of pre-existing necrosis by more than 25%, on follow-up imaging after GKRS. Aggravation of pre-existing PTE was defined as at least a 25% increase in volume compared to the baseline measured at the time of GKRS.
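The volume-change thresholds above translate directly into a small classifier. This is an illustrative sketch only; the thresholds are those stated in the text, while the function itself is not from the authors' analysis code.

```python
# Hedged sketch: Macdonald-based response categories from enhancing-tumor
# volumes. Thresholds are the ones the paper states; names are ours.

def response_category(baseline_cc: float, followup_cc: float) -> str:
    """Classify enhancing-tumor volume change relative to baseline."""
    if followup_cc == 0.0:
        return "CR"   # complete disappearance of all enhancing tumor
    change = (followup_cc - baseline_cc) / baseline_cc
    if change <= -0.50:
        return "PR"   # >=50% decrease
    if change >= 0.25:
        return "PD"   # >=25% increase
    return "SD"       # <50% decrease or <25% increase

# Local tumor control in the paper = CR, PR, or SD.
controlled = response_category(4.9, 5.0) in {"CR", "PR", "SD"}
print(controlled)  # True: a 2% increase is stable disease
```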
Statistical analysis
Statistical analysis was performed using SPSS version 12.0 (SPSS, Chicago, IL, USA). Local control rate was calculated from the time of GKRS. To investigate relevant factors, Kaplan-Meier analysis was used for categorical variables, and Cox regression model was used for continuous variables and multivariate analysis. Results were regarded as significant for p<0.05.
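As a hedged illustration of the analysis strategy described (Kaplan-Meier for categorical variables, Cox regression for continuous variables), the Python lifelines package can stand in for SPSS; the data frame below is invented placeholder data, not values from this series.

```python
# Sketch of a Kaplan-Meier curve plus a Cox model on a continuous covariate.
# Times are months from GKRS; event = 1 means the imaging change occurred.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months":   [6.5, 12.0, 19.9, 7.0, 24.0],
    "event":    [1, 0, 0, 1, 1],
    "max_dose": [26.0, 22.0, 24.5, 27.0, 23.0],  # Gy, continuous covariate
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["event"])  # event-free curve

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")  # Cox on max_dose
print(cph.summary[["coef", "p"]])
```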
RESULTS
The median clinical follow-up duration was 19.9 months (range, 3.7-35.1). Sixty-four meningiomas were assessed by at least one follow-up imaging with a mean imaging follow-up duration of 12.9 months (range, 5.4-33.4).
Local tumor control
Results of local tumor control at the time of the last follow-up were CR in 0 (0%), PR in 3 (4.7%), SD in 60 (93.7%), and PD in 1 (1.6%). The one progressed case was not a true failure because RIN was observed at the last follow-up (17.8 months after GKRS) and tumor shrinkage is expected with further follow-up. The actuarial local tumor control rate at 5 years was 97.9% 13) .
Patient characteristics
Sixty-four patients with a single intracranial meningioma underwent GKRS in our hospital during the period between August 2008 and January 2011. We retrospectively reviewed the clinical records, radiological and dosimetric data of those patients. The patients were comprised of 18 men and 46 women. The mean age at the time of GKRS was 59.7 years (range, 29-90). Among the 64 patients, GKRS was performed as a primary treatment modality in 50 patients, and as an adjunctive therapy after resective surgery for WHO grade I meningioma in 14 patients (7 for the residual, 7 for the recurrent tumor). Among 64 meningiomas, 34 lesions were located in the non-skull base and 30 in the skull base. The locations were the convexity (n=22), petrous apex (n=12), falx (n=7), olfactory groove (n=6), sphenoid wing (n=6), parasagittal (n=5), cavernous sinus (n=5), and tentorium (n=1).
Local tumor control, radiation-induced intratumoral necrosis and peritumoral edema
MR imaging was performed every 6 months, including continuous thin-cut T1 enhanced images acquired with the same technique as the MR imaging for GKRS. Tumor volume was calculated from the enhancing lesions on T1 enhanced images, and PTE volume was calculated as the T2 abnormal signal volume minus the tumor volume. Volume measurements of tumors and PTE were performed using the co-registration program (Leksell Gamma Plan®, version 8.3.1). Local tumor control was assessed according to the Macdonald criteria, as described above.
Radiation-induced intratumoral necrosis
In univariate analysis, pre-existing necrosis (yes) (p=0.0095) was among the factors significantly related to RIN. Both maximum dose >25 Gy (p=0.001, odds ratio=6.313, 95% confidence interval: 2.089-19.080 using the forward stepwise method) and target volume >4.5 cc (p=0.016, odds ratio=0.287, 95% confidence interval: 0.104-0.794 using the forward stepwise method) remained significant in multivariate analysis (Table 1). RIN rates for target volumes >4.5 cc were 3.3%, 10.0% and 46.7% at 6, 12 and 24 months, respectively, whereas RIN rates for target volumes ≤4.5 cc were 2.3%, 2.3% and 13.6% at the same time points.

Among the 21 lesions that developed RIN after GKRS, PTE was observed in 7 lesions at the time of GKRS and in 14 after GKRS. The mean volume of the 7 pre-existing PTE was 8.7 cc (range, 0.1-17.5); however, the mean volume of the 14 PTE after GKRS had increased significantly to 15.5 cc (range, 0.8-38.6) (p=0.013, Wilcoxon signed ranks test).

Peritumoral edema
PTE was observed in 11 (17.2%) out of 64 meningiomas at the time of GKRS, but in 18 lesions (28.1%) after GKRS during the follow-up period. Among these 18 lesions, PTE was newly developed in seven and aggravation of pre-existing PTE was seen in six, meaning that new or aggravated PTE was observed in 20.3% (13/64) of meningiomas. The median interval to new development or aggravation of PTE was 7.0±0.7 months (range, 1.1-13.1), which was a little later than the median interval to RIN. Among the factors, RIN (yes) (p=0.0008), Paddick's CI ≤0.85 (p=0.0449), pre-existing RIN (yes) (p=0.0129) and maximum dose >24 Gy (p=0.0113) were the significant factors related to new or aggravated PTE in univariate analysis. Both RIN (yes) (p=0.001, odds ratio=0.086, 95% confidence interval: 0.019-0.387 using the forward stepwise method) and maximum dose >24 Gy (p=0.016, odds ratio=10.000, 95% confidence interval: 1.535-65.137 using the forward stepwise method) also remained significant in multivariate analysis. However, pre-existing PTE, target volume and marginal dose were not significantly related to new or aggravated PTE in either univariate or multivariate analyses (Table 2).

Radiation-induced intratumoral necrosis or peritumoral edema-related symptoms
Among the 21 lesions that showed RIN with or without PTE and the seven lesions that showed PTE without RIN, 9 lesions were symptomatic during the median follow-up period of 18.2±2.4 months (range, 6.8-29.4) after GKRS. Therefore, symptomatic complications developed during the early period after GKRS in 14.1% (9/64) of the patients in our series. RIN- or PTE-related symptoms were headache in 4, dizziness in 3 and seizure in 2. The median onset time was 6.0±0.3 months (range, 3.7-13.5) after GKRS. Among the 9 patients, symptoms resolved in six after a median time of 5.4 months (range, 0.6-12.2), but symptoms continued in two patients despite steroid administration. Decompressive surgery was required in one patient because of severe PTE. A 51-year-old male patient underwent GKRS for a meningioma of 15.4 cc in volume with a prescription dose of 13 Gy at the 50% isodose line. The PTE volume at the time of GKRS was 6.3 cc. The patient complained of dizziness, and follow-up MR imaging taken 5.6 months after GKRS showed RIN and aggravation of PTE. Even after administration of oral steroid, a follow-up MR taken 11.8 months after GKRS showed a further increase of tumor volume and PTE to 18.7 cc and 38.6 cc, respectively (Fig. 1). His neurological condition deteriorated rapidly, and decompressive surgery was eventually performed. Pathologic examination showed the characteristics of multifocal necrosis.

DISCUSSION
Stereotactic radiosurgery offers a relatively non-invasive and effective method for the management of intracranial tumors, including meningiomas 14). Metellus et al. 15) compared GKRS and conventional radiotherapy for primary and residual meningiomas. They concluded that both conventional radiotherapy and GKRS were safe and efficient, but that GKRS provided a better radiological response and was more acceptable to most patients than conventional radiotherapy. Hsieh et al. 10) compared the treatment results of meningioma between GKRS and linear accelerator-based radiosurgery. They reported that the mean complication rate was 15.2% (range, 8.3-23.6) in the group treated with GKRS and 25.9% (range, 17.0-34.9) in those treated using a linear accelerator. Therefore, GKRS can be said to provide more beneficial effects in terms of radiation-induced complications.

Radiation-induced imaging changes can be signs of treatment response after GKRS, but these changes can also exacerbate patients' symptoms or neurological signs by increasing the mass effect. These unwanted effects can prompt additional treatments, such as steroid administration or decompressive surgery, especially in benign tumors. Therefore, some researchers have studied radiation-induced imaging changes after stereotactic radiosurgery for benign intracranial tumors and their relevant factors. Chang et al. 3) reported that patients treated with GKRS for meningiomas experienced peritumorous imaging changes at a rate of 23.6%. In their series, tumor location, maximal dose, and margin dose were related to the occurrence of imaging changes after GKRS in univariate analysis; however, only tumor location (convexity, parasagittal and falx cerebri) remained significant in multivariate analysis. Other studies have also reported on the significance of tumor location, with non-skull base meningiomas seeming to encounter more adverse radiation effects than skull base lesions 4,5,12,16,19). However, there was no correlation between tumor location and adverse radiation effects in our study.

Among the imaging changes that can lead to clinical deterioration after GKRS, PTE has been the most frequently studied. Cai et al. 2) reported that worsened pre-existing PTE or new PTE occurred in about 25% of patients who underwent GKRS for intracranial meningiomas; the tumor-brain contact interface area was a strong predictor of new PTE in their series. Kollová et al. 13) analyzed 400 meningiomas and found that tumors with edema before GKRS, tumor volume >10.0 cc, maximum dose >30 Gy and margin dose >16 Gy were significantly related to post-GKRS edema (aggravation or new development). We thought that RIN may lead to neurological deterioration because RIN can result in a transient tumor volume increase, especially when meningiomas are located adjacent to critical structures, such as the cerebrospinal fluid pathway. In our series, a target volume >4.5 cc and maximum dose >25 Gy were significantly related to the development of RIN, but there was no significant increase of the tumor volume itself in the 21 meningiomas that developed RIN after GKRS. However, RIN was significantly related to new development or aggravation of PTE in both univariate and multivariate analyses. The median interval to the development of RIN (6.5±0.4 months) was a little shorter than that to PTE (7.0±0.7 months). These results may suggest that RIN can be a warning sign of the development or aggravation of PTE.

Besides RIN, maximum dose >24 Gy was also a significant factor related to new development or aggravation of PTE in our study. Flickinger et al. 6) reported on the possible relations between marginal dose and complications occurring after GKRS of meningiomas. They reported that the actuarial rates of any symptomatic post-radiosurgical sequelae (at 10 years) were 5.3±2.3% with a median marginal dose of 14 Gy (range, 8.9-20) and 22.9±9.3% with a median marginal dose of 17 Gy (range, 10-20). In addition, they insisted that complications correlated with the volume of tissue receiving ≥12 Gy. However, marginal dose was not significantly related to new development or aggravation of PTE in our study. This result may be explained by the prescription dose that we lowered from October 2009, after patients had experienced severe PTE following GKRS. Although the prescription dose is a fixed value, the maximum dose can vary; Hur et al. 11) reported that the maximum dose differs by 0.7-7% according to matrix size and target volume. Their results are consistent with ours, in which maximum dose, rather than marginal dose, was significantly related to new development or aggravation of PTE. Among the 64 lesions in our study, the 11 lesions treated with a marginal dose of 12 Gy showed maximum doses ranging from 23.6 to 24.3 Gy (mean, 24.1 Gy).

CONCLUSION
In our series, target volume >4.5 cc and maximum dose >25 Gy were significant factors for the development of RIN after GKRS for intracranial meningiomas. RIN and maximum dose >24 Gy were significantly related to new development or aggravation of PTE, and RIN preceded new development or aggravation of PTE. Therefore, we suggest that close observation is required for meningiomas treated with a maximum dose >24 Gy and showing RIN after GKRS, because accompanying PTE may deteriorate neurological conditions when lesions are located adjacent to critical structures or the cerebrospinal fluid pathway.
"year": 2012,
"sha1": "994ddd7aba63ec9f5dd8631dc05bc4dbe25a62f5",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3340/jkns.2012.52.2.98",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "994ddd7aba63ec9f5dd8631dc05bc4dbe25a62f5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34712039 | pes2o/s2orc | v3-fos-license | Creation of “Super” Glucocorticoid Receptors by Point Mutations in the Steroid Binding Domain*
Almost all modifications of the steroid binding domain of glucocorticoid receptors are known to cause a reduction or loss of steroid binding activity. Nonetheless, we now report that mutations of cysteine 656 of the rat receptor, which was previously suspected to be a crucial amino acid for the binding process, have produced "super" receptors. These receptors displayed an increased affinity for glucocorticoid steroids and a decreased relative affinity for cross-reacting steroids such as progesterone and aldosterone. The increased in vitro affinity of the super receptors was maintained in a whole cell bioassay. These results indicate that additional modifications of the glucocorticoid receptor, and probably the other steroid receptors, may further increase the binding affinity and/or specificity.
Steroid binding is the first step in a series of events that translate the structural information of the steroid into the observed biological response. Molecular biology experiments have defined the 250 carboxyl-terminal amino acids as the steroid binding domain of glucocorticoid receptors (1,2). In this region, >96% of the amino acid sequence in the human, mouse, and rat receptors is identical. The homology between the steroid binding domains of all of the steroid receptors (androgen, estrogen, glucocorticoid, mineralocorticoid, and progesterone) is much less but still extensive (3). This homology offers a reasonable explanation for the fact that virtually every steroid appears to interact with more than one class of receptors (4,5). Thus it has proved difficult to selectively recognize the biologically active form of the various receptors on the basis of steroid binding (4,5). The consequences of such cross-reactivity are manifold. It complicates the identification of the steroid binding form of receptors (6) and causes unwanted side effects in in vitro experiments with cells containing the offending receptors. In clinical settings, the side effects can be severe enough to limit long-term glucocorticoid therapy to only those cases that are not easily remedied by other protocols (7).
One solution to this problem is to modify the steroid binding domain to cause increased specificity of steroid binding. Increased binding affinity would also be desirable, since lowering the concentration of steroid needed for a full glucocorticoid response would also decrease the binding to (and lower the biological responses from) other receptors. Unfortunately, all published reports indicate that this will be very difficult to accomplish. For the glucocorticoid receptor, terminal deletions to give species smaller than amino acids 497-795 (all numbering is for the rat receptor sequence) resulted in more than a 300-fold reduction in affinity (2). The only exception involves a 16-kDa fragment of the rat receptor which was obtained by partial trypsin digestion of steroid-free receptors. The affinity of this 16-kDa fragment is 23-fold lower than that of the intact receptor (8), but it maintains all of the steroid binding specificity of the intact receptor and still binds heat shock protein 90 (9). Most internal deletions or substitutions and point mutations of the glucocorticoid receptor steroid binding domain either eliminate or greatly decrease steroid binding (1,10-12). It thus appears that, aside from the few changes that are seen in rat vs human vs mouse receptors, the native sequence may be optimal for binding glucocorticoid steroids with high affinity and specificity, and that many amino acids are crucial for steroid binding.
There have been numerous efforts to identify the crucial amino acids involved in steroid binding to the glucocorticoid receptor. The initial candidates were cysteine (13) and lysine and arginine (14). In fact, it has long been known that intact thiols are involved in the steroid binding of all receptors (15). Direct support for this conclusion was obtained when Dex-Mes, a thiol-specific (16) affinity label for glucocorticoid receptors, was shown to covalently label just one thiol in the rat receptor, i.e. cysteine 656 (17). As expected (17), the identical cysteine in the mouse (18) and human (19) glucocorticoid receptors is also labeled by Dex-Mes. However, recent data indicate that a vicinal dithiol group is involved in steroid binding by virtue of its ability to form an intramolecular disulfide (20,21) and to react with arsenite (6,21,22), giving modified receptors which no longer bind steroid. Recently we have identified the vicinal dithiols (Cys-656 and -661) and found that yet a third thiol (Cys-640) is involved in steroid binding to glucocorticoid receptors. Based on the above data, it would be predicted that mutations of Cys-640, -656, and -661 would both reduce (or eliminate) the affinity of steroid binding to glucocorticoid receptors and decrease binding specificity. We now report that cysteine-to-serine point mutations of Cys-640 and Cys-661 did cause a reduced affinity. Surprisingly, however, mutations of Cys-656 caused an increase in both affinity and specificity. To the best of our knowledge, this is the first report of a steroid receptor mutation causing a higher affinity, and it suggests that further increases may be possible.
Antibodies-A monoclonal anti-receptor antibody, BUGR-2 (23), and a polyclonal antibody (aP1) against the carboxyl-terminal region of the rat glucocorticoid receptor (24) were gifts from Dr. Robert Harrison (University of Arkansas for Medical Sciences) and Dr. Bernd Groner (Friedrich Miescher-Institut), respectively. Biotinylated anti-mouse and anti-rabbit second antibodies for Western blotting were from Vector Laboratories.
Buffers and Solutions-TAPS buffer (25 mM TAPS, 1 mM EDTA, and 10% glycerol) was adjusted to pH 8.8 or 9.5 at 0 °C with sodium hydroxide. Two-fold concentrated SDS sample buffer (2× SDS) was also prepared.

Cell Culture and Transfection-Receptor expression plasmids were introduced into COS-7 cells (10⁶/100-mm dish) by standard calcium phosphate transfection methods. After ~16 h at 37 °C in a 5% CO2 incubator, excess calcium phosphate and precipitate were removed by washing with PBS. The cells were incubated for another ~48 h in DMEM plus 5% FBS, harvested by trypsinization followed by centrifugation (for 10 min at 1570 × g) and washing 3 times with PBS, and stored at -80 °C until used.
Steroid Binding Assays-COS-7 cell cytosol containing the steroid-free receptors was obtained by the lysis of cells at -80 °C and centrifugation at 15,000 × g (25). For competition binding assays, duplicate aliquots (72 µl) of COS-7 cell cytosol (33.5% in pH 8.8 TAPS, 27 mM Na2MoO4 buffer) were treated with 4 µl each of [3H]Dex (in pH 8.8 TAPS buffer) and various concentrations of nonradioactive competing steroid (in 20% EtOH in pH 8.8 TAPS buffer; final concentration of [3H]Dex = 3 × 10⁻⁹ M). The average specific binding, determined after 2.5 or 24 h of incubation by first adding a 10% dextran-coated charcoal solution (added volume = 20% of the reaction solution volume) to remove free steroid and then subtracting the nonspecific binding seen in the presence of excess nonradioactive Dex, was expressed as a percentage of the noncompeted control and plotted versus the log10 of the concentration of the competing steroid.

Transient Transfection and Reporter Gene Assays-Receptor expression vector and reporter plasmid were transfected as calcium phosphate precipitates (28) for each 60-mm dish. Cells were incubated overnight with the DNA precipitates, after which they were washed twice with PBS and treated with fresh medium (DMEM H-16 supplemented with 5% FBS) containing steroids. After an additional 24 h, extracts were prepared by four freeze-thaw cycles (-75 °C, 65 °C) and centrifuged for 5 min at 15,000 × g. Heat-treated extracts (5 min, 65 °C) were normalized for protein content, and the amount of expressed chloramphenicol acetyltransferase enzyme activity, in terms of acetylated chloramphenicol, was determined by a nonchromatographic assay (29).
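The competition-assay bookkeeping described above (charcoal removal of free steroid, subtraction of nonspecific binding, expression as a percentage of the noncompeted control versus log10 competitor concentration) can be sketched as follows; all numbers are placeholders rather than values from Table I.

```python
# Sketch of the competition-binding calculation described in the text.
import numpy as np

competitor_M = np.array([1e-10, 1e-9, 1e-8, 1e-7, 1e-6])  # competing steroid
total_dpm    = np.array([9800, 9100, 6500, 3200, 1500])   # bound [3H]Dex
nonspecific  = 900        # dpm remaining with excess unlabeled Dex
control      = 9900       # dpm with no competitor (after charcoal step)

specific = total_dpm - nonspecific
percent_of_control = 100.0 * specific / (control - nonspecific)
log_conc = np.log10(competitor_M)

for x, y in zip(log_conc, percent_of_control):
    print(f"log10[competitor] = {x:5.1f}  ->  {y:5.1f}% of control")
```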
Expression of Receptors with Point Mutations at Cys-640, -656, and -661-Rat glucocorticoid receptors with four different point mutations were examined: cysteine-to-serine at positions 640, 656, and 661, and cysteine-to-glycine at position 656. Cell-free studies were conducted with extracts of COS-7 cells that had been transiently transfected with the corresponding cDNAs. The expression of the wild type and mutant receptors was identical, as determined by Western blotting (Fig. 1A). The presence of the lower Mr bands in Fig. 1A for all receptors is probably due to alternative translational starts (30). This conclusion was strengthened by the observation that chymotrypsin digestion of both the authentic 98-kDa receptor and the transiently expressed wild type receptor gave, after removal of the amino-terminal half of the receptors, an identical 42-kDa fragment (Fig. 1B), which has the same binding affinity as the 98-kDa receptor (2,8). Further evidence that the desired point mutations had been effected was obtained by affinity labeling with [3H]Dex-Mes (31). After separation on SDS-polyacrylamide gels and visualization by fluorography, a specifically labeled band was seen for the 640 and 661 mutant receptors at the same molecular weight as for the wild type, 98-kDa receptor. In contrast, no specifically labeled species was seen for either of the 656 mutant receptors. This is the expected result, since Dex-Mes is known to affinity-label only Cys-656 in the rat receptor (17).
Steroid Binding Specificity of the Mutant Receptors-The specificity of steroid binding was determined by Rodbard correction (26) of the data from 2.5-h competition binding assays (20,32). The results (Table IA) show that there was almost no change in specificity after the mutation of Cys-640. Mutation of Cys-661 had little effect on the binding of RU 486 or cortisol but caused a 6-fold decrease in the binding of 5α-DHT and an approximately 10-fold decrease for progesterone, aldosterone, and 17β-estradiol. The effect of mutating Cys-656 depended on the amino acid which was introduced. Replacement with glycine (to give C656G) produced much the same change in specificity as seen for C661S, except that there was less of an effect on aldosterone binding and no effect on cortisol binding. Replacement of Cys-656 with serine (to give C656S) caused a ≤3-fold reduction in relative affinity for progesterone, aldosterone, and 5α-DHT and a major reduction (≥10-fold) only for 17β-estradiol.
Competition assays of short duration (e.g. 2.5 h) usually give the correct relative affinity values. However, since such short assays do not allow the binding of [3H]Dex to reach equilibrium, inaccurate values can be obtained for slowly dissociating steroids (32,33). Interestingly, in 24-h assays that are approximately at equilibrium, the binding selectivity was found to increase. Thus the specificity for cortisol vs aldosterone binding to the C656G receptor (defined as the ratio of affinities relative to Dex) was raised from 20-fold in the 2.5-h assay to 83-fold in the 24-h assay; this ratio was 4.2-fold at both time points with the wild type receptor (Table IB). Similarly, the specificity of C656G for cortisol versus progesterone increased from 16-fold in the 2.5-h assay to 44-fold in the 24-h assay, while the ratio was always ~1 for the wild type receptor (Table IB).
Steroid Binding Affinity of the Mutant Receptors-We were surprised that none of the cysteine mutations had eliminated steroid binding (Table I). Scatchard analysis (24 h) of each of the receptors revealed that the mutations of Cys-640 and -661 did produce a 3-4-fold decrease in affinity for [3H]Dex (Table II). Unexpectedly, however, the two mutations of Cys-656 resulted in a 3- and an almost 9-fold increase in affinity, respectively. With regard to C656G, it should be noted that this increased affinity does not entirely compensate for the decreased affinity of aldosterone and progesterone seen in Table IB. Thus the absolute affinity of progesterone, and probably aldosterone, for the glucocorticoid receptor has decreased as a result of this mutation.
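For context, Scatchard analysis of the kind used above linearizes single-site binding: bound/free plotted against bound has slope -1/Kd. A minimal sketch with invented saturation data (not values from this paper):

```python
# Scatchard fit: B/F = Bmax/Kd - B/Kd, so slope = -1/Kd, intercept = Bmax/Kd.
import numpy as np

bound = np.array([0.2, 0.5, 0.9, 1.3, 1.6])   # nM bound [3H]Dex (placeholder)
free  = np.array([0.1, 0.3, 0.8, 1.9, 4.0])   # nM free [3H]Dex (placeholder)

slope, intercept = np.polyfit(bound, bound / free, 1)
kd = -1.0 / slope           # dissociation constant (nM)
bmax = -intercept / slope   # total binding sites (nM)
print(f"Kd ~ {kd:.2f} nM, Bmax ~ {bmax:.2f} nM")
```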
Biological Activity of the Mutant Receptors-It is well known that the steroid binding of receptors can be dissociated from the ability to produce a biological response (1,2). In order to determine if either of the receptors that had been mutated at position 656 was still biologically active, CV-1 cells were transiently transfected with both a mutant receptor expression vector and a vector containing a glucocorticoid-responsive reporter gene (G,6tk/chloramphenicol acetyltransferase). Each mutant receptor was found to be fully active (Fig. 2 and data not shown). As seen in Fig. 2, Dex induction of chloramphenicol acetyltransferase activity with the C656G receptor occurred at >6-fold lower concentrations than with the wild type receptor. The close correlation between the cell-free affinity of Dex for receptors and the concentration of Dex required to induce the biological response argues that the mutation of an amino acid which is intimately involved in steroid binding (i.e. Cys-656) can give novel receptor molecules that are more selective and more responsive than the wild type receptor.
DISCUSSION
Molecular biology offers the prospect of constructing new proteins that have more desirable properties than the naturally occurring proteins. Unfortunately, all reported modifications of the glucocorticoid receptor steroid binding domain result in little or no steroid binding activity (1,2,10-12). It thus appeared that the activity and/or the proper tertiary folding of the steroid binding domain is unable to accommodate many changes in amino acid sequence. We now show, however, that substitution of Cys-656 with either serine or glycine yields mutant receptors that not only have higher affinity for glucocorticoids such as Dex (Table II) and cortisol (Table I) but also have a higher absolute (for C656G) and/or relative (for C656S) binding specificity for glucocorticoids (Tables I and II). The C656G receptor, which has an affinity 9-fold higher than that of the wild type receptor, is also transcriptionally active at a 6-fold lower Dex concentration than is the wild type receptor (Fig. 2). Thus C656G and C656S are the first mutant receptors with higher affinities than the wild type receptors and can be considered "super" glucocorticoid receptors. These super receptors were created by the mutation of Cys-656, which has been considered a crucial amino acid in the steroid binding process for three reasons. First, Cys-656 is covalently labeled by Dex-Mes to give an adduct in which the thiol group of Cys-656 is attached to the C-21 of Dex and thus can be very close to noncovalently bound steroids (17). Second, methyl methanethiolsulfonate reacts with Cys-656 (and Cys-661) to block steroid binding (20,21). Third, sodium arsenite selectively reacts with Cys-656 and Cys-661 to block steroid binding (22) in a reaction that is specific for glucocorticoid receptors (6). The current results clearly demonstrate that Cys-656 is not an essential amino acid for steroid binding. The data further imply that Cys-656 actually decreases the affinity and specificity of glucocorticoid receptor binding. Since no other steroid receptor contains a cysteine at the comparable position (6), it is likely that Cys-656 has some essential function. It remains to be elucidated what that function is. Similarly, the effects of substitutions of Cys-640 and -661, both of which have been found to be intimately involved in steroid binding, are relatively minor. This suggests that, while numerous amino acids may be required for the proper tertiary folding of the binding cavity, relatively few amino acids are absolutely essential for binding.
In conclusion, a receptor that has higher affinity and specificity than the natural receptors would be advantageous in several instances. Most importantly, it would permit the use of lower doses of steroid to effect full, receptor-mediated activity. This, in turn, would cause less binding of the steroid to other receptors. The current studies with glucocorticoid receptors show, for the first time, that such improved receptors are indeed feasible. Further modifications may yield even more useful receptors for all of the steroid hormones.
"year": 1991,
"sha1": "87a1dc8f7499602244a5314c7eb034aa36422a22",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(18)54533-6",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1bde4eb28059f8f1a6ef4cd52fbea5cfa3e037b3",
"s2fieldsofstudy": [
"Biology",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
7669026 | pes2o/s2orc | v3-fos-license | An evaluation of genotyping by sequencing (GBS) to map the Breviaristatum-e (ari-e) locus in cultivated barley
Abstract We explored the use of genotyping by sequencing (GBS) on a recombinant inbred line population (GPMx) derived from a cross between the two-rowed barley cultivar ‘Golden Promise’ (ari-e.GP/Vrs1) and the six-rowed cultivar ‘Morex’ (Ari-e/vrs1) to map plant height. We identified three Quantitative Trait Loci (QTL), the first in a region encompassing the spike architecture gene Vrs1 on chromosome 2H, the second in an uncharacterised centromeric region on chromosome 3H, and the third in a region of chromosome 5H coinciding with the previously described dwarfing gene Breviaristatum-e (Ari-e). Background Barley cultivars in North-western Europe largely contain either of two dwarfing genes; Denso on chromosome 3H, a presumed ortholog of the rice green revolution gene OsSd1, or Breviaristatum-e (ari-e) on chromosome 5H. A recessive mutant allele of the latter gene, ari-e.GP, was introduced into cultivation via the cv. ‘Golden Promise’ that was a favourite of the Scottish malt whisky industry for many years and is still used in agriculture today. Results Using GBS mapping data and phenotypic measurements we show that ari-e.GP maps to a small genetic interval on chromosome 5H and that alternative alleles at a region encompassing Vrs1 on 2H along with a region on chromosome 3H also influence plant height. The location of Ari-e is supported by analysis of near-isogenic lines containing different ari-e alleles. We explored use of the GBS to populate the region with sequence contigs from the recently released physically and genetically integrated barley genome sequence assembly as a step towards Ari-e gene identification. Conclusions GBS was an effective and relatively low-cost approach to rapidly construct a genetic map of the GPMx population that was suitable for genetic analysis of row type and height traits, allowing us to precisely position ari-e.GP on chromosome 5H. Mapping resolution was lower than we anticipated. We found the GBS data more complex to analyse than other data types but it did directly provide linked SNP markers for subsequent higher resolution genetic analysis.
Background
Barley (Hordeum vulgare L.) is a diploid (2n = 14) economically important cereal crop and genetic model for small grain temperate cereals. Golden Promise (GP) is a two-rowed UK spring barley cultivar, and is currently the most responsive genotype for barley genetic transformation. Also, because of its unique properties, the malt extracted from GP is used to distil a number of signature Single Malt Scotch whiskies such as Macallan and Glengoyne. It is a primary induced gamma-ray mutant derivative of the barley cultivar Maythorpe, and is known to contain a mutation in Breviaristatum-e (Ari-e). This mutation in Ari-e in GP (ari-e.GP, also referred to in the literature as GP erectoides) causes a semidwarfing phenotype that has been used widely in barley cultivar development (especially in Scotland) to shorten straw length and reduce the severity of lodging. GP is also susceptible to several fungal pathogens, has short awns (as well as being dwarf), reduced internode length and shows a measure of tolerance to salt [1]. Genetic analysis has previously located ari-e.GP to barley chromosome 5H as a quantitative trait locus (QTL) influencing plant height, and physiological studies have confirmed its relative insensitivity to the addition of exogenous gibberellic acid (GA 3 ) [2]. The Ari-e gene has not yet been cloned although it was recently mapped as a height QTL using the tools of contemporary biometrical genetics in a complex three-way cross [3].
Over the past two decades, many molecular tools have been developed in barley to enable genetic research [4-9]. The primary focus has been the construction of molecular marker-based genetic linkage maps that can be leveraged for mapping genes of interest and subsequent marker assisted selection in breeding programs. These have been applied to discover, dissect and manipulate genes determining a range of simple and complex traits. Because of their value, accompanied by their increasing use in genetics and breeding, there has been a continual drive to both reduce marker costs and to avoid ascertainment issues [10] while at the same time enhancing flexibility and marker throughput per assay. It is therefore appropriate that new developments in marker technology are both explored and thoroughly evaluated against the current state of the art. Now that next generation sequencing (NGS) technology has been shown to be capable of discovering and genotyping thousands of markers across almost any genome of interest at low cost and in a single step, a current debate is whether sequence-based genotyping methods are ready to replace many of the established and widely used tools such as highly-multiplex Single Nucleotide Polymorphism (SNP) platforms [9].
Available sequence-based genotyping methods generally rely upon the use of restriction enzymes to produce a reduced representation of the non-repetitive (low copy) regions of the genome. Restriction site-associated genomic DNA (RAD) typing is such an approach and has been used in several species for the construction of linkage maps and application in QTL analyses [11]. In barley, a RAD linkage map was recently produced in a doubled haploid population and used for QTL analysis [12]. Elshire and colleagues [13] subsequently described a similar but more straightforward method of genotyping by sequencing (GBS) which works effectively in 96-well (or higher) plate assays. GBS was originally developed for high-resolution association studies in maize [14] and, like RAD, has been extended to a range of species with complex genomes. A two-enzyme GBS protocol has now been developed that produces a uniform library for sequencing and has been applied to both wheat and barley [15]. This GBS approach has been shown to be suited to genetic analysis of rapeseed, lupin, lettuce, switchgrass, soybean, and maize [16-20].
In this report, our biological objective was to identify at high resolution the genetic location of the ari-e.GP semi-dwarfing gene of cultivated barley. However, as a sequence assembly of the barley genome has just been published [21,22], we also wanted to use a sequence-based genetic marker methodology that would in principle allow us to link directly to the genome sequence assemblies and physical map, ultimately as a shortcut to facilitate the identification of the Ari-e gene. We therefore chose to explore use of the two-enzyme based GBS method, using digestion of genomic DNA with a six-base methylation sensitive 'rare-cutter' and a four-base 'common cutter' enzyme. We combined this with the Illumina NGS platform and developed a downstream informatics pipeline to discover co-dominant (SNPs) in an F 11 single-seed descent mapping population from a Golden Promise (GP) by Morex (Mx) cross. In contrast to GP, Mx is a tall spring six-rowed North American barley variety with desirable malting and brewing characteristics. Most importantly, Mx is the reference cultivar used in the barley genome sequencing efforts. Our genetic analysis using GBS data from the recombinant inbred (RIL) population confirmed the location of ari-e.GP on barley chromosome 5H. In the process we discovered 1,949 high-confidence SNPs that we could associate with contigs in the NGS sequence assemblies and physical map.
Results and discussion
Golden Promise by Morex population (GPMx) and variation in plant height
A recombinant inbred line (RIL) population of 160 F11 single-seed descent lines from a GP by Mx cross was developed over seven years, from 2003 to 2012, at the James Hutton Institute. The 136 F11 RILs used in this study comprised 56 two-rowed and 80 six-rowed accessions.
The population segregates quantitatively for height, as shown in Figure 1, with heights of different RILs varying from 60 to 130 cm. Asymmetric transgressive segregation of plant height across the GPMx population can be observed: there were around 40 lines taller than Morex, but only about 10 shorter than Golden Promise. The Pearson correlation between the heights in the two years was 0.887. Two lines showed a marked difference in height between the two years.
Generation of PstI reference sequences from barley genome assemblies
To facilitate genetic analysis by GBS we first extracted a set of 64 bp reference sequences flanking all predicted PstI restriction sites from barley genome assemblies of the cultivars Morex (genome coverage: 53X [22]), Bowman (26X) and Barke (20X) using the 'restrict' program from the EMBOSS suite of tools (see Methods). For cultivar Morex, 343,854 restriction sites were identified, yielding a total of 633,331 GBS reference sequences present on 251,433 unique Morex genome assembly contigs. Of all the identified sites, 54,377 flanking sequences had to be excluded because the restriction site was too close to the start or end of an assembled genomic contig and therefore extraction of the full 64 bp sequence was impossible. Extraction of additional sequences unique to the genome assemblies of cultivars Bowman and Barke yielded a further 71,519 and 97,764 sequences, respectively. After removal of chloroplast (cp) sequences a total of 802,046 reference sequences remained that were subsequently used for read mapping. More than half (54%) of the 64 bp reference sequences stemmed from Morex contigs that contain regions of homology to full-length cDNAs or expressed genes, previously mapped genetic markers (cM) or sequences that have chromosome arm assignments based on survey sequencing of flow-sorted chromosome arms [23].
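A minimal stand-in for the extraction step described above, assuming only that the PstI recognition site is CTGCAG and that 64 bp flanks are kept when they fit entirely within a contig; the authors used EMBOSS 'restrict', and this sketch ignores cut-site offsets within the recognition sequence.

```python
# Sketch: scan a contig for PstI sites and keep full 64 bp flanks,
# discarding sites too close to a contig end (as the paper describes).

PSTI = "CTGCAG"
FLANK = 64

def flanking_refs(contig_id: str, seq: str):
    refs = []
    pos = seq.find(PSTI)
    while pos != -1:
        left_start, right_end = pos - FLANK, pos + len(PSTI) + FLANK
        if left_start >= 0 and right_end <= len(seq):  # full flanks only
            refs.append((contig_id, pos, seq[left_start:pos],
                         seq[pos + len(PSTI):right_end]))
        pos = seq.find(PSTI, pos + 1)
    return refs

refs = flanking_refs("morex_contig_X", "A" * 70 + "CTGCAG" + "G" * 70)
print(len(refs), "site(s) with full 64 bp flanks")
```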
GBS reads of GPMx
We generated three 48-plex GPMx GBS libraries (GPMx_1-3) representing all 136 progenies and the parents, which were repeatedly represented in each library for QC purposes. We used PstI combined with MseI to digest genomic DNA, with the PstI overhang sequence located in the barcode adapter adjacent to the barcode sequence, and the MseI overhang sequence located in common Y-adapter [15] (barcode sequences in Additional file 1: Table S1). Single-end sequencing starting from the barcoded adapter was performed using Illumina chemistry. Pilot sequencing the GPMx_1 library on an Illumina GAII platform generated more than 61 million single-end reads of 72 bp in length. Of these, over 58 M reads were categorised as having a correct barcode and PstI overhang sequence (from here we call this proportion of the sequences 'categorised reads'). Further sequencing of all three GPMx population libraries, each on one lane of an Illumina HiSeq2000, generated a total of 622 M reads of 100 bp and more than 482 M remaining as categorised reads (see criteria below). The average number of categorised reads obtained per lane was 28.5 M on the Illumina GA II and 160 M on the Illumina HiSeq2000. By applying various filtering criteria (i.e. presence of accurate barcode and complete PstI overhang sequence, and no undetermined nucleotides (Ns) in the reads), the percentages of categorised reads were 93.4% (Illumina GA II) and 77.5% (Illumina Hiseq2000). After deconvolution the distribution of the number of reads per sample ranged from 520,427 to 6,554,933 (Additional file 2: Table S2). The read distribution was relatively even across the population, with only 3 of the 138 lines having less than a million reads. For the parents, we obtained 8.2 M reads from Golden Promise and 11.5 M from Morex, due to repeat sequencing. All sequence reads generated from GPMx were submitted to the Sequence Read Archive section of the European Nucleotide Archive (ENA) (submission: ERP002594 Genotyping by sequencing of a barley mapping population).
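The read-categorisation criteria stated above (correct barcode, complete PstI overhang, no undetermined nucleotides) can be sketched as a simple prefix filter; the barcode set and overhang string below are illustrative, not the actual adapter design.

```python
# Sketch of read categorisation and deconvolution by barcode + overhang.
BARCODES = {"ACGT": "sample_01", "TGCA": "sample_02"}  # placeholder set
OVERHANG = "TGCAG"  # residual PstI overhang expected after the barcode

def categorise(read: str):
    """Return (sample, trimmed_read), or None if the read fails the filters."""
    if "N" in read:
        return None  # reject reads with undetermined bases
    for bc, sample in BARCODES.items():
        if read.startswith(bc + OVERHANG):
            return sample, read[len(bc):]  # drop barcode; keep overhang + insert
    return None

print(categorise("ACGTTGCAGGATTACA"))  # -> ('sample_01', 'TGCAGGATTACA')
print(categorise("NNNNTGCAGGATTACA"))  # -> None (contains Ns)
```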
Co-dominant markers from the GPMx population datasets
In total, 461 M categorized reads from the GPMx mapping population were mapped to the 64 bp reference sequences using the Bowtie mapping tool [24]. In order to reduce the number of false positive SNPs during downstream analysis, only a single mismatch per read was allowed, and only uniquely mapped reads were included, leaving 46% of 461 M reads mapped to the reference. These categorized reads were then evaluated for single base-pair differences across the population. We removed all dominant markers from the dataset because of our inability to distinguish null alleles from missing data. Using these highly conservative criteria, we identified an initial set of 1,949 co-dominant SNPs with robust allele calls across the population.
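A hedged sketch of the two mapping filters described (at most one mismatch, uniquely mapped reads only), applied to Bowtie output in BAM form with pysam; the authors' actual pipeline is not shown here, and the presence of NM tags is assumed.

```python
# Sketch: keep alignments with <=1 mismatch, then enforce uniqueness
# by counting how many places each read name aligned.
import pysam
from collections import Counter

def stringently_mapped(bam_path: str, max_mismatches: int = 1):
    hits = Counter()
    candidates = []
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for aln in bam:
            if aln.is_unmapped or not aln.has_tag("NM"):
                continue
            if aln.get_tag("NM") > max_mismatches:
                continue
            hits[aln.query_name] += 1
            candidates.append((aln.query_name, aln.reference_name))
    # keep only reads that mapped to exactly one reference location
    return [c for c in candidates if hits[c[0]] == 1]
```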
Linkage mapping of GPMx population
The 1,949 codominant SNPs were analysed using JoinMap. They were first checked for identical pairs based on segregation data across the population, and on this basis 267 were excluded, always dropping the SNP marker with the lower quality score in each identical pair. A further 291 SNPs were excluded as they had more than 20% missing values. The remaining 1,391 high confidence SNPs were clustered into seven linkage groups with the number of markers per group ranging from 109 to 270, with nine remaining isolated at a LOD of six. These linkage groups were ordered using JoinMap's maximum likelihood mapping algorithm. SNPs with a poor fit to the neighbouring SNPs were excluded and the linkage analysis was rerun, leaving a total of 1,332 unique high quality SNPs incorporated into the map. The numbers of co-dominant GBS SNPs are presented in Table 1.
Location of SIX ROWED SPIKE 1
A major developmental gene, SIX ROWED SPIKE 1 (VRS1), segregates in the GPMx population. The VRS1 gene has previously been identified [25] and profoundly affects barley spike morphology, but its effect on other plant traits such as height in GPMx is not known. Barley plants carrying a recessive vrs1 allele (e.g. vrs1.a Morex) develop spikes containing six rows of grain in contrast to the ancestral wild type spike which develops only two rows (e.g. Vrs1.b Golden Promise). Alternative alleles at VRS1 also influence the number of tillers that develop on a plant and could, as suggested previously, affect plant height, and this would influence our subsequent analysis of ari-e.GP. The most significant associations between row type and SNPs were with MR_2568613P909R13 and MR_57812P2860R48. Both mapped to 80.5 cM on chromosome 2H ( Figure 2A). All 80 six-rowed lines had the same genotype as Morex, while the 56 two-rowed lines had the same genotype as Golden Promise (i.e. there were no recombinants between these markers and VRS1).
A major plant height QTL overlaps with the Breviaristatum-e (Ari-e) locus
We mapped plant height as a quantitative trait for each year separately using the GPMx GBS linkage map. A permutation test with 1,000 permutations had a 95th percentile of 3.0 for each year's height data, and this was used as a genome-wide LOD threshold. This resulted in the identification of three significant plant height QTLs on chromosomes 2H, 3H and 5H (Figure 2). The major plant height QTL was located on chromosome 5H (Figure 2C). For each year's height data, the SNP most closely associated with height was MR_47526P1793R57 at 29.7 cM on chromosome 5H. This SNP explained 55.2% of the variance in height in 2009, with the 'bb' genotype having a mean height 27.5 (SE 2.1) cm higher than the 'aa' genotype, and 61.6% of the variance in height in 2010, with the 'bb' genotype having a mean height 24.8 (SE 1.7) cm higher than the 'aa' genotype.
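The permutation procedure described above can be sketched as follows: shuffle the phenotype, record the maximum single-marker LOD each time, and take the 95th percentile of the 1,000 maxima. The genotype and height arrays below are random placeholders standing in for the GBS map and field data, and the LOD here is computed from the squared marker-trait correlation under a single-QTL regression model.

```python
# Sketch of a genome-wide LOD threshold by phenotype permutation.
import numpy as np

rng = np.random.default_rng(0)
n_lines, n_markers = 136, 1332
genos = rng.integers(0, 2, size=(n_lines, n_markers)).astype(float)  # aa=0, bb=1
height = rng.normal(90.0, 15.0, size=n_lines)                        # cm

def max_lod(y, g):
    yc = y - y.mean()
    gc = g - g.mean(axis=0)
    denom = np.sqrt((yc ** 2).sum() * (gc ** 2).sum(axis=0))
    r = (yc @ gc) / denom                       # marker-trait correlations
    return np.max(-0.5 * len(y) * np.log10(1.0 - r ** 2))  # LOD from r^2

null_max = [max_lod(rng.permutation(height), genos) for _ in range(1000)]
threshold = np.percentile(null_max, 95)
print(f"genome-wide LOD threshold ~ {threshold:.2f}")
```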
Previously, it was shown that Golden Promise carries a mutation in the dwarfing gene known as Breviaristatum-e (Ari-e) [26]. The position of the gene has been roughly estimated as about 30 cM from the SHORT RACHILLA HAIR (srh) locus [27]. Two induced mutant alleles of Ari-e (ari-e.GP (Golden Promise) and ari-e.1 (cv. 'Bonus')) were introgressed as BC6F3 lines into the background of cv. 'Bowman', resulting in lines BW042 and BW043 [28]. Cross-referencing the SNP markers that define the introgressed region in BW043 (ari-e.GP), which has a genetically well-defined introgression, with the barley genome sequence assembly [22] supports ari-e.GP as the gene underlying the plant height QTL identified using the GPMx population. This also supports the early observation of Ari-e being linked to srh [27], as BW873, a nearly isogenic line of cv. Bowman carrying srh, contains an introgressed segment located 10-30 cM distal to the GPMx height QTL [28]. Restricted multiple QTL mapping (rMQM) detected two further QTLs for height, the most significant markers being MR_1631678P782F7 at 51.6 cM on chromosome 3H (for both years) (Figure 2B) and a region near VRS1 (80.5 cM) on 2H (Figure 2A). For the latter, in 2009 the most significant marker was MR_1435185P85F60 at 82.0 cM, while in 2010 the most significant marker was MR_48841P1435F22 at 83.2 cM. Regression analysis (in Genstat) was used to model the joint effects of these three locations on height. There were no significant interactions among the three QTLs, and so an additive regression on SNP MR_47526P1793R57 from 5H, SNP MR_1631678P782F7 from 3H and Vrs1/vrs1 on 2H (for consistency across years) was used. In 2009, these three locations jointly explained 76.6% of the variance in height. For MR_1631678P782F7 on 3H, the 'bb' Morex allele had a mean height 9.7 (se 1.7) cm higher than the 'aa' GP allele, and the vrs1 types (six-row) on 2H had a mean height 10.1 (se 1.7) cm lower than the Vrs1 (two-row) types. In 2010, these three locations jointly explained 77.5% of the variance in height. For MR_1631678P782F7 (3H), the 'bb' allele had a mean height 8.9 (se 1.4) cm higher than the 'aa' allele and the vrs1 types (six-row) (2H) had a mean height 6.8 (se 1.4) cm lower than the Vrs1 (two-row) types. The effect of excluding the two lines with discrepant heights in the two years was investigated, but the QTL locations were unchanged and the differences in the parameter estimates were negligible.
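The joint additive model described (no interactions) is an ordinary regression of height on the three loci; statsmodels can stand in for Genstat here. The data frame is a placeholder, and the 0/1 coding is our assumption for illustration.

```python
# Sketch of an additive three-locus regression on height, as in the
# Genstat analysis described above. All values are invented placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "height":  [72.0, 95.5, 88.0, 110.2, 67.4, 102.3],
    "snp_5h":  [0, 1, 1, 1, 0, 1],   # MR_47526P1793R57: 0 = aa (GP), 1 = bb (Mx)
    "snp_3h":  [0, 0, 1, 1, 0, 1],   # MR_1631678P782F7 coded the same way
    "six_row": [0, 1, 0, 1, 0, 1],   # vrs1 (six-rowed) = 1, Vrs1 (two-rowed) = 0
})

model = smf.ols("height ~ snp_5h + snp_3h + six_row", data=df).fit()
print(model.params)     # additive effect estimates (cm)
print(model.rsquared)   # proportion of height variance explained
```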
What is the resolution of the GPMx GBS map at Ari-e.GP?
To explore the potential of using GBS and the GPMx RIL population as a platform for gene identification, we used the barley genome assembly to determine the gene content surrounding Vrs1, Ari-e and the flanking GBS markers. As the Vrs1 gene is known [25], we investigated the interval containing the two GBS markers that cosegregated with Vrs1 and that fell between markers MR_51408P1352R32 and MR_40453P5185R30. These defined a 1.5 cM interval on the GPMx GBS map. Current information [21,22] indicates this interval corresponds to 4.13 Mb on the barley physical map (Figure 3A). It contains an estimated 52 genes, resulting in a gene density estimate of 12.6 genes/Mb and defining this locus as gene rich (the genome-wide average gene density in barley is 5 genes/Mb).
Unlike VRS1, the identity of Ari-e is not known. The QTL peak for plant height in GPMx (i.e. the ari-e.GP locus) is associated with GBS marker MR_47526P1793R57, which is flanked by MR_335403P1239R45 and MR_1560792P1192F41. These three markers define a 7.2 cM interval on the GPMx GBS map. However, on the barley physical map the marker MR_137133P6361R37 (morex_contig_137133) appears to be positioned erroneously (Figure 3B), making it difficult to estimate the size of the relevant interval. Replacing MR_137133P6361R37 with a more distal marker, the interval defined is 46 Mb and contains an estimated 397 genes. We then investigated a more recent ordering of sequence contigs on the barley genome provided by the POPSEQ methodology [29]. There, the same region corresponds to a 3.3 cM genetic interval in the Barke × Morex population and a 2.1 cM interval in the Oregon Wolfe population, respectively. This region contains over 7,000 anchored sequence contigs spanning a total sequence length of approximately 10 Mb [29]. As the contig sequences represent only a small portion of the physical sequence, Ari-e appears to be located in a relatively low-recombining region. Despite residual recombination within this genetic interval, the lack of detected polymorphism suggests that the original parental haplotypes may be similar in this region, and increased resolution will likely need to be sought in different populations.
Conclusions
We have shown that GBS is an effective approach for the generation of marker-dense genetic maps in cultivated barley. The short sequence tags enabled us to directly anchor the regions containing both VRS1 and ari-e.GP to the recently released integrated genetic and physical sequence assembly of the barley genome, and to crudely define the physical size of the two genetic intervals that we investigated. Our hope was that the flanking markers would ultimately assist us in identifying ari-e.GP; given the resolution we obtained around ari-e.GP, this seems unlikely. Our data also indicate that a region on 2H encompassing the major morphological gene VRS1, which determines row type and tiller number in barley, and an unknown locus on 3H also affect plant height in the GPMx population.
An important practical outcome of this work for us was that we found the GBS data more challenging to handle and subsequently to analyse than the current multiplex SNP assay technology we routinely run in the lab [9]. Indeed, this may discourage some groups from adopting the GBS approach. Nevertheless, as the principal determinant of resolution in genetic studies is a combination of the number of recombinants in the population and the number of genetic markers assayed, we were somewhat surprised (and disappointed) that, with approximately 1,400 informative genetic markers covering a map length of 1,200 cM, the markers most closely linked to ari-e.GP spanned a region of approximately 7 cM. It is possible that the mutant ari-e.GP locus was induced within a local haplotype that is shared with Morex (i.e. the gene lies within a region of identity by descent or state between both parents). Indeed, evidence from the barley 9K iSelect SNP genotyping platform on the parents indicates that GP and Morex probably do share a common haplotype across the proximal short arm and centromeric region of 5H. This would unfortunately result in a markerless gap in the genetic map. Similar gaps have been evident in other high-density genetic maps of barley [9,12,15]. Despite this, the flanking SNP-containing GBS tags will be easy to convert to single-locus markers, and these will be highly valuable for identifying additional recombinants around ari-e.GP and, if pursued further using a map-based approach, ultimately for identifying the gene.
Plant material and DNA samples
The GPMx population was developed from a cross between a two-rowed barley (Golden Promise, ari-e.GP/Vrs1) and a six-rowed barley (Morex, Ari-e/vrs1) at the James Hutton Institute (JHI). DNA was extracted from one-week-old seedling tissue using the DNeasy Plant Mini kit (Qiagen). Three 48-plex GBS libraries were constructed from a set of 138 progenies from the F11 single-seed-descent generation, along with replicated samples of each parent.
Plant growth and phenotyping
Ten seeds harvested from a single F9 generation plant of the GPMx RIL population were planted in soil in a polytunnel in spring 2009. Planting was randomized and plants were grown using automatic watering. Plant height measurements were performed on mature plants prior to harvest. Plant height for each line was determined by selecting the 3-5 longest tillers and measuring the distance from the ground to the top spikelets (excluding awns). Bulked seeds harvested from 3-5 plants of each line of the F10 generation of the GPMx RIL population were planted in the field in spring 2010. Before planting, the TGW (thousand-grain weight) of each sample was determined and used to calculate the weight of seeds to be planted in 1 × 2.5 m plots, so that each plot received about the same number of seeds (a worked example is sketched below). In total, 17 randomly selected lines and the parents were planted as randomized replicates (2-3×). Plant height measurements were performed 3-4 weeks after anthesis following the same procedure as above. Lodged plants were lifted before measuring their height.
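The plot-weight calculation from TGW is simple arithmetic; the sketch below illustrates it, with the target seed count per plot an invented placeholder (the text does not state it).

```python
# Sowing weight from thousand-grain weight (TGW): grams of seed needed
# for a target number of seeds per plot. Target count is hypothetical.
def sowing_weight_g(tgw_g: float, target_seeds: int) -> float:
    """Weight of seed (g) containing `target_seeds` seeds at the given TGW."""
    return tgw_g / 1000 * target_seeds

print(f"{sowing_weight_g(48.2, 300):.1f} g per plot")  # TGW 48.2 g -> 14.5 g
```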
Constructing GBS libraries
GBS libraries were constructed in a similar manner to Poland et al. [15]. Briefly: A set of 48 barcoded adapters (Additional file 1: Table S1) were generated from complementary oligonucleotides (Sigma) with a PstI overhang sequence and unique barcodes of length 4 nt to 8 nt. In addition, a common Y-adapter was generated corresponding to the 5' TA overhang generated by MseI. Top and bottom strand complementary oligonucleotides for each adapter (50 μM) were annealed using the following program: 95°C for 2 min, decrease to 25°C by 0.1°C/s, hold at 25°C for 30 min. Annealed adapters were diluted 1:10 and their concentration measured using PicoGreen. Barcoded adapters were normalised to 2 ng/μl and the common Y-adapter to 40 ng/μl.
Sequencing and processing raw GBS data
Single-end sequencing from the PstI sites was carried out using Illumina GAII and/or HiSeq2000 sequencers: of the three GBS libraries (GPMx_1, GPMx_2 and GPMx_3), GPMx_1 was initially sequenced on two lanes of an Illumina GAII, and subsequently all three GBS libraries were sequenced on one lane each of an Illumina HiSeq2000. All GBS sequences were submitted to the Sequence Read Archive section of the European Nucleotide Archive (ENA) (submission: ERP002594, Genotyping by sequencing of a barley mapping population).
Generation of reference sequences
Reference sequences for the mapping of GBS tags were generated from existing genomic assemblies of the barley cultivars Morex, Bowman and Barke, based on Illumina whole-genome shotgun sequencing. As a first step in the workflow (see Additional file 3: Figure S1 for a diagram of the full workflow), the EMBOSS program restrict (http://emboss.sourceforge.net/) was used to discover PstI restriction sites in the assemblies. Custom-written Java code was then used to extract from the Morex genomic assembly two separate flanking 64 bp sequences extending from the restriction site in the forward and reverse directions. This process was repeated for the other two cultivar assemblies, and the extracted 64 bp sequences were then compared with the sequences generated from the cultivar Morex assembly using the standalone BLASTN program [30] from NCBI (version 2.2.26+). A single hit was obtained per query, and from this we extracted those hits with alignments along the full length of the query sequence, an identity value of less than 100%, and a mismatch number of at least 2. These hits were added to the full set of Morex flanking sequences, thereby providing a global set of reference sequences from the three barley genome assemblies. To further refine the reference sequences, we screened them for chloroplast DNA, which can be a common contaminant in whole-genome shotgun sequencing. This was done by BLASTN, with the combined set of sequences as query against the full barley chloroplast genome sequence (http://www.ncbi.nlm.nih.gov/nuccore/118430366?report=fasta). Hits were filtered to require a sequence identity >= 90% and an alignment length >= 64. We detected 568 chloroplast DNA sequences that were subsequently removed from the reference set.
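A minimal sketch of the flank-extraction step follows. The original used custom Java code, so this Python version is illustrative only; it assumes flanks are taken immediately on either side of each PstI recognition site (CTGCAG) and that sites too close to contig edges are skipped.

```python
# Illustrative flank extraction around PstI sites (CTGCAG); the original
# step used custom Java code. Contig-edge sites yielding short flanks
# are skipped.
import re

PSTI_SITE = "CTGCAG"

def flanking_refs(contig_seq, flank=64):
    """Yield (forward, reverse) flank-length sequences around each PstI site."""
    seq = contig_seq.upper()
    for m in re.finditer(PSTI_SITE, seq):
        fwd = seq[max(0, m.start() - flank):m.start()]   # upstream flank
        rev = seq[m.end():m.end() + flank]               # downstream flank
        if len(fwd) == flank and len(rev) == flank:
            yield fwd, rev

demo = "A" * 70 + PSTI_SITE + "G" * 70
for f, r in flanking_refs(demo):
    print(len(f), len(r))   # 64 64
```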
Read mapping
Prior to mapping, the raw Illumina reads were assigned to their respective samples ('deconvoluted') based on the sample-specific barcodes included in the sequence. Barcode lengths varied between 4 and 8 bases, so custom-written Java code was used for deconvolution; this code also removed the barcodes after assigning each read to a sample, which is a requirement for successful mapping of the read to a reference sequence. Reads that started with the PstI overhang sequence (TGCAG) after barcode removal were accepted, quality-trimmed to remove bases of quality Phred < 20 from the 3'-end (distal to the PstI site), and then shortened from the 3'-end to a standard length of 64 bases. Reads that were shorter than this after quality trimming were discarded.
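The read-processing rules above translate directly into code. The sketch below is a hypothetical re-implementation (the authors used custom Java); the barcode set and numeric quality values are assumed to be already decoded.

```python
# Hypothetical re-implementation of the read-processing rules described
# above (the authors used custom Java). Qualities are given as decoded
# Phred integers.
OVERHANG = "TGCAG"   # PstI overhang expected after barcode removal

def process_read(seq, quals, barcodes, min_q=20, length=64):
    """Return the trimmed 64 b read, or None if the read is discarded."""
    for bc in sorted(barcodes, key=len, reverse=True):   # longest barcode first
        if seq.startswith(bc):
            seq, quals = seq[len(bc):], quals[len(bc):]
            break
    else:
        return None                                      # no barcode match
    if not seq.startswith(OVERHANG):
        return None                                      # overhang check
    while quals and quals[-1] < min_q:                   # 3' quality trim
        seq, quals = seq[:-1], quals[:-1]
    return seq[:length] if len(seq) >= length else None

read = "ACGT" + OVERHANG + "A" * 70                      # barcode ACGT + insert
print(process_read(read, [30] * len(read), {"ACGT", "TTGACC"}))
```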
Reads were then mapped to the 64 bp reference sequences using the Bowtie mapping tool (version 0.12.7 [24]). To avoid cross-mapping of reads between similar sequences, the "--best --strata" switches were used, which ensure that multi-mapped reads are only mapped to the location with the fewest mismatches. In order to reduce the number of false-positive SNPs during downstream analysis, only a single mismatch per read was allowed ("-v 1"), and only uniquely mapped reads were retained ("-m 1").
SNP discovery and genotype calling
We used the FreeBayes software [31] to discover single nucleotide polymorphisms (SNPs), as well as custom Java code for converting the resulting VCF file into a human-readable text file. Within FreeBayes, the SNPs were filtered to retain those where the minimum number of reads with the alternative allele was greater than 3, which provided a total of 57,328 SNPs. We then applied the following filters: the minimum fraction of reads with the alternative allele for a SNP should be greater than or equal to 0.1; the percentage difference between the base qualities for the reference and alternative alleles should be less than or equal to 5; and the SNP quality score should be greater than or equal to 20. This procedure yielded 18,251 SNPs. Then, within Excel, further filters were applied: we required a total read coverage of greater than or equal to 700 (i.e. a mean of at least 5 reads for each sample in the population), which left 3,246 SNPs; a percentage of heterozygous samples of less than or equal to 2%, which left 1,985 SNPs; and a ratio of the alternative allele to the reference allele of greater than or equal to 0.5, which left 1,968 SNPs.
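For clarity, the complete filter cascade can be expressed as a single predicate. The sketch below is illustrative only: the field names are hypothetical stand-ins for values parsed from the FreeBayes VCF, and the authors actually applied the filters stepwise in FreeBayes and Excel.

```python
# Illustrative combined predicate for the SNP filter cascade; field
# names are hypothetical stand-ins for values parsed from the FreeBayes
# VCF (the actual filtering was done stepwise in FreeBayes and Excel).
def passes_filters(snp):
    return (snp["alt_reads"] > 3                 # FreeBayes pre-filter
            and snp["alt_fraction"] >= 0.1
            and snp["base_qual_diff_pct"] <= 5   # ref vs alt base quality
            and snp["qual"] >= 20
            and snp["total_depth"] >= 700        # ~5 reads per sample
            and snp["het_pct"] <= 2
            and snp["alt_ref_ratio"] >= 0.5)

example = {"alt_reads": 40, "alt_fraction": 0.45, "base_qual_diff_pct": 1,
           "qual": 300, "total_depth": 900, "het_pct": 1,
           "alt_ref_ratio": 0.8}
print(passes_filters(example))   # True
```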
Genotypes were then called based on the proportion of reads carrying the reference allele. We called a sample homozygous for the reference allele if the proportion was greater than 0.8, homozygous for the alternative allele if the proportion was less than 0.2, and heterozygous if the proportion was between 0.2 and 0.8. Samples with fewer than three reads if designated homozygous, or with fewer than six reads if designated heterozygous, were recoded as missing. Nineteen SNPs had a missing genotype for one of the parents, and these were also excluded, leaving 1,949 SNPs for linkage mapping. Visual inspection of both mappings and SNPs was carried out using the Tablet software [32].
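The calling rule lends itself to a compact function; the thresholds below are taken directly from the text, while the function itself is an illustrative reconstruction rather than the authors' code.

```python
# Reconstruction of the genotype-calling rule; thresholds are those
# stated in the text, the function itself is illustrative.
def call_genotype(ref_reads, alt_reads):
    total = ref_reads + alt_reads
    if total == 0:
        return "missing"
    p_ref = ref_reads / total
    if p_ref > 0.8:
        call, min_reads = "hom_ref", 3
    elif p_ref < 0.2:
        call, min_reads = "hom_alt", 3
    else:
        call, min_reads = "het", 6
    return call if total >= min_reads else "missing"

for reads in [(10, 0), (2, 0), (1, 9), (4, 4), (3, 3)]:
    print(reads, call_genotype(*reads))
# (2, 0) -> missing (homozygous call on <3 reads); (4, 4) -> het
```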
Linkage mapping
The SNP data were sorted by decreasing quality score before analysis with JoinMap [33]. This ensured that when co-segregating SNPs were excluded, the lower-quality SNPs were preferentially dropped. SNPs with greater than 20% missing values were also excluded from the JoinMap analysis. SNPs were grouped using the independence LOD score, and then ordered within each linkage group using the maximum likelihood algorithm. The GBS tags were mapped to reference sequences generated from the Morex, Bowman and Barke WGS shotgun assemblies. Those from Morex contain previously published anchored genetic/physical markers, which we assumed to be correct. We define these as anchoring markers on the genetic linkage groups. Additional file 4: Table S3 provides a list of 1,332 unique co-dominant GBS markers used for map construction, ordered according to their map location in the GPMx population. It highlights 403 genetically redundant markers, the correspondence of all GBS tags to expressed genes (MLOCs), and their genetic position on the IBSC consensus map (IBSC, 2012).
QTL mapping
QTL interval mapping was used to locate QTLs for the 2009 and 2010 height data separately, using MapQTL [34]. A permutation test with 1,000 permutations was used to establish the LOD threshold. Restricted multiple QTL mapping (rMQM) was then used to detect further QTLs.
"year": 2014,
"sha1": "1bcdd8916e65b620eb0483e7a5170b155935e577",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-15-104",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc010b7c9b5d4cc3e6b84327102cc844220a7962",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
LC-ESI-MS/MS Identification of Biologically Active Phenolics in Different Extracts of Alchemilla acutiloba Opiz
Liquid chromatography electrospray ionization tandem mass spectrometric (LC-ESI-MS/MS) qualitative and quantitative analysis of different extracts from the aerial parts and roots of Alchemilla acutiloba led to the identification of phenolic acids and flavonoids. To the best of our knowledge, isorhamnetin 3-glucoside, kaempferol 3-rutinoside, narcissoside, naringenin 7-glucoside, 3-O-methylquercetin, naringenin, eriodictyol, rhamnetin, and isorhamnetin were described for the first time in the Alchemilla genus. In addition, the antioxidant, anti-inflammatory and cytotoxic activities of all extracts were evaluated. The results clearly showed that, among the analyzed extracts, the butanol extract of the aerial parts exhibited the highest biological activity, comparable with the positive controls used.
Introduction
The genus Alchemilla includes a large number of taxa in the Polish flora that are not easy to identify. Alchemilla species appear very similar to each other but, beyond this morphological resemblance, they differ remarkably in terms of ecology, phytosociology and geography.
Alchemilla herb has long been used in European and Asian folk medicine in diseases caused by poor metabolism, as well as for the treatment of eczema, wounds, ulcers, and gynaecological problems [1][2][3]. Due to its relatively high tannin content, the Alchemilla herb has an effect characteristic of this group of compounds. After oral administration of the extract, diarrhea is reduced or inhibited [4]. The astringent properties of tannins impede the penetration of fluid into the intestinal lumen, as well as stop bleeding from damaged capillaries, reduce inflammation of the mucous membranes of the gastrointestinal tract, and inhibit the growth of pathogenic bacterial strains [4]. Alchemilla herb extract also acts on the skin and connective tissue, especially in damaged areas, such as scars after wounds and burns. Some time after application, partial regeneration of the capillaries is observed, the spots and traits on the skin slowly disappear, and the normal immunity and elasticity of the epidermis are restored in these places [3,4].
Alchemilla herb extracts have been used in chronic gastric and intestinal disorders with symptoms of abdominal pain, loss of appetite, vomiting, and diarrhea. The herb is used externally for skin damage and inflammation (compresses), vulvar itching, vaginal discharge and vaginitis (irrigations), and mild diseases of the mouth and throat. In addition, an infusion or decoction of the herb of Alchemilla is used in the form of compresses and washes in conjunctivitis [4,5].
Earlier studies of the chemical composition of Alchemilla usually concerned the broadly understood species of Alchemilla vulgaris. Previous findings showed that Alchemilla species comprise mostly phenolic compounds such as tannins, phenolic acids, flavonoids [6][7][8][9], as well as essential oils [10] and fatty acids [11,12].
Extraction is one of the most important steps in obtaining active compounds from medicinal plants. The biological quality of plant-derived extracts is based on the contents of the active compounds, and different solvents extract different amounts of active compounds owing to their distinct affinities for specific groups of compounds [13].
To the best of our knowledge, there are no such studies of Alchemilla acutiloba. Due to the importance of Alchemilla species in traditional medicine, the purpose of the present research was to evaluate the biological properties of different extracts of the aerial parts and roots of A. acutiloba and conduct their qualitative and quantitative analysis using LC-ESI-MS/MS.
Phytochemical Analysis
Total phenolic content (TPC) for A. acutiloba extracts and subfractions was estimated using the Folin-Ciocalteu reagent, and the results were expressed as gallic acid equivalents (GAE) per g of dry extract (DE) (Table 1). The results showed that the aerial parts (AH) had a higher phenolic content (279.82 ± 1.52 mg GAE/g DE) than the roots (AR) (192.54 ± 0.49 mg GAE/g DE). The results obtained in our study were higher than the data presented in previous research on the Alchemilla genus. For instance, Vitkova et al. [6] demonstrated that TPC for ethanolic extracts of the aerial parts of thirteen Alchemilla species belonging to different sections varied from 33.75 to 82.71 mg GAE/g of dry extract. Neagu and co-authors [14] also found lower amounts of polyphenols in aqueous and 70% ethanolic extracts of the herb of A. vulgaris (94.66 and 112.33 µg GAE/mL of extract). On the other hand, Boroja et al. [15] found 558.19 mg GAE/g of dry extract of total phenolic compounds in a methanolic extract of the aerial parts of A. vulgaris and 442.32 mg GAE/g of dry extract in an extract of the roots, and these values are almost twofold greater than our results. The content of polyphenols was also examined in fractions obtained after fractionation of the crude extracts between solvents of different polarity. The results showed that the TPC level was in the range of 93.15 ± 0.95 (ethyl acetate fraction from the roots) to 215.61 ± 2.10 mg GAE/g DE (butanol fraction from the aerial parts). Karatoprak et al. [16], who also studied the amounts of phenolics in different fractions of A. mollis, obtained lower values, with the highest TPC level for the butanol fraction (63.50 mg GAE/g extract).
Thus, the results obtained in our study indicated that the aerial parts of A. acutiloba are a rich source of polyphenols.
The total flavonoid content (TFC) was also studied using the previously described colorimetric method [17]. The data were expressed as quercetin equivalents (QE) per g of dry extract (DE). The results presented in Table 1 showed that the highest TFC was noted for the ethyl acetate fraction and the crude extract of the aerial parts (189.25 ± 0.95 and 113.79 ± 1.09 mg QE/g DE, respectively). The lowest content was noted for the diethyl ether fractions of the aerial parts and roots (1.57 ± 0.02 and 1.92 ± 0.01 mg QE/g DE, respectively). The results obtained in our study were higher than those described by Karatoprak and co-authors [16]. In their research on the aerial parts of A. mollis, quantitative estimation revealed that the methanol extract possessed a flavonoid content of 50.63 ± 0.59 mg CA/g DE, followed by the water extract (47.34 ± 0.39 mg CA/g DE) and 70% methanol (34.63 ± 0.59 mg CA/g DE). Flavonoids were not found in the hexane and ethyl acetate fractions. A lower content of flavonoids was also found in the aerial parts and roots of A. vulgaris (13.30 ± 1.69 and 19.80 ± 0.35 mg RUEs/g, respectively) [15].
The total phenolic acid content (TPAC) of the A. acutiloba extracts and fractions is presented in Table 1. The amounts of TPAC found were relatively low. Higher contents were noted for the butanol fraction and the crude extract of the aerial parts (72.19 ± 0.39 and 39.15 ± 0.18 mg CAE/g DE, respectively). TPAC was not detected in the diethyl ether fraction of the aerial parts of A. acutiloba. Boroja et al. [15] found that methanolic extracts of the aerial parts and roots of A. vulgaris contain 33.43 and 57.36 mg CAE/g DE, respectively.
Qualitative and Quantitative Analysis
The next stage of our study was qualitative and quantitative analysis of flavonoids and phenolic acids in the crude extracts and fractions from the aerial parts and roots of A. acutiloba. The optimized LC-ESI-MS/MS procedure allowed us to identify the largest amount of phenolic compounds in the butanol extract from the aerial parts (20) The content of polyphenols was also examined in fractions obtained after fractionating of crude extracts between solvents of different polarity. The results showed that the TPC level was in the range of 93.15 ± 0.95 (ethyl acetate fraction from the roots) to 215.61 ± 2.10 mg GAE/g DE (butanol fraction from the aerial parts). Karatoprak et al. [16], who also studied amounts of phenolics in different fractions of A. mollis, obtained lower values with the highest TPC level for butanol fraction (63.50 mg GAE/g extract).
Thus, the results obtained in our study indicated that the aerial parts of A. acutioloba are a rich source of polyphenols.
The total flavonoid content (TFC) was also studied using the previously described colorimetric method [17]. The data were expressed as quercetin equivalents (QE) per g of dry extracts (DE). The results presented in Table 1 showed that the higher content of TFC was noticed for ethyl acetate fraction and crude extracts of the aerial parts (189.25 ± 0.95 and 113.79 ± 1.09 mg QE/g DE, respectively). The lowest content was noted for diethyl ether fractions of the aerial parts and roots (1.57 ± 0.02 and 1.92 ± 0.01 mg QE/g DE, respectively). The results obtained in our study were higher than those described by Karatoprak and co-authors [16]. In their research of the aerial parts of A. mollis, quantitative estimation revealed that methanol extract possessed 50.63 ± 0.59 mg CA/g DE flavonoid content followed by water extract-47.34 ± 0.39 mg CA/g DE, and 70% methanol-34.63 ± 0.59 mg CA/g DE. Flavonoids were not found in the hexane and ethyl acetate fractions. Also, a lower content of flavonoids was found in the aerial parts and roots of A. vulgaris (13.30 ± 1.69 and 19.80 ± 0.35 mg RUEs/g, respectively) [15].
The total phenolic acids content (TPAC) in A. acutiloba extracts and fractions were presented in Table 1. The amounts of TPAC found were relatively low. A higher content was noted for butanol fraction and crude extract of the aerial parts (72.19 ± 0.39 and 39.15 ± 0.18 mg CAE/g DE, respectively). TPAC was not found in the diethyl ether fraction of the aerial parts of A. acutiloba. Boroja et al. [15] found that methanolic extracts of the aerial parts and roots of A. vulgaris contain 33.43 and 57.36 mg CAE/g DE, respectively.
Qualitative and Quantitative Analysis
The next stage of our study was the qualitative and quantitative analysis of flavonoids and phenolic acids in the crude extracts and fractions from the aerial parts and roots of A. acutiloba. The optimized LC-ESI-MS/MS procedure allowed us to identify the largest number of phenolic compounds (20) in the butanol extract from the aerial parts.
The concentrations of all individual compounds, quantified by comparison of peak areas with the calibration curves obtained for the corresponding standards, are shown in Table 2. Protocatechuic acid, rosmarinic acid, quercitrin, isoquercitrin, kaempferol 3-rutinoside and narcissoside were detected in quantifiable amounts in all studied extracts. The p-coumaric, caffeic, vanillic and 4-hydroxybenzoic acid contents of the butanol extract of the aerial parts (AH-B) (187.6 ± 0.2, 78.9 ± 0.2, 76.8 ± 0.4 and 70.4 ± 0.2 µg/g DE, respectively) were found to be much higher than those of the other extracts. Caffeic and gentisic acids were previously found in the aerial parts of A. mollis [16]. The content of caffeic acid in a 70% methanolic extract of A. mollis (40 µg/g DE) was higher than in our study for the 60% methanolic extract (23.50 µg/g DE). On the other hand, Nikolova et al. [18] found that caffeic, protocatechuic, gentisic, and salicylic acids occurred in the greatest quantity among the identified free phenolic acids in the extracts of A. jumrukczalica and A. vulgaris, but the amounts of these acids were lower than those obtained in our study. Ferulic acid was detected in quantifiable amounts only in the AH-B extract (15.9 ± 0.1 µg/g DE), and gallic acid only in the crude extracts from the aerial parts and roots (5.89 ± 0.05 and 2.45 ± 0.1 µg/g DE). The amount of identified flavonoid aglycones was relatively small; the highest content of these compounds was also observed in the AH-B extract (26.2 µg/g DE). Among the obtained extracts, it was found that the ethyl acetate (AH-O) and methanol (AH) extracts of the aerial parts of A. acutiloba have the highest total flavonoid glycoside contents (1291.7 and 1033.8 µg/g DE, respectively). Narcissoside, kaempferol 3-rutinoside and quercitrin were the most abundant glycosides in all extracts studied. High amounts of rutin were observed in the crude extract and the ethyl acetate fraction of the aerial parts (235.00 ± 3.54 and 332.50 ± 1.18 µg/g DE, respectively). Karatoprak et al. [16] also found great amounts of rutin in different extracts of the aerial parts of A. mollis; in particular, the butanol fraction and the 70% methanolic extract were rich in rutin (840 and 720 µg/g DE, respectively). To the best of our knowledge, isorhamnetin 3-glucoside, kaempferol 3-rutinoside, narcissoside, naringenin 7-glucoside, 3-O-methylquercetin, naringenin, eriodictyol, rhamnetin, and isorhamnetin were described for the first time in the Alchemilla genus.
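External-standard quantification of this kind follows a simple pattern: a linear calibration curve is fitted to the peak areas of a standard at known concentrations, and sample peak areas are interpolated against it. The sketch below is illustrative only, with invented calibration data.

```python
# Illustrative external-standard quantification: a linear calibration
# curve fitted to standard peak areas, then a sample area interpolated.
# All numbers are invented.
import numpy as np

std_conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])           # µg/mL
std_area = np.array([1.1e4, 2.2e4, 5.4e4, 1.1e5, 2.2e5])      # peak areas

slope, intercept = np.polyfit(std_conc, std_area, 1)
sample_area = 7.6e4
sample_conc = (sample_area - intercept) / slope
print(f"sample concentration = {sample_conc:.1f} µg/mL")
```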
Antioxidant Activities
The antioxidant activities of the various extracts of the aerial parts and roots of A. acutiloba were determined employing the 2,2-diphenyl-1-picrylhydrazyl (DPPH•) and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS•+) radical scavenging assays. It was found that the radical scavenging activity depended on concentration. At a dose of 50.0 µg/mL, the DPPH• scavenging abilities were highest for the ethyl acetate (94.85 ± 0.30%) and butanol (87.31 ± 0.17%) fractions of the aerial parts, and in the ABTS•+ assay for AH-B (80.56 ± 0.17%). Our results can be compared to the radical scavenging activity of A. jumrukczalica and the A. vulgaris complex (a commercial herbal mixture of Alchemilla species) analyzed by Nikolova et al. [18]. Those extracts showed significant antiradical activity, similar to that obtained in our study, with IC50 values of 12.09 and 19.62 µg/mL for A. jumrukczalica and the A. vulgaris complex, respectively. Approximately ten times higher IC50 values in the DPPH• test for various A. mollis extracts were obtained by Karatoprak et al. [16]; the most active were the 70% methanolic and water extracts, with IC50 values of 0.21 and 0.24 mg/mL, respectively. Boroja et al. [15] also studied the antioxidant capacity of the methanolic extracts from the aerial parts and roots of A. vulgaris with the DPPH test. They found that these extracts possessed a significant scavenging effect, with IC50 values of 5.96 and 11.86 µg/mL, respectively. Quite similar results were also obtained in the ABTS•+ assay (IC50 = 14.80 and 32.49 µg/mL, respectively). Among the analyzed extracts, the butanol fraction of the aerial parts of A. acutiloba was also the most active in interfering with the formation of iron-ferrozine complexes (IC50 = 11.43 ± 0.18 µg/mL of DE), which suggests a high chelating capacity and the ability to capture iron ions before ferrozine. The chelating activity of the AH-B fraction was similar to that of the standard used (Na2EDTA·2H2O, IC50 = 9.45 ± 0.03 µg/mL). The IC50 values of the antioxidant capacities are presented in Table 3.
Anti-Inflammatory Activity
In our study, the inhibitory activities of A. acutiloba extracts against the COX-1 and COX-2 enzymes were investigated, for what we believe is the first time, as a mechanism of their anti-inflammatory action. The extracts were studied at two concentrations: 50 and 100 µg/mL. Indomethacin at a concentration of 5 µM was used as a positive control. The results of the inhibition of both cyclooxygenases were recorded as the percentage inhibition of prostaglandin biosynthesis and are presented in Table 4. According to Eldeen et al. [19], a minimum inhibition of 50% is needed for plant extracts to be considered active.
Our results revealed that at a concentration of 50 µg/mL, the AH-B, AR-B and AH-O extracts were capable of inhibiting the activity of the COX-1 enzyme by 76.82%, 63.17% and 52.65%, respectively, whereas the inhibition of COX-2 was higher (79.75%, 72.64%, and 60.23%, respectively). At a concentration of 100 µg/mL, the most active against COX-1 was also AH-B (83.14%), followed by AR-B (78.29%) and AH-O (74.50%), while the weakest were the diethyl ether fractions from the aerial parts and roots, AH-E (10.43%) and AR-E (28.71%). Except for AH-E and AR-E, all extracts at the 100 µg/mL concentration showed good activity against COX-2 (54.25-95.10%). Therefore, our results can be of importance for further research on A. acutiloba as a potential anti-inflammatory treatment. Boroja and co-authors [15] evaluated the anti-inflammatory activity of the aerial parts and roots of A. vulgaris using COX-1 and COX-2 assays and an assay for the determination of COX-2 gene expression. Their research revealed a preferential COX-2 inhibitory activity of the methanolic extract from the aerial parts of A. vulgaris. Trouillas et al. [20] also examined the anti-inflammatory activity of A. vulgaris via the inhibition of 15-lipoxygenase activity, and their results showed that the anti-inflammatory effect of A. vulgaris can be related to the inhibitory activity of phenolics on arachidonic acid metabolism through the lipoxygenase pathway. Abbreviations: AH = 60% methanolic extract of the aerial parts; AR = 60% methanolic extract of the roots; AH-B = butanol fraction from the aerial parts; AR-B = butanol fraction from the roots; AH-O = ethyl acetate fraction from the aerial parts; AR-O = ethyl acetate fraction from the roots; AH-E = diethyl ether fraction from the aerial parts; AR-E = diethyl ether fraction from the roots. Indomethacin (5 µM): 78.65 ± 1.28% (COX-1); 91.07 ± 2.45% (COX-2).
Evaluation of Cytotoxicity
The MTT assay indicated that the tested extracts obtained from the aerial parts and roots of A. acutiloba possessed different cytotoxicities (Figure 3). In the case of those from the aerial parts, it was demonstrated that the butanol fraction (AH-B) was not cytotoxic towards GMK cells, as its CC50 value was equal to approx. 1000 µg/mL. The other extracts from the aerial parts were more cytotoxic towards GMK cells: the CC50 values for AH, AH-O, and AH-E were approx. 360 µg/mL, 270 µg/mL, and 274 µg/mL, respectively. Interestingly, all fractions from the roots (AR-B, AR-O, and AR-E) were less cytotoxic than the initial extract (AR). Thus, the CC50 values for AR-B, AR-O, and AR-E were approx. 1000 µg/mL, 1000 µg/mL, and 235 µg/mL, while the CC50 value for AR was equal to 104 µg/mL. Based on the obtained results, it was shown that the butanol fraction from the aerial parts (AH-B), as well as the butanol and ethyl acetate fractions from the roots of A. acutiloba (AR-B, AR-O), were the most promising extracts.
Plant Material
The aerial parts and roots of Alchemilla acutiloba Opiz were collected near Karpacz in Poland (coordinates N 50°78′09″; E 15°73′09″). A voucher specimen (voucher no. AA-0818) was deposited in the Department of Pharmaceutical Botany, Faculty of Pharmacy, Medical University of Lublin. The plant species was identified by Prof. Tadeusz Krzaczek.
Extraction Procedure
Different solvent systems (60% methanol, diethyl ether, ethyl acetate and n-butanol) were used to prepare the extracts and subfractions of the A. acutiloba aerial parts and roots. The air-dried, ground aerial parts and roots (20.0 g) were separately extracted with 60% methanol (3 × 200 mL) in an ultrasonic bath (InterSonic IS-4, Olsztyn, Poland) at a controlled temperature (40 ± 2 °C) for 45 min. The extracts were evaporated to dryness under reduced pressure at a controlled temperature (40 ± 2 °C), and then subjected to lyophilization using a vacuum concentrator until constant weights were obtained. The obtained yields were as follows: from the aerial parts (AH), 4.6 g; from the roots (AR), 3.
Total Flavonoid, Phenolic and Phenolic Acids Content
Total flavonoid content (TFC) and total phenolic content (TPC) were established using the colorimetric assays described previously [17]. The absorbance was measured at 430 and 680 nm, respectively, using a Pro 200F Elisa Reader (Tecan Group Ltd., Männedorf, Switzerland). The results for TPC were expressed as mg of gallic acid equivalents (GAE) per 1 g of dry extract (DE), and for TFC as mg of quercetin equivalents (QE) per 1 g of DE. Total phenolic acid content (TPAC) was assessed using Arnov's reagent as described in Polish Pharmacopoeia IX (an official translation of Ph. Eur. 7.0) [21]. The results were expressed as mg of caffeic acid equivalents (CAE) per 1 g of DE.
LC-ESI-MS/MS Analysis
An Agilent 1200 Series HPLC system (Agilent Technologies, Santa Clara, CA, USA) coupled to a 3200 QTRAP mass spectrometer (AB Sciex, Redwood City, CA, USA) was used for the analysis of phenolic acids and flavonoids in the various A. acutiloba extracts. The separation of compounds was performed on a Zorbax SB-C18 analytical column (2.1 × 100 mm, 1.8 µm; Agilent Technologies, Palo Alto, CA, USA) at 25 °C. Elution was conducted using solvent A (0.1% HCOOH in water) and solvent B (0.1% HCOOH in acetonitrile). The following gradient elution program was used: 0-2 min, 20% B; 3-4 min, 25% B; 5-6 min, 35% B; 8-12 min, 65% B; 14-16 min, 80% B; 20-28 min, 20% B. The flow rate was 300 µL/min. The mass spectra of the analyzed compounds were acquired in negative ESI mode, and the optimum values of the source parameters were as follows: capillary temperature 450 °C, curtain gas 30 psi, nebulizer gas 50 psi, and source voltage −4500 V for phenolic acids and flavonoid glycosides; capillary temperature 550 °C, curtain gas 20 psi, nebulizer gas 30 psi, and source voltage −4500 V for the analysis of flavonoid aglycones. The other details of the LC-ESI-MS/MS analysis were described in our previous research [17].
All assays were performed using 96-well microplates (Nunclon, Nunc, Roskilde, Denmark) and were measured in an Infinite Pro 200F Elisa Reader (Tecan Group Ltd., Männedorf, Switzerland). Results were expressed as the IC50 values of the A. acutiloba extracts, based on concentration-inhibition curves. L-ascorbic acid, Trolox and Na2EDTA·2H2O were used as positive controls.
Cyclooxygenase-1 (COX-1) and Cyclooxygenase-2 (COX-2) Inhibitory Activity
The extracts of A. acutiloba were examined for cyclooxygenase-1 (COX-1) and cyclooxygenase-2 (COX-2) inhibitory activity using a COX (ovine/human) Inhibitor Screening Assay Kit (No. 560131, Cayman Chemical Company, Ann Arbor, MI, USA) according to the manufacturer's protocol. The extracts were tested at different concentrations. Indomethacin was used as a positive control.
Evaluation of Cytotoxicity
The cytotoxicity of the tested extracts and fractions was assessed using representative normal cells, namely green monkey kidney (GMK) cells [22]. This cell line was purchased from the BIOMED Serum and Vaccine Production Plant (Lublin, Poland). The GMK cells were cultured as described previously in detail [22]. For cytotoxicity determination, GMK cells were seeded in 96-well plates at a concentration of 2 × 10⁴ cells/well and maintained for 24 h at 37 °C. The next day, serial dilutions of the tested extracts in culture medium were prepared (1000-1.95 µg/mL), and 100 µL of these solutions was added to the cells. Cells cultured without the tested extracts served as the control (0 µg/mL). After 24 h of incubation, cell viability was assessed using the MTT assay [22]. The results were presented as mean values ± standard deviation (SD) of three independent experiments. Statistical analysis was performed using the unpaired Student's t-test, and differences were considered significant when p < 0.05 (GraphPad Prism 5, version 5.04; GraphPad Software, San Diego, CA, USA). The values of the half-maximal cytotoxic concentration (CC50) were determined using four-parameter nonlinear regression analysis (GraphPad Prism 5, version 5.04). The CC50 denotes the concentration of extract required to reduce cell viability to 50%.
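The four-parameter fit can be sketched as follows; scipy stands in here for the GraphPad Prism analysis, and the viability values are invented for illustration.

```python
# Illustrative four-parameter logistic fit for CC50; scipy stands in
# for the GraphPad Prism analysis, and the viability data are invented.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, cc50, hill):
    return bottom + (top - bottom) / (1 + (x / cc50) ** hill)

conc = np.array([1.95, 7.8, 31.25, 125.0, 500.0, 1000.0])   # µg/mL
viab = np.array([98.0, 95.0, 88.0, 55.0, 20.0, 8.0])        # % of control

popt, _ = curve_fit(four_pl, conc, viab, p0=[0, 100, 150, 1], maxfev=10000)
print(f"estimated CC50 = {popt[2]:.0f} µg/mL")
```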
Statistical Analysis
The results were expressed as mean values ± standard deviation (SD) of three independent experiments. The data from cell culture experiments were subjected to statistical analysis using unpaired Student's t-test and differences were considered significant when p < 0.05 (GraphPad Prism 5, version 5.04).
Conclusions
Medicinal plants are a great source of novel pharmaceutical products. Modern phytotherapy recommends using appropriate extraction procedures and the standardization of extracts containing purified and concentrated active compounds [23,24]. To account for this, in our research we prepared and studied dry extracts and fractions of A. acutiloba leaves and roots using fractionated extraction and solvents of various polarity. The crude methanol-water (6:4, v/v) extracts, prepared with the solvent previously indicated as the most effective extractant of polyphenols, were used as the starting extracts for fractionation. The solvents were selected experimentally based on previous research on other plant materials [24][25][26]. The extraction conditions enabling the best recovery of phenolic acids and flavonoid compounds from the raw material, and the best biological activity, were selected. The fractionation procedure allowed for the enrichment of the extracts in selected analytes, e.g., phenolic acids such as caffeic acid, 4-hydroxybenzoic acid, vanillic acid, p-coumaric acid and salicylic acid in AH-B (the butanol fraction from the aerial parts), and flavonoid glycosides such as quercitrin, isoquercitrin, kaempferol 3-rutinoside, rutin and narcissoside in AH-O (the ethyl acetate fraction from the aerial parts).
In our in vitro research, we characterized polyphenol composition and evaluated biological properties of different extracts from leaves and roots of A. acutiloba. Thus, we identified the main polyphenols as well as determined the antioxidant, anti-inflammatory, and cytotoxic properties of these extracts.
The phytochemical investigation of the aerial parts and roots of A. acutiloba led to the qualitative and quantitative analysis of flavonoids and phenolic acids. Among these compounds, isorhamnetin 3-glucoside, kaempferol 3-rutinoside, narcissoside, naringenin 7-glucoside, 3-O-methylquercetin, naringenin, eriodictyol, rhamnetin, and isorhamnetin were described for the first time in the investigated species.
Moreover, our findings demonstrated that the butanol extract of the aerial parts of A. acutiloba as well as the butanol and ethyl acetate extracts of the roots exhibited strong antioxidant and anti-inflammatory activities, and were not cytotoxic towards GMK cells.
Taking into account the results of the present as well as previously published studies of the Alchemilla genus, it is reasonable to conclude that A. acutiloba is an abundant source of secondary metabolites that benefit health. It seems clear that comprehensive and well-designed future research on phenolic compounds from A. acutiloba will be of significant importance in pharmacy and medicine.
"year": 2022,
"sha1": "94c838c9a7ed0d00733abcb710d9b8a17c1cdc12",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/3/621/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac003d973657c2320b3647e877246f0f0363996b",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Archaeoseismology of Historical Buildings: A Model Study from the L'Aquila Area in Italy
Much of Italy is characterised by two features: an increased risk of seismic activity and a profusion of old and historic buildings. These factors force us to consider the relationship between building safety and practices of conservation and protection, and as such have a direct bearing on our approach to preserving the country’s cultural heritage in general. The guidelines issued on the assessment and reduction of seismic risk to cultural heritage assets in the Prime Ministerial Decree (DPCM) of 9 February 2011 underline the importance of studying such properties in terms of their vulnerability to seismic activity, using “factors of confidence” (FC) to translate the qualitative assessments produced during previous phases into quantitative measurements. In addition to the building survey, which describes the precise three-dimensional form of a structure and the relationships between its constituent parts, a substantial part of our knowledge of a building is provided by stratigraphic analysis of the above-ground elements. Similarly, a great deal of useful information can be derived from historical analysis. This paper outlines an archaeoseismological study developed by archaeologists from the University of L’Aquila and researchers from the ITC-CNR in the same city, which applies a multi-disciplinary approach to the study of historic buildings in areas of seismic activity.
Introduction
The "Guidelines for Evaluating and Reducing Seismic Risk to Cultural Heritage Assets" issued by the Italian Ministry of Cultural Assets and Activities ("Ministero dei beni e delle attività culturali" or "MiBAC") [1], which seek to standardise approaches to the conservation of cultural assets in areas prone to seismic activity, are the fruit of a synergistic process of interaction between scientific and humanistic areas of study.The concept of archaeoseismology, however, has only really come to the fore in recent decades [2][3][4][5][6].Today, we can point to many cases in which this new approach has been applied, both within Italy, most recently in the Abruzzo and Emilia regions, and internationally; consider, for instance, the numerous archaeoseismological studies conducted in Greece, Corresponding author: Ilaria Trizio, researcher; research fields: procedure for archaeological, architectural and urban survey; use of the ICT applied to the analysis of the built heritage (3D GIS and HBIM).E-mail: ilaria.trizio@itc.cnr.it.
Such examples clearly demonstrate how the collection of evidence of seismic activity (by calculating the possible material effects of one or more earthquakes), when combined with archaeological investigation of the immediate setting, can yield vital information that might otherwise have been entirely unavailable.
An archaeoseismological approach requires the collaboration of various professional figures, both as a result of the methods it involves and due to the broad range of applications it enjoys in a variety of fields. This is not surprising, particularly if we appreciate that documenting and recording data on our cultural heritage forms the basis for conservational, archaeological and art-historical research projects and for efforts to safeguard our cultural assets, such as seismic-risk prevention and monitoring the condition of complex architectural structures with a view not only to ensuring their conservation, but also to making them accessible to, and usable by, the wider population.
This methodology has identified, in the fields involved in the reconstruction process following the 2009 earthquake, the basis for an active form of collaboration between different bodies, and a synergistic meeting of different professional figures (archaeologists, architects, engineers, geologists, etc.), one that will assist us in identifying methodological approaches to meet the demands of caring for, recovering, re-employing and restoring our cultural heritage assets.
To test this methodology, the Castle of Fossa (in the village of the same name, a few miles from L'Aquila, within the so-called "seismic crater") was selected from the many examples of cultural heritage assets present in the area.
The Research Project: Context and Motivation
To learn about a cultural asset, to promote it, and to derive the greatest value from it: it is in terms of these three intentions that we measure any common attempt we may make to recover the past of such a property and root it fully within the present to ensure that it is actively preserved going forward.
We learn about the historic, artistic and archaeological context of a site not only by reading the layers of the earth, stratigraphically, but also by examining the palimpsest of historical building work that has been carried out on it over time. We promote it by sharing this knowledge through a combination of modern and traditional instruments of communication and advanced technologies. We derive the greatest value from the site, ultimately, by identifying its diachronic qualities and peculiarities (its history) through reconstructions and processes of restoration that reflect the particular technical, formal and compositional features of each historical period, and especially by restoring functionality in a manner that is compatible with the past but that also fulfils the needs of the present and the future. As such, the process of reconstruction that follows an earthquake, or any other form of natural or anthropogenic hazard or emergency, must inevitably reflect these three considerations: learning about, promoting, and deriving the greatest value from the property in question. Over time, this has informed the development of a methodology that can be used in any geographic context, and that we attempt, here, to describe.
Along with the municipality of L'Aquila, the area affected by the 2009 earthquake (the so-called "seismic crater") includes 56 "minor" municipalities across an area of hills and mountains in the central Apennines. It is a region of outstanding natural and scenic beauty that includes a number of protected areas (the Gran Sasso and Monti della Laga national park, the Sirente-Velino regional park, and various nature reserves) and a diverse range of human settlements, with areas of limited human impact, such as those in the high mountains, and areas of greater human presence, such as the river valleys, basins and upland plains located between the mountains proper.
This heterogeneity itself lends the region a number of distinctive qualities. Here, the bond between the inhabitants and their environment has persisted, virtually unchanged, for millennia, driven by the use of natural resources, sheep-farming and the cultivation of crops. The immeasurable scenic and natural patrimony created by the region's geomorphological complexity and the variety of its ecosystems and plant and animal life in general is enhanced by the numerous markers that this harmonious coexistence has left over the centuries. If we describe the historic towns and villages of the crater as "minor", it is purely in reference to their size (in terms of both physical extension and population) and not, by any means, to their historical and cultural worth, which is expressed at a number of different scales, from that of individual buildings to the broader urban context and wider landscape. Indeed, throughout the region, we find examples of remarkable buildings framed by historic urban landscapes of comparable overall architectural quality, which are themselves integrated organically with surrounding landscapes of immeasurable scenic and environmental richness.
In addition to the traditional imperative of historical accuracy and consistency, in these towns and villages we have to contend with a rich fabric of buildings, structures and urban landscapes that paint a varied and differentiated diachronic picture of different ages in the history of the region.The 57 municipalities of the "crater" fall variously within the provinces of L'Aquila (42), Teramo (8) and Pescara (7), while in terms of geographical, historical and cultural cohesion, they can be grouped into at least 10 different territorial units.
Ultimately, whatever material we are able to uncover, whether physical or intangible, will serve alongside the highland landscapes, ancient drove roads, dry stone constructions, traditional water networks and water mills in providing the basis for an organic, integrated approach to the recovery of an ancient, multi-layered patrimony of immeasurable value, an approach that will be instrumental in promoting development thanks to the three criteria outlined above: "learning about", "promoting", and "deriving the greatest value from".
Starting in the early twentieth century, the area in question has been marked by a sharp fall-off in population, as a result of which a large number of buildings are underused, or not used at all. Consequently, many buildings in the region are inadequately maintained, which leaves them particularly vulnerable. Emigration has also seen the partial abandonment of traditional agricultural practices, which has repercussions in terms of land maintenance and changes to the landscape. At the same time, however, the depopulation of these minor towns and villages (a result of the shifting economic and employment landscape of the previous century) has meant that a large proportion of their traditional architecture has remained intact and escaped the "modernisation" that might otherwise have compromised its historic character. In a sense, the history of the area has allowed it to retain, in the centres of these minor towns and villages, a legacy of buildings that, in terms of methods and materials of construction, are characteristic of the places and periods in which they were built, that have a distinct and pronounced character in their own right, and that are integrated harmoniously with the surrounding landscape.
The most recent earthquake damaged a large portion of the region's historic architecture, aggravating (seriously, in many cases) the already perilous conditions of many buildings. The reconstruction process, however, offers an opportunity to focus our efforts on recovering, and making better use of, this patrimony. Indeed, we believe it can even serve as the springboard for a long-awaited process of socio-economic regeneration.
Case Study in the Context of Reconstruction
The Municipality of Fossa's reconstruction programme ("Piano di Ricostruzione" or "PdR") was drafted by the universities of Catania (Prof. C. Carocci) and Genoa (Prof. S. Lagomarsino), was approved by the authorities in 2013, and was adopted by the municipal council the same year. The programme coordinates the municipality's various post-earthquake reconstruction initiatives. It identifies a number of "nuclei of urban regeneration" within the designated perimeter, which includes the village centre. One of these nuclei is the area of the Castle ("RU_A", or "nucleo di Riqualificazione Urbana del borgo fortificato"), which includes the various groups of castle buildings (separated into two consortia) and the private properties that lie inside the walls uphill from the castle proper.
The PdR's technical implementation standards ("Norme Tecniche di Attuazione" or "NTA") require not only that public spaces be restored, but also that they be made compliant with recent legislation, which entails improved accessibility, additional public facilities and better private and public car-parking provision. They also outline the requirements for the restoration of existing buildings within the nuclei, a process that may include changes to the designated use of a site. In the case of the Castle, this permits changes designed to restore the building to use and increase its usefulness to the wider public.
The buildings in the Castle nucleus have been earmarked for stabilisation and a process of conservation and restoration that will aim to preserve their general structural and typological character in addition to more specific architectural elements and decorations.
In June 2013, following the completion of a general study of the castle, a "Request for the recognition of the cultural importance of the buildings of the Castle of Fossa" was submitted to the Abruzzo region's Authority for Building Conservation. This was the beginning of the process of securing official recognition of the unique historical, artistic and architectural value of the building we describe here as the "Castle". The process culminated in December 2013, when Abruzzo's Regional Authority for Architectural and Scenic Assets issued planning restrictions that recognised the site's elevated artistic and architectural value. These restrictions cover all of the old fortified "borgo", which is to say all of the buildings and land enclosed by the castle walls. As such, they imply that the whole fortified structure is to be considered as a single unit.
With the exception of the circular tower, which was subject to restoration work in the 1980s, the Castle buildings are private properties that had been in constant use as residences up until the earthquake of 2009.
Against this backdrop, we note a varying state of conservation between the southern fortifications, which adjoined the occupied buildings, and their northern counterparts, which were unused and had fallen into a state of neglect. Generally, in spite of issues linked to soil erosion, regulations and the general post-earthquake reconstruction timetable, the private residents and owners have expressed a willingness to make their property available, in part, for such purposes as are conducive to greater public utility, in terms of the conservation and better use of the Castle as a whole.
Archaeoseismological Survey of the Castle of Fossa
Against the backdrop of the wider drive to "learn about", "promote" and "derive the greatest value from" the area's cultural heritage assets, Fossa and its surroundings have been identified as the setting for a first, sample investigation in which a range of requirements and objectives within the context of the area's post-earthquake regeneration are addressed together and treated synergistically, as part of a more effective, consensus-driven approach to the process of reconstruction itself.
In addition to traditional concerns with historical accuracy and consistency, the study has to contend with a research context that presents a varied array of considerations with implications in a number of fields. In physical terms, however, it is suitably limited in size, being bordered by Fossa itself to the north, Stiffe to the south, the slopes of Mount Ocre to the west and Mount Cerro to the east. This area of the basin of the River Aterno was once under the dominion of the Vestini-Roman city of Aveia, and subsequently that of the Lombard gastaldate of Forcona. Thereafter, it was subject to the various processes of population and fortification carried out by the Normans, Swabians, Angevins and, ultimately, the Aragonese. With an eye on this varied history (and the related issue of historical consistency), we find that the area evinces a rich fabric of buildings, structures and urban landscapes that paint a multifaceted and differentiated diachronic picture of the region's past.
The project proposes to combine the complexities and demands of the process of post-earthquake reconstruction with a number of aspects more commonly associated with building protection, and specifically with the use of preliminary investigations to facilitate the recovery, re-use and regeneration of the aforementioned "minor" towns and villages and their surrounding territories.
Carrying out preliminary investigations in such densely stratified sites effectively involves a cross-disciplinary approach designed to provide a full and accurate reading of the property and the history of the various iterations of building work carried out on it over time. Marrying these considerations with those of a technical or regulatory nature, and the ultimate aims of the reconstruction project, requires a range of skill sets and professional profiles, or rather a range of professional figures who must be prepared to work together, compare notes and trace the relationships between their respective disciplines. In this way, it is possible to facilitate a better, more effective planning process and improve management of the work and procedures themselves, even after the work has started, thus reducing the risk of disputes and delays. With this approach, the restoration project cannot be fully configured on the basis of procedural and normative specifications alone. Rather, it has to be conceived as a single, unitary project in which a number of different fields of research and documentation are brought together, and through which these areas of knowledge can contribute to the better and more universal use of the property in question and form the basis of an active process of preservation and effective stewardship of historical assets and their surrounding areas.
As such, it is with a view to forming a better idea of issues that may arise in the implementation of the PdR that we have pursued the archeoseismological research project described in these pages.The data provided will be used not only in evaluating the potential vulnerability of historic buildings, but will also serve to provide a better understanding of their seismic history and local geographical context.
At the Castle of Fossa, the research team (which comprises engineers, structural engineers, archaeologists and architects from the University of L'Aquila, the ICT-CNR in L'Aquila and the crater area's Special Office for Reconstruction) is carrying out a number of parallel operations designed to: (1) identify likely instances of pervasive cracking, which tends to affect the points at which different stratigraphic elements meet, if these are not well anchored to one another, or areas of stone or brickwork characterised by voids, whether these are filled in or not (e.g., windows, putlog holes, flues, sewer channels etc.); (2) form a picture of the diachronic development of the buildings that make up the Castle complex with a view to assessing each dwelling's vulnerability to further seismic shock in relation to the modified static conditions to which it is currently subject; (3) identify, describe and date historical earthquake damage using stratigraphic data and studying macro-elements within the structures. In this sense, stratigraphic analysis is fundamental in identifying and evaluating those areas that are most prone to kinematic phenomena of this sort, since it highlights elements that are potentially vulnerable to the effects of seismic activity, such as areas of cracking that have not been properly restored, interfaces between layers that have not been suitably anchored, areas characterised by changes in materials and construction methods, and so on; (4) assess the current condition of the buildings with a view to conducting subsequent vulnerability analysis and identifying the type of work that needs to be done on the structure. Clarifying the relationship between stratigraphy and macro-elements is a primary consideration in analysing a building because it makes it possible to attribute certain changes in the overall structure to specific destructive phenomena and subsequent repairs. As such, stratigraphic analysis allows the researcher to speculate as to how a building has responded to earthquakes that may have affected it in specific historical periods, and as to which instance of damage is related to the seismic activity with which the study is primarily concerned; and (5) identify, date and describe "anti-earthquake measures", meaning all procedures and methods employed to mitigate, repair or counter the effects of seismic activity, whether they are implemented retrospectively, following an earthquake, or during the building phase in anticipation of earthquakes to come.
Future Scope and Potential of the Study
Taking our lead from the analysis of this single sample area, we have deduced a working reference model of the history of the wider area that leads us to consider the building methods identified in terms of what they reveal both about the role of the various individuals involved in commissioning and implementing the construction of a building (and any other work carried out on it) in one or more periods in history, and about the economic, political and social considerations reflected in the choices they made. Looking to the future, this model could, in fact, constitute a pilot study for investigations over wider geographical areas and exemplify a genuinely multidisciplinary approach to the study of seismic risk, one that is characterised by the exchange of ideas and engagement between humanistic and scientific disciplines.
In the wider landscape of academic research into earthquakes and their implications for our built cultural heritage and the landscape (and the societies that occupy them), archaeoseismology represents a new, innovative approach. By acquiring as much information as possible about a building that is potentially at risk from seismic activity (something that the ministerial guidelines mentioned earlier identify as a fundamental requirement), we can better identify what sort of work is needed in a particular study context. As such, it is an essential process that needs to be implemented before a future earthquake robs us of the opportunity.
Conclusions
We expect the proposed research methodology and the survey process currently under way to yield a range of both quantitative and qualitative data to supplement our understanding of historical seismic activity. These data, if correctly interpreted, are of great potential utility in the context of any preliminary analysis applied prior to carrying out work directly on a particular heritage asset, and more generally in relation to our attempts to understand the history and seismology of the wider territory before beginning the process of restoration. By helping to evaluate certain elements that are key to our understanding of an area's seismic risk, specifically factors of risk, vulnerability and exposure, archaeology can serve as the linchpin of any investigation of the seismological make-up of a particular research context and the effects of past earthquakes on its historic buildings.
There are also significant social implications that push the project into the realms of public archaeology, particularly in regard to the relationship between communities and municipal authorities. Indeed, the process of analysing buildings in terms of an area's seismological history can help in raising awareness among populations who live in areas of elevated seismic risk and informing them of the actual likelihood of future seismic activity and the effects it might have on civil constructions. Ultimately, the project can also add to this wider population's awareness of its own history and the earthquakes that have affected the area in which it lives. As such, it can only help strengthen the cultural identity of the region. | 2019-04-27T13:09:07.945Z | 2017-10-28T00:00:00.000 | {
"year": 2017,
"sha1": "13470f937c68e37ea6aeea497a62c6fafe458b34",
"oa_license": "CCBYNC",
"oa_url": "http://www.davidpublisher.org/Public/uploads/Contribute/5a7ac4adf358e.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "13470f937c68e37ea6aeea497a62c6fafe458b34",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Geography"
]
} |
18647334 | pes2o/s2orc | v3-fos-license | Direct detection of extended-spectrum beta-lactamases (CTX-M) from blood cultures by LC-MS/MS bottom-up proteomics
Rapid bacterial species identification and antibiotic susceptibility testing in positive blood cultures have an important impact on the antibiotic treatment of patients. To identify extended-spectrum beta-lactamases (ESBL) directly in positive blood culture bottles, we developed a workflow of saponin extraction followed by a bottom-up proteomics approach using liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). The workflow was applied to positive blood cultures with Escherichia coli and Klebsiella pneumoniae collected prospectively in two academic hospitals over a 4-month period. Of 170 positive blood cultures, 22 (12.9%) contained ESBL-positive isolates based on standard susceptibility testing. Proteomic analysis identified CTX-M ESBLs in 95% of these isolates directly in positive blood cultures, whereas no false positives were found in the non-ESBL producing positive blood cultures. The results were confirmed by molecular characterisation of beta-lactamase genes. Based on this proof-of-concept study, we conclude that LC-MS/MS-based protein analysis can directly identify extended-spectrum beta-lactamases in E. coli and K. pneumoniae positive blood cultures, and could be further developed for application in routine diagnostics.
Introduction
Infections caused by antibiotic resistant Gram-negative bacteria are an increasing problem worldwide. In the Netherlands, resistance towards third generation cephalosporins through extended-spectrum beta-lactamases (ESBLs) is the most frequently found antibiotic resistance of medical importance [1]. Unrecognised, infections with ESBL-producing bacteria pose a serious threat, as they are associated with high morbidity and mortality rates [2]. Escherichia coli and Klebsiella pneumoniae are reported among the main representatives of ESBL-producing bacteria. In the Netherlands, ESBL prevalence is approximately 10% among infected patients [3].
Extended-spectrum beta-lactamases are a group of beta-lactamases which can also hydrolyze third-generation cephalosporins. These ESBL enzymes are currently detected indirectly, through the results of standard susceptibility testing of cultured bacteria, followed by a phenotypic confirmation assay or a genetic test. Direct detection of the enzyme responsible would provide molecular information regarding the phenotype. Protein analysis by way of mass spectrometry has changed microbiological practice in recent years through the introduction of Matrix-Assisted Laser Desorption Ionisation-Time of Flight Mass Spectrometry (MALDI-TOF MS) for species identification [4]. However, inherent limitations of these instruments, such as restricted dynamic range and resolution, limit their general applicability for accurately detecting the presence of ESBLs, particularly in identifying the nature of the underlying enzyme.
Peptide analysis by bottom-up proteomics is commonly used to directly identify proteins and can be used for in-depth proteomic characterisation of resistant bacteria, often using multi-dimensional protein and/or peptide fractionation techniques [5][6][7]. However, direct analysis of proteolytic digests of total cellular protein extracts also allows resistance-related proteins such as beta-lactamases to be identified directly [8,9]. This leads to shorter analysis times compared to comprehensive proteome studies while maintaining the inherent specificity of directly identifying the protein of interest. Previously, we developed such a proteomic platform for the direct detection of OXA-48 and KPC carbapenemases in bacterial cultures of clinical isolates [10][11][12].
A significant reduction in analysis time would be achieved if bacterial beta-lactamases could be directly analyzed in positive blood cultures. Therefore, the aim of this study was to develop an LC-MS/MS-based bottom-up proteomics workflow to identify ESBL-producing E. coli and K. pneumoniae directly in blood cultures and to test the performance of this workflow in a proof-of-principle study using clinical blood culture samples collected in a prospective study.
Design of study
This study was designed to evaluate the use of proteomic analysis by LC-MS/MS for the detection of extended-spectrum beta-lactamases directly from positive blood culture bottles that grow E. coli or K. pneumoniae, during a prospective study. Two academic centers participated in the study: the Erasmus University Medical Center in Rotterdam and the Leiden University Medical Center in Leiden. During a period of 4 months (July-October 2015), all positive blood cultures with K. pneumoniae or E. coli were included in the study.
Comparison of sample preparation methods
Two methods were evaluated for the analysis of bacterial proteins in blood cultures: serum separator tubes and a differential lysis protocol using saponin. For this purpose, negative blood culture bottles were spiked with different amounts of liquid broth culture of E. coli BL21(DE3) pLysS to mimic different bacterial densities in positive blood cultures.
Serum separator tubes feature a gel through which red blood cells can migrate while bacteria are pelleted on top of the gel. Four mL from a spiked blood culture bottle were applied to the tubes (Becton Dickinson, Breda, The Netherlands) and the mixture was centrifuged at 6000 g for 10 min. Serum was removed and the pellet was washed twice with 1 mL of phosphate buffered saline (PBS) followed by a 5 min centrifugation at 6000 g. The bacterial pellet on top of the gel was resuspended in 100 μL PBS and transferred to an Eppendorf tube. The gel was rinsed with 100 μL PBS another three times to recover any residual bacteria and this was added to the vial. The resulting bacterial suspension was centrifuged at 10,000 g for 1 min and the supernatant was removed. 100 μL of 50% trifluoroethanol (TFE) solution was added for protein extraction and solubilisation. This suspension was sonicated in an ultrasound water bath for 2 min. Suspensions were heated to 60°C for an hour. The resulting lysates were then subjected to protein digestion.
For the saponin protocol, 4 mL of the spiked blood culture bottle was mixed with 1 mL of a saponin (Sigma-Aldrich, Zwijndrecht, The Netherlands) stock solution (5% w/v, final concentration 1% w/v). The mixture was vortexed, incubated at room temperature for 5 min and centrifuged at 6000 g for 10 min. The cell pellet was washed three times with 1 mL PBS and centrifuged at 10,000 g for 1 min and, following the final centrifugation step, re-suspended in 100 μL 50% TFE solution. The suspension was sonicated using an ultrasound water bath for 2 min. Suspensions were heated at 60°C for an hour. The resulting lysates were stored at −80°C until further analysis.
Blood culture and species identification
Blood cultures were drawn as part of normal clinical routine. A sample of 8-10 mL of blood was used per bottle (Bactec Plus Aerobic and Bactec Plus Anaerobic, Becton Dickinson, Breda, The Netherlands) for blood culturing (Bactec FX, Becton Dickinson, Breda, The Netherlands). Bacterial species in positive cultures were identified directly from 1 mL blood culture by MALDI-TOF MS analysis (Microflex, Bruker Daltonics, Bremen, Germany) according to an in-house developed protocol adapted from the literature [13]. All positively flagged blood cultures were stored at 4°C and processed within 48 h using the saponin protocol described above.
In-solution protein digestion
Stored lysates (at −80°C) were thawed for further processing for bottom-up proteomics. Reduction was performed with dithiothreitol (DTT, final concentration 2.5 mM in 25 mM ammonium bicarbonate) at 60°C for 15 min. Alkylation was performed in the dark with iodoacetamide (final concentration 5.5 mM in 25 mM ammonium bicarbonate) for 15 min. Following alkylation the samples were digested overnight using sequencing grade modified trypsin (12.5 ng/μl, Promega, Leiden, The Netherlands). The next day the resulting digests were lyophilized and reconstituted in 0.5% trifluoroacetic acid (TFA) for pre-column trapping during LC-MS/MS analysis.
Molecular characterisation
All ESBL positive isolates (n = 22) were analyzed for the presence of beta-lactamase genes. An in-house real-time multiplex blaCTX-M PCR was used for analysis of the specific CTX-M groups. For primer design, an alignment of the available blaSHV gene sequences from GenBank® was made using the AlignX program (Vector NTI Advance 11, Invitrogen). Primers and probes were developed in-house using Beacon Designer (Premier Biosoft, Palo Alto, U.S.A.). Subsequently, the beta-lactamase gene blaSHV was amplified using PCR and further investigated by nucleotide sequence analysis [15]. All primers and probes used in this study are listed in Table 1. These molecular assays have been developed and internally validated at the LUMC and are used in daily routine.
LC-MS/MS analysis and data processing
Peptide mixtures were analyzed using nano reversed-phase liquid chromatography coupled to tandem mass spectrometry (nano LC-MS/MS). The nano-LC system (Ultimate 3000 RSLCnano, Dionex) combines a 2-cm Acclaim PepMap 100 guard column with an Acclaim PepMap RSLC column (C18, 75 μm × 50 cm with 2 μm particles). A multi-step gradient going from 5 to 55% B in 180 min was used (solvent A being 0.1% formic acid in water and solvent B 0.1% formic acid in 80% acetonitrile) at a flow rate of 300 nL/min. Mass spectrometry analysis was carried out on a maXis Impact UHR-TOF-MS (Bruker Daltonics) in data-dependent MS/MS mode, with precursors ranging from m/z 300-1200. After MS/MS analysis, precursors were dynamically excluded from selection for one minute.
Raw data were converted to Mascot Generic Files (MGF) and analyzed by database searching using the Mascot algorithm (Mascot 2.5.1, Matrix Science, London, UK) via Mascot Daemon 2.5.1. To ensure a comprehensive search of all beta-lactamases, a custom database was prepared. This database consists of in-silico translated reference genomes for K. pneumoniae (http://www.ncbi.nlm.nih.gov/genome/815?genome_assembly_id=168877) and E. coli, supplemented with a comprehensive list of beta-lactamase sequences. Carbamidomethylcysteine was set as a fixed modification, with methionine oxidation as a variable modification. Trypsin was designated as the enzyme, with a maximum allowed number of missed cleavages of two. The False Discovery Rate (FDR) was set at 0.01 at the peptide level based on decoy database searches.
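To make the decoy-based FDR control concrete, the following minimal Python sketch applies a 1% peptide-level FDR cutoff to a list of scored peptide-spectrum matches (PSMs). The data structure, function name and scores are illustrative only and do not correspond to Mascot's actual output format.

    def filter_psms_at_fdr(psms, fdr_threshold=0.01):
        # Sort peptide-spectrum matches best-score-first.
        ranked = sorted(psms, key=lambda p: p[0], reverse=True)
        accepted, decoys, targets = [], 0, 0
        for score, is_decoy in ranked:
            decoys += is_decoy
            targets += not is_decoy
            # Simple FDR estimate at this cutoff: decoy hits / target hits.
            if targets and decoys / targets > fdr_threshold:
                break
            if not is_decoy:
                accepted.append(score)
        return accepted

    psms = [(85.2, False), (60.1, False), (58.3, True), (44.0, False)]
    print(len(filter_psms_at_fdr(psms)))  # -> 2 target PSMs pass at 1% FDR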
Optimization of sample extraction
Two different sample preparation protocols were compared with respect to the overall number of protein identifications and the ease of use of the method. For this purpose, we used negative blood culture bottles spiked with different amounts of liquid broth culture of E. coli BL21(DE3) pLysS cells to mimic different bacterial densities in positive blood cultures. The number of successful protein identifications was determined at 3.0 × 10⁷ and 3.0 × 10⁸ CFU using sample preparation by serum separator tubes and by the differential lysis protocol using saponin. As a reference, the protocols were also applied to the bacterial suspensions used for inoculation of the blood cultures, using 1.0 × 10⁷ CFU. Results of all analyses were searched independently against the bacterial database and, for human proteins, against the human database. Table 2 summarises the results for the proteomic comparisons of both protocols. The sample containing 3.0 × 10⁸ CFU performed better in the saponin protocol. Since this procedure is less laborious, we treated all subsequent positive blood cultures with the saponin protocol. A number of variables of the saponin lysis protocol were tested, including centrifugation speed and duration, saponin concentration and number of washing steps. No significant improvements were made and the protocol therefore remained unchanged.
Blood culture collection and susceptibility testing
During a period of four months, positive blood cultures with E. coli or K. pneumoniae were collected prospectively (Table 3). In total, 170 positive blood cultures were collected. Of these, 125 (73.5%) contained E. coli and 45 (26.5%) K. pneumoniae. Following susceptibility testing of cultured isolates, 22 isolates (12.9%, 18 E. coli and 4 K. pneumoniae) were confirmed as ESBL-positive with the combination disk diffusion test.
Results from bottom-up proteomics analysis
All 22 blood cultures with ESBL-positive isolates were selected for proteomic analysis, as well as 44 randomly selected ESBL-negative blood cultures. Preparation of the 66 blood cultures for LC-MS/MS analysis was performed blinded with regard to the results of the phenotypic testing. Following LC-MS/MS analysis, the resulting spectra were searched against the in-house generated database (see Materials and Methods) featuring a comprehensive list of beta-lactamases as well as the K. pneumoniae and E. coli proteomes. In a typical analysis of one positive blood culture bottle, 400-800 bacterial proteins were identified. Table 4 summarises the results for the phenotypically ESBL-positive blood cultures (n = 22). In all results obtained by MS, the detected β-lactamase was always in the top 10% of the total number of identified bacterial proteins in a sample, sorted by identification score. In 21 out of 22 of the ESBL-positive isolates a cefotaximase (CTX-M) was identified. Protein sequence coverage based on identified peptides varied from 38% to 88%. This coverage allows for the mapping of the identified cefotaximases into one of six established lineages [16], named after their archetypical enzymes. In our collection, only members of groups CTX-M-1 and CTX-M-9 were found. In one K. pneumoniae isolate (Table 4, number 15) no cefotaximase was found. An SHV-type beta-lactamase was identified with 33% coverage (Fig. 1). As with the cefotaximases, this protein was a top-10% identification among all bacterial proteins identified. From the 148 ESBL-negative K. pneumoniae or E. coli positive blood cultures, 44 were randomly selected and also analyzed by LC-MS/MS. In none of these samples were extended-spectrum beta-lactamases found.
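As a concrete illustration of the "top 10% by identification score" criterion used above, the short Python sketch below ranks identified proteins by score and checks whether a protein of interest falls within the top fraction. The protein names and scores are invented placeholders, not data from this study.

    def in_top_fraction(scores, protein, fraction=0.10):
        # Rank proteins by identification score, highest first.
        ranked = sorted(scores, key=scores.get, reverse=True)
        cutoff = max(1, int(len(ranked) * fraction))
        return protein in ranked[:cutoff]

    # Hypothetical identification scores for one blood-culture analysis.
    scores = {"CTX-M-15": 910.0, "OmpA": 880.0, "GroEL": 640.0,
              "TufA": 420.0, "RpoB": 300.0, "DnaK": 150.0}
    print(in_top_fraction(scores, "CTX-M-15"))  # -> True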
Molecular characterisation of ESBL positive isolates
To confirm the identity of the ESBLs identified with LC-MS/MS-based proteomics, molecular characterisation of all phenotypically ESBL-positive isolates was performed (Table 4). All CTX-M identifications were verified with PCR. In isolate 15, LC-MS/MS identified an SHV enzyme, which was confirmed by PCR as an ESBL, namely SHV-12. Three non-ESBL SHV beta-lactamases were identified by PCR in the K. pneumoniae isolates.
Discussion
In this study we developed a novel proteomic workflow for the direct identification of ESBLs in positive blood culture bottles. To evaluate the performance of our approach, a proof-of-principle prospective study was performed in two academic hospitals. In 22 positive blood cultures with phenotypically ESBL-producing E. coli or K. pneumoniae, we identified 21 isolates containing a CTX-M and one isolate containing an SHV beta-lactamase, although the latter could not be unambiguously identified because the single peptide necessary to discriminate between an ESBL and non-ESBL was not detected. In the set of positive blood cultures with ESBL-negative E. coli or K. pneumoniae, no ESBLs were identified by LC-MS/MS analysis. This demonstrates a 95% sensitivity and 100% specificity of the workflow to directly identify these beta-lactamases in positive blood cultures. [Table 2 caption: Samples were spiked with 3.0 × 10⁷ or 3.0 × 10⁸ CFU obtained from liquid broth culture. Saponin: differential lysis protocol. SST: serum separator tube protocol. As a reference, a suspension containing 1.0 × 10⁷ CFU was prepared from the same liquid culture that was used to spike the negative blood cultures.] Of 170 positive blood cultures collected in two academic hospitals, 22 (12.9%) contained ESBL-producing bacteria belonging to E. coli or K. pneumoniae. This percentage is higher than previously described in The Netherlands [3], with cefotaxime/ceftriaxone resistance reported to be 5% and 7% for E. coli and K. pneumoniae, respectively. However, these data were based on a larger number of laboratories, including laboratories serving non-university hospitals and general practitioners. All collected samples in our study showed full meropenem susceptibility, in agreement with the low prevalence rate of carbapenemase-producing Gram-negatives in The Netherlands [3].
In the proteomic analysis of the clinical isolates, 21 out of 22 (95%) of the phenotypically ESBL-positive isolates were found to contain a CTX-M enzyme [16,17]. In our study, the CTX-M enzymes belonged to group 1 (17 out of 21; 81%) and group 9 (4 out of 21; 19%). This is in accordance with other reports [18]. Specifically, CTX-M-15 (group 1) and CTX-M-14 (group 9) are among the most prevalent enzymes [19]. Notably, in our collection there was no relation between MIC and ESBL type, particularly for ceftazidime. Full sequence coverage is necessary to pinpoint a protein identification to a specific ESBL, but in complex samples with the use of only one proteolytic enzyme, this is not feasible. Obviously, peptide fractionation or additional experiments with another proteolytic enzyme could improve the specificity of the identification. However, we opted for a simple sample preparation protocol, which is mostly constrained, time-wise, by the proteolytic digestion step. In our approach, the sequence coverage among ESBLs in phenotypically positive isolates ranged between 38% and 88%. This coverage is deep enough to classify the enzymes into phylogenetic groups, such as with the cefotaximases, but single variants cannot be distinguished using this method. This is important in distinguishing beta-lactamases that have reported broad and extended-spectrum activities, based on small permutations. For example, one blood culture sample contained an SHV beta-lactamase. The sequence coverage obtained by LC-MS/MS analysis was not sufficient to distinguish between a broad and extended-spectrum beta-lactamase. The amino acid at position 238 is instrumental in cephalosporin resistance in SHV variants and the tryptic peptide covering this amino acid is therefore necessary for the unambiguous assignment of the ESBL status [20]. The corresponding tryptic peptides of SHV-1 (broad spectrum) and SHV-2 (extended-spectrum) are TGAGER and TGASER, respectively. While the doubly charged ions would fall within the mass range of the mass spectrometer, sensitivity in this low m/z range is not optimal and short peptides can also be difficult to retain and separate in liquid chromatography. A more targeted approach might be more suitable for such specific peptides [21]. Importantly, the SHV beta-lactamase identified by LC-MS/MS in isolate 15 was confirmed to be an ESBL (SHV-12) by our PCR and sequence analysis. The PCR analyses revealed three additional SHV beta-lactamases which were not identified in the proteomics analysis. Sequence analysis demonstrated that the three additional SHV beta-lactamases were non-ESBLs (SHV-1, SHV-11) and could have been missed in our proteomic analysis due to lower abundance as compared to the ESBL-SHV. Therefore, as it stands now, identifying an SHV with high expression combined with the phenotypical results indicates an ESBL-SHV, but our proteomic data was not sufficient to unambiguously draw this conclusion.
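The mass argument can be made concrete with a short calculation. The Python sketch below computes the neutral monoisotopic masses and doubly protonated m/z values for the two discriminating tryptic peptides using standard monoisotopic residue masses; it is a worked example, not part of the study's pipeline.

    RESIDUE_MASS = {  # standard monoisotopic residue masses, Da
        "G": 57.02146, "A": 71.03711, "S": 87.03203,
        "T": 101.04768, "E": 129.04259, "R": 156.10111,
    }
    WATER, PROTON = 18.010565, 1.007276

    def peptide_mass(seq):
        # Neutral monoisotopic peptide mass: residue masses plus one water.
        return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

    def mz(seq, charge=2):
        return (peptide_mass(seq) + charge * PROTON) / charge

    for pep in ("TGAGER", "TGASER"):  # SHV-1 vs SHV-2-type peptide at position 238
        print(pep, round(peptide_mass(pep), 3), round(mz(pep), 3))
    # TGAGER 589.282 Da, m/z 295.648 (2+); TGASER 619.293 Da, m/z 310.654 (2+):
    # both doubly charged ions sit at or just below the low end of the
    # m/z 300-1200 acquisition window, where sensitivity is reduced.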
In this proof-of-principle study, ESBLs from the CTX-M group were easy to identify. PCR-based methods have been successfully applied for the identification of ESBLs in blood cultures [22,23], and in our study, the CTX-M PCR results fully correlated with the proteomics results. The aim of our prospective study was to demonstrate the applicability in normal routine, and therefore we detected mainly CTX-M ESBLs. Larger clinical sample cohorts and spiking experiments with other ESBL/carbapenemase-producing bacteria in negative blood cultures are necessary to demonstrate the general applicability of our approach. Based on the results of our previous study, this workflow should also be suitable for the detection of OXA-48 and KPC beta-lactamases directly in blood cultures [11]. The sample preparation is highly similar and overall proteome and protein coverage was significantly higher using this nanoLC platform. Moreover, the mass spectrometric analysis part of our workflow can be easily exchanged for other high-end mass spectrometry analysers (such as Orbitraps) with even higher speed and sensitivities. As with all genetic methods, a positive identification does not guarantee protein expression. More sensitive proteomic analysis could therefore give some insight into the difficulty we had in detecting the additional non-ESBL beta-lactamases with our proteomics workflow.
Apart from genetic tests, there are alternative methods to detect the presence of ESBLs in blood culture bottles. Oviaño et al. monitored ESBL activity directly from blood cultures using MALDI-TOF MS by measuring antibiotic hydrolysis [24]. Reported sensitivity and specificity are high, suggesting that such an approach can be used as an alternative to traditional susceptibility testing. Moreover, hydrolysis-based assays using reporter molecules are mentioned in the literature and available as commercial kits [25,26]. Even though hydrolysis-based tests are useful, interpretation can be difficult in the case of enzymes with a lower activity, and they provide no insight into the identity of the ESBL. In comparison with genetic and hydrolysis-based methods, our workflow allows the direct identification of the enzyme responsible, providing molecular information about the phenotype. [Fig. 1 caption: Coverage of SHV-1 sequence. Identified peptides by LC-MS/MS analysis are highlighted when they matched to the sequence of the SHV-1 beta-lactamase. The glycine at Ambler position 238 (underlined) is specific for the SHV-1 sequence, while SHV-2 type extended-spectrum beta-lactamases have a serine in this position. This peptide was not observed in LC-MS/MS analysis, making it impossible to distinguish between the beta-lactamase types.]
Overall, in this proof-of-principle prospective study we demonstrate the direct identification of an ESBL in all blood cultures that contained bacteria positive for a CTX-M type ESBL. The method is specific enough to recognise specific groups of CTX-M ESBL. To improve on this proof-ofprinciple study in the future a number of aspects need further exploration. Among these, shortening the time-to-report and automation of the procedure are among the most critical [27,28]. With this in mind, the developed platform can be used in the future for the direct identification of expressed betalactamases in blood cultures which provides detailed insight into the antibiotic resistance mechanism. | 2017-05-16T05:51:57.638Z | 2017-04-10T00:00:00.000 | {
"year": 2017,
"sha1": "81f38a1d129db401b22c0b90e19d49544840c915",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10096-017-2975-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab7565d4873ad158a4dc7166b9cbab0103fd975f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
16305089 | pes2o/s2orc | v3-fos-license | Modulation of LAT1 (SLC7A5) transporter activity and stability by membrane cholesterol
LAT1 (SLC7A5) is a transporter for both the uptake of large neutral amino acids and a number of pharmaceutical drugs. It is expressed in numerous cell types including T-cells, cancer cells and brain endothelial cells. However, mechanistic knowledge of how it functions and its interactions with lipids is unknown or limited, due to the inability to obtain stable purified protein in sufficient quantities. Our data show that depleting cellular cholesterol reduced the Vmax but not the Km of the LAT1-mediated uptake of a model substrate (L-DOPA) into cells. A soluble cholesterol analogue was required for the stable purification of LAT1 with its chaperone CD98 (4F2hc, SLC3A2), and this stabilised complex retained the ability to interact with a substrate. We propose that cholesterol interacts with the conserved regions in the LAT1 transporter that have been shown to bind cholesterol/CHS in the Drosophila melanogaster dopamine transporter. In conclusion, LAT1 is modulated by cholesterol, impacting on its stability and transporter activity. This novel finding has implications for other SLC7 family members and additional eukaryotic transporters that contain the LeuT fold.
crystallography 18 but does not provide an insight into how the interaction between the two components of the heterodimer affects the transport cycle of LAT1. LAT2 (SLC7A8), which has 52.8% identity to LAT1, is stabilised by CD98 surrounding the extracellular regions of the transporter 19 .
Cholesterol is an important component of the plasma membrane of eukaryotic cells and comprises between 20 to 40 mol% of the membrane 20 . It is a sterol with an essential role in maintaining membrane fluidity and can directly interact with integral membrane proteins 21 . Multiple studies with eukaryotic transporters in mammalian cells have shown that both the human serotonin transporter (hSERT) and human dopamine transporter (hDAT) are modulated in activity following either cholesterol depletion or addition [22][23][24] . In the case of hSERT, biochemical analyses have shown that cholesterol binding enhances the fraction of the transporter in the outward open conformation, while with the hDAT it stabilises the transporter in an outward open conformation 23,25 . Corroborating this, the outward open conformation was observed in the crystal structures of cholesterol or cholesteryl hemisuccinate (CHS) bound Drosophila melanogaster dopamine transporter (dDAT) and hSERT [26][27][28] . The interaction with cholesterol or CHS was increased by mutations of dDAT and hSERT that also decreased the transport kinetic parameters while still retaining functional activity 12,27 .
Here we show that cholesterol modulates LAT1 stability and its transporter activity. We provide evidence that cholesterol/CHS interact with LAT1-CD98 and suggest that the LAT1 transporter has two cholesterol/CHS binding sites similar to the binding sites found in the dDAT.
Results
Modulation of cholesterol levels alters the LAT1 transport of L-DOPA. A time course for L-DOPA uptake into cells was performed in both HEK293 control cells and stably transfected HEK293 LAT1 cells, in order to validate the use of L-DOPA as a model LAT1 substrate (Fig. 1A). At all time points tested, a significant increase in uptake of L-DOPA was observed in the LAT1 stably transfected cells compared to the control cells. This finding is in agreement with previous studies that have shown L-DOPA to be a substrate of the LAT1 transporter 4 , by an uptake process that is both sodium-independent and inhibited by 2-aminobicyclo-[2,2,1]-heptane-2-carboxylic acid (BCH) 29 . L-DOPA was thus used as a model substrate of LAT1 in experiments investigating the role of cholesterol on LAT1 function. To deplete HEK293 cells of cholesterol, the cells were treated for 1 hour with methyl-β-cyclodextrin (MβCD). The relative amount of cholesterol in the cells was determined by a cholesterol quantification assay. A significant decrease in total cholesterol (cholesterol and cholesteryl ester) was observed in the MβCD-treated cells compared to the untreated cells (Fig. 1B). At 1 μM of L-DOPA, no significant difference in L-DOPA uptake was detected between treated and untreated HEK293 LAT1 cells (Fig. 1C). However, at 1 mM L-DOPA a significant reduction in L-DOPA uptake was observed in MβCD-treated LAT1 cells compared to the untreated LAT1 cells (Fig. 1D). Sulfobutylether-β-cyclodextrin (SBCD), a derivative of cyclodextrin which can interact with cholesterol but not extract it from the plasma membrane 30 , was utilised as a control. SBCD was rationally designed to improve its safety profile in vivo and part of this was the removal of the compound's ability to dimerise with itself. As such, SBCD is unable to form the 2 to 1 molecular ratio with cholesterol that is required for depletion from the plasma membrane 30 . SBCD treatment was found not to alter the L-DOPA uptake significantly (Fig. 1E), which suggests that the cholesterol depletion effects of MβCD are required for its effect on L-DOPA uptake.
Cholesterol depletion alters the kinetics of LAT1 mediated transport. Given the effects of MβCD treatment on the activity of the LAT1 transporter, we investigated the LAT1-mediated kinetics at a range of L-DOPA concentrations at a linear time point (Fig. 2A). The data showed that MβCD treatment altered the kinetics of LAT1-mediated transport (Table 1). The Vmax was significantly reduced in the treated cells (8506 pmoles/million cells/min) compared to the untreated cells (13925 pmoles/million cells/min), but the Km of L-DOPA uptake showed no significant difference between untreated (200 μM) and treated (148 μM) cells. This finding could be caused by either alterations of the transport properties of the LAT1 protein or a change in the amount of the transporter at the plasma membrane.
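The Michaelis-Menten fit underlying these Vmax and Km estimates can be reproduced with a few lines of Python. The sketch below assumes NumPy and SciPy are available and uses invented uptake values, not the study's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        # Classical Michaelis-Menten rate equation.
        return vmax * s / (km + s)

    s = np.array([10.0, 50, 100, 250, 500, 1000, 2000])           # [L-DOPA], uM
    v = np.array([670.0, 2800, 4700, 7800, 9950, 11650, 12700])   # pmoles/min/million cells

    (vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(12000.0, 200.0))
    print(f"Vmax = {vmax:.0f} pmoles/min/million cells, Km = {km:.0f} uM")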
To investigate whether MβCD treatment altered the localisation of the LAT1 transporter, cell surface preparations from untreated and treated cells were immunoblotted. No change in plasma membrane localisation of over-expressed LAT1 was observed following MβCD treatment (Fig. 2B). This suggests that the change in uptake kinetics by LAT1 following cholesterol depletion by MβCD is not due to an alteration in the localisation of the transporter complex. From classical Michaelis-Menten kinetics parameters, a change in Vmax but not Km suggests a non-competitive inhibitor process, more precisely in this case, allosteric modulation 31 . Figure 2B shows an upregulation of endogenous CD98 in response to increased LAT1 expression in HEK293 cells, an observation initially reported by Khunweeraphong et al. 32,33 . Correspondingly, the knock out of LAT1 leads to a reduction of CD98 protein levels 32,33 . This linkage of CD98 to LAT1 expression levels enables us to utilise cell lines that are only ectopically expressing the LAT1 transporter for protein studies of the LAT1-CD98 heterodimer.
Detergent solubilised LAT1-CD98 complex is stabilised by a cholesterol analogue (CHS).
To investigate if cholesterol interacts with human LAT1 directly, ectopically expressed protein from HEK293 GnTI− cells was solubilised in detergents with and without the water-soluble cholesterol analogue CHS before immunoaffinity purification of LAT1-CD98. Distinct size-exclusion chromatography (SEC) profiles were observed with and without CHS (Fig. 3A). Peaks 1 and 3 had increased peak heights in the presence of CHS. Additionally, peak 3 with CHS had a shifted retention time compared with peak 3 in the purification without CHS. Each of the three peaks from the purification with CHS was immunoblotted for CD98 in non-reducing conditions and an over-exposed immunoblot is shown (Fig. 3B). Despite the denaturing conditions of SDS-PAGE, a smear above 220 kDa was observed in peaks 1 and 2 that was not present in peak 3. This is suggestive of aggregation of the purified proteins in peaks 1 and 2; thus, all following experiments utilised peak 3 from purifications that contained CHS. Peak 3 was immunoblotted in non-reducing conditions for CD98 and His6-tagged LAT1. Both proteins of the heterodimer were detected and had the characteristic smear of a glycosylated protein (Fig. 3C). The immunoblots and Coomassie-stained SDS-PAGE gel reveal a band consistent with the 123 kDa molecular weight of the heterodimer (Fig. 3C). In order to further biochemically characterise the purified complex in near-native conditions, the complex was run on analytical SEC. A monodisperse peak corresponding to the LAT1-CD98 heterodimer was observed on day 1 (Fig. 3D). The SEC analysis was repeated 3 and 7 days after purification in order to determine the stability of the purified LAT1-CD98 stored at 4 °C over time. The complex was kinetically stable in the SEC buffer for 3 and 7 days at 4 °C with no detectable aggregation or denaturation by analytical SEC (Fig. 3D). Taken together, from the immunoblots and chromatographic profiles produced by the analysis of peak 3, it can be concluded that CHS is necessary for successful purification of the LAT1-CD98 heterodimer.
To determine the thermal stability of the LAT1-CD98 complex, samples in SEC buffer were heated to temperatures between 4-100 °C and analysed by HPLC-SEC. The peak height and monodispersity of the LAT1-CD98 samples decreased with increasing temperature (Fig. 3E). To quantify this, the normalised absorbance relative to the protein sample at 4 °C was plotted against temperature. A dose-response curve to heating was obtained and the thermal stability of the LAT1-CD98 in SEC buffer was defined, with the Tm found to be 47 °C (Fig. 3F).
CHS stabilised LAT1-CD98 complex interacts with leucine.
Ligand studies were undertaken to determine the thermal and conformational stability of the purified LAT1-CD98, as an interaction with a ligand can be an indirect indicator that the protein is correctly folded. Leucine was chosen for this purpose, as it absorbs ultraviolet light at 280 and 220 nm negligibly and is stable in solution during the experimental time frame. The addition of leucine resulted in a 39% increase in the peak height at 10.4 mL (peak 3) in the SEC profile (Fig. 4A). To determine whether this increase was due to increased stability of LAT1-CD98, samples of LAT1-CD98 purified in the presence and absence of leucine were heat stressed at 60 °C for 10 mins and then analysed by HPLC-SEC (Fig. 4B). This showed a significantly decreased monodispersity in the absence of leucine and a 9% higher normalised absorbance in the presence of leucine, indicative of a thermal stabilising effect (Fig. 4C). The melting curve for LAT1-CD98 in the presence of leucine showed that the Tm increased to 57 °C (Fig. 4D) compared to 47 °C without leucine. This suggests that not only is CHS required for successful protein purification but that the purified protein retains its ability to interact with substrate compounds. However, it should be noted that this assay does not distinguish between specific and nonspecific interactions of leucine with the LAT1-CD98 heterodimer.
Putative cholesterol/CHS binding sites in LAT1 identified by in silico methodologies. An alignment of LAT1 and dDAT was performed using PROMALS3D, which allows for the alignment of distantly related sequences by taking into account structural information, predicted or otherwise 34 . The two transporters have a low sequence identity of 19.6% but are predicted to share the same LeuT structural fold. Most of the cholesterol interacting residues in binding sites I (91%) and II (70%) of dDAT are identical, equivalent or have similar physico-chemical properties (conservation value > 5) to corresponding residues in LAT1 (Fig. 5A). To test whether these residues may have functional importance for the LAT1 transporter and are thereby conserved during evolution, we performed a multiple sequence alignment of LAT1 orthologues from a diverse group of 8 metazoans. Ten residues were identical and seven similar, out of 17 residues across both binding sites, in the 8 orthologues (Fig. 5B). An alignment of LAT1 and LAT2 (Fig. 6) found that residues comprising both of the putative cholesterol/CHS binding sites were conserved, with all residues having conservation scores >7.
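For illustration, percent identity and identical positions can be computed from a pre-computed pairwise alignment with a few lines of Python. The aligned fragments below are hypothetical and are not the PROMALS3D output.

    def percent_identity(aln_a, aln_b):
        # Count matches over columns where neither sequence has a gap.
        pairs = [(a, b) for a, b in zip(aln_a, aln_b) if a != "-" and b != "-"]
        return 100.0 * sum(a == b for a, b in pairs) / len(pairs)

    def identical_positions(aln_a, aln_b):
        return [i for i, (a, b) in enumerate(zip(aln_a, aln_b)) if a == b and a != "-"]

    lat1_frag = "GLVPMLA-YSQ"  # hypothetical aligned LAT1 fragment
    ddat_frag = "GIVPSLAQYTQ"  # hypothetical aligned dDAT fragment
    print(round(percent_identity(lat1_frag, ddat_frag), 1))  # -> 70.0
    print(identical_positions(lat1_frag, ddat_frag))         # -> [0, 2, 3, 5, 6, 8, 10]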
The conserved residues were located on adjacent helices in the predicted 3D structure of LAT1 (Fig. 7B), which is consistent with the requirement for the cholesterol interacting residues in the binding sites to be in close proximity, as seen in the dDAT structures (Fig. 7A) 26,27 .
Discussion
LAT1 was first cloned in 1998, found to interact with CD98 via a disulphide bond and is the transport component of the heterodimer 1,2,17,35 . To date, our mechanistic knowledge of how LAT1 functions and interacts with substrates is derived from in silico models generated using low sequence similarity homologues from prokaryotes that have the LeuT fold 14,15 . These studies have been successful in identifying novel ligands and suggesting that the transporter acts by the alternating access mechanism 17 . However, to our knowledge, there are no experimental or modelling studies in the literature concerning the LAT1 transporter that take into account structural knowledge gained from eukaryotic transporters with the conserved LeuT fold. As a result, structural insights into LAT1 function resulting from its eukaryotic origin have been missed. This is the first study to both model and test this experimentally for the human LAT1 transporter. Two well-characterised eukaryotic transporters that have the LeuT fold are the serotonin and dopamine transporters. Both have established interactions with cholesterol, and so we tested to see if LAT1 also has this feature. We found that, following acute cholesterol depletion, the LAT1-mediated kinetics of L-DOPA uptake were altered, with a decrease in Vmax but no change in the Km. This is consistent with a reduced maximal transport activity but without discernible change in substrate affinity. The depletion of cholesterol has been found to also result in reduced activity for the hDAT 24 , which led us to further investigate the mechanism of this modulation, to determine whether it is common between the two transporters.
Biochemical studies have suggested that cholesterol can either stabilise or induce the outward conformation for the hDAT and hSERT 23,25 . In the crystal structures of dDAT and hSERT, both transporters are bound to cholesterol and/or CHS in the outward open conformation [26][27][28] . To determine whether the modulation of LAT1 activity we observed was mediated indirectly by changes in membrane fluidity, for example, or whether it was through a direct interaction, LAT1 was purified. We found that the LAT1-CD98 heterodimer in the presence of CHS was stable and was further stabilised by a ligand, suggesting the complex was natively folded. In contrast, the heterodimeric complex could not be purified in the absence of CHS. The HPLC-SEC-based thermostability assay utilised here is a similar approach to that used for the GABA receptor to determine the binding constants of known ligands and identify a novel agonist for this receptor 36 . This approach may prove useful in establishing interactions of LAT1-CD98 with known LAT1 ligands and novel compounds, with the intention of inhibiting LAT1 for cancer treatment or to enhance brain penetration of compounds 37,38 .
Extensive work on GPCRs has revealed two mechanisms by which CHS potentially stabilises membrane proteins, namely through the modulation of the geometry of detergent micelles or through direct interaction with the protein 39 . CHS is required for purification of a stable LAT1-CD98 complex suggesting either or both of the mechanisms above are relevant. A sequence comparison of dDAT and LAT1 reveals conservation of residues that could form two putative cholesterol/CHS binding sites. The conservation of these sites across LAT1 orthologues lends credence to the functional importance of these putative binding sites. Taken together, we conclude from these data that cholesterol modulates the activity of LAT1, and does so most likely through a direct interaction.
For some membrane proteins, a cholesterol/CHS interaction has been proposed to occur through cholesterol interacting domains referred to as CRAC/CARC domains 40 . CRAC and CRAC-like motifs are very different when compared with the putative cholesterol/CHS binding sites identified in LAT1 through homology with dDAT. Unlike the CRAC motif, in which the residues essential for cholesterol interaction are contiguous, the cholesterol/CHS interacting residues of dDAT are separated in sequence space and can be proximal in 3D space 26 . Annotation of our predictive model of the LAT1 structure shows that the residues of the putative binding sites are on adjacent transmembrane helices, which is similar in arrangement to dDAT. Furthermore, the recently solved structure of hSERT 28 found CHS bound through a motif that conforms neither to the CRAC motif nor to the two dDAT binding sites.
When LAT1 was overexpressed in the HEK293 cells, we found that CD98 was upregulated. This has been noted by a number of previous studies and occurs by an unknown mechanism 32,33,41 . Our study found no evidence to suggest that LAT1 exists as a monomer in mammalian cells, as the three fractions isolated and tested from the purification on a western blot all contained CD98. This shows that a lack of CD98 was not the reason for the instability found in SEC peaks 1 and 2. This is an important aspect to consider, as LAT2 shares 52% amino acid identity with LAT1 and has been shown to be stabilised by its interaction with CD98 19 . [Figure 6 caption: Comparison of the putative cholesterol binding site residues of LAT1 versus the LAT2 transporter sequence. Sequence alignment of the putative cholesterol/CHS binding sites I (purple) and II (red) from human LAT1 and LAT2. Identical residues are scored with an asterisk (*), equivalent residues with a plus (+) and similar residues scored on a scale from 1-9, with 1 being variable and 9 being the maximal level of similarity.] A report concerning the LAT2-CD98 complex found that the addition of lauryl maltose neopentyl glycol (LMNG) and CHS during the purification stabilised the complex by an unknown mechanism 42 . However, the individual importance or otherwise of the LMNG and CHS was not tested 42 . Given the conservation between LAT1 and LAT2, it would be interesting to determine the effects of CHS on the thermal and kinetic stability of LAT2, and of cholesterol depletion in cells on the kinetics of LAT2 uptake.
The closest LAT1 homologue in prokaryotes is the serine/threonine antiporter SteT, whose mechanism has been investigated using single-molecule force spectroscopy 43 . This work found that ligand binding enhanced the kinetic stability and increased the flexibility of the transporter compared to the unbound form. In our study, the addition of leucine enhanced the thermostability of the LAT1-CD98 complex, but how this affects the rigidity of the complex is unknown. This is difficult to predict for the unbound or substrate-bound forms of LAT1-CD98 due to the additional complexity of the heterodimeric nature of the transporter compared to SteT.
In the present study we have taken advantage of the cholesterol-interacting properties of cyclodextrin derivatives. Methyl-β-cyclodextrin forms dimers, each of which interacts with a cholesterol molecule, extracting it from the plasma membrane 44 . Sulfobutylether-β-cyclodextrin is unable to homodimerise and as a result does not remove cholesterol from the plasma membrane. In our experiments, the transport of L-DOPA in LAT1-transfected cells was unaffected by incubation with sulfobutylether-β-cyclodextrin. Given the hydrophilic succinate moiety of CHS, we presumed that methyl-β-cyclodextrin was unsuitable for depletion of CHS from the LAT1-CD98 complex purified in the presence of CHS. Cyclodextrins have a number of proposed pharmacological uses. A cyclodextrin derivative has recently been proposed as a treatment option to promote atherosclerosis regression by removal of plaques 45 , and cyclodextrins are used as pharmaceutical excipients. In the current study we have not investigated these effects in vivo, but it would be interesting to do so in the future due to the potential effect on nutrient transport of the cyclodextrin derivatives utilised in treatment regimens.
In summary, we have been able to address some fundamental questions involving LAT1-CD98 interaction with cholesterol and how this affects stability and kinetic values. This should facilitate structural studies of LAT1 that will ultimately be able to define how the transport cycle progresses at atomic/chemical level and the role of cholesterol in these processes.
Methods
Materials. Tritium labelled L-DOPA was acquired from Moravek (California, USA) with a specific activity of 3.6 Ci/mmol. The V5 resin and V5 peptide were obtained from Biotool (Houston, USA). Cholesteryl hemisuccinate tris (CHS) and n-dodecyl beta maltoside (DDM) detergents were purchased from Generon (Maidenhead, UK). All other reagents and chemicals, unless otherwise stated, were purchased from Sigma (Poole, Dorset, UK).
MβCD treatment, cholesterol quantification and cell surface preparation. To deplete cholesterol, the HEK293 cells were treated with serum-free DMEM ± 10 mM methyl-β-cyclodextrin (MβCD) for 1 hour at 37 °C with 5% CO2. Cells were used for the drug uptake assay, cell surface preparation or total cholesterol quantification (cholesterol/cholesteryl ester).
The cholesterol quantification assay (ab65359, Abcam, Cambridge, UK) was used with fluorometric parameters according to the manufacturer's protocol. In brief, lipids were extracted from cells with chloroform: isopropanol: NP-40 (7:11:0.1), pelleted and the organic phase air dried. The assay was performed in the presence of cholesterol esterase, with fluorescence measured on a microplate reader with 535 nm excitation and 595 nm emission filters.
To isolate cell surface proteins, a cell surface protein isolation kit (Thermo Scientific), based on biotinylation and affinity binding to NeutrAvidin resin, was used as previously described 46 .
Drug uptake. Functional drug uptake assays were performed using tritium-labelled levodopa ([3H]-L-DOPA) as a tracer at 0.15 μCi/mL in transport medium (0.01 μM to 2 mM unlabelled L-DOPA; 25 mM HEPES, pH 7.4; Hank's buffered saline solution (HBSS); 0.1% w/v BSA). The various HEK293 cell lines were seeded 24 hours before the assay. Medium was aspirated and cells washed with HBSS, before transport medium, warmed to 37 °C, was added. At the end of the assay, the transport medium was aspirated off and transport stopped by washing cells with ice-cold HBSS three times. Cells were lysed by incubating at 37 °C in 5% w/v SDS for 30 minutes. The amount of radiation in the lysates was measured by liquid scintillation in disintegrations per minute, which were used to calculate the amount of L-DOPA taken up as pmoles/million cells.
Immunoaffinity purification. HEK293S GnTIˉ LAT1 cell pellets were retrieved from cryostorage and thawed on ice in lysis buffer (10% v/v glycerol; Dulbecco's phosphate buffered saline, pH 7; protease inhibitor tablets (Pierce)). Thawed cells were lysed with a TissueRuptor (Qiagen) followed by sonication. Cell nuclei and debris were removed by centrifugation at 23,500 g for 20 minutes. The supernatant was then ultracentrifuged at 100,000 g for 1.5 hrs to isolate the membrane fraction from the soluble cytosolic fraction. The membranes were suspended by Dounce homogeniser for solubilisation in TBS1 (1.5% w/v DDM, 20 mM Tris-Cl, 300 mM NaCl, and 10% w/v glycerol at pH 8, in the absence or presence of CHS). Solubilisation was performed overnight, concomitantly with V5 affinity gel incubation. The fraction of the solubilisation suspension not bound to the affinity gel was removed by decanting the supernatant after centrifugation at 1500 g for 2 minutes. Resin was then washed in SEC buffer (DDM above CMC, 100 mM Tris-Cl, 300 mM NaCl, and 10% w/v glycerol at pH 8, in the absence or presence of CHS). LAT1-CD98 was eluted by incubating the washed resin with V5 peptide. Further purification was performed by size exclusion chromatography on a Superdex 200 10/300 column (GE Healthcare). The purified protein was concentrated using a 100 kDa cut-off polyethersulfone centrifugal filter and the pure protein concentration calculated as A280/ε, with ε = 1.331 calculated from the combined primary sequence of LAT1 and CD98 by ExPASy ProtParam 47 .
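As a worked example of the concentration calculation, the following sketch applies the Beer-Lambert relation, assuming ε = 1.331 is the absorbance of a 1 mg/mL solution at 280 nm over a 1 cm path (ProtParam's "Abs 0.1%" value); the A280 reading is an invented example.

    EPSILON = 1.331  # assumed: absorbance of a 1 mg/mL LAT1+CD98 solution, 1 cm path

    def protein_concentration(a280, epsilon=EPSILON, path_cm=1.0):
        # Beer-Lambert estimate: concentration in mg/mL.
        return a280 / (epsilon * path_cm)

    print(round(protein_concentration(0.40), 3))  # 0.40 AU -> ~0.301 mg/mL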
Immunoblotting. Protein samples were incubated at room temperature in SDS loading buffer (0.5 M Tris-Cl; 30% v/v glycerol; 10% v/v SDS; 0.012% w/v bromophenol blue) before loading onto 12.5% Tris-glycine SDS-PAGE gels. Dithiothreitol or β-mercaptoethanol was included in the loading buffer when reducing conditions were required. 20-40 μg total protein was loaded per well for cell lysates and 5 μL from analysis of LAT1-CD98hc purifications. Gels were run at 200 V for 1 hr and proteins transferred from gel to PVDF membranes by wet blotting at 100 V for 1.5 hrs. The membranes were blocked in 5% w/v semi-skimmed milk. Immunoblots were incubated with anti-His6 mouse monoclonal antibody (1:1000; Abcam), anti-CD98 rabbit polyclonal antibody (1:1000; H300, Santa Cruz) or α1 sodium-potassium ATPase mouse monoclonal antibody (1:2000, clone 464.6, Abcam). Visualisation was done using anti-mouse or anti-rabbit HRP-conjugated secondary antibodies by chemiluminescence.
Thermostability experiments. LAT1-CD98 was purified in SEC buffer and the fractions corresponding to the heterodimer were pooled, concentrated and analysed by HPLC-SEC. To determine the thermal stability of LAT1-CD98, the protein, in the desired buffer conditions, was incubated at a range of temperatures for 10 minutes. 3 to 5 μg of protein from each sample were injected and analysed using the BIO-SEC5 HPLC column (Agilent). The peak height at A220 was normalised by dividing by the peak area and expressed as a fraction of the normalised peak height at 4 °C. This is referred to as the normalised absorbance. Normalised absorbance was plotted against temperature and the data analysed by non-linear regression to the Boltzmann sigmoidal function using GraphPad Prism 6 (GraphPad Software, Inc., La Jolla, USA).
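The Boltzmann sigmoidal fit can equally be reproduced outside Prism; the sketch below extracts an apparent Tm with SciPy, assuming NumPy and SciPy are installed, using an invented melting curve rather than the measured data.

    import numpy as np
    from scipy.optimize import curve_fit

    def boltzmann(t, top, bottom, tm, slope):
        # Decreasing sigmoid: near 'top' at low temperature, 'bottom' at high.
        return bottom + (top - bottom) / (1.0 + np.exp((t - tm) / slope))

    temp = np.array([4.0, 20, 30, 40, 45, 50, 55, 60, 70, 80])   # deg C
    norm_abs = np.array([1.00, 0.99, 0.96, 0.83, 0.64, 0.37,
                         0.18, 0.08, 0.02, 0.01])                # fraction of 4 deg C value

    (top, bottom, tm, slope), _ = curve_fit(boltzmann, temp, norm_abs,
                                            p0=(1.0, 0.0, 47.0, 3.0))
    print(f"Apparent Tm = {tm:.1f} deg C")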
In silico protein structure modelling and conservation analysis. The amino acid sequences for human LAT1 and the Drosophila melanogaster dopamine transporter (dDAT) (NCBI accession numbers NP_003477.4 and NP_523763.2, respectively) were aligned using PROMALS3D 34 . The alignment was used to identify residues in cholesterol binding sites I & II of the dDAT that were conserved in LAT1, thus defining putative cholesterol binding sites in LAT1. An alignment of LAT1 orthologous sequences and LAT2 (UniProt identifier: Q9UHI5-1) was performed using Clustal O 48 to determine whether the putative cholesterol binding sites are conserved between orthologues. Orthologues were chosen from Canis lupus familiaris, Bos taurus, Rattus norvegicus, Mus musculus, Gallus gallus, Danio rerio, Drosophila melanogaster, and Xenopus tropicalis (NCBI accession numbers XP_850176.2, NP_777038.1, NP_059049.1, NP_035534.2, NP_001025750.1, NP_001121830.1, NP_001245996.1, and NP_001135465.1, respectively). Annotation and scoring of conservation at each position was done in Jalview 49 ; residues in binding sites I & II were annotated in purple and red, respectively. A model of LAT1 was generated in order to determine the proximity in space of the residues in the putative binding sites. The best I-TASSER server 50 generated 3D model of LAT1 was optimized using ModRefiner 51 to improve backbone stereochemistry, as assessed by Ramachandran plot analysis using RAMPAGE. Residues in putative binding sites were annotated on the resulting predicted structure in purple and red for sites I & II, respectively.
Statistical tests & kinetic calculations.
All statistical tests were performed with GraphPad Prism 6. For two different conditions, a two-tailed t-test was performed, while for multiple comparisons an ANOVA with post hoc Tukey's test was carried out.
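For readers without Prism, the sketch below reproduces the same workflow with SciPy and statsmodels; the group labels and values are illustrative placeholders, not data from the paper, and the substitution of these libraries for Prism is ours.

```python
# A minimal sketch of the statistical workflow described above, using SciPy
# and statsmodels in place of GraphPad Prism.
import numpy as np
from scipy.stats import ttest_ind, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.0, 1.2, 0.9, 1.1])
lat1 = np.array([2.4, 2.1, 2.6, 2.3])
mutant = np.array([1.5, 1.7, 1.4, 1.6])

# Two conditions: two-tailed unpaired t-test
t_stat, p_two = ttest_ind(control, lat1)
print("t-test p =", p_two)

# More than two conditions: one-way ANOVA followed by Tukey's post hoc test
f_stat, p_anova = f_oneway(control, lat1, mutant)
print("ANOVA p =", p_anova)

values = np.concatenate([control, lat1, mutant])
labels = ["control"] * 4 + ["LAT1"] * 4 + ["mutant"] * 4
print(pairwise_tukeyhsd(values, labels))
```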
Kinetics of L-DOPA uptake were determined by selecting a time point at which transport was linear and then subtracting the drug accumulation in HEK293 control cells from the drug accumulation in HEK293 LAT1 cells; this provides the LAT1-mediated fraction. V max was calculated by plotting the rate of drug transport by LAT1 (pmoles/min/million cells) against L-DOPA concentration (μM). GraphPad Prism 6 was used to calculate Michaelis-Menten values for LAT1-mediated L-DOPA uptake. | 2017-10-28T01:36:39.745Z | 2017-03-08T00:00:00.000 | {
"year": 2017,
"sha1": "077703cd178ee4c29d57549ace0565bc1a79c0b6",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep43580.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "077703cd178ee4c29d57549ace0565bc1a79c0b6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
261645020 | pes2o/s2orc | v3-fos-license | Streptococcus zooepidemicus Meningitis in an HIV-Positive Horse Breeder Patient: A Case Study and Literature Review
Streptococcus equi subsp. zooepidemicus is a rare etiologic agent of bacterial meningitis in humans. The disease is a zoonotic infection and is transmitted through close contact with domestic animals, mainly horses. Only 37 cases of Streptococcus zooepidemicus meningitis had been reported in the literature as of July 2023. The aim of this study is to present a rare clinical case of S. zooepidemicus-related meningitis in a human immunodeficiency virus (HIV)-positive patient and to analyze the literature. We present a 23-year-old horse breeder with advanced immunosuppression due to acquired immunodeficiency syndrome (AIDS) and S. zooepidemicus meningitis, admitted to the Clinic of Infectious Diseases, St. George University Hospital, Plovdiv. The course of the meningitis was severe from the onset, with significant cerebral edema, disturbances in consciousness, persistent fever, and the development of complications against the background of AIDS-related conditions. S. zooepidemicus was detected microbiologically from cerebrospinal fluid culture. After prolonged treatment and a long hospital stay, the patient's condition improved; he recovered from the acute neuroinfection and was eventually discharged. Although extremely rare, S. zooepidemicus should be considered in patients with clinical and laboratory evidence of bacterial meningitis who have contact with animals, especially horses and other domestic animals, or with their dairy products, as well as in immunocompromised patients. To the best of our knowledge, the current clinical case is the first report of S. zooepidemicus-related meningitis in a patient with HIV/AIDS.
Introduction
Streptococcus equi subsp. zooepidemicus (S. zooepidemicus) belongs to a group of β-hemolytic streptococci with the Lancefield group C antigen, along with S. dysgalactiae subsp. dysgalactiae, S. dysgalactiae subsp. equisimilis, and S. equi subsp. equi. Group C streptococci are not considered part of the normal human flora. They are recognized as either commensals or pathogens in a wide array of domestic animals and are uncommon causes of infections in humans [1].
S. zooepidemicus is a commensal of the skin and the mucous membranes of the upper respiratory tract of horses but, as an opportunistic pathogen, can cause rhinitis, bronchitis, pneumonia, arthritis, submandibular lymphadenitis, and wound infection [2]. In humans, it causes uncommon zoonotic diseases with very few reported cases. Infections range from mild diseases such as pharyngitis and skin and soft tissue infections to severe clinical presentations of epiglottitis, pneumonitis, septic arthritis, osteomyelitis, peritonitis, sepsis, endocarditis, and meningitis [3].
S. zooepidemicus is a rare etiologic agent of bacterial meningitis in humans. Nearly all of the meningitis cases have been attributed to zoonotic exposure. Horses are implicated in most cases of direct animal contact. Other domestic animals, such as dogs, swine, cattle, guinea pigs, and sheep, can also serve as sources of infection in humans. Whenever possible, the suspect animal is tested for S. zooepidemicus from nasal or nasopharyngeal secretions [1]. Another transmission mechanism is through the consumption of unpasteurized milk and dairy products. The incubation period varies from 1 to 21 days (median 7 days). Benzylpenicillin and third-generation cephalosporins are the antibiotics of choice for group C streptococcal infections. Gentamicin or rifampin can be added for a synergistic effect against this pathogen [3,4].
Case Presentation
A 23-year-old male was admitted to the Clinic of Infectious Diseases at the St. George University Hospital-Plovdiv in Bulgaria with fever, vomiting, diarrhea, rhinitis, sore throat, and cough over the preceding 2 days. According to his relatives, he had had episodes of nosebleeding and bloody urine. On the day of admission, he became aggressive and confused, and soon after, he progressed to a comatose state. Relevant comorbidities included human immunodeficiency virus (HIV) infection stage C (acquired immunodeficiency syndrome, AIDS), chronic hepatitis B, hepatitis C, liver cirrhosis, esophageal varices, and wasting syndrome. The HIV infection had been established 7 years earlier, when antiretroviral therapy was initiated. The therapy included Combivir (lamivudine, zidovudine) and Kaletra (lopinavir, ritonavir). A serious problem in our patient was his non-adherence to antiretroviral therapy (ART), with frequent and long-term interruptions in medication intake, which resulted in very poor control of the HIV infection. He had not taken ART for about a year before the onset of meningitis. In addition, he was an intravenous drug user and had recovered from a middle ear infection (otitis media) 2 months before. On physical examination, the patient was febrile (39 °C), dehydrated, comatose, and had nuchal stiffness and a positive Babinski sign. He had tachycardia with a heart rate of 120 beats per minute and hypotension of 100/60 mmHg. On auscultation of the chest, crackles were heard over both lungs. Splenomegaly, catarrhal angina, skin petechiae, and trophic changes on the legs were also present.
Results of routine laboratory tests showed anemia with hemoglobin of 90 g/L (reference for males 140-180 g/L), leukocytosis with a white blood cell count (WBC) of 15.8 × 10⁹/L (reference 3.5-10.5 × 10⁹/L) followed by severe leukocytopenia (WBC varied from 1.4 to 2.8 × 10⁹/L), thrombocytopenia with platelets varying from 31 to 60 × 10⁹/L (reference 140-440 × 10⁹/L), elevated C-reactive protein of 79 mg/L (reference 0-10 mg/L), an erythrocyte sedimentation rate of 44 mm/h (reference for males below the age of 50, <15 mm/h), and a low sodium level of 126 mmol/L (reference 136-151 mmol/L). An emergency computed tomography (CT) scan of the head revealed no abnormalities. A lumbar puncture (LP) was performed immediately. The cerebrospinal fluid (CSF) analysis for the whole hospital stay is presented in Table 1. The microscopic evaluation showed blood contamination of the CSF sample obtained from the second LP. Unfortunately, technical difficulties occurred during the second lumbar puncture; it is likely that a small blood vessel was accidentally ruptured, causing unwanted blood contamination of the cerebrospinal fluid.
Gram and methylene blue staining of the CSF specimen revealed no inflammatory cells or microorganisms. Overnight incubation on 5% sheep blood agar revealed white colonies with β-hemolysis that tested catalase-negative. The bacterium was also recovered from blood cultures. Both strains were identified as S. equi subsp. zooepidemicus by Vitek-2 Compact (bioMerieux, France). The strain was susceptible to benzylpenicillin, cefotaxime, cefepime, erythromycin, teicoplanin, and linezolid but resistant to clindamycin. A diagnosis of S. zooepidemicus meningitis was made. Because of the patient's HIV-positive status, additional tests were performed. Sputum cultures were positive for Acinetobacter baumannii, and stool cultures were negative. Serology showed negative IgM antibodies to the Epstein-Barr virus and Cytomegalovirus but positive IgG antibodies for both viruses. The patient tested positive for hepatitis B surface antigen and also for hepatitis C virus antigens and antibodies. Parasitology tests were negative for toxoplasmosis and pneumocystosis (Giemsa stain). The HIV viral load was 2,785 copies/mL and CD4+ T-cells were only 35/mm³.
Regardless of the initiated treatment, the patient continued to be somnolent and confused; fever persisted at over 38.5 °C for 15 days; and anemia worsened over the first 7 days of admission. Skin and mucous hemorrhages, hematuria, jaundice, and generalized edema appeared as a result of his decreased platelet numbers as well as chronic liver failure. The latter was supported by severe laboratory abnormalities such as increased aspartate transaminase of 157 U/L (reference 0-36 U/L), alanine transaminase of 69 U/L (reference for males 0-49 U/L), and total bilirubin of 60 µmol/L (reference 3.4-21 µmol/L); decreased serum cholinesterase of 1,180 U/L (reference 2,100-5,000 U/L) and albumin of 21 g/L (reference 35-55 g/L); prothrombin time varying from 16.6 to 51.7% (reference 70-120%); and fibrinogen decreased to 1.18 g/L (reference 2-4.5 g/L). Three weeks after admission, chest radiography revealed pneumonia (Figure 1), and an abdominal ultrasound showed signs of liver cirrhosis. A second control CT of the brain was performed, which raised suspicion of a small subarachnoid hemorrhage (Figure 2).
Due to the lack of clinical improvement and the persistence of fever, bacteremia, and pneumonia, the therapy was changed to benzylpenicillin 6 × 4,000,000 IU i.v. and teicoplanin 2 × 0.4 g i.v. The combination of cefotaxime and vancomycin had been administered for an overall period of 14 days. After starting the new antimicrobial therapy, the patient's condition slowly began to improve. Benzylpenicillin was continued for 14 days and teicoplanin for 21 days. During acute bacterial meningitis, the permeability of the blood-brain barrier is increased, so we could speculate that teicoplanin could have reached sufficient concentrations within the subarachnoid space. Massive amounts of erythrocyte and platelet concentrates, fresh-frozen plasma, human albumin 20%, and symptomatic drugs were needed until liver function stabilized. The patient recovered after a hospital stay of 41 days. He was eventually discharged with no neurological sequelae.
Discussion
By reviewing the medical literature, we were able to find 32 cases of meningitis due to S. zooepidemicus reported until April 2022 (Table 2). S. pneumoniae, N. meningitidis, L. monocytogenes, and Staphylococcus spp. are common etiologic agents of acute bacterial meningitis in humans, whereas group C streptococci only rarely cause inflammation of the meninges [31]. Recently, Bosica S et al. published a study on an S. zooepidemicus outbreak associated with the consumption of unpasteurized dairy products in Italy. It involved 37 infected people, of whom 35 were symptomatic. A wide range of clinical manifestations was observed, including septicemia, pharyngitis, arthritis, uveitis, and endocarditis. Five of the patients developed severe meningitis and subsequently died [32]. Unfortunately, no further clinical details about the reported meningitis cases were available in this study.
Meningitis caused by S. zooepidemicus presents with clinical and laboratory findings characteristic of purulent meningitis [3]. However, the CSF parameters from the initial LP in our patient were within normal limits, which can be explained by the severe immunosuppression due to AIDS, in which the inflammatory response within the subarachnoid space is less vividly manifested in the course of the disease. We consider that the marked increase in CSF WBC to 6,826 × 10⁶/L can be explained by the blood contamination of the second CSF sample.
According to the cases reported in the literature (n = 32), the average age of the patients was 49.8 years, ranging from 1 day to 83 years. The majority (84%) were adults (over 18 years old), with an equal gender distribution (1:1). Exposure to the pathogen was reported in 90.4% of the patients: contact with horses in 53%, consumption of unpasteurized cow or goat milk and milk products in 28%, and contact with symptomatic dogs in 6%. Comorbidities were present in 52%, most often cardiovascular diseases (33.3%), arterial hypertension (14.8%), and diabetes mellitus (7.4%).
To the best of our knowledge, there have been no other reported patients with both HIV/AIDS and S. zooepidemicus meningitis. As in our patient, concomitant bacteremia (68%) and pneumonia (13%) were established by other authors. Furthermore, otogenic disorders such as otitis, sinusitis, and mastoiditis (16%), endocarditis (10%), and endophthalmitis (7%) were observed in these reports. An interesting fact is that 68% of patients had microbiologically proven bacteremia, although only a few cases met the clinical and laboratory criteria for sepsis [5,18]. The most commonly reported antibiotics for S. zooepidemicus meningitis were third-generation cephalosporins (ceftriaxone, cefotaxime), benzylpenicillin, ampicillin, and vancomycin. Gentamycin and rifampicin were rarely administered in the reported cases. Currently, there are no guidelines for the antimicrobial treatment of meningitis caused by S. zooepidemicus in HIV patients. We consider that the duration of antimicrobial treatment should be determined by the clinical course and laboratory parameters. The lethality rate was 22.6%. Residual neurological sequelae such as deafness (16.1%) and visual disturbances (9.7%) were registered in the survivors.
Conclusions
In summary, the presented case report confirms the role of S. zooepidemicus as a possible zoonotic pathogen in patients with acute bacterial meningitis. Although extremely rare, S. zooepidemicus should be considered in patients with clinical and laboratory evidence of bacterial meningitis who have had contact with animals, especially horses, or who have consumed unpasteurized milk. In addition, this case of S. zooepidemicus meningitis in a patient with HIV infection is the first reported in the literature, and it may extend our knowledge of the role of this pathogen in immunocompromised patients. The current case report also updates the epidemiological data on the etiology of bacterial meningitis in HIV-positive individuals.
Figure 2. Control head CT. No parenchymal lesions. The tentorium and falx cerebelli appeared slightly thickened, with a density of 61 Hounsfield units (suspicion of a small subarachnoid hemorrhage).
Table 1. CSF analysis on admission (day 1) and at control LPs (days 8 and 16). CSF, cerebrospinal fluid; WBC, white blood cells; LP, lumbar puncture. * CSF glucose/serum glucose ≤ 0.4 is suspicious for bacterial meningitis. | 2023-09-10T15:26:34.241Z | 2023-09-07T00:00:00.000 | {
"year": 2023,
"sha1": "d581d48111c27b4ca45d8605c90e6ff36d1dc05f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a33488337cd71397793638b15e57aefaa2fbbc84",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
216239869 | pes2o/s2orc | v3-fos-license | The frequency of defective genes in vif and vpr genes in 20 hemophiliacs is associated with Korean Red Ginseng and highly active antiretroviral therapy: the impact of lethal mutations in vif and vpr genes on HIV-1 evolution
Background We have reported that internal deletions in the nef, gag, and pol genes in HIV-1–infected patients are induced in those treated with Korean Red Ginseng (KRG). KRG delays the development of resistance mutations to antiretroviral drugs. Methods The vif-vpr genes over 26 years in 20 hemophiliacs infected with HIV-1 from a single source were sequenced to investigate whether the vif-vpr genes were affected by KRG alone and by KRG plus highly active antiretroviral therapy (ART) (hereafter called GCT), and the results were compared with our previous data. Results A significantly higher number of in-frame small deletions was found in the vif-vpr genes of KRG-treated patients than at the baseline, in control patients, and in ART-alone patients (p < 0.001). These were significantly reduced in GCT patients (p < 0.05). In contrast, sequences harboring a premature stop codon (SC) were significantly more frequent in GCT patients (10.1%) than in KRG-alone patients and controls (p < 0.01) and than in ART-alone patients (p = 0.078 for peripheral blood mononuclear cells). The proportion of SC in Vpr was similar to that in Vif, whereas the proportion of sequences revealing SC in the env-nef genes was significantly lower than that in the pol-vif-vpr genes (p < 0.01). The genetic distance was 1.8 times higher in the sequences harboring SC than in the sequences without SC (p < 0.001). Q135P in the vif gene is significantly associated with rapid progression to AIDS (p < 0.01). Conclusion Our data show that KRG might induce small deletions in the vif-vpr genes and that the vif and vpr genes are similarly affected by lethal mutations.
Introduction
It is presumed that the Korean subclade of human immunodeficiency virus type 1 (HIV-1) subtype B (KSB) was introduced to Korea by a founder effect in the mid-1980s [1,2]. From late 1989 to early 1990, 20 hemophiliacs (HPs) were diagnosed with KSB infection 1-2 years after exposure to a domestic clotting factor IX manufactured using plasma from two cash-paid plasma donors in Korea [2-4]. They were all infected with a single source of HIV-1 [3,4]. Of the 20 HPs, about 70% did not progress to AIDS up to 10 years after HIV-1 infection [5]. They had been treated with Korean Red Ginseng (KRG) for a significant period before the introduction of highly active antiretroviral therapy (ART) in 2002, after which most of them were treated with a KRG plus ART combination therapy (hereafter called GCT). Our previous studies have shown that KRG treatment induces nonspecific internal deletions (IDs) over the full genome of HIV-1 [5-10] and has clinically beneficial effects [5,11].
G-to-A hypermutation by APOBEC3 proteins (A3G) is represented in 5-12% of a collection of sequences [12,13], although HIV genetic variation is directed and restricted by DNA precursor availability. However, hypermutants are recovered from 1-2% of resting or activated peripheral blood mononuclear cells (PBMCs) in therapy-naive patients [12].
Recently, we reported that the proportion of premature stop codons (SCs) in the pol gene was 8.5% in the 20 HPs undergoing ART [5]. The median proportion of sequences harboring SC in the reverse transcriptase (RT) due to G-to-A hypermutation by A3G was 21% in a sample of patients on successful long-term ART [14]. Vif counteracts A3G, and it was recently discovered that Vpr also counteracts A3G [15]. A3G contributes 88.4% of the total mutation rate of HIV-1 in viral DNA sequences from PBMCs, whereas HIV-1 RT contributes only 2.0% [16]. Furthermore, the sublethal and lethal mutations caused by A3G have the potential to contribute significantly to HIV-1 evolution, pathogenesis, immune escape, and drug resistance [17]. HIV-1 has a significantly higher mutation rate and more G-to-A hypermutation caused by A3G than HIV-2 [18]; accordingly, the asymptomatic period of HIV-1 infection is shorter than that of HIV-2. Complete viral suppression by ART decreases the pool of replication-competent viruses and consequently increases the detection of lethal mutations [14], promoting viral eradication. To date, there have been few studies on the effects of A3G on the vif and vpr genes during ART, although there are a few studies on the effects of A3G on the pol gene during ART and on the vif gene in therapy-naive patients.
Thus, through analysis of lethal mutations due to A3G, we investigated whether variations in the vif-vpr genes were equally affected and whether genetic defects were associated with KRG and GCT; we determined sequences of the vif-vpr genes from baseline to GCT in 20 HPs infected with the same source of HIV-1 [3-5] and compared the results with the findings from our previous reports. Here, we report for the first time on sequential changes in the vif-vpr genes over a period of 26 years, before the commencement of ART and during ART.
These findings might provide significant implications of KRG and GCT for the treatment of AIDS, as well as underline the importance of A3G in the pathogenesis of HIV-1 infection.
Study population
The twenty patients with hemophilia (HP), identified in this study as HP 1-HP 20, were diagnosed with HIV-1 infection between 1990 and 1994 (Table S1) [2-5,19]. They had been treated primarily with imported clotting factors before the start of local domestic clotting factor production. The control patients (n = 80) for the KRG-treated patients were infected with subtype B and had not been exposed to KRG or any antiretroviral therapies (e.g., zidovudine) at the time of sampling, and their PBMCs were available for gene amplification. As the control patients for GCT, 43 ART-alone patients were included. Informed written consent was obtained from the HPs. This study was approved by the Institutional Review Board of the Asan Medical Center (Code 2012-0390).
DNA preparation and vif and vpr gene amplification
Viral DNA was isolated from PBMCs using a QIAamp DNA Mini Kit (Qiagen, Hilden, Germany), and viral RNA was extracted from 300-μl serum samples using a QIAamp UltraSens Viral RNA Kit (Qiagen, Hilden, Germany) as described by Rawson et al [18]. The vif gene was amplified by nested polymerase chain reaction (PCR) using the TaKaRa R-Taq kit (Takara Bio Inc., Shiga, Japan). The first and second PCRs were performed in 20-μl and 50-μl reaction mixtures, respectively. The outer primer pairs were 545 (5′-GCAGTACAAATGGCAGTATTCATC-3′) and LA 106K (5′-TGRTAGAGRAACTTGATGRTYCTT-3′) or 545 and KMK2 (5′-ATGGGAATTGGTTCAAAGGA-3′), and the inner primer pairs were 547 (5′-GCTCCTCTGGAAAGGTGAAGG-3′) and LA100 (5′-AGTATCCCCGTAAGTTTCA-3′; targeting 758 bp) for amplification of the vif gene [3], and 547K (5′-GCTTCTCTGGAAAGGTGAAGG-3′) and LA 102K (5′-TACAAGGAGTCTTGGGCTGACTTC-3′; targeting 948 bp) and 547 and 566 (5′-GGCCCAAACATTATGTACCTCTGTA-3′; targeting 1,240 bp) for amplification of the vif and vpr genes [20]. The first PCR cycling conditions were as follows: 95 °C for 2 min; 35 cycles of 30 s at 95 °C, 30 s at 52 °C, and 2 min 30 s at 72 °C; and a final extension at 72 °C for 10 min. The second PCR was performed with 1 μl of the first PCR product; the cycling conditions were as follows: 95 °C for 2 min; 35 cycles of 30 s at 95 °C, 30 s at 57 °C, and 1 min 30 s at 72 °C; and a final extension at 72 °C for 10 min. The products were directly sequenced using an Applied Biosystems 3730XL (Applied Biosystems, Foster City, CA, USA).
ART and therapy with KRG
Outpatient-based KRG treatment of HIV-1-infected patients was initiated at the Korean National Institute of Health in late 1991. The daily dose of KRG for men was 5.4 g (six 300-mg capsules, three times per day) [5,9]. KRG has been supplied since November 1991, although the supply of KRG was not consistent before 2000 [5]. Most of our study patients had taken KRG for a variable period before the commencement of ART. The total amount of KRG used before the start of ART was 3,507 ± 5,468 g over 28 ± 36 months. The annual decrease of CD4+ T cells (AD) in the 20 HPs was 43 ± 27 cells per μL. There was a significant inverse correlation between the total amount of KRG and the AD (p < 0.01) [5]. The ART regimen has included integrase strand transfer inhibitors (INSTIs) in all living patients since 2014 in Korea. During ART, four HPs (1, 5, 9, and 13) did not take KRG (Fig. S1). The amount of KRG supplied to 16 HPs was 11,678 ± 9,593 g. The duration of ART was 171 ± 51 months in 18 HPs.
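As a quick sanity check on the dosing arithmetic, the snippet below confirms that six 300-mg capsules taken three times per day equal the stated 5.4 g daily dose; the variable names are ours, not the authors'.

```python
# Quick arithmetic check of the stated KRG dose: six 300-mg capsules taken
# three times per day should equal the reported 5.4 g daily dose for men.
capsules_per_intake = 6
mg_per_capsule = 300
intakes_per_day = 3

daily_dose_g = capsules_per_intake * mg_per_capsule * intakes_per_day / 1000.0
print(daily_dose_g)  # -> 5.4
```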
Statistical analysis
Data are expressed as mean ± standard deviation. Statistical significance was estimated by the two-tailed Student t test, the chi-square test, Fisher's exact test, or correlation and survival analysis, using MedCalc software (Ostend, Belgium). Statistical significance was defined as p < 0.05.
Patient demographics
The clinical characteristics of all patients were described in previous studies [2-5,19]. They were all infected with KSB from Plasma Donors O and P [2-5,19], and the earliest vif genes from Plasma Donors O and P were all wild-type (WT) [3]. In addition, we also obtained vif-vpr gene sequences from the two plasma donors. In total, 28 sequences were obtained: 24 from seven sequential samples in Donor O and four from two samples in Donor P (Fig. S1). The 28 vif-vpr genes were all WT.
Distribution of defective genes at the patient, sample, and sequence level
At the patient level, irrespective of therapy, in-frame small deletions (SD), premature SCs, and IDs (including insertions or deletions of one or two nucleotides, hereafter called indels) in the vif gene were found in three (15%), 14 (70%), and seven patients (35%), respectively (Fig. 1A). The HPs revealed significantly higher proportions of SD, ID, and SC than the control group (p < 0.01) and the ART-alone group (p < 0.05) (Fig. 1A). Three patients (HPs 2, 3, and 9) did not reveal any defects in the vif gene (Fig. S1). There was no occurrence of SD or SC in the two patients treated with the smallest amounts of KRG (1,200 g in HP 9 and 1,800 g in HP 2) [5], although HP 3, treated with 3,980 g of KRG, revealed SC in the vpr gene only. However, they all revealed defects in the 5′ LTR/gag and nef genes (ID in HP 2 and HP 9) [7,9].
SDs are associated with KRG intake
We obtained 182 and 403 vif sequences from the 20 HPs during treatment with KRG and GCT, respectively. Thirty-nine SDs were found in 24 vif sequences (13.2%) from HP 5 and HP 8 during KRG treatment, whereas 15 vif sequences (3.7%) with SDs from HPs 8 and 18 were obtained during the GCT period (p < 0.001) (Fig. 1C). Notably, HP 1 and HP 5 had not taken ART. Detection of SDs was also significantly inhibited during GCT even when the nine SDs among the 28 sequences obtained during KRG intake in HP 1 and HP 5 were excluded and the remaining proportion (9.7%) was compared with GCT (3.8%) (p < 0.01). In addition, SDs were significantly higher during KRG treatment than at baseline, in control patients, and in ART-alone patients (Fig. 1C).
In contrast, compared with the HIV-1 subtype B consensus, the same type of SD at AA 185-187 was detected in 12 of 366 sequences in control patients and in 21 of 148 sequences in ART-alone patients (Fig. S2A). Two control patients (HJiH and JSH) and two ART-alone patients (HSHn and LHS) revealed it in at least two samples per patient. All 33 of these SDs were obtained from the first sample (Fig. S2A). Thus, all 33 sequences revealing SDs could be considered WT in view of the personal baselines, suggesting that these patients might have been infected with the deleted viruses. This was quite different from the SDs in KRG-treated patients. Thus, we can conclude that there was a significant difference in the frequency of SDs between patients being treated with KRG and ART-alone/control patients (Fig. 1A-C). Despite the absence of ART in HP 1 and HP 5, all defects were significantly higher in HPs treated with KRG and/or GCT than in the control and ART-alone groups.
Fig. 1. (A) The proportion of patients with SC was significantly higher in the ART patients than in the control patients. (B) At the sample level, the proportion of amplicons with SC in GCT patients was significantly higher than in the control and KRG patients, and the proportion of SD in KRG patients was significantly higher than in the control, GCT, and even ART-alone patients. (C) SD (≤15 bp) in the vif gene on KRG intake was significantly higher than at baseline and in the control patients (p < 0.001); its detection was significantly decreased during GCT compared with KRG treatment (p < 0.001). (D) The proportion of the sequences harboring SC on GCT was significantly higher than that on KRG treatment and in control patients, whereas it was similar to that on ART. The proportion was similar among control patients, at baseline, and in KRG patients. ART, antiretroviral therapy; GCT, KRG plus highly active antiretroviral therapy; HP, hemophiliac; KRG, Korean Red Ginseng; PBMC, peripheral blood mononuclear cell.
The proportion of SC depends on the duration of ART
Of the 721 sequences, 124 sequences obtained by RT-PCR were excluded from the denominator for the proportion of sequences harboring SC. The proportion of SC was also significantly higher in GCT patients (10.1%, 40/397) than in control patients (p < 0.01) (Fig. 1D).
The earliest detection of SC in each patient was at 69 ± 41 months (range, 10-137) after the introduction of ART. Regarding the time point of the first occurrence of SC during GCT, the proportion of sequences harboring SC depended significantly on the duration of ART. It was 2.3% (3/128) within three years and 13.8% (37/269) after three years for the vif gene (p < 0.001). The same result was obtained for the vpr gene (3.1% vs. 13.6%, respectively; p < 0.001). In other words, most SCs occurred three years after the introduction of ART. The three sequences harboring SC within three years were obtained at 10, 29, and 30 months in HP 15, HP 12, and HP 10, respectively (Fig. S1).
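The comparison above can be re-checked from the reported counts alone; the sketch below does so with SciPy's Fisher's exact test (used here in place of the authors' MedCalc software, whose exact test choice for this comparison is an assumption).

```python
# Hypothetical re-check of the reported comparison: SC-containing vif
# sequences within vs. after three years of ART (3/128 vs. 37/269).
from scipy.stats import fisher_exact

#                  SC   no SC
within_3_years = [3, 128 - 3]
after_3_years = [37, 269 - 37]

odds_ratio, p_value = fisher_exact([within_3_years, after_3_years])
print("odds ratio = %.2f, p = %.2g" % (odds_ratio, p_value))  # expect p < 0.001
```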
In addition, 1.9% of sequences obtained from control patients revealed SC [20], while 7.4% of sequences obtained from ART-alone patients revealed SC (p < 0.01, Fig. 1D). Thus, the proportion of sequences with SC in the GCT group was mildly higher at the PBMC level and similar at the sequence level compared with the ART-alone group (Fig. 1B and D). Taken together, our data suggest that KRG treatment might have had some effect on the occurrence of SC through an additive antiviral effect.
ID and indel are not associated with KRG treatment
The proportion of sequences harboring ID (n = 8) or indels (n = 4) in the vif gene was 1.5% at baseline, 1.7% (3 IDs in HP 14) during KRG treatment, and 1.7% (4 IDs and 3 indels) during GCT (Fig. 1C). The sizes of the eight IDs were Δ394 bp in HP 1 (before KRG treatment); Δ426 bp (off ART for 1 month) and Δ594 bp (compliance of 60-70%) in HP 11; Δ257 bp in HP 12 (five months after the introduction of ART); three Δ1,000 bp in HP 14 detected with primers 547/566 (positions of deletion not defined); and Δ210 bp in HP 20 (RNA copies = 14,700/mL owing to poor compliance) (Fig. S2A). Except for the three Δ1,000 bp, all IDs were obtained with the inner primers 547K/LA102K. The Δ594-bp ID in HP 11 and the Δ257-bp ID in HP 12 span the vif-vpr genes (Fig. S2A and S2B). Seven of the eight IDs and three of the four indels were obtained during KRG treatment including the GCT period (1.8%; 10/566), whereas one ID and one indel were obtained in patients before KRG treatment or off KRG treatment (1.2%; 2/161). There was no significant difference in the frequency of ID among the groups (Fig. 1C). However, the detection of ID was affected by the size of the sequences; indeed, there was a significant difference in ID when the data obtained in ART-alone patients were included (p < 0.001) (Fig. S3).
Q135P in the vif gene is associated with rapid progression to AIDS
All twenty HPs revealed lysine (K) at Position 22 in the earliest sample [3]. There was a K22H in HP 9 (JQ066931-32), which is associated with low CD4+ T-cell counts and higher viral loads [21]. One report showed that the K22H mutation was more frequent in patients failing ART [22]. However, HP 9 consistently showed the WT sequence in the pol gene [10]. Instead of the very rare K22H, we found an evolution from K22 to K22N in six HPs (3, 7, 8, 10, 11, and 19) (11.5%; 81/706 sequences), whose samples had not been exposed to any antiretroviral drugs (except HP 19) (Fig. S2A). In each case, K22N was first detected during AIDS or at a low CD4+ T-cell count, although it has also been reported in long-term nonprogressors (LTNPs) [23]. Q135P [24] was detected in nine patients, including Plasma Donor O (Fig. S1). Of note, in HP 17, Q135P developed during GCT. Interestingly, survival analysis revealed that Q135P was significantly associated with fast progression to AIDS, whereas K22N was not (Fig. 2). We did not find any specific changes in the nucleotide or amino acid sequences, including insertions/deletions, attributable to KRG intake.
Effect of ART on the vpr gene
We obtained 470 vpr genes. Excluding 75 sequences obtained by RT-PCR, 395 sequences were divided into pre-ART (n = 135) and ART (n = 260) groups. Although there was the same deletion of 6 bp at the same position in six LTSPs and an insertion of the amino acids "RAR" between AA 90 and AA 91 in the LTSP LSK [20] (Fig. S2B), the 96 AAs of the Vpr proteins were well conserved in all patients except HP 5, HP 8, and HP 18, who also revealed the same deletions of 12 bp and 9 bp as in the Vif proteins (AA 12-15 and AA 13-15) (Fig. S2B). The deletion is at the same position as the 12-bp and 9-bp deletions in the vif gene previously mentioned. In addition, there was an indel in 92LCS3-6867 (JF957938). Except for two sequences with an initial isoleucine instead of methionine in HP 7, the proportion of SC-containing sequences during ART (9.6%; 25/260) was significantly higher than the proportion of 2.3% found during the pre-ART period (p < 0.01) (Fig. 3). In our study of 470 Vpr proteins, there was no Q65R [25] or F72L, as reported in LTNPs [26]. In addition, R77Q has been reported in Western LTNPs [27,28]; it was found in 83 of 102 KSB-infected Korean patients (Fig. S2B). Interestingly, the vpr gene was also affected by A3G almost to the same extent as the vif gene (Fig. 3), as the two genes show the same extent of genetic diversity [29]. In HP 20, there was a single-nucleotide deletion in the vif-vpr genes in the same amplicon.
Fig. 2. Association of Vif Q135P with rapid progression. Q135P was found in Donor O and eight HPs (Fig. S1) and was significantly associated with accelerated progression to AIDS. HP, hemophiliac.
Fig. 3. The proportion of the sequences harboring in-frame stop codons in the pol, vif, and vpr genes was significantly and similarly increased during ART, whereas there was no such increase in the env and nef genes [7]. ART, antiretroviral therapy.
Interestingly, 11 of the 40 amplicons with SC in the vif gene did not reveal an SC at a W residue in the Vpr proteins, whereas two of the 25 amplicons with SC in the vpr gene did not reveal an SC at a W residue in the Vif proteins (Table S1). After accounting for the nine sequences in which the vpr gene was not determined (ND) and one SC not at a W residue, there was a significant difference in the proportion of SC between the two genes (11/33 versus 2/24, respectively; p < 0.05). Thus, to investigate whether the Vif and Vpr proteins differ in how they are affected by A3G, we compared the proportion of SCs at all tryptophan (W) sites on ART. All SCs occurred at W except one in Vpr in HP 18 (JQ067036). The Vif and Vpr proteins have W at eight and three sites, respectively. The proportion of SC at W sites in the same 260 sequences was similar in the Vif proteins (3.9%; 82/2,080) and in the Vpr proteins (5.0%; 39/780) (Table S1).
Focusing on the sequences harboring SC during ART, the Pol, Vif, and Vpr proteins revealed SC at 30.5% of W positions (124/407 W), with an additional SC at a lysine site (AAA→TAA; HQ026608); at 29.4% (87/296 W); and at 36% (38/105 W), with an additional SC at an arginine site (AGA→TGA; JQ067036), respectively (Table S1). In contrast, there were only four SCs at TGG (W) sites, in three nef sequences. Thus, considering that the nef gene contains 7-8 TGG sites, the proportion of SC at TGG was 0.23% (4/1,768) to 0.26% (4/1,547) of all W sites of the 221 nef genes. This is significantly lower than in the Vif proteins.
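The per-tryptophan proportions quoted above can be re-derived directly from the stated counts; the sketch below does so (all counts are taken from the text; the helper function name is ours).

```python
# Hypothetical re-derivation of the per-tryptophan stop-codon proportions
# quoted above; all counts are taken directly from the text.
def sc_fraction(sc_count, w_sites):
    """Percentage of tryptophan (W) sites carrying a stop codon."""
    return 100.0 * sc_count / w_sites

print("Pol: %.1f%%" % sc_fraction(124, 407))   # 30.5% of W positions
print("Vif: %.1f%%" % sc_fraction(87, 296))    # 29.4% of W positions
print("Vpr: %.1f%%" % sc_fraction(38, 105))    # 36% of W positions

# nef: 4 SCs over 221 genes carrying 7-8 TGG sites each
for tgg_per_gene in (7, 8):
    total_w = 221 * tgg_per_gene
    print("nef (%d TGG/gene): %.2f%% (4/%d)"
          % (tgg_per_gene, sc_fraction(4, total_w), total_w))
```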
The frequency of signature pattern nucleotides was also analyzed regarding the association of Plasma Donors O and P with the 20 HPs, compared with local controls (LCs). Regarding the epidemiological association, our previous report showed seven signature pattern nucleotides in the vif gene [3]. In the vpr gene, there were significant differences in frequency at three nucleotide positions between Clusters O and P and the LCs: a synonymous change at Position 5621 (GAG in all of Cluster O; GAA in all of Cluster P; and 78 GAG, 2 GAA, and 1 GGG in the LCs), and two nonsynonymous changes at 5633 (GAA→GAC/T) and 5741 (ATA→ACT). Thus, E25D and I61T showed a significantly higher frequency in Cluster P than in the LCs and Cluster O (p < 0.0001) (Fig. S2B).
The effect of SCs on virus evolution
To analyze the effect of lethal mutations on virus evolution, we compared the sequences revealing SC and the WT sequences alone with the earliest sequences of Donors O and P, respectively. The genetic distance was calculated based on January 1990, when Donors O and P were diagnosed as infected. The genetic distance in the sequences harboring SC or an initial isoleucine (I) was significantly higher (5.4 ± 1.7%) than that in the WT sequences (2.9 ± 0.8%) (p < 0.0001) (Fig. 4). The elapsed time from January 1990 to the sampling time was 232 ± 33 months and 213 ± 21 months, respectively. Translated into an annual mutation rate, these values correspond to 0.28 ± 0.09% and 0.16 ± 0.04% per year, respectively. Thus, the genetic distance was 1.78 times higher in the sequences with SC than in the WT sequences. These factors lead to faster development of mutant strains that are resistant to therapeutic agents [22]. The intrapersonal genetic discrepancy in the earliest two sequences (October 1991 and February 1993) from Donors O and P was 0.37% and 0%, respectively. Sequence variation of the WT sequences without SC or initial I showed a significant correlation with elapsed time (r = 0.45, p < 0.01 for the sequences obtained during ART), whereas that of the sequences with SC showed no correlation. The correlation was more significant when the earliest sequences before ART were included (r = 0.70, p < 0.001).
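The annual-rate conversion above is simple division of distance by elapsed time; the sketch below checks it from the reported means (the assumption that the authors used this plain distance-over-time conversion, rather than a model-based rate, is ours).

```python
# Hypothetical arithmetic check of the annual mutation rates reported above:
# genetic distance divided by elapsed time (months converted to years).
def annual_rate(distance_pct, elapsed_months):
    """Genetic distance (%) per year, given distance and elapsed months."""
    return distance_pct / (elapsed_months / 12.0)

sc_rate = annual_rate(5.4, 232)   # sequences harboring stop codons
wt_rate = annual_rate(2.9, 213)   # wild-type sequences

print("SC: %.2f %%/yr, WT: %.2f %%/yr, ratio: %.2f"
      % (sc_rate, wt_rate, sc_rate / wt_rate))
# -> roughly 0.28 %/yr vs 0.16 %/yr, a ~1.7-1.8-fold difference
```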
Effect of ART on pol genes
In the present study, the proportion of pol sequences harboring SC was 9.6% during GCT or ART, which was higher than the 1.2% found during the pre-ART period (p < 0.01) (Fig. 3). However, the proportion of sequences harboring SC in the env-nef genes increased only slightly during ART. These findings show that the env-nef genes are significantly less affected by A3G than the pol-vif-vpr genes.
Discussion
Here, the frequency of the SC-containing vif gene in the KRG group was similar to that in the control group and at baseline. However, it increased significantly to 9.0% during GCT. The proportion with SC depended significantly on the duration of GCT. Moreover, the genetic distance in the sequences harboring SC was significantly higher than that in the WT sequences (p < 0.001). The proportion of SDs was significantly higher during KRG treatment than in the control patients and at baseline; although the proportion was significantly inhibited during GCT, the level was nevertheless still significantly higher than in the control group or at baseline (Fig. 1A-C). The pol-vif-vpr genes were shown to be similarly affected by A3G, whereas the env and nef genes were significantly less affected by A3G (Fig. 3). It is well known that hypermutation is not equally distributed along the HIV-1 genome [30], as shown in a report that the env gene is significantly less affected by A3G than the integrase-vif-vpr genes [31]. For this reason, unlike in previous studies [6-10], ID was not affected by taking KRG, but SC may be affected to some extent, as shown in Fig. 1B. This might result from the synergistic effects of reductions in virus concentration by both KRG and ART. A second reason the proportion of SC is higher in GCT than in ART alone (Fig. 1B and D) might be a lower WT virus concentration as a result of KRG treatment, which increases the probability of amplifying the defective virus. In our previous study, we did not include an ART-alone group, making comparison with the GCT group impossible [20].
Consequently, despite increasing the DNA concentration (2- to 4-fold), the success rate of PCR amplification decreased significantly 10 years after ART (data not shown), as shown for the pol gene [10], because we did not use a primer set designed to target the hypermutated virus. In brief, it decreased from 63% before ART to 29% after 6 years of ART [10]. These findings support the view that the longer the ART period, the lower the WT DNA concentration. Consequently, the ratio of defective DNA, including SC, is increased, and therefore the PCR success rate is significantly decreased by primer mismatch. For this reason, it was very difficult to obtain PCR products in a few patients (HPs 6 and 14). Moreover, in HP 14 we could not obtain sequences even on the rare occasions when we could obtain PCR products, suggesting that the reason might be primer mismatch due to G-to-A hypermutation. Thus, the proportion of sequences harboring SC might be underestimated. Irrespective of the primer set used, the success rate of PCR at pre-ART was 73% (101/138); however, it was significantly reduced to 41% (189/458) during the ART period (p < 0.0001). Furthermore, there was a significant inverse correlation between the duration of ART and the success rate of PCR (r = −0.20, p = 0.01).
Actually, the proportion of SC during GCT (10.1%) was similar to the proportion (7.4%) during ART alone. However, the proportion was significantly higher than that (1.4%) in the nef gene in the same patients (p < 0.001) (Fig. 3) [7]. In contrast, the proportion with ID in the vif gene on KRG treatment (1.7%) was significantly lower than that (20.6%; 62/301) in the nef gene (p < 0.0001) [7]. The first probable reason for this is the significantly higher genetic stability of the vif gene compared with the nef gene [32]. Another reason might be related to the size of the gene: the vif gene is 579 bp, whereas the nef gene is about 620 bp. Compared with the previous study covering 1.2 kb of the integrase region (11.9%; 84/704) [10], the proportion of IDs (4.9%; 6/122) was significantly lower in the 1,248-bp vif-vpr region (p < 0.05) (Fig. S3).
Here, the two cysteines at Positions 113 and 132 were well conserved, as shown in other reports [31,33], except in two vif genes: among 624 sequences, one each in HP 8 and HP 15 revealed S132 and Y113, respectively. It is known that changes such as K22H in the Vif protein are associated with the development of resistance to antiretroviral drugs [22,34]. In this study, however, there was no K22H in patients resistant to antiretroviral drugs [5], although two of the six patients with K22N revealed resistant viruses (HPs 9 and 19). Further study is needed to determine whether this is associated with the subtype difference (KSB versus subtype B) or with KRG treatment.
In Korea, INSTIs have been part of ART regimens since their introduction in 2014, and major resistance mutations (RMs) to INSTIs [35] and the Q151M complex, reported in 2014 [36], are frequent in KRG-naive patients (22%). In contrast, in the present study, despite more than four years of additional follow-up compared with the previous report in 2015 [5], most of these HPs had already developed RMs to previous monotherapy and two-drug combination therapy [5], yet we could not find RMs to INSTIs (data not shown). These results probably reflect the synergistic effects of taking KRG, as shown by reversal to the WT sequence [37].
Regarding the potential mechanism of SD occurrence, it is difficult to point out which components of KRG are involved because we administered whole ginseng to patients. It contains many active components, such as various ginsenosides and acid polysaccharides; some components of ginseng have inhibitory effects on HIV-1 RT [38-40]. It is possible that these inhibitory effects on RT decrease its fidelity and result in a high frequency of genetic defects [41]. In addition, A3G disrupts the synthesis of cDNA [42] and is also targeted to the proteasomal degradation pathway by Vpr and Vif [15,43]. This might be the basis for the vif-vpr genes being similarly affected by lethal mutations.
The present study has the following limitations. There was a significant difference in the use of samples between the patients on GCT and ART alone. In other words, compared with the GCT group, a limited number of samples were used in most patients from the ART group.
Taken together, these data show that the vif-vpr genes revealed similar proportions of sequences with SC due to G-to-A hypermutation on ART. The sequences with SC showed about 1.8 times faster evolution than the WT sequences. This faster evolution can facilitate the emergence of antiretroviral RMs. Further studies will be needed on the link between KRG treatment and SD.
Conflicts of interest
The authors declare no conflicts of interest. | 2020-04-09T09:11:56.023Z | 2020-04-08T00:00:00.000 | {
"year": 2020,
"sha1": "d835eb2ab53302563df848b6bb5d56745b506baf",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jgr.2020.03.003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "507a9f833cad5942a1b90cc59dbe6111c8fe364a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
30370603 | pes2o/s2orc | v3-fos-license | Immunopathology of allergic contact dermatitis
Allergic contact dermatitis is the consequence of an immune reaction mediated by T cells against low-molecular-weight chemicals known as haptens. It is a common condition that occurs in all races and age groups and affects the quality of life of those who suffer from it. The immunological mechanism of this disease has been revisited in recent decades, with significant advances in its understanding. The metabolism and pathway of the haptens, as well as the activation and mechanisms of action of the cells responsible for both the immune reaction and its resolution, are discussed in this article.
INTRODUCTION
The skin is the organ that separates the human body from the external environment. This function exposes it to physical, chemical, and biological aggressions that determine diseases such as eczema. Eczema is a form of dermatitis characterized by the presence of erythema, edema, vesicles, and exudation (acute eczema); pink erythema and desquamation (subacute eczema); and lichenification (chronic eczema). Eczema caused by exogenous agents, whether contactants or ingested agents, is called contact eczema or contact dermatitis (CD). Contact dermatitis can be caused by irritants (irritant contact dermatitis, ICD) or by sensitizers (allergic contact dermatitis, ACD). ICD results from exposure to agents that cause direct tissue damage, such as acids and alkalis. ACD, in turn, results from a specific immune response against the contactant in previously sensitized people. The immune reaction against the antigen, which is generated to destroy it, causes tissue damage. 1 When triggered by exposure to sunlight, CD can be classified as phototoxic CD or photoallergic CD. Phototoxic CD has the same mechanism as ICD but requires sun exposure for the contactant to become an irritant and then trigger the dermatitis. Similarly, in photoallergic CD, sun exposure turns the inert contactant into an allergen, thus triggering the immune process. 1
EPIDEMIOLOGY
The socioeconomic impact of CD is great but difficult to quantify. Occupational CD represents one of the most prevalent occupational diseases and is considered a public health problem. 5-8 Besides being frequent, CD affects the quality of life of the individuals who suffer from it. 9 The discovery of the agent responsible for CD changes its evolution and prognosis, thus improving quality of life. 9,10 There are over 3,700 substances that can trigger ACD. The prevalence of ACD caused by a particular antigen depends on its sensitizing potential and on the frequency and duration of exposure to it. The conditions of exposure are also important, since they may favor the development of sensitization. Occlusion, moisture, and contact of the allergen with damaged skin favor its penetration and sensitization. 11,12 The prevalence of ACD varies among populations, since it results from the antigenic exposure peculiar to each region. 13 Furthermore, the rate of sensitization of a given population is constantly changing, as the presence of and exposure to sensitizers change over time. 14
IMMUNOPATHOLOGY
The sensitization mechanism is quite complex and, despite being the object of numerous studies, is only partly known. Recent decades have witnessed a rapid advancement in the understanding of the allergic contact response; this progress occurred in parallel with findings about the immune system and the development of tools to study it. Various research techniques, applied mostly in experimental models with mice, such as the development of monoclonal antibodies (which allowed the identification of cells and cytokines by different methods), cell cultures, administration of cytokines, and inactivation of genes ("knockout"), among others, are responsible for the advances to be discussed.
ACD is an inflammatory disease triggered by haptens and mediated by T cells. 15 Haptens are small reactive molecules with molecular weight below 500 Da which are not immunogenic by themselves but which bind to peptides and proteins, thus becoming recognized by the immune system. 16 In 1935, Karl Landsteiner and John Jacobs 17 wrote about the existence of reactive chemicals of low molecular weight that bind to proteins and then determine the formation of antibodies or antibody-like substances, supposedly responsible for ACD. Only 40 years later, Shearer 18 demonstrated that hapten-specific T lymphocytes (TL) also respond to these hapten-protein complexes.
ACD occurs as a result of a cascade of physicochemical and immune processes that can be didactically divided into two phases: induction, also called afferent, and elicitation or efferent.The induction phase involves all of the steps, from the initial contact with the allergen to the development of sensitization.Elicitation begins after contact with the hapten in a previously sensitized individual and results in ACD. Figure 1 summarizes the events involved in these phases. 19,20
AFFERENT PHASE
The afferent phase develops over time as a result of repeated exposure to environmental agents. Most contactants are too large to pass the stratum corneum, but haptens, due to their low molecular weight, penetrate this layer and spread toward the basal layer without being recognized by the immune system. During this diffusion process, they bind to tissue proteins and become immunogenic. The inherent reactivity of haptens is due to unpaired electrons in the outermost shell of these molecules. They usually bind through covalent bonds to the amino acids of tissue proteins to stabilize themselves. 12 Several nucleophilic (electron-rich) amino acids react with electrophilic (electron-poor) haptens, donating electrons to these molecules. Among these, the most notable amino acids are lysine and cysteine, but others such as histidine, methionine, and tyrosine also perform this action. 12 The proteins to which the haptens bind can be derived from keratinocytes, components of Langerhans cells (LC), or peptides previously processed and bound to MHC class I or II. 21 The nature of the hapten, the type of binding of the hapten to its carrier, and the final three-dimensional configuration of the complex formed influence the immunogenicity of the hapten-protein complex. 22 Lipophilic haptens can penetrate LC and bind to cytoplasmic components of these cells, which are processed by proteasomes and bind to MHC class I to be presented to CD8 TL. In contrast, hydrophilic haptens tend to combine with extracellular tissue proteins and are captured by LC, processed, and bound to MHC class II molecules to be presented to CD4 TL. 21-24 LC originate from CD34+ cells derived from bone marrow and reach the skin through the bloodstream. 25 These cells remain in the epidermis in an immature state, extremely able to capture and process antigens but unable to present them and form effector cells. 26 Capture of antigens promotes a series of morphological and functional changes in the DC. They become more dendritic, increase the number of Birbeck granules and costimulatory molecules, produce greater amounts of cytokines, and change the profile of chemokine receptors in their membranes. Among these cytokines are the pro-inflammatory cytokines IL-1β and TNF-α. 27,28,30 IL-1β and TNF-α transform the DC, which change from cells that are ready to capture and process antigens to cells that specialize in antigen presentation. 31 IL-1β increases the expression of costimulatory molecules such as ICAM-1 and CD86 in DC, which are needed for the activation of hapten-specific effector TL. 32,33 These changes lead to the migration of DC towards the endothelium of the afferent lymphatic vessel in response to the gradient of chemokines produced by these cells. 34-39 The interaction between sialyl Lewis X, a selectin ligand whose expression is increased in DC during allergic reactions, and its ligand, the E-selectin of endothelial cells, promotes the passage of DC to lymph nodes. 15,34,40,41 Within twenty-four hours after contact with the antigen, DC migrate to regional lymph nodes for antigen presentation. 42 In the paracortical area of regional lymph nodes, DC meet and touch several naive T lymphocytes that are in the process of recirculation. The nature of these dendritic cells enables multiple cellular contacts, favoring cell activation. Naive lymphocytes also express CCR7, which directs them to the same place. 43 DC remain in the paracortical zone with the aid of the EBI1-ligand chemokine, which is produced by resident mature DC and also binds to CCR7. 44
44 To activate naive T cells, DC must deliver two signals.45 If the lymphocyte has the receptor complementary to the peptide-MHC complex, it will receive the first signal. The first signal determines conformational changes in the costimulatory molecules of the naive TL, increasing their avidity for their ligands present in DC. The first signal also leads to transcription of IL-2 mRNA; however, the mRNA formed is unstable.46 The second signal is given by the connection between the costimulatory molecules of DC (ICAM-1, CD80 and CD86) and their respective ligands on TL (LFA-1 and CD28, the latter binding both CD80 and CD86). CD86, when bound to CD28, stabilizes the IL-2 mRNA, inducing TL to produce large amounts of this cytokine.47,48 Clonal expansion forms a large number of hapten-specific T cells which will respond to a future contact with the allergen. In the activation of T cells, in addition to the first and second signals, the cytokines produced by DC and present in the microenvironment where antigen presentation occurs also have an essential role.

FIGURE 1: Mechanism of sensitization and elicitation of allergic contact dermatitis. In the afferent phase, haptens penetrate the skin and bind to tissue proteins, becoming complete antigens (Ag). These antigens are captured and processed by dendritic cells (DC), which present them bound to MHC molecules on the surface of the cell membrane. DC migrate to regional lymph nodes, where they present the antigen to TL. The TL that recognize the presented antigen are activated. The penetration of antigens into the skin also determines the release of endogenous glycolipids, which are presented by DC to NK T lymphocytes (NK TL). The NK TL release IL-4, which stimulates type 1 B lymphocytes to produce IgM. Upon a new contact, the interaction of the IgM with the antigen-protein complex leads to activation of complement, which induces the release of inflammatory and chemotactic factors from mast cells and endothelial cells. The activated TL migrate to the skin and interact with DC and keratinocytes that carry the antigen, thus leading to ACD. Adapted from Gober and Gaspari 19 and Campos et al. 20
After clonal expansion in the regional lymph nodes, the TL go to the thoracic duct and enter the bloodstream.49,50 These hapten-specific T cells express the cutaneous lymphocyte antigen (CLA), which preferentially directs these lymphocytes to cutaneous inflammatory processes.
EFFERENT PHASE
To find the allergen, T cells must pass through the dermal microvasculature and the dermis to reach the keratinocytes modified by the antigen, where they will act. This whole passage is regulated by chemokines and adhesion molecules expressed in the tissues and recognized by TL.
CLA binds to the E-selectin expressed on endothelial cells stimulated by the presence of antigen in the overlying skin, thus beginning the process of rolling.51-57 As the expression of ICAM-1 in the endothelium only increases 16 hours after contact with the antigen, a period during which much of the influx of lymphocytes has already occurred, E-selectin and VCAM-1 appear to be particularly important early in the process and ICAM-1 in its amplification.57 Once in the dermis, the VLA-4 and VLA-5 of TL bind to fibronectin, an extracellular protein of the dermal matrix, which facilitates the transit of these cells in this medium.58 Chemokines direct lymphocytes to the epithelium, and the connection between ICAM-1, expressed in keratinocytes, and the LFA-1 of leukocytes promotes interaction between these cells.42 Lymphocytes produce a vigorous inflammatory response to eliminate the keratinocytes modified by the antigen. Only a small fraction of the TL found in ACD are hapten-specific.59-61 Besides the Fas-FasL pathway, it has been shown that perforins also participate in the destruction of cells in contact dermatitis.62 Keratinocytes undergo apoptosis, with cleavage of E-cadherin, which results in loss of cell cohesion demonstrated by spongiosis and vesicles.63 Tissue destruction and desquamation remove the antigen from the tissue, decreasing the inflammatory process.15
INVOLVED CELLS
Dendritic cells
The application of haptens to the skin induces successive extension and retraction of the dendrites of Langerhans cells (LC) and induces the migration of these cells to regional lymph nodes.64 These movements are stimulated by IL-1 and TNF-α, cytokines produced by keratinocytes and by LC themselves after contact with the antigen.65-69 The maturation process is necessary for the activation of naive hapten-specific T cells in regional lymph nodes into effector and memory TL.70 The maturation described occurs upon exposure to haptens, whereas irritants, when applied to the skin, induce migration of LC but not maturation, preventing the formation of a specific effector response.69 The stimulated DC are attracted to the afferent lymphatic vessels, since they begin to express CCR7, which responds to the chemokines of the lymphoid tissue, CCL19 and CCL21.71 The afferent lymphatic vessels express CCL21, and the paracortical zone of the lymph nodes expresses both CCL21 and CCL19, attracting DC to this region of the lymph nodes along the gradient of these chemokines.72 The role of DC in ACD has recently been revisited, as we shall see below.
Dendritic cells in the afferent phase
There is controversy about the role of LC in the induction phase of ACD. Langerin is a transmembrane protein that leads to the formation of Birbeck granules and is a specific marker of LC. Using a murine model in which the injection of diphtheria toxin leads to selective depletion of cells expressing langerin, Bennett et al.73 showed in May 2005 that the absence of LC decreases the chance of sensitization to haptens. The authors credited the sensitization of some of the mice exposed in the trial to dermal DC and concluded that these cells work together with LC in the sensitization process and that their absence affects this mechanism. However, in the same month, Kissenpfennig et al.74, using a similar model, found the same response to haptens in mice with depletion of LC and in control groups, and concluded that LC are dispensable in the presentation of haptens, leaving this function to dermal DC only. In December of that same year, Kaplan et al.75 demonstrated that mice that constitutionally do not present LC have an increased response to haptens; that is, according to this model, LC have a regulatory role. Until then, the role of LC in sensitization had been variously defined as inducer, indifferent or suppressor. In 2007, Bennett et al.76 returned to their model and demonstrated that antigens are not properly transported to lymph nodes in the absence of LC, concluding that this absence decreases sensitization to antigens and confirming the findings of their first trial. They attributed the differences found in the other studies to: 1) the permanent absence of LC in the model of Kaplan et al. and 2) the use of high concentrations of allergens in the other studies.77,78 The absence of these cells may have prevented the development of this regulatory mechanism and led to a state of hyperreactivity responsible for increased sensitization to haptens.77,78 Later, Fukunaga et al.79 demonstrated that mice with a defect in the migration of LC to regional lymph nodes but with normal migration of dermal DC present a normal response to haptens, thus suggesting that dermal DC are more important than LC in generating an effector response against haptens. All these data clearly demonstrate that dermal DC are able to determine an effector response to haptens, but they do not allow a proper conclusion about the role of LC.
81,82 Langerin+ dermal DC also capture and present antigens. Using a model of selective ablation of LC and langerin+ dermal DC, Wang et al.83 demonstrated that an attempt at sensitization immediately following depletion, a period characterized by the absence of both LC and langerin+ dermal DC, fails, but that sensitization performed some days after ablation, when part of the langerin+ dermal DC have already returned but LC have not, elicits a normal response. That trial indicates that langerin+ dermal DC, and not LC, are mainly responsible for the development of ACD. However, Bursch et al.80 and Bennett et al.76 used a similar system and failed to induce ACD with low concentrations of oxazolone in the fourth week after ablation, a period during which only langerin+ dermal DC have returned to normal, indicating that LC are necessary for sensitization. A possible conciliatory explanation lies in the concentration of haptens applied for sensitization. Bacci et al.84 and Bennett et al.76 suggest that at higher concentrations the antigen is captured by both LC and dermal DC, which induce the generation of an effector response in regional lymph nodes, and that at lower concentrations the antigen is captured especially by LC, which induce the process by themselves.
Dendritic cells in the efferent phase
Clear evidence indicates that LC are not required in the elicitation phase. Induced depletion of these cells by topical corticosteroids, UVB radiation or their selective ablation in experimental models with previously sensitized mice did not result in a reduced allergic response.73,74,76,85-87 The role of these presenting cells, including LC, in the effector phase of ACD is still under study.
LYMPHOCYTES
Effector lymphocytes
ACD was considered the prototype of delayed-type hypersensitivity (DTH) for a long time; however, the subpopulations of lymphocytes and the antigens involved in ACD present peculiarities which individualize this reaction.21 In DTH, the antigens are relatively large and soluble proteins, whereas in contact sensitization they are small, reactive and lipophilic compounds.21 The primary effector cell in DTH is the CD4 TL, while the main effector cell in ACD is the CD8 TL, whose action is supported by type 1 helper TL and suppressed by other CD4 T cells.47,88 A series of findings, described below, led to these conclusions.
Gocinski et al.89 demonstrated that mice with depletion of CD8 TL induced by anti-CD8 monoclonal antibody are unable to develop ACD. However, mice with induced depletion of CD4 TL develop a more intense and prolonged clinical response to the allergen. Similar results were obtained in mice with knockout (inactivation) of MHC class I and II. The absence of these molecules prevents the activation of CD8 TL and CD4 TL, respectively, leading to the same consequences as the selective absence of these subpopulations of lymphocytes.90 In a sequential evaluation of the inflammatory infiltrate of ACD, Okazaki et al.91 showed that the lymphocytes found at the beginning of the process are CD8 TL that produce IFN-γ, followed by CD4 TL. The highest proportion of CD8 TL was found 12 hours after contact, while the highest proportion of CD4 TL was found after 24 hours.
In 1998, Cavani et al.92 showed that only individuals who are allergic to nickel present antigen-specific CD8 TL (Tc1). Antigen-specific CD4 T cells, however, are found in both allergic and non-allergic individuals, differing only in the higher proportion of suppressor cells, producers of IL-10, in the healthy group.50 IL-10 inhibits the differentiation and maturation of DC, blocking the release of IL-12, which is necessary for generating an allergic response.92 Despite the mounting evidence that the main effector cell of ACD is the CD8 TL, it is possible that the nature of the antigen and/or its access pathway may contribute to determining the cell type involved in the response that will be formed.21,93 Besides the IFN-γ-producing CD8 TL and CD4 TL, Th17 lymphocytes also exert an important effector role in ACD. Th17 cells are effector T lymphocytes that express the transcription factor ROR-γt (a retinoic acid receptor-related orphan receptor) in mice and its equivalent (RORC) in humans. These cells produce proinflammatory cytokines such as IL-17, IL-21 and IL-22 and express the chemokine receptor CCR6, which directs them to the epithelium for defense against bacterial and fungal infections.59 When stimulated by contact with haptens, human keratinocytes produce IL-23, which together with IL-1β leads to the development of Th17 lymphocytes.94 Individuals with contact sensitivity present Th17 lymphocytes in peripheral blood that respond to the antigen-presenting cells that carry the allergen.94 In addition to Th17 lymphocytes, Tc17 lymphocytes were also found in the tissue cellular infiltrate.59 The main actions of the IL-17 produced by these cells are the induction of proinflammatory cytokines (such as IL-1, IL-6 and TNF-α), chemokines (CXCL1, CXCL2, CXCL5 and CXCL8) and adhesion molecules (ICAM-1 and VCAM-1) by epithelial and endothelial cells, thus leading to the recruitment of inflammatory cells and the interaction of these cells with the epithelium.59,95-97 An experimental model showed that the absence of IL-17 in mice compromises the development of the contact hypersensitivity reaction, reinforcing the importance of these cells in contact sensitivity.98 Surprisingly, the NK cell was identified as the effector cell of ACD induced by dinitrofluorobenzene in mice with knockout of the Rag-2 gene, which is essential for the development of B and T lymphocytes.99 This finding is notable, since it suggests that NK cells, despite not having T-cell receptors, are able to recognize specific antigens and develop memory.
REGULATORY T LYMPHOCYTES
Although ACD is a common condition, its occurrence is not the usual response resulting from the interaction of the cutaneous immune system with environmental chemicals. Most individuals are exposed daily to various chemicals and still do not develop contact allergy. The reaction is actually an uncontrolled response of the immune system to haptens.100 The control of the immune response to environmental chemicals is a priority task of the immune system, and a series of mechanisms ensure homeostasis.100 The interaction between DC loaded with hapten and antigen-specific TL usually results in apoptosis, anergy or induction of T cells with regulatory activity.101 Loss of these mechanisms of tolerance leads to ACD.
The knowledge of regulatory T cells has been reviewed in recent years. These cells comprise a heterogeneous subfamily of T lymphocytes that suppress the immune response by releasing anti-inflammatory cytokines, especially IL-10, or by inactivating effector T cells through cell-cell contact via CTLA-4 (cytotoxic T lymphocyte antigen-4).102 Three types of regulatory cells have been well studied in terms of contact sensitivity: CD4+CD25+ Treg cells, type 1 regulatory T cells (Tr1) and Th3 lymphocytes.
Tr1 cells produce large amounts of IL-10, moderate amounts of IL-5 and TGF-β, and do not produce IL-4 or IFN-γ.103 These effects are mediated by IL-10 and result in the suppression of hapten-specific CD4 and CD8 T cells. Cavani et al.92 showed that peripheral CD4 TL in individuals not allergic to nickel express a greater amount of IL-10 and less IFN-γ compared with allergic patients; that is, non-allergic individuals have a higher amount of hapten-specific Tr1 cells in the blood.103 The second type of regulatory cell that has been well studied expresses the CD4 molecule, the alpha chain of the IL-2 receptor (CD25), cytotoxic T lymphocyte antigen-4 (CTLA-4) and the transcription factor Foxp3. These cells are called CD4+CD25+ Treg lymphocytes or Tregs.104 They can also express CLA, presumably after an encounter with DC in regional lymph nodes, and migrate to the skin.105-107 The mechanism of suppression induced by these cells is still under debate. In vitro studies suggest the need for cell-cell contact through interaction of the CTLA-4 of the regulatory cell with CD80 and CD86 for inactivation of effector TL.108 On the other hand, in vivo models indicate suppression mediated by the action of cytokines, especially IL-10.109 It is possible that regulatory T cells work in a cooperative system, since it has been shown that Treg cells induce the production of IL-10 in Tr1 cells.110-112 Thus, they act in both the afferent and efferent phases, preventing the emergence of allergy and minimizing the intensity and duration of the process when it has already developed.
The mobilization and maturation of DC are promoted by exposure to danger signals such as cell damage, UVB radiation and bacterial and viral products.100 The state of maturation of DC determines the ability of these cells to direct naive hapten-specific TL toward effector, memory or suppressor hapten-specific TL.101 Naive CD4 T cells, when stimulated by immature or partially immature DC in the presence of TGF-β, may transform into CD4+CD25+ Treg lymphocytes.113,114 The coexistence of danger signals and exposure to chemicals appears to be an important factor in the loss of tolerance to haptens. In this way, the irritant effect, a characteristic of the most sensitizing contact allergens, may help break the mechanism of tolerance and, together with the allergen, determine the complete maturation of DC and induce the development of ACD.115 A recent study confirms the importance of the irritant potential of a chemical in its sensitizing capacity.116 The irritating action leads to high levels of IL-1β and IL-6 and low levels of IL-10, promoting the maturation of DC.116 Haptens at low concentrations have a reduced irritant effect and may lead to the formation of hapten-specific T cells that produce IL-10, generating tolerance.117 The route of contact with allergens also determines the response pattern presented.118,119 However, total and persistent oral tolerance is only achieved in individuals who are not sensitive to the antigen in question and encounter the antigen for the first time orally.120 A single prior contact with the antigen, even without development of ACD, can prevent the formation of tolerance.121 Oral contact with the allergen leads to antigen presentation by cells other than the DC of the skin, favoring the formation of regulatory cells (Th3 lymphocytes), anergic T cells or apoptosis of hapten-specific T cells due to the absence of an appropriate second signal.59 In addition to these controlling mechanisms, Gorbachev et al.122 demonstrated that there is apoptosis of dendritic cells in lymph nodes and that this mechanism also suppresses ACD. The authors showed that mice with depletion of CD4 cells or knockout of the gene responsible for FasL show longer permanence of dendritic cells in lymph nodes compared with naive mice. The loss of apoptosis of the presenting cells resulted in intense and sustained activation of IFN-γ-producing CD8 TL in the experimental mice. Besides these laboratory findings, the mice with depletion of CD4 cells or with altered FasL presented a more intense and persistent clinical response to the allergen tested.
KERATINOCYTES
Keratinocytes are critical cells in the immune response of the skin due to their numerical dominance, and they are important both for inducing and for controlling the response to haptens. The IL-1 receptors of keratinocytes respond to the IL-1β released by LC exposed to the antigen by producing TNF-α, which results in maturation and migration of LC.123,124 Furthermore, IFN-γ increases the expression of MHC class II in keratinocytes.125,126 In this way, keratinocytes can present the antigen to CD8 TL, since they constitutionally express MHC class I, and to CD4 TL, since they are induced by IFN-γ to express MHC class II. Under normal conditions, keratinocytes express low levels of the costimulatory molecules CD80 and CD86.19 These molecules bind to their receptors on T cells (CD28/CTLA-4, cytotoxic TL-associated antigen-4) and are necessary for an effective second signal.19 In the absence of this second signal, antigen presentation by keratinocytes favors the development of anergic hapten-specific TL.127,128 These anergic TL express a large amount of IL-2 receptors and therefore compete with effector and memory T cells for this growth factor. Contact with allergens and irritants causes human keratinocytes to increase the expression of CD80, favoring the development of an allergic contact response.129 Moreover, as described above, keratinocytes promote the generation of Th17 lymphocytes by producing IL-1β and IL-23, which amplify the inflammatory process.
Exposure to allergens also induces the production of IL-16, which is involved in the chemotaxis of CD4 TL, which suppress the inflammatory response.132 Keratinocytes also produce PGE2 and TGF-β.133,134 TGF-β, in turn, blocks the action of activated T cells and prevents additional infiltration of leukocytes, since it reduces endothelial adhesion molecules.135 Furthermore, keratinocytes in an inflammatory environment express high levels of receptor activator of nuclear factor κB ligand (RANKL), which induces the expression of CD205 and CD86 in LC when it binds to receptor activator of nuclear factor κB (RANK).136 The expression of CD205 is associated with the induction of CD4+CD25+ cells, which suppress the immune response.137
Mast cells
Along with keratinocytes and endothelial cells, mast cells are an important source of TNF-α and act in both the afferent and efferent phases of contact hypersensitivity.138 TNF-α is important for the maturation of DC and for the passage of these cells through the endothelium. It also promotes infiltration of T cells, increasing the inflammatory reaction. Like keratinocytes, mast cells have a dual function, for they also suppress ACD by producing IL-10.139
B lymphocytes and NK T cells
ACD was long considered a process independent of the participation of B cells, but recent studies indicate an essential role of these cells in ACD in mice. In humans, this function is not yet established. The B lymphocytes (BL) associated with ACD are type 1 B cells, which are independent of T cells, do not form germinal centers, do not generally undergo DNA rearrangement and are a source of antigen-specific IgM.140 This IgM is produced during the afferent phase of ACD, when type 1 BL proliferate rapidly.141 Mice with depletion of type 1 BL have decreased ACD, which is restored by repletion of antigen-specific monoclonal IgM, by transfer of type 1 BL from antigen-allergic donors, or by serum of mice collected 24 hours after sensitization to the antigen in question.141 IgM cleaves complement, thus forming C5a, which degranulates mast cells that release TNF-α, among other factors.142-146 In turn, type 1 BL are activated by NK T cells, a subtype of lymphocyte that is part of the innate immune system. Despite presenting a T-cell receptor, these cells do not undergo gene rearrangement and are able to bind, through this TCR, to a highly conserved glycolipid bound to CD1d, a molecule similar to the MHC class I molecule, which is found in antigen-presenting cells.19,147,148 The nature of this glycolipid remains unknown.149
Molecular pattern-recognition receptors
The innate immune system uses different families of molecular pattern-recognition receptors to detect microorganisms and danger signals.150 There is already some evidence of the involvement of these receptors in ACD, since their mutations interfere with the contact hypersensitivity response. It is not yet known whether haptens bind directly to NOD receptors or whether they induce the formation of endogenous ligands, but when this route is compromised, the efferent phase of ACD is also affected.151 There is evidence that Toll-like receptors are also associated with ACD. Sensitization to the allergen 2,4,6-trinitro-1-chlorobenzene fails in mice deficient in both TLR2 and TLR4, or with concomitant deficiency of TLR4 and of IL-12 function, but not in mice lacking TLR4 or IL-12 alone.152
Mechanism of allergic contact dermatitis by transition metals
Haptens can be classified as classical haptens, pro-haptens and transition metals. Classical haptens follow the route of lymphocyte activation described above. Pro-haptens need to be transformed to become reactive.153,154 Some authors prefer to divide these chemicals into pro-haptens, when the transformation occurs by an enzymatic process, and pre-haptens, when the process is not enzymatic and occurs through contact with environmental agents such as oxygen, heat and light. Transition metals are metals that tend to form compounds containing complex ions, that is, compounds formed by a central metal ion surrounded by various ligands.155 Unlike classical haptens, transition metals form ionic bonds with their carriers. Interactions between chemicals result from electrical connections between their atoms and are characterized by the energy needed to break them, which reflects their stability. Ionic bonds are considered weak because they need less energy to be undone and thus form less stable complexes compared with covalent bonds, a form of strong connection.156
The lower stability of these complexes causes them to fall apart when they come into contact with another protein that has higher affinity for the metal, thus forming a new complex that is also reversible. This dynamic transfer and the consequent formation of different complexes have hampered the characterization of the epitopes of metals.157 Another particularity of transition metals is the possibility of activating TL without processing of the antigen, which was demonstrated by Moulon et al. using LC fixed with glutaraldehyde.158 For activation of hapten-specific TL without antigen processing, the T-cell receptor needs to approximate the MHC molecule of the antigen-presenting cell. This proximity forms a binding site with high affinity for nickel, which binds and stabilizes the complex, thereby activating the T cell.153
The skin barrier
Besides the nature and concentration of the hapten and the duration and frequency of contact with it, the condition of the skin is also relevant in the process of sensitization.159 As haptens need to cross the stratum corneum, skin integrity is important to maintain homeostasis.160 Breaks in the skin and local inflammatory processes may contribute not only to the penetration of allergens but also to DC maturation, due to the presence of danger signals in the affected skin.161,162
CONCLUSIONS
ACD is a complex process mediated by T cells, which results from loss of tolerance to environmental chemicals. The advance in the understanding of the cellular and molecular events involved, seen in recent decades, is remarkable. The mechanisms involved in loss of tolerance, the discovery of regulatory and effector cells, the possible involvement of B cells, and the unveiling of the role of dendritic cells and of the cytokines involved in the process will enable the development of new therapies.
b) The hapten-protein complex formed by transition metals is less stable than that formed by the other haptens. c) There is evidence of the involvement of molecular pattern-recognition receptors in ACD. d) Mutations in structural proteins of the skin do not seem to predispose to contact allergy.
10. It is correct to state the following: a) Contact sensitization is a typical form of delayed hypersensitivity reaction. b) Delayed hypersensitivity reactions have CD8+ TL as their main effector cell. c) Mice with inactivation of the genes of MHC class I show an increased contact hypersensitivity reaction. d) The first cells found in the inflammatory process of contact sensitivity are IFN-γ-producing CD8+ TL.

11. It is incorrect to state the following: a) It is possible that the route of contact with the antigen and its nature determine the kind of effector cell. b) In addition to IFN-γ-producing CD4+ and CD8+ TL, Th17 lymphocytes, but not Tc17 lymphocytes, are important in ACD. c) IL-17 induces the expression of proinflammatory cytokines. d) Mice unable to produce IL-17 show a reduced contact response.

12. Mark the correct statement: a) ACD is the usual response of the skin to repeated exposure to environmental chemicals. b) Apoptosis, anergy and formation of regulatory T cells are irrelevant mechanisms in maintaining tolerance to haptens. c) Healthy individuals differ from ACD patients in having a higher proportion of IL-10-producing antigen-specific CD4+ TL and in not presenting hapten-specific CD8+ TL. d) The presence of danger signals concurrent with antigen exposure favors the formation of suppressor cells.

13. Mark the incorrect statement: a) There are three types of regulatory T cells involved in ACD: CD4+CD25+ TL, Tr1 lymphocytes and Th3 lymphocytes. b) Tr1 lymphocytes produce large amounts of IL-4. c) Foxp3 is an important transcription factor in the formation of CD4+CD25+ cells and is also used as their marker. d) Regulatory T cells seem to work in a cooperative system.

14. Mark the incorrect statement: a) The state of maturation of DC determines the kind of response that will be formed. b) Fully matured DC express IL-10. c) The irritating effect of chemicals seems to play an important role in loss of tolerance. d) At low concentrations, haptens can lead to the formation of regulatory cells.

15. Mark the incorrect statement: a) Oral contact with the antigen determines tolerance. b) Oral contact with the antigen may determine tolerance only in non-sensitized individuals. c) The apoptosis of DC in lymph nodes is also a controlling mechanism of the contact response. d) Mice with inactivation of the FasL gene present more intense and lasting ACD.

16. Mark the incorrect alternative in relation to keratinocytes: a) They are cells of little relevance in the pathogenesis of ACD. b) They respond to the IL-1β produced by LC by producing TNF-α, which is important for maturation and migration of DC. c) They respond to IFN-γ with increased expression of ICAM-1, which interacts with the LFA-1 of TL. d) Keratinocytes present antigens by both MHC class I and II.

17. Mark the incorrect alternative in relation to keratinocytes: a) In the absence of an irritant/allergic stimulus, the expression of the molecules CD80 and CD86 on the surface of keratinocytes is high, favoring the development of anergic hapten-specific T cells. b) Anergic TL compete with effector and memory T cells for IL-2, an important growth factor of lymphocytes. c) Keratinocytes promote the formation of Th17 cells by producing IL-1β and IL-23. d) Keratinocytes release PGE2, which inhibits the production of inflammatory cytokines.

18. Mark the incorrect alternative: a) Antibodies have an essential role in ACD in mice. b) Antibodies related to ACD belong to the IgM class. c) The antigen-specific antibody cleaves the complement. d) C5a deficiency increases the response to haptens.
19. Mark the incorrect alternative: a) In mice, NK TL recognize endogenous glycolipids by their TCR. b) Glycolipids are presented bound to CD1d molecules present in antigen-presenting cells. c) The IL-4 released by NK T cells stimulates type 1 BL. d) NK T cells present the antigen directly to naive TL.

20. Mark the incorrect alternative: a) Pro-haptens are molecules that need to be metabolized to become reactive.
"year": 2011,
"sha1": "38dfdf2a80098b268dcebc1cca09e974b89bb771",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/abd/a/rkVPJqdYr4CYfyfFzBLhkKD/?format=pdf&lang=pt",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fec888aef888a875200f27897753b4175b83b3d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Multifunctional Single-Phase EV On-Board Charger With a New V2V Charging Assistance Capability
This paper presents the design and implementation of a single-phase multifunctional electric-vehicle (EV) on-board charger with an advanced vehicle-to-vehicle (V2V) functionality for emergency roadside charging assistance. Using this function, an EV can charge from another EV in an emergency when its battery is flat and there is no access to a charging station. The designed EV charger can support the proposed V2V function at rated power and without the need for an additional portable charger. It can also provide the conventional functions of vehicle-to-grid (V2G) and grid-to-vehicle (G2V) operation, as well as static synchronous compensator (STATCOM) and active power filter (APF) functions (i.e., reactive power support and harmonics reduction). All the functions are addressed in the control part through the sharing of existing converters in an all-in-one system. The proposed EV charger is designed and simulated in MATLAB/Simulink, and a laboratory prototype is also implemented to validate its key functions.
I. INTRODUCTION
The number of plug-in electric vehicles (EVs) is growing rapidly in developed countries and, accordingly, they have drawn widespread attention due to their various potential functions on the grid [1]. Conventionally, EVs are charged from the grid (G2V) through single-phase or three-phase chargers. They can also operate as back-up energy units for the grid (V2G); accordingly, EVs can discharge during peak hours when the total energy demand is high.
G2V and V2G, the main functions of EV chargers, are supported by bidirectional power electronic converters as the key components of EV chargers [2]-[5]. Moreover, these converters can be utilized to provide ancillary functions for the grid, such as reactive power support, voltage regulation and/or harmonics reduction, which are conventionally provided by static synchronous compensators (STATCOMs) and/or active power filters (APFs). For example, an EV charger can be designed to supply the reactive power demanded by non-linear loads while it charges or discharges the EV's battery [6]-[10]. In another design, an EV charger works as an active power filter to reduce the harmonics of the grid-side current in a household network [11]-[13]. A single-phase EV charger is also able to regulate the voltage at the point of common coupling (PCC) by circulating leading or lagging VAR into the grid [14].
Recently, multifunctional EV chargers have been designed to support more than one ancillary function. The multifunctional EV charger in [15], [16] is able to support both reactive power compensation and harmonics reduction while operating in G2V/V2G mode. In a later design, a single-phase EV charger addresses the main functions as well as three ancillary functions (reactive power support, harmonics reduction and voltage regulation) simultaneously [17].
Single-phase EV chargers are also designed to support some functions in grid-isolated mode. Vehicle-to-home (V2H) operation is presented in [18], [19] for supporting linear electrical appliances in a house and is then extended to supporting non-linear loads [15]. Accordingly, the EV operates as an uninterruptible power supply (UPS) that is off-line during the grid-connected mode and on-line during the grid-isolated mode. In another approach, the EV charger can perform a traction-to-auxiliary (T2A) battery function. In [20] and [21], a non-isolated dc-dc converter is used between the traction and auxiliary batteries so that, during the grid-isolated mode, the auxiliary battery can be charged from the traction battery. Nevertheless, this design does not follow the IEC 61851-1 standard, which mandates galvanic isolation for the auxiliary battery [22]. The design is modified in [23], [24] by using a unidirectional dc-dc converter, which enables charging of the auxiliary battery from the traction battery while galvanic isolation is provided.
Nowadays, one of the main barriers to EV market growth is the insufficient number of charging stations. Even with significant progress in building charging stations in some countries, many customers are still concerned about an emergency situation in which their EV battery becomes unexpectedly flat and they have no access to a charging station. Although such an emergency can also happen with existing fuel-based cars, the driver still has a chance to receive fuel from roadside assistance or occasionally from another car. Such solutions need to be devised for EVs as well. If EV owners have the opportunity to charge their cars from other EVs, which is called vehicle-to-vehicle (V2V) charging in this paper, their concerns about having a flat battery will be diminished significantly, paving the way for the development of the EV market.
In recent years, a few aspects of V2V operation have been addressed in the literature, mostly charging scheduling schemes for multiple EVs [25], [26] and charging strategies and energy management protocols for cooperative EV-to-EV charging [27]-[30] at a charging station. To address the problem of EVs becoming disabled because of an empty battery, roadside assistance trucks have been proposed that use a large master battery and a separate converter infrastructure to assist the disabled EV [31]. Although this solution is effective, there are still problems associated with cost, service fees, arrival times and availability.
As another interesting approach, a portable customer-used EV charging device is proposed in [32]. The portable charger, shown in Fig. 1(a), includes an internal dc-ac inverter and a set of connector cables to connect the dc battery terminals of the first vehicle, which has an internal combustion engine, to the ac charging port of the second vehicle, which is an EV. This portable device can enable V2V operation; however, it is an extra off-board charging device that would need to be purchased by EV owners as a separate unit. Moreover, the V2V charging power is limited (up to 325 W) by the circuit configuration of the portable charger and the limited output power of the first vehicle's battery, as reported in [32]. In short, the design of an EV on-board charger that can provide the V2V function at a higher charging power and without extra charging infrastructure has not been addressed. This paper proposes a design for an EV on-board charger that is able to charge the EV via V2V operation. Using the proposed design, two EVs can be connected via a low-cost charging cable (referred to as a V2V cable in this paper) without the need for an additional portable charger (Fig. 1(b)). Moreover, the V2V operation can be accomplished at the full rated power of the EV charger, since the same charging infrastructure is used for transmitting the power between the EVs. The proposed V2V function is addressed in the control part, so no additional converter needs to be added to the EV charger. Finally, to present a complete design, the proposed EV charger is also utilized to cover the conventional grid-connected functions such as G2V, V2G, reactive power support, harmonics reduction and voltage regulation for a household network.
The rest of the paper is organized as follows: Section II presents the proposed multifunctional EV charger. Section III shows the design considerations, mainly focusing on the proposed V2V function. The performance evaluation of the designed EV charger is presented in Section IV. This is followed by the conclusion of this research in Section V.

II. PROPOSED MULTIFUNCTIONAL EV CHARGER

Fig. 2 shows the proposed multifunctional EV on-board charger, which includes two converters. The back-end converter, which includes switches S5 and S6 and inductor L on the battery side, has a single operational mode, operating as a bidirectional boost dc-dc converter during both the grid-connected and grid-isolated operational modes of the EV charger. The front-end H-bridge (HB1) shares the switches S1, S2, S3, S4 and inductors Lf1, Lf2 between the two operational modes. This converter can operate either as a dc-ac converter in V2V operation [Function I] or as a dc-ac converter during the grid-connected mode to provide G2V and V2G operation [Function II] as well as the ancillary functions of STATCOM and/or APF [Functions III and IV].
A. PROPOSED V2V OPERATION OF THE EV CHARGER
The V2V operation between two EVs is shown in Fig. 2. During the V2V operation, the control system of HB1 in EVa must be switched from the grid-connected mode to the V2V mode to set the amplitude and frequency of the ac voltage that is shared by the two EVs. As a result, HB1 in EVa operates with a different control strategy, while HB2 uses the same control algorithm, enabling EVa to participate in the V2V operation. This rearrangement is accomplished using a V2V cable between the two connected EVs, as shown in Fig. 2. On the other side, EVb uses the same control system as in its grid-connected charging operation for its HB1 and HB2 to receive power from EVa. In other words, EVa acts as a new source of charge for EVb.
As shown in Fig. 2, galvanic isolation can be placed between HB1 and HB2 by adding a dual-active-bridge (DAB) dc-dc converter controlled by a phase-shift controller [33]. Galvanic isolation is now required by some established standards to prevent current flow between the two stages of the charging system, thereby increasing safety and reliability. However, the DAB does not interfere with the proposed V2V operation since, as already explained, the V2V functionality is performed entirely by changing the operational control mode of the front-end converter of EVa, HB1, while the other stages keep their previous operational modes. Accordingly, the details of the design of the DAB and its control system are omitted in this paper; the reader is referred to [33].
The advantage of the proposed V2V approach is that the same components (switches, inductors and capacitors) are used for both the grid-connected mode and V2V operation. Furthermore, the V2V cable is interfaced between the existing charging ports of the two EVs. Therefore, no additional component and/or separate charging port is required for the V2V operation. This topology meets the mandatory standard SAE J1772, a North American standard for electrical connectors for EVs. As a result, the EV owners' safety during the V2V operation will be ensured. The only additional structure is the V2V cable. This includes two SAE J1772 sockets and an interface cable, which has a simple structure and can be manufactured at the same price as a normal charging cable.
The EV charger designed in this paper uses the most basic and common converter topology (the H-bridge), which is used in most existing EV chargers [2], [34]. For different topologies, only the control algorithms may need to be updated, which would be devised by the car manufacturers. It should be noted that the front-end converter (HB1), regardless of its topology, requires the L and N ports (and the G port for grounding) to be connected to the grid. In addition, all EV chargers are obliged to use the standard SAE J1772 sockets. Therefore, the structure of the V2V connection using the V2V cable will be similar for EV chargers with different converter topologies.
At present, there are some EV chargers on the market with a unidirectional front-end converter (HB1 in this paper). The proposed V2V operation requires the EV chargers to have bidirectional converters, which may seem to be a limitation of the proposed V2V operation. However, EV chargers with a unidirectional structure are still able to participate in V2V operation by receiving charging assistance from another EV with a bidirectional charger. Moreover, existing unidirectional EV chargers will almost certainly need to be replaced with bidirectional chargers in the future. The reason is that, in the near future, energy providers and utilities will demand the use of the stored energy of parked EVs as backup units for the power network, and if an EV charger has a unidirectional structure, it cannot deliver power into the grid, contradicting these industrial demands. As a result, the manufacturing of unidirectional EV chargers will be discouraged by energy providers. Therefore, in this paper, the chargers of both EVs are assumed to be bidirectional.
The proposed V2V operation does not require any revision of the power circuit design of either HB1 or HB2; it only demands revising the control part through the sharing of existing converters. Therefore, using the same charging port installed on each EV, the V2V operation can be enabled. It should be noted that it is possible to bypass HB1 and create an interlink between the dc buses (C_D) of EVa and EVb to perform the V2V power exchange. While this alternative could be investigated, it is neither technically sound nor cost-effective. Creating the interlink between the dc buses of EVa and EVb requires adding an extra charging port on the EVs to access their dc buses. This obviously demands extra work and extra cost in the design of the EV chargers, which is not desired by car manufacturers. Another alternative is to operate HB1 as a dc-dc converter during the V2V operation instead of as a voltage-source dc-ac inverter. This option also requires restructuring the power circuit of HB1, which again adds extra cost and challenges for the car manufacturers.
III. DESIGN CONSIDERATION
In this section, the dynamic models and small-signal approximations of HB1 and HB2 in EVa are presented analytically, followed by the design procedure of the proposed control systems during the V2V operation.
A. MODELING AND CONTROL OF AN EV CHARGER DURING V2V OPERATION
1) DYNAMIC MODEL AND CONTROL OF HB1
A cascaded control strategy, presented in [35], is used to regulate the front-end ac voltage v_s through control of the ac current i_c. As shown in Fig. 3, the control algorithm includes an inner current controller and an outer voltage controller operating based on the DQ transformation technique. L_f, C_f and ω are the inductance, the capacitance and the angular frequency of the PWM, respectively. M and N represent the controllers of the current and voltage control loops, respectively. P_i-vc(s) is the transfer function between the inductor current and the input voltage of the LC filter, and P_vs-i(s) is the transfer function between the output voltage of the LC filter and the inductor current. Based on the equivalent circuit model presented in Fig. 4, P_i-vc(s) and P_vs-i(s) are given by (1) and (2).
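As a point of reference, assuming the LC filter of Fig. 4 is terminated by an equivalent resistive load R, Kirchhoff's laws give the standard forms below; this is a sketch under that assumption rather than a verbatim restatement of the paper's (1) and (2):

$$P_{i\text{-}v_c}(s) = \frac{i(s)}{v_c(s)} = \frac{R C_f s + 1}{L_f C_f R\, s^2 + L_f s + R} \qquad (1)$$

$$P_{v_s\text{-}i}(s) = \frac{v_s(s)}{i(s)} = \frac{R}{R C_f s + 1} \qquad (2)$$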
2) DYNAMIC MODEL AND CONTROL OF HB2
The control method presented in [36] is used for HB2. As shown in Fig. 5, a current controller H, including a PI controller, is used to control the charging current, while its reference signal is generated by an outer voltage controller C. In the outer voltage control loop, the dc-link voltage V_D is measured and compared with V*_D (400 V in this paper), and a reference signal is generated for the inner current control loop.
Using the equivalent circuit model of the boost converter (Fig. 6) presented in [37], the transfer function of the output voltage is derived as (3), where V, I and D are the dc quiescent values of the dc-bus voltage, the inductor current and the duty cycle of the converter, respectively; v̂, î and d̂ are their small ac variations; v̂_B is the small ac variation of the source voltage; and D′ = 1 − D. Accordingly, the transfer function between the duty cycle and the output voltage of the boost converter is given by (4), the transfer function between the inductor current and the output voltage is expressed by (5), and the transfer function between the duty cycle and the inductor current is given by (6).
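A sketch of what (3)-(6) look like under the standard averaged small-signal model of a CCM boost converter (the textbook forms, e.g. as derived in [37], with inductance L, dc-link capacitance C_D and load R; the paper's exact expressions may differ in notation):

$$\hat v(s) = G_{vd}(s)\,\hat d(s) + G_{vg}(s)\,\hat v_B(s), \quad G_{vg}(s) = \frac{1}{D'}\cdot\frac{1}{1 + s\frac{L}{D'^2 R} + s^2\frac{L C_D}{D'^2}} \qquad (3)$$

$$G_{vd}(s) = \frac{\hat v(s)}{\hat d(s)} = \frac{V}{D'}\cdot\frac{1 - s\frac{L I}{D' V}}{1 + s\frac{L}{D'^2 R} + s^2\frac{L C_D}{D'^2}} \qquad (4)$$

$$\frac{\hat v(s)}{\hat i(s)} = \frac{G_{vd}(s)}{G_{id}(s)} = \frac{D' R}{2}\cdot\frac{1 - s\frac{L I}{D' V}}{1 + s\frac{R C_D}{2}} \qquad (5)$$

$$G_{id}(s) = \frac{\hat i(s)}{\hat d(s)} = \frac{2V}{D'^2 R}\cdot\frac{1 + s\frac{R C_D}{2}}{1 + s\frac{L}{D'^2 R} + s^2\frac{L C_D}{D'^2}} \qquad (6)$$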
B. DESIGN PROCEDURE
Based on the dynamic models in Section III-A and the parameters summarized in Table 1, the control systems of HB1 and HB2 are designed in this section. Root locus analysis is used to design the controllers for HB1 and HB2.
1) DESIGN OF THE CURRENT CONTROLLER, M, FOR HB1
As already mentioned, a cascaded control algorithm including an outer voltage control loop and an inner current control loop (Fig. 3) is designed for HB1. To do so, the inner control loop needs to be designed first. The plant presented in (1) is used for the design, and the ratio K_PM/K_IM is set to 0.43, where K_PM and K_IM are the proportional and integral gains of the controller M, respectively. Since the EV charger is designed with a 4-kW power rating, the root locus of the current loop for a 4-kW load (i.e., R = 12 Ω) is shown in Fig. 7. According to the trajectory of the poles toward the zeros and the damping factors, the optimum value of 350 is selected for K_IM. Because of the zero-pole cancellation, the closed-loop response of the current controller is very fast and can be considered as unity for the rest of the analysis.
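To illustrate this workflow, the short sketch below reproduces the root-locus tuning of the current loop with the python-control package. The filter values Lf and Cf are placeholder assumptions, since Table 1 is not reproduced here; only R = 12 Ω and the gains K_PM/K_IM = 0.43, K_IM = 350 come from the text.

```python
# Root-locus tuning of the HB1 inner current loop (illustrative sketch).
# Lf and Cf are assumed placeholder values; R and the gains follow the text.
import control

s = control.tf('s')
R, Lf, Cf = 12.0, 2.5e-3, 10e-6              # 4-kW load; Lf, Cf assumed

# Plant (1): inductor current vs. inverter voltage of the loaded LC filter
P_i_vc = (R*Cf*s + 1) / (Lf*Cf*R*s**2 + Lf*s + R)

# PI controller M with fixed ratio K_PM/K_IM = 0.43; K_IM is swept by the root locus
M_unit = (0.43*s + 1) / s                    # PI with K_IM factored out
control.root_locus(M_unit * P_i_vc)          # plot pole trajectories vs. K_IM

K_IM = 350.0
T = control.feedback(K_IM * M_unit * P_i_vc) # closed current loop, unity feedback
t, y = control.step_response(T)
print(f"closed-loop dc gain ~ {y[-1]:.3f}")  # near unity, fast response
```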
2) DESIGN OF THE VOLTAGE CONTROLLER, N, FOR HB1
A similar process is used to design the outer voltage controller, N, shown in Fig. 3 for HB1. The root locus of the voltage control loop for the load R = 12 Ω is shown in Fig. 8 for a fixed ratio of K_PN to K_IN, where K_PN and K_IN are the proportional and integral gains of N, respectively. Using the root-locus analysis, K_IN = 5 is selected to achieve a settling time of less than 0.3 s and a minimum overshoot (~0%).
3) DESIGN OF THE CURRENT CONTROLLER, H, FOR HB2
The design process of the inner current controller, H, shown in Fig. 5 for HB2, is presented in this section. The transfer function of (6) is used, while the load is considered as R = 40 Ω, corresponding to a 4-kW load. Figure 9 shows the root locus of the current control loop for K_PH/K_IH = 0.014, where K_PH and K_IH are the proportional and integral gains of H, respectively. The design criterion in this paper is to find the fastest response with the lowest overshoot. Therefore, 0.4 is selected for K_IH to achieve a 0.3-s settling time and minimum overshoot (~0%).
4) DESIGN OF THE VOLTAGE CONTROLLER, C, FOR HB2
To tune the outer voltage controller, shown in the LTI model in Fig. 5, the root locus of the model is plotted (Fig. 10) for R = 40 Ω, and the ratio K_PC/K_IC is selected as 0.05, where K_PC and K_IC are the proportional and integral gains of C, respectively. The outcome of the design is to select K_IC = 10, which results in a settling time of less than 0.3 s and 3% overshoot. Table 2 summarizes the designed control parameters for HB1 and HB2. These parameters are used for the rest of the analysis.

Fig. 11 shows the control block diagram of the designed EV charger during the V2V operation. In Fig. 11(a), the bus voltage V_D and the charging current I_B are regulated by HB2 using a bus-voltage controller and a battery-current controller, respectively. Meanwhile, HB1, with the control algorithm shown in Fig. 11(b), regulates the amplitude and frequency of the front-end ac voltage v_s. As a result, HB1 operates in the constant-voltage charging mode.

The design of the proposed EV charger controller in the grid-connected mode has been presented in [17] and is omitted in this paper to avoid repetition. However, to make the paper self-explanatory, the control block diagram of the system in this mode is shown in Fig. 12. A comparison of the control algorithms of Fig. 11 and Fig. 12 shows that HB2 has a similar control algorithm during both the V2V operation and the grid-connected mode, whereas HB1 uses two different control algorithms. It follows that switching the control algorithm of HB1 between the V2V operation (Fig. 11) and the grid-connected mode (Fig. 12) is required, which is explained in the following section.
C. COMMUNICATION REQUIREMENTS
Switching from the grid-connected operational mode to V2V mode can be performed through communication and messaging between the two EV chargers. The communication between the EVs can be enabled through the V2V cable. Figure 13 shows the SAE J1772 charging connector used to connect the V2V cable. According to the mandatory standard SAE J1772, this connector includes three power lines (L, N and G), a proximity detector that prevents movement of the cars while they transmit the charging power, and a control pilot used for messaging between the chargers. During the V2V operation, a 12-V, 1-kHz signal is generated by each EV charger on the control pilot to detect a successful connection between the two EVs. When the V2V connection is detected and acknowledged by the EV chargers, the control system of the grid-connected mode (Fig. 12) is automatically switched to the control diagram of the V2V operation (Fig. 11), and the V2V operation can be enabled. Using the proximity detection port and the control pilot, the V2V operation is performed while the mandatory safety requirements are met. If the V2V connection is not successful, the power pins of the SAE J1772 carry no voltage and no power flows through the V2V cable, which ensures the safety of people. Optionally, the 1-kHz signal on the control pilot can also be used to adjust the V2V charging level. Such an interaction between the EV charger and the EV owner is addressed by the newly established standard SAE J2836/5. It should be noted that specifications such as the physical communication infrastructure, communication methods and protocols are not within the scope of this paper and are targeted as future work.

FIGURE 13. SAE J1772 charging connector of the V2V cable.
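As an illustration of this supervisory logic, the sketch below encodes the mode switch as a simple decision rule. The function and mode names are hypothetical placeholders for illustration, not part of the paper or of the SAE J1772 standard.

```python
# Illustrative sketch: selecting the HB1 control algorithm from connection checks.
# Mode names and inputs are hypothetical; the 12-V, 1-kHz pilot signal indicating
# a successful V2V cable connection is the only detail taken from the text above.
from enum import Enum, auto

class ChargerMode(Enum):
    GRID_CONNECTED = auto()   # control diagram of Fig. 12
    V2V = auto()              # control diagram of Fig. 11
    IDLE = auto()

def select_mode(pilot_1khz_present: bool, grid_present: bool) -> ChargerMode:
    """Pick the HB1 control algorithm based on what is connected."""
    if pilot_1khz_present and not grid_present:
        return ChargerMode.V2V          # V2V cable detected and acknowledged
    if grid_present:
        return ChargerMode.GRID_CONNECTED
    return ChargerMode.IDLE             # no valid connection: power pins stay dead
```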
D. PROTECTION REQUIREMENTS
During both the grid-connected and V2V operation, the front-end phase and neutral conductors are both protected by hardware-based relays and software-based comparators. During a short-circuit event, the software-based comparators first sense the over-current incident and quickly block the gate signals to shut down the whole system. In case of any failure of the software-based protection, which is unlikely to happen, the relays disconnect the EV charger from the rest of the system. The same hardware-based and software-based over-current protection facilities are used at the battery side of the EV charger system. Accordingly, any short-circuit at the battery side can be cleared quickly. Three software-based and three hardware-based over-voltage protections (a total of six) are active at the front-end side, the dc-link capacitor (C_D) and the battery side (C_B). These over-voltage protections shut down the whole system during any over-voltage phenomenon that causes abnormal voltage variations at the front-end side, dc-link side or battery side.
An additional software-based phase-variation detector is used for the grid-side ac voltage. Using such fast and advanced protection, the phase offset of the grid voltage is continuously compared with a phase reference, and during any short-circuit incident that causes a phase deviation of the front-end voltage, the fault is sensed within microseconds and the system is shut down. This protection operates faster than the above-mentioned over-current and over-voltage protections.
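To make the comparator logic concrete, here is a minimal sketch of the software-based over-current/over-voltage check described above. The thresholds and names are example assumptions rather than values from the paper; on the actual TMS320F28335 DSP this logic would live in a high-priority interrupt routine written in C.

```python
# Illustrative sketch of a software-based protection comparator (assumed values).
I_MAX = 25.0      # A, front-end over-current threshold (example)
V_D_MAX = 450.0   # V, dc-link over-voltage threshold (example)
V_B_MAX = 170.0   # V, battery-side over-voltage threshold (example)

def protection_check(i_c: float, v_d: float, v_b: float, block_gates) -> bool:
    """Block all gate signals and report a fault if any limit is exceeded."""
    if abs(i_c) > I_MAX or v_d > V_D_MAX or v_b > V_B_MAX:
        block_gates()   # software shutdown; hardware relays act as backup
        return True
    return False
```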
E. DESIGNING THE FRONT-END INDUCTORS FOR BOTH GRID-CONNECTED AND V2V OPERATIONAL MODES
In the proposed EV charger, the front-end inductors L_f1 and L_f2 are shared by both the grid-connected and V2V operational modes. L_f1 and L_f2 are designed by considering the ripple factor (RF) of the front-end current i_c [21]. Since the grid voltage is sinusoidal, the maximum current ripple of the inductor can be expressed by (7), where V_D is the dc-link voltage, D is the duty cycle, T_s is the period of a switching cycle, and v_s,avg is the fixed average output voltage of HB1. v_s,avg is equivalent to v_s over a switching cycle, because the chosen switching frequency f_sw is sufficiently higher than the line frequency f. When D = m_a sin(ωt), the maximum fluctuation of the inductor current is given by (8), where m_a is the modulation index. If m_a = 1, the inductor current becomes triangular, so that the RMS value of the inductor current in one period can be obtained by (9), which yields I_Lf1,rms = I_Lf1,max/√3 after solving and simplifying. Then, substituting (8) into (9) results in (10). The fundamental component of the inductor current is given by (11), where V_s is the nominal value of the grid voltage and Z is the line impedance. L_b, which represents the base inductance of the ac grid, is obtained by (12), where P is the input power and V_s,rms is the RMS value of the grid voltage. Now, using (9), (10) and (11), the ripple factor (RF) can be calculated by (13), and from (13) the inductance value can be calculated as (14).
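As a worked example of this sizing procedure, the sketch below solves a common worst-case ripple model for L_f given a ripple-factor target. It assumes the textbook relation ΔI_max = V_D/(4 L_f f_sw) (ripple maximized at D = 0.5) in place of the paper's (14), and the switching frequency and ripple-factor target are example values rather than Table 1 entries.

```python
# Ripple-factor-based sizing of the front-end inductor (illustrative assumptions).
import math

V_D  = 400.0    # V, dc-link voltage (from the text)
V_s  = 220.0    # V rms, front-end ac voltage (from the text)
P    = 4000.0   # W, rated power (from the text)
f_sw = 20e3     # Hz, switching frequency (assumed)
RF   = 0.15     # target ripple factor of i_c (assumed spec)

I1_rms = P / V_s                        # fundamental RMS current
dI_max = RF * math.sqrt(3) * I1_rms     # allowed peak ripple: for a triangular
                                        # ripple, I_ripple,rms = I_max / sqrt(3)
L_f = V_D / (4.0 * f_sw * dI_max)       # worst-case ripple model solved for L_f
print(f"L_f ~ {L_f*1e3:.2f} mH for RF = {RF:.0%}")
```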
IV. PERFORMANCE EVALUATION OF THE EV CHARGER
Due to a lack of equipment for building prototypes of two EV chargers, the validity of the designed EV charger during the proposed V2V operation is evaluated using MATLAB/Simulink. The operation of the EV charger during the grid-connected mode is evaluated using the laboratory prototype of an EV charger. In this section, first the proposed V2V operation [Function I] is evaluated; then the ability of the proposed EV charger to operate in the grid-connected mode supporting the V2G, G2V, STATCOM and APF functions [Functions II, III and IV] is verified. The parameters summarized in Table 1 are used for both the simulation and the experimental implementation.
A. PERFORMANCE EVALUATION DURING THE PROPOSED V2V OPERATION
The aim of this section is to show the performance of the EV charger during the proposed V2V operation using simulation results. According to the design, during the V2V power transfer, V B is set at 150 V and the dc-link voltage V D is regulated at 400 V. The front-end ac voltage v s during the V2V operation is set to 220 V RMS to be consistent with the grid RMS voltage during the grid-connected operation. The control algorithm in Fig. 11 performs the voltage regulation while controlling the charging/discharging current. Fig. 14 and Fig. 15 show the simulation results of the proposed V2V function through a case study in which two EVs exchange power through the V2V cable shown in Fig. 2. As shown in Fig. 14(a), the front-end voltage v s , which is shared by the two EVs, is regulated using the control strategy of EVa presented in Fig. 11. As a result, the RMS of the front-end ac current i c increases to 15.2 A, corresponding to a power flow of 3.34 kW between the two EVs. The dc-link voltage V D in Fig. 14(b) is also regulated at 400 V. Fig. 14(c) and Fig. 14(d) show the battery voltage (V B ) of 151 V and the battery current (I B ), respectively, consistent with Fig. 14(e), which indicates the discharging operation of EVa. Fig. 15 shows the operation of EVb while receiving the charging power from EVa during the V2V operation. Fig. 15(a) shows the ac current i c and the ac voltage v s . Fig. 15(b) shows the dc-link voltage V D , which is regulated at 400 V using the control system of EVb's converters. Fig. 15(c) and Fig. 15(d) show the battery voltage (V B ) of around 151 V and the battery current (I B ) of -20 A, corresponding to about 3.02 kW of power received at the battery side of EVb. The HB2 in EVb uses a battery-current regulator; therefore, the current ripples are significantly lower than the ripples of I B in EVa. V B in EVb also exhibits lower ripples than V B in EVa, because of the lower double-frequency (100 Hz) ripple of V D in EVb. Under the simulated conditions, the efficiency is calculated as 87%, since the power received by EVb's battery is 3.02 kW while the power sent by EVa's battery is 3.47 kW. The SOC of the battery is estimated by Coulomb counting, as given by (15):

$$SOC(t) = SOC(0) - \frac{100}{Q_B}\int_0^t i_B(\tau)\,\mathrm{d}\tau$$

where i B (t) is the instantaneous battery current and Q B is the nominal capacity of the battery. Fig. 16(a) shows the voltage across the switch S 5 , Fig. 16(b) shows the voltage across S 6 , and the inductor current i L is shown in Fig. 16(c). Figure 17 presents a case study showing the three modes of EVa during the V2V operation: discharging, standby, and charging. During the first time interval, from t = 0 s to t = 0.5 s, power is transferred from EVa to EVb using the proposed V2V operation. As a result, as Fig. 17(a) shows, the battery of EVa discharges with a 20 A current. This reduces the SOC of EVa's battery, as shown in Fig. 17(c). Between t = 0.5 s and t = 1 s, no power is assigned by the control system; therefore, as Figures 17(a) and 17(b) show, no current is transmitted between EVa and EVb. This results in a stable SOC of the batteries of both EVa and EVb, as can be seen in Figures 17(c) and 17(d). Between t = 1 s and t = 1.5 s, the V2V operation transfers power from EVb to EVa; accordingly, the directions of the currents and the slopes of the SOCs are reversed during this time interval. Figure 18 shows the prototype of the EV charger. A SEMISTACK IGBT assembly is used to implement the HB1 and HB2 converters. The control systems presented in Fig. 11 and Fig. 12 are implemented on a TMS320F28335 DSP.
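A minimal sketch of the Coulomb-counting SOC estimate written above as (15); the pack capacity and the current profile are illustrative assumptions, not parameters of the tested charger.

```python
import numpy as np

def coulomb_count_soc(i_B, dt, soc0, Q_B_Ah):
    """SOC(t) = SOC(0) - (100/Q_B) * integral of i_B(t) dt, in percent.
    i_B: battery current samples in A (positive = discharging);
    dt: sample period in s; soc0: initial SOC in %; Q_B_Ah: capacity in Ah."""
    Q_B = Q_B_Ah * 3600.0                        # capacity in ampere-seconds
    charge_drawn = np.cumsum(i_B) * dt           # running integral of i_B(t)
    return soc0 - 100.0 * charge_drawn / Q_B     # SOC trace in %

# Example: 20 A discharge for 0.5 s from 80% SOC on an assumed 60 Ah pack
soc = coulomb_count_soc(np.full(500, 20.0), dt=1e-3, soc0=80.0, Q_B_Ah=60.0)
print(f"final SOC = {soc[-1]:.4f} %")
```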
B. PERFORMANCE EVALUATION DURING THE GRID-CONNECTED MODE
A Sorensen XG-600 programmable dc power supply and a Chroma 63800 programmable electronic load were employed to model a load. An MI 2883 EU Class S power-quality analyzer is also used for monitoring the power quality. This section evaluates the performance of the proposed EV charger (EVa) while the front-end converter of the charger operates as a voltage source converter to provide the grid-connected functions of V2G/G2V, STATCOM and APF, i.e., Functions II, III and IV of the proposed EV charger. Figure 19 shows the experimental results of the EV charger's operation in support of the grid-connected functions. Figure 19(a) verifies the operation of the proposed EV charger providing Functions II and III, which are V2G and the STATCOM operation in inductive mode, respectively. In this test, the EV charger injects 165 W into the grid and circulates 150 VAR lagging. As shown, during this operation, the battery voltage V B and the dc-link voltage V D are maintained at 150 V and 220 V, respectively, which validates the steady-state stability of the control system shown in Fig. 12. Fig. 19(b) shows the same test but in capacitive mode. Figure 20 shows the dynamic performance of the EV charger at several active and reactive power levels. Fig. 21 validates the performance of the multifunctional EV charger in supplying Functions III and IV, when the system works as a STATCOM for power-factor correction as well as an APF for harmonics reduction. For this test, the THD of the load current (i L ) is 25%, and the performance of the system is evaluated by comparing the source current i s and i L . Moreover, the power factor is set to 0.85, as shown by the phase shift between v s and i L . The reduced harmonic content of i s (3.37%) and its synchronization with the grid voltage v s verify that the EV charger can effectively reduce the THD of i s while also regulating the PF at almost unity. Fig. 21(b) compares the harmonic spectrum and the THD of i s before and after the EV charger's operation, measured by the MI 2883 EU Class S power analyzer. The THD (3.37%) is less than 5%, complying with the mandatory standard IEEE 1547.
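For reference, the THD quoted above can be estimated from sampled current data as the ratio of the RMS of the harmonics to the fundamental. The sketch below is a generic FFT-based estimate over an integer number of fundamental periods, not the algorithm of the MI 2883 analyzer; the test signal is synthetic.

```python
import numpy as np

def thd_percent(samples, fs, f1, n_harmonics=40):
    """THD = sqrt(sum of squared harmonic amplitudes) / fundamental * 100,
    assuming the record spans an integer number of fundamental periods."""
    n = len(samples)
    spectrum = np.abs(np.fft.rfft(samples)) / n
    k1 = int(round(f1 / (fs / n)))                 # fundamental FFT bin
    harmonics = [spectrum[k1 * h] for h in range(2, n_harmonics + 1)
                 if k1 * h < len(spectrum)]
    return 100.0 * np.sqrt(np.sum(np.square(harmonics))) / spectrum[k1]

# Synthetic 50 Hz current with a 3rd harmonic at 3.37% of the fundamental
fs, f1 = 10_000, 50
t = np.arange(0, 0.2, 1 / fs)                      # ten fundamental periods
i_s = np.sin(2 * np.pi * f1 * t) + 0.0337 * np.sin(2 * np.pi * 3 * f1 * t)
print(f"THD = {thd_percent(i_s, fs, f1):.2f} %")   # ~3.37 %
```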
V. CONCLUSION
This paper proposes a multifunctional EV charger with a novel V2V function that enables the charger to provide roadside charging assistance. As shown in Table 3, unlike the conventional V2V portable charger, the proposed V2V does not require additional charging infrastructure. In addition, the proposed V2V operation meets the standard SAE J1772 and can be accomplished at the maximum power rating of the EV charger. The V2V function enables an EV to be charged from another EV during an emergency when there is no access to a charging station. This function will become vital as the number of EVs grows and the demand for roadside charging assistance subsequently increases. The proposed EV charger is also able to supply the conventional main and ancillary functions such as V2G, G2V, reactive power support, and harmonics reduction. The design procedure of the proposed V2V function is analytically verified and validated through simulation in MATLAB/Simulink. The grid-connected functions are also tested and validated through experimental results. The future scope of this research is to build the prototype of a second EV charger so that the proposed V2V operation can be validated experimentally. NOUSHIN POURSAFAR (Graduate Student Member, IEEE) received the M.Res. degree in engineering from Macquarie University, Sydney, NSW, Australia, in 2018, where she is currently pursuing the Ph.D. degree in electrical engineering. Her current research interests include smart grid, renewable energy, and power system stability and control.
JUNWEI LU (Senior Member, IEEE) received the degree in electrical engineering from Xian Jiaotong University, China, the M.Eng. degree in electronic and computer engineering from National Toyama University, Japan, and the Ph.D. degree in electrical and computer engineering from National Kanazawa University, Japan, in 1991. From 1976 to 1984, he worked with the Electrical Power Industry (now the State Grid) in China, where he was involved in various national research projects for the electrical power industry. In 1985, he began academic study and research in the area of computational electromagnetics at the Laboratory of Electrical Communication Engineering, Toyama University, Japan. In 1988, he worked on applied computational electromagnetics and was involved in the development of magnetic devices at the Laboratory of Electrical Energy Conversion, Kanazawa University. He joined the new School of Microelectronic Engineering, Griffith University, Brisbane, QLD, Australia, in 1992, and moved to the Gold Coast campus to establish a new Department of Electrical and Electronic Engineering, where he has been a Foundation Professor since 2011. He has published over 250 journal and conference papers and three coauthored books in the areas of computational electromagnetics for nonlinear electromagnetic fields, EMC computer modeling and simulation, and V2G linking the smart grid, and holds over ten international patents related to smart antennas and high-frequency transformers. His fields of interest are computational electromagnetics, EMC computer modeling and simulation, high-frequency magnetics for power electronics, and renewable energy systems. His current research interests include smart transformers and V2G with a built-in D-STATCOM inverter and smart hybrid AC/DC microgrids.
GEORGIOS KONSTANTINOU (Senior Member, IEEE) received the B.Eng. degree in electrical and computer engineering from the Aristotle University of Thessaloniki, Thessaloniki, Greece, in 2007, and the Ph.D. degree in electrical engineering from the University of New South Wales (UNSW), Sydney, NSW, Australia, in 2012. From 2012 to 2015, he was a Research Associate with UNSW. He is currently a Senior Lecturer with the School of Electrical Engineering and Telecommunications, UNSW Sydney, and an Australian Research Council (ARC) Early Career Research Fellow. His main research interests include multilevel converters, power electronics in HVDC, renewable energy, and energy storage applications. He is an Associate Editor of the IEEE TRANSACTIONS ON POWER ELECTRONICS and IET Power Electronics. | 2020-07-02T10:08:26.439Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "e893cd4cc3dc8c2b6f878b5594ccb799bc172988",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1109/access.2020.3004931",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "57272e39f62efa320866f12e855b56368b6b3c2c",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
252642291 | pes2o/s2orc | v3-fos-license | In Vitro Modulation of Complement Activation by Therapeutically Prospective Analogues of the Marine Polychaeta Arenicin Peptides
The widespread resistance to antibiotics in pathogenic bacteria makes the development of a new generation of antimicrobials an urgent task. The development of new antibiotics must be accompanied by a comprehensive study of all of their biological activities in order to avoid adverse side-effects from their application. Some promising antibiotic prototypes derived from the structures of arenicins, antimicrobial peptides from the lugworm Arenicola marina, have been developed. Previously, we described the ability of natural arenicins-1 and -2 to modulate the human complement system activation in vitro. In this regard, it seems important to evaluate the effect of therapeutically promising arenicin analogues on complement activation. Here, we describe the complement-modulating activity of three such analogues, Ar-1[V8R], ALP1, and AA139. We found that the mode of action of Ar-1[V8R] and ALP1 on the complement was similar to that of natural arenicins, which can both activate and inhibit the complement, depending on the concentration. However, Ar-1[V8R] behaved predominantly as an inhibitor, showing only a moderate increase in C3a production in the alternative pathway model and no enhancement at all of the classical pathway of complement activation. In contrast, the action of ALP1 was characterized by a marked increase in the complement activation through the classical pathway in the concentration range of 2.5–20 μg/mL. At the same time, at higher concentrations (80–160 μg/mL), this peptide exhibited a complement inhibitory effect characteristic of the other arenicins. Peptide AA139, like other arenicins, exhibited an inhibitory effect on complement at a concentration of 160 μg/mL, but this effect was much less pronounced. Overall, our results suggest that the effect on the complement system should be taken into account in the development of antibiotics based on arenicins.
Introduction
Nowadays, the problem of resistance development of pathogenic microorganisms to conventional antibiotics has become increasingly serious [1]. In light of this problem, the task of searching for prototypes of a new generation of antibiotics is urgent for modern medicine. Natural antimicrobial peptides (AMPs) can be a promising source of novel antibiotics [2,3]. AMPs are short, predominantly cationic polypeptide molecules that possess toxic activity against bacteria and other pathogens. They have been described in

Figure 1. Amino acid sequences of the three natural arenicins and three analogues studied in this work. The color coding is as follows: blue, differences in primary structure of natural arenicins, taking Ar-1 as the reference; red, amino acid substitutions or deletions (indicated by the "-" symbol) in Ar-1[V8R] and ALP1 compared to Ar-1; purple, amino acid substitutions in AA139 compared to Ar-3. Cysteine residues are highlighted in yellow. Two invariant cysteines are involved in the disulfide bond common to all arenicin peptides and are shown by the solid black line in the upper part; the disulfide bond shared only by Ar-3 and its analogue, AA139, is shown by the dashed line in the bottom. All peptides have an unmodified amine and carboxyl at the N- and C-terminus, respectively. Some physicochemical properties of the peptides (net charge at pH 7.0 and hydrophobicity as the GRAVY index) are shown on the right.
One of the limitations of the application of AMPs as antibiotics is the possible side effects. The biological activity of AMPs is not limited to their action on microorganisms; these peptides can exhibit cytotoxic activity against host cells, display a variety of immunomodulatory effects, and participate in the pathogenesis of various diseases, for example, autoimmune diseases [13][14][15][16]. Since AMPs are considered as prototypes of a new generation of antibiotics, a comprehensive study of the possible consequences of their introduction into the human body including their immunomodulatory effects is necessary. Among the immunomodulatory effects of AMPs, one should take into account their influence on the activation of the complement system. Complement is part of the innate immune system, which can be activated by one of three pathways: the classical pathway (CP), the alternative pathway (AP), and the lectin pathway. Complement contributes to defense against infection by opsonizing microbes with C3b, C4b, and their derivatives; by production of anaphylatoxins, C3a and C5a, which attract and activate phagocytes; and by the direct lysis of Gram-negative bacteria by the membrane attack complex (C5b-9) [17][18][19].
Previously, we found that Ar-1 and -2 as well as Ar-1-(C/A), an arenicin-1 analogue devoid of a disulfide bond due to cysteine substitutions with alanine, are able to modulate complement activation [20,21]. All three peptides affect both the classical and alternative pathways (CP and AP) of complement activation. The aim of this work was to evaluate the effect of three arenicin analogues (Figure 1), which were designed to improve the properties important for their potential as antibiotic drug progenitors.
The Ar-1[V8R] peptide differs from Ar-1 by only one amino acid residue (arginine instead of valine in position 8), but compared to natural arenicins, it exhibits a low dimerization ability in the lipid environment, which seems to largely determine the cytotoxic properties of arenicins [22,23]. The ALP1 peptide is a shortened Ar-1 analogue with reduced hydrophobicity compared to natural peptides [24]. The AA139 peptide has been developed by Adenium Biotech based on the Ar-3 sequence with three amino acid substitutions, making the peptide less hydrophobic and more positively charged [25]. The AA139 peptide is currently in preclinical development.
All of these three arenicin analogues showed an increased selectivity of action (high antibacterial and low cytotoxic activity) compared to their natural prototypes. In particular, Ar-1[V8R] exhibited bactericidal activity comparable or even slightly higher than that of natural Ar-1, but an order of magnitude higher concentrations of Ar-1[V8R] were required for similar levels of human erythrocyte lysis and cytotoxic action on human embryonic fibroblasts compared to Ar-1 [23]. ALP1 showed approximately a twofold increase in the antibacterial activity compared to Ar-1 with negligible cytotoxic activity toward various human cells (erythrocytes, embryonic fibroblasts, normal astrocytes) [24]. AA139 was nontoxic to four human cell lines, although Ar-3 showed a weak cytotoxic activity against three of them. For equal hemolytic activity toward human erythrocytes, an order of magnitude higher concentration of AA139 was required compared with Ar-3. However, in terms of antimicrobial activity, AA139 was several times more effective than Ar-3 [25,26]. Such differences in the activity pattern can be explained by the greater selectivity of AA139 to highly negatively-charged membranes characteristic of bacterial cells [26]. The AA139 peptide is also highly resistant to the inhibitory action of blood serum, in contrast to Ar-3 [25].
In this work, we investigated the immunomodulatory effects of these three arenicin analogues at the level of the complement system, using recombinant peptides. We demonstrated that these arenicin analogues are able to affect the complement system activation in vitro.
Results
In this study, the peptides ALP1, Ar-1[V8R], and AA139 were produced as parts of fusion proteins that included an octahistidine tag and a modified thioredoxin A (M37L). The proteins were expressed in Escherichia coli BL21 (DE3) cells, and the obtained total cell lysates were fractionated by affinity chromatography. After the purification and cleavage of the fusion proteins, reverse-phase high-performance liquid chromatography (RP-HPLC) was used to isolate the mature AMPs. MALDI mass spectrometry analysis of recombinant AA139 showed that the measured m/z value matched the calculated molecular mass of the corresponding peptide, indicating the formation of two disulfide bonds and the absence of any other modifications (Figure S1). Further evidence that all of the cysteine residues are involved in disulfide bridging was obtained from the alkylation experiment (Figure S2). After the repurification step (Figure S3), the final yield of AA139 was about 4 mg per 1 L of culture, which is comparable to that of ALP1 or Ar-1[V8R], as described previously [22,24].
To evaluate the ability of arenicin peptides to modulate the human complement system, we utilized two experimental in vitro models with animal erythrocytes as targets of complement activation. Antibody-sensitized sheep erythrocytes (E sh ) and rabbit erythrocytes (E rab ) were used to study the CP and AP activation in normal human serum (NHS), respectively. In both cases, complement activation was estimated by the hemolysis level and by C3a and C5a anaphylatoxin accumulation. The results were expressed as values of the H and E coefficients (Figure 2). Importantly, none of the peptides themselves led to hemolysis in the experimental models, since the lysis level did not differ from the baseline when NHS was replaced by heat-inactivated serum (Table S1). All of the peptides demonstrated the ability to modulate complement activation, albeit in different modes. In the presence of antibody-sensitized sheep erythrocytes E sh (the CP model), a dose-dependent action of the peptides on C3a accumulation was observed. However, Ar-1[V8R] and AA139 only led to the inhibition of C3a production at concentrations of 80 and 160 µg/mL (E coefficient values below zero), whereas ALP1 displayed a bidirectional effect. The addition of the latter peptide at 2.5–20 µg/mL resulted in an elevated C3a level in the experimental samples, but at higher concentrations of ALP1, the C3a level was decreased compared to the control without peptides (Figure 2A). Similar patterns persisted in the analysis of C5a accumulation and hemolysis level (Figure 2B,C). Of note is the extremely high complement activation in the presence of ALP1, with a sixfold increase in the C5a level (corresponding to an E coefficient value of 5).
In the AP model, the AA139 peptide had no apparent effect on complement activation; only the C5a level at a peptide concentration of 160 µg/mL was significantly lower than that of the control. The two Ar-1 analogues mainly showed an inhibitory effect at high concentrations (160 µg/mL for the assessment of the C3a and C5a levels, or starting from 40 µg/mL for the level of hemolysis). At the same time, increased C3a levels were observed in the presence of Ar-1[V8R] and ALP1 at 5–20 µg/mL, which were not reflected in the C5a levels or E rab lysis (Figure 2E,F).
Quantification of the inhibitory effect of arenicin analogues on complement activation, presented as IC 50 values (concentrations corresponding to an E or H value of −0.5), is shown in Table 1.

Table 1. IC 50 values, µg/mL (µM), for the classical and alternative pathways per peptide; "-" indicates that 50% inhibition was not achieved in the concentration range of 0–160 µg/mL.

The data in Table 1 show that Ar-1[V8R] was the most efficient complement inhibitor among the three peptides, as it had the lowest IC 50 values (in µM), whereas AA139 was only a weak CP inhibitor.
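As an illustration of how such IC 50 values can be extracted from a dose-response series, the sketch below linearly interpolates the concentration at which the mean E (or H) coefficient crosses −0.5; the dose-response numbers are hypothetical, not the measured data.

```python
import numpy as np

def ic50_from_dose_response(concs, E_values, target=-0.5):
    """Interpolate the concentration where the E (or H) coefficient crosses
    the target value of -0.5, i.e., 50% inhibition relative to control."""
    concs = np.asarray(concs, dtype=float)
    E = np.asarray(E_values, dtype=float)
    for i in range(len(E) - 1):
        if (E[i] - target) * (E[i + 1] - target) <= 0:   # sign change found
            frac = (target - E[i]) / (E[i + 1] - E[i])
            return concs[i] + frac * (concs[i + 1] - concs[i])
    return None   # 50% inhibition not reached; reported as "-" in Table 1

# Hypothetical dose-response series over the tested concentration range
print(ic50_from_dose_response([2.5, 5, 10, 20, 40, 80, 160],
                              [0.10, 0.05, -0.10, -0.25, -0.40, -0.60, -0.80]))
```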
Discussion
The development of new therapeutic drugs including antibiotics should be accompanied by a thorough investigation of all aspects of their possible effects in the body, in particular, their action on complement activation. Although the complement system, as part of the immune system, contributes to the microbe clearance, its excessive activation is generally undesirable. As a complex multifaceted system, complement performs a variety of immune and non-immune functions and may be involved in the development of many pathological processes if it is dysregulated. In this regard, the therapeutic inhibition of complement, rather than its stimulation, is currently more urgent in medical practice [27][28][29]. Thus, the property of a drug candidate to enhance complement activation can be regarded as an unfavorable side effect. In particular, this refers to the increased production of proinflammatory factors such as C3a and C5a. On the other hand, the inhibition of complement by an antibiotic may impair the antimicrobial response of the immune system and thus reduce the efficacy of the antibiotic in vivo. Nevertheless, the anti-inflammatory and immunosuppressive activity of antibiotics is beneficial in conditions of excessive inflammation [30,31]. Therefore, the ability of a drug candidate to enhance or inhibit complement activation needs to be taken into account when developing therapeutic protocols. It should be noted that the concentration-dependent bidirectional effect of the drug candidate on complement is a critical issue, as it can lead to unpredictable effects in vivo.
In previous works, we found that Ar-1 and Ar-2 are able to modulate both the CP and AP of the complement, leading to the enhancement or inhibition of activation depending on their concentrations [20,21]. It has also been shown that Ar-1 is able to interact with two complement proteins, C1q and C3b, which may explain its action on the two activation pathways [32,33]. Despite the generally similar mode of action on complement activation and the difference in a single amino acid residue, some details of the effects of Ar-1 and Ar-2 differed markedly. In particular, the ability to enhance complement CP activation at relatively low concentrations is much weaker for Ar-2 than for Ar-1 [21].
Arenicins, as biologically active peptides, have attracted the attention of researchers. A number of works are devoted to obtaining modified analogues with altered functional activities [21,[34][35][36]. Three previously described arenicin analogues, Ar-1[V8R] [22], ALP1 [24], and AA139 [25], became the subject of study in the present work. Their high selectivity makes these peptides promising prototypes of new antibiotics. In particular, comprehensive preclinical studies are being conducted with AA139 peptide, which has shown good effectiveness against Gram-negative bacteria including antibiotic-resistant strains both in vitro and in animal models [25,26,37,38].
In our work, we investigated the effect of these three analogues on the activation of the human complement system in vitro. We used recombinant peptides for the experiments. Two rounds of HPLC purification assured the absence of bacterial contaminants.
We showed that the action of Ar-1[V8R] and ALP1 was similar to that of Ar-1 and Ar-2, as previously described. However, although both peptides are derivatives of Ar-1, Ar-1[V8R] is more similar to Ar-2 in its action. Moreover, of the five highly similar arenicin isoforms we studied (Ar-1, Ar-2, and their analogues), Ar-1[V8R] was the only one whose enhancing effect on complement CP activation was negligible. If the weak enhancement of C3a production in the AP model is not taken into account, this peptide can be called a pure complement inhibitor. Interestingly, of all the arenicin peptides studied thus far, Ar-1[V8R] is the most effective in terms of complement inhibition. In hemolytic assays, its IC 50 values are 3.2 and 23.8 µM for CP and AP, respectively. In contrast, ALP1 was the strongest enhancer of the CP complement activation of all of the natural isoforms and their analogues studied. At the same time, at high concentrations (80-160 µg/mL), this peptide exhibited the complement inhibitory effect characteristic of the other arenicins. The inhibitory action of the AA139 peptide on complement was much weaker compared to other arenicins. It is difficult to say whether these results for AA139 are due to sequence differences between Ar-3 and other arenicins or to structural features of this particular analogue.
The mechanisms of modulation of the complement system activation by arenicins remain unclear, especially the reasons for the bidirectional action of some isoforms. Apparently, this is at least partly due to the interaction of arenicins with complement proteins (C1q, C3b), but other mechanisms are also possible, for example, heparin binding, as discussed in [21]. The reasons for the differences in the action of the different arenicin isoforms are also elusive, but they seem to be related more to specific amino acid residues or sequences than to differences in physicochemical properties. In this regard, it can be noted that of the six arenicin peptides shown in Figure 1, the least hydrophobic (AA139) and one of the most hydrophobic (Ar-1) peptides exhibited the greatest ability to enhance complement activation. It is possible, however, that the peptide size may be important. Thus, ALP1 was designed to mimic tachyplesins, AMPs from horseshoe crabs Tachypleus spp., with the same polypeptide chain length. As we describe here for ALP1, tachyplesin-1 was shown to be an enhancer of complement activation [39]. Further studies are needed to understand which structural features of arenicins determine their action on complement activation and, consequently, how they can be modified to alter these properties in a targeted manner.
In terms of the potential applications of the studied arenicin analogues as therapeutic antibiotics, ALP1 needs further modification to remove its bidirectional effect on complement, and first, its ability to significantly enhance the production of C5a anaphylatoxin, which has potent proinflammatory activity. It seems that for Ar-1[V8R], and especially for AA139, their possible effects on complement activation should not be a significant obstacle to their therapeutic use. However, high local concentrations of the peptides in the bloodstream should be avoided in order to prevent undesirable complement inhibition.
It should be noted that our results were obtained in experiments in vitro and it is possible that they will not be fully reproducible in vivo, and, therefore, the recommendations given are tentative. Particular details of the experimental conditions (use of diluted serum, presence of gelatin in the buffers, etc.) can affect the interaction of peptides with complement proteins. One reason to be cautious about extrapolating the in vitro results to the in vivo conditions is the presence of proteinases in the serum, which can reduce the stability of the peptides. In our work, we incubated the peptides with serum for 30 min, which allows us to consider such risks as minimal. These considerations imply the need for further investigations including in clinical trials.
Peptides
The recombinant Ar-1[V8R] and ALP1 were obtained as described previously [22,24]. The peptide AA139 was obtained using the same procedures. Briefly, the gene encoding AA139 was obtained by the annealing of two primers followed by one round of DNA-polymerase extension, and was then cloned into the pET-based vector as described previously [22,24]. The target peptides were expressed in E. coli BL21 (DE3) as chimeric proteins that included the octahistidine tag, the E. coli thioredoxin A with the M37L substitution (TrxL), a methionine residue, and the mature peptide. The transformed cells were grown at 37 °C in Lysogeny broth (LB) medium supplemented with 20 mM glucose, 1 mM magnesium sulfate, and 100 µg/mL ampicillin, and were induced at an OD 600 of 0.8 with 0.3 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) for 4 h at 30 °C and 220 rpm. After centrifugation, the pelleted cells were suspended and sonicated in 100 mM phosphate buffer (pH 7.8) containing 20 mM imidazole and 6 M guanidine hydrochloride to fully solubilize the fusion protein. Purification of the peptide involved immobilized metal affinity chromatography (IMAC) of the cell lysate, CNBr cleavage of the fusion protein, and reversed-phase HPLC (RP-HPLC) with the use of a Reprosil-pur C 18 -AQ column (Dr. Maisch GmbH). The collected fractions were analyzed by MALDI-TOF mass spectrometry using a Reflex III instrument (Bruker Daltonics). The fractions containing the target peptides were lyophilized and dissolved in water. The peptide concentrations were estimated using UV absorbance. The fractions with confirmed masses were dried in vacuo and repurified by a second round of RP-HPLC. Repurification of the peptides was performed using an analytical column (Symmetry 300 C 18 ) at a flow rate of 1 mL/min in a linear gradient of solution B (80% acetonitrile, 0.1% TFA) in solution A (5% acetonitrile, 0.1% TFA): 0-100% over 50 min (Figure S3).
The formation of intramolecular disulfide bonds in AA139 was confirmed using alkylation with iodoacetamide (IAA). Two peptide aliquots (80 µM in 95 µL of 100 mM potassium phosphate buffer, pH 7.8), one of which was supplemented with 2 mM dithiothreitol (DTT), were incubated at 55 °C for 30 min. Then, 5 µL of a freshly prepared 400 mM aqueous IAA solution were added to both tubes, and the samples were incubated at room temperature in the dark for another 30 min. The samples were then desalted using ZipTip-C 18 pipette tips (Merck-Millipore) and analyzed by MALDI-TOF mass spectrometry.
Serum and Erythrocytes
Normal human serum (NHS), used as a source of complement, was collected by medical staff (Laboratory of Viral Infections Diagnostics, Department of Clinical Microbiology, Pavlov First Saint Petersburg State Medical University, Saint Petersburg, Russia) from more than 20 healthy volunteers, pooled, aliquoted, and stored at −70 °C for no longer than two months. Serum aliquots were thawed at +4 °C on the day of the experiment, kept in an ice bath before being introduced into the test tubes, and were not reused. To obtain serum with inactivated complement, the serum was incubated at +56 °C for one hour immediately before the experiment.
Complement Activation
The ability of peptides to modulate the human complement system was evaluated by hemolytic assay and by ELISA, as previously described [20,21]. In addition to the C3a ELISA used in previous works, we also used the C5a ELISA Kit from "Cytokine" (Saint Petersburg, Russia).
Briefly, the experimental samples contained erythrocytes, diluted NHS as a source of complement proteins, and arenicin analogues at different concentrations. For the CP assay, E sh were introduced to a final concentration of 1 × 10^8 cells per mL, NHS was diluted to 1%, and DGVB ++ was used to dilute all of the components. For the AP assay, there were 1 × 10^8 cells per mL of E rab , 5% NHS, and GVB + . After the 30 min incubation at +37 °C, the lysis of the erythrocytes was stopped by the addition of PBS (phosphate-buffered saline, pH 7.4) at a ratio of 1:7.5. Samples were centrifuged at 500× g for 5 min at room temperature, and the supernatants were photometered at 414 nm. The same supernatants were used for C3a and C5a determination by ELISA.
For the calculation and visualization of the results, we utilized coefficients for the evaluation of the hemolytic activity of complement (H, for "hemolysis") and of complement-dependent C3a or C5a accumulation (E, for "ELISA").
The hemolytic activity of serum in a sample was calculated as

$$H = \frac{\mathrm{OD}_{414}(\mathrm{sample}) - \mathrm{OD}_{414}(\mathrm{control})}{\mathrm{OD}_{414}(\mathrm{control})}$$

The control was a sample with no peptides added. H values above zero indicate the augmentation of complement-mediated hemolysis, while H values below zero indicate inhibition.
The alterations in the accumulation of C3a or C5a were expressed as

$$E = \frac{\mathrm{OD}_{450}(\mathrm{sample}) - \mathrm{OD}_{450}(\mathrm{control})}{\mathrm{OD}_{450}(\mathrm{control})}$$

As the control, a sample with no peptides added was used. As with the H coefficient, E values above or below zero indicate an increase or decrease in anaphylatoxin accumulation, respectively.
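A minimal sketch of how the two coefficients are computed from optical-density readings; the numbers below are invented for illustration and are not measured data.

```python
import numpy as np

def modulation_coefficient(od_samples, od_controls):
    """H (hemolysis, OD414) or E (ELISA, OD450) coefficient: relative
    deviation of the sample signal from the no-peptide control."""
    od_s = np.mean(od_samples)
    od_c = np.mean(od_controls)
    return (od_s - od_c) / od_c

# Hypothetical OD450 readings from a C3a ELISA (sample vs. control wells)
E = modulation_coefficient([0.42, 0.45, 0.44, 0.43], [0.60, 0.58, 0.61, 0.59])
print(f"E = {E:+.2f}")   # negative value: complement activation inhibited
```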
Statistical Analysis
Statistical analysis was performed using the R language (v4.0.2) in the RStudio environment (R Core Team, R Foundation for Statistical Computing, Vienna, Austria). The significance of the deviation of the H and E coefficient values from the controls was evaluated by the two-sample t-test. The experiments on complement modulation were performed at least four times for each of the peptides. For both the hemolytic and ELISA assays, p-values less than 0.05 were considered statistically significant. The plots were drawn using the R language with the ggplot2 (v3.3.2) and ggpubr (v0.4.0) packages.
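For illustration, an equivalent two-sample t-test can be reproduced with SciPy (the published analysis was carried out in R); the coefficient values below are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical E coefficients from four independent experiments at one
# peptide concentration, compared against control samples without peptide
E_peptide = np.array([-0.62, -0.55, -0.70, -0.58])
E_control = np.array([0.02, -0.04, 0.05, -0.01])
t_stat, p_value = stats.ttest_ind(E_peptide, E_control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> significant
```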
Institutional Review Board Statement: Not applicable.
Data Availability Statement: All data presented in this study are available from the corresponding author on reasonable request. | 2022-10-01T15:04:45.468Z | 2022-09-28T00:00:00.000 | {
"year": 2022,
"sha1": "e60f72ee3bd24c45298b6423c6b982e1a987b486",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-3397/20/10/612/pdf?version=1664373886",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8024b1858b84c7f4e59514b54ef9863605b65baa",
"s2fieldsofstudy": [
"Medicine",
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253846855 | pes2o/s2orc | v3-fos-license | On-site cycling drag analysis with the Ring of Fire
The Ring of Fire (RoF) measurement concept, introduced by Terra et al. (Exp Fluids 58:83. https://doi.org/10.1007/s00348-017-2331-0, 2017; Experiments in Fluids 59:120, 2018), is applied to real cyclists to enable the aerodynamic drag determination during sport action. This principle is based on large-scale stereoscopic particle image velocimetry (PIV) measurements over a plane crossed by the athlete during cycling. The momentum before and after the passage of the athlete poses the basis for the control volume analysis in the athlete's frame of reference, which returns the aerodynamic drag. This approach extrapolates aerodynamic studies towards more realistic conditions, compared to experiments performed in wind tunnels with scaled or stationary athletes. The measurement concept is termed Ring of Fire as the rider crosses a region of intense light. Two experiments are conducted, indoor and outdoor, with attention placed on the effects of the environmental conditions and the confinement of the measurement region. Stereo-PIV measurements feature a plane of approximately 2 × 2 m², using neutrally buoyant sub-millimeter helium-filled soap bubbles (HFSB) as flow tracers. The drag measurement is obtained examining the wake produced by the athlete. It is observed that the drag value becomes independent of time after about 5 torso lengths from the passage. A statistical estimate of the drag is produced combining the results of several passages. Fluctuations of the drag value during a single passage are associated with the unsteady wake flow. Overall fluctuations among different transits are ascribed to the varying conditions of the airflow prior to the passage of the athlete. The experiments conducted outdoor exhibit significantly larger dispersion of the drag value, compared to the quieter conditions indoor. Repetition of the transit 10–30 times yields a basis for statistical convergence of the average drag value. The flow topology past the cyclist compares satisfactorily between both experiments and with wind tunnel experiments reported in literature. The current measurements clearly separate drag values from upright and time-trial athlete's positions, indicating the suitability of this principle for aerodynamic analysis and optimization studies.
Introduction
Most experimental research in sport aerodynamics is performed in wind tunnels, despite the fact that the dynamical situation to be simulated (e.g., a cycling or running athlete) poses challenges related to the athlete's motion and its control. As a result, the problem is often simplified by reverting to a stationary scaled model to match the constraints posed by the wind tunnel size and the measurement techniques used for the aerodynamic analysis. The aerodynamic force is directly measured by connecting the model to a force balance. Alternatively, the drag force can be derived from velocity measurements in the wake of the object, carried out either via Pitot rakes (e.g., Jones 1936) or, more recently, via particle image velocimetry (Kurtulus et al. 2007; van Oudheusden et al. 2007; David et al. 2009). The latter principle invokes conservation of momentum in a control volume that encloses the object. The deficit of momentum flux past the object corresponds to the aerodynamic drag acting on it (van Oudheusden et al. 2006).
In some cases, wind tunnel experiments cannot reproduce the aerodynamic conditions with sufficient accuracy and experiments whereby the model moves in quiescent air are considered. Typical limitations encountered in wind tunnel tests of sport aerodynamics range from inaccurate scaling of shape and roughness, model blockage, interference of the support (Barlow et al. 1999) or, for instance, when the flow behind accelerating objects is to be dealt with (Coutanceau and Bouard 1977). Additionally, the wake development far downstream of the model can be performed with more advantages when the model is towed (Scarano et al. 2002) or catapulted in quiescent fluid (Von Carmer et al. 2008). The transiting model approach has been successfully adopted to investigate also ground vehicle aerodynamics (Jönsson and Loose 2016) or animals in free flight (Hedenström and Johansson 2015;Ben-Gida et al. 2013).
The recent works of Terra et al. (2017, 2018) with tomographic PIV in combination with helium-filled soap bubbles to determine the drag of a towed sphere can be seen as preliminary to the current study. The use of HFSB offers the potential to upscale the measurement region up to several square meters, as demonstrated by Bosbach et al. (2009).
In the present study, a measurement apparatus is realized that quantifies the aerodynamic drag of a full-scale cyclist during sport action. The experimental procedure to achieve drag measurements follows the same principles discussed by Terra et al. (2017). The measurements are performed by large-scale stereoscopic PIV over a field of view of about 4 m². For such an approach, where the rider crosses the illuminated measurement plane, the experimental method is referred to by the name "Ring of Fire" (RoF).
Here, the RoF concept is applied for the study of the aerodynamic drag in cycling. The latter dominates the forces opposing the athlete's motion, which justifies the attention devoted to aerodynamic drag in several studies (Kyle and Burke 1984; Wilson 2004; Lukes et al. 2005; Crouch et al. 2017 among others). The flow field around a pedaling cyclist features a complex system of vortices, in turn depending also upon the position of the cyclist's torso and legs along the crank cycle. These variations in aerodynamic drag cannot be solely ascribed to changes in frontal area, but result from complex aerodynamic interactions leading to different flow and vortex topology. Crouch et al. (2014) have produced a detailed aerodynamic survey in the wake of a cyclist by wind tunnel experiments. The work resulted in the identification of the most prominent streamwise vortices emanating from the athlete at different positions during pedaling.
The detailed velocity field around a full-scale cyclist model has been recently measured with robotic particle image velocimetry (Jux et al. 2018), where also the near field flow topology has been characterized.
The RoF experiments aim at determining the aerodynamic drag of the cyclist during sport action, such to obtain an estimate close to on-site conditions. The results are to be compared not only to the above mentioned wind tunnel studies and computational fluid dynamics (CFD) simulations, but also to other techniques currently practiced for on-site measurements (coast down, Petrushov 1998; torque power output, Grappe et al. 1997). Moreover, they support the correlation between the aerodynamic drag and the flow field by quantitative visualizations of the velocity field in the cyclist's wake.
The present work describes the realization of the RoF concept for full-scale sport aerodynamics and discusses the experimental procedures for indoor and outdoor experiments, mimicking, respectively, track and road cycling. The aerodynamic drag estimation from cyclists during sport action is compared to literature data from wind tunnel experiments and other techniques. Furthermore, the experiments cover different postures of the cyclist (time trial and upright) with the aim to directly measure the effect of posture on aerodynamic drag and its detectability with the RoF.
Working principle
The aerodynamic drag of an object moving in a fluid can be evaluated by invoking the conservation of momentum expressed in a control volume. A recent review of the problem has been provided by Rival and Van Oudheusden (2017). The formulation of the problem is simplified from its unsteady form to the steady condition when applying a Galilean transformation (Arnold 1989), whereby the frame of reference moves with the object and the fluid is considered in uniform motion upstream of the object (Terra et al. 2018). If the control surfaces S 1 (upstream) and S 2 (downstream) are sufficiently far from the object surface, it can be shown that the viscous stress is negligible (Kurtulus et al. 2007). The drag force can, thus, be expressed as the difference of momentum and pressure fluxes between the two surfaces:

$$D(t) = \rho \iint_{S_1} u^2\, \mathrm{d}S - \rho \iint_{S_2} u^2\, \mathrm{d}S + \iint_{S_1} p\, \mathrm{d}S - \iint_{S_2} p\, \mathrm{d}S \qquad (1)$$

In the expression above, ρ is the air density and u is the streamwise velocity in the frame of reference of the object. The cyclist is assumed to move at constant speed u C with respect to the laboratory frame of reference. Let us now consider a control surface normal to the direction of motion of the cyclist. Prior to the passage of the cyclist, the air motions feature a chaotic velocity u env , resulting from the environmental effects, as depicted in Fig. 1-top. Assuming uniform and quiescent conditions prior to the passage would largely simplify the problem formulation. However, even in scaled experiments, the disturbances in the air motion induced by the environment and the seeding generation are reported not to be negligible (Terra et al. 2018). After the passage of the cyclist, the flow velocity features a coherent wake with a velocity profile u wake that follows the moving cyclist. Making use of a Galilean transformation, the representation of velocity and momentum changes from the laboratory to the cyclist frame of reference moving at speed u C . As a result, the air flow velocity ahead of the cyclist can be written as U ∞ = u env − u C , while that at its back is u wake − u C (Fig. 1-bottom), so that Eq. (1) becomes

$$D(t) = \rho \iint_{S_1} (u_{env} - u_C)^2\, \mathrm{d}S - \rho \iint_{S_2} (u_{wake} - u_C)^2\, \mathrm{d}S + \iint_{S_1} p\, \mathrm{d}S - \iint_{S_2} p\, \mathrm{d}S \qquad (2)$$

This expression is valid under the condition that the mass flow is conserved across S 1 and S 2 . This is ensured by shrinking the inlet plane (S 1 ) from the outer edges, starting from the same size as that of the outlet plane (S 2 ).
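For illustration, Eq. (2) can be discretized directly on the two measured velocity planes. The sketch below assumes that the environmental and wake fields are sampled on a common grid of cell area dA, that the pressure fields are available (or negligible), and that mass conservation between the planes has already been enforced by contracting the inlet plane; the air density and the toy numbers are assumptions.

```python
import numpy as np

RHO = 1.2  # kg/m^3, assumed air density

def drag_from_planes(u_env, u_wake, p_env, p_wake, dA, u_C):
    """Discretized momentum balance of Eq. (2) in the cyclist frame:
    inlet momentum flux minus outlet momentum flux plus pressure term."""
    mom_in = RHO * np.sum((u_env - u_C) ** 2) * dA
    mom_out = RHO * np.sum((u_wake - u_C) ** 2) * dA
    pressure = np.sum(p_env - p_wake) * dA
    return mom_in - mom_out + pressure

# Toy example: a uniform 2 m/s wake over a 0.36 m^2 patch behind a 5.3 m/s rider
u_env = np.zeros((30, 30))            # quiescent air before the passage
u_wake = np.full((30, 30), 2.0)       # air dragged along in the wake
p = np.zeros((30, 30))                # pressure recovered at both planes
print(f"D = {drag_from_planes(u_env, u_wake, p, p, dA=0.02**2, u_C=5.3):.1f} N")
```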
Equation (2) yields the instantaneous aerodynamic drag from the surface integral of momentum and pressure over a fixed plane before and after the passage of the cyclist. Ensemble averaging of the drag among multiple passages is performed to achieve a higher degree of statistical convergence:

$$\langle D \rangle (t) = \frac{1}{N} \sum_{i=1}^{N} D_i(t) \qquad (3)$$

where N is the number of model passages. The aerodynamic drag exhibits temporal fluctuations associated with the unsteady nature of the flow around the cyclist. However, these unsteady fluctuations are of little relevance to the evaluation of the cyclist's drag, given their short time scale. Time averaging is, therefore, performed within the ensemble average with the objective of reducing the effect of the unsteady fluctuations on the evaluation of the time-average drag:

$$\overline{D} = \frac{1}{T} \sum_{i=1}^{T} \langle D \rangle (t_i) \qquad (4)$$

where T is the total number of time steps and ⟨D⟩(t i ) is the ensemble-average drag at each time step in the wake.
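In sketch form, Eqs. (3) and (4) reduce to two mean operations over a matrix of instantaneous drag traces (one row per passage); the synthetic traces below are for illustration only.

```python
import numpy as np

def average_drag(drag_traces):
    """Ensemble-average the instantaneous drag over N passages (Eq. 3),
    then time-average the result over the T wake time steps (Eq. 4)."""
    D = np.asarray(drag_traces)          # shape (N passages, T time steps)
    ensemble = D.mean(axis=0)            # <D>(t_i)
    return ensemble, ensemble.mean()     # <D>(t) trace and time-average drag

# Example: 10 synthetic passages fluctuating around a 7.5 N mean drag
rng = np.random.default_rng(0)
traces = 7.5 + rng.normal(0.0, 1.5, size=(10, 200))
_, D_mean = average_drag(traces)
print(f"time-average drag = {D_mean:.2f} N")
```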
Experimental setup and procedure
Experiments were conducted with a cyclist riding a time-trial (TT) bike. For the indoor case, the cyclist was male, 1.89 m tall, with a weight of 68 kg. He wore a short-sleeve time-trial suit from Team Sunweb and a Giant Rivet TT helmet. The cyclist riding through the outdoor setup was 1.84 m tall and weighed 83 kg at the moment of testing. He was equipped with a long-sleeved time-trial suit from Team Blanco and a Lazer Wasp TT helmet. Moreover, for safety reasons, both cyclists wore a pair of laser goggles. The approximate torso chord length, c, for both athletes is 600 mm. In the indoor experiment, a Giant Trinity TT Advanced Pro bike with 2 × 11 gears was used, while a Ridley Cheetah TT bike with 2 × 9 gears was used for the outdoor experiment.
Experimental facilities and cycling conditions
The experimental facilities and test conditions are presented in Table 1. The top view of the sport hall and of the outdoor site is shown in Fig. 2. The flow tracers are generated and confined within a tunnel of 4 m × 3 m and 3 m × 2 m (width × height) for the indoor and outdoor experiments, respectively. Curtains are used to maintain a high concentration of tracers within the duct. The entrance and the exit in the outdoor experiment are closed during accumulation and opened prior to the transit of the cyclist. For the indoor experiment, a curtain at the exit was sufficient. The measurement plane is near the middle of the duct. Considering the small blockage ratios of 3.5 and 7% for the indoor and outdoor experiments, respectively, a non-confined environment is assumed for the control volume approach. The floor is covered with a thin carpet (polypropylene, 3 mm) to avoid ground slipperiness due to the PIV seeding. Photographs of the setup during the experiments are shown in Figs. 3 and 4. During the indoor experiment, the cameras were positioned 6 m upstream of the duct entrance.
Although the two experiments have a similar acceleration length before the measurement plane (Fig. 2), the limited available braking length in the indoor experiment requires conducting the tests at a lower velocity (5.3 m/s). The crank angle is defined as the angular position of the right foot (forward) with respect to the horizontal crank position.
In both cases, measurements are conducted with the cyclist in upright and time-trial positions (see Fig. 5). Following Crouch et al. (2014), the pedaling frequency (cadence) is normalized with the advancing speed as k = 2rf/u C , where r is the bike crank length, f is the cadence, and u C is the cyclist velocity, as reported in Table 1. The reduced frequency is k = 0.12 indoor and k = 0.23 outdoor, respectively.
PIV instrumentation, imaging and data processing
Velocity measurements are performed with a large-scale stereoscopic PIV system. The experimental parameters are presented in Table 2. Neutrally buoyant helium-filled soap bubbles (HFSB) with an average diameter between 0.3 and 0.4 mm are used, providing sufficient light scattering to visualize a field of view (FOV) of the order of 4 m². In the indoor experiment, a low-repetition-rate PIV system is used, whereas the outdoor experiment features high-speed PIV (see Table 2 for specifications). The results are not affected by the selection of the hardware, which differs only due to availability at the time of the experiments. The low-speed system benefits from the higher pulse energy and sensor resolution, with well-resolved particle images (diffraction disk covered by approximately 2 pixels). On the other hand, the high-speed system offers three orders of magnitude higher temporal resolution, enabling more advanced data processing, at the cost, however, of a lower imaging resolution (diffraction disk imaged over 0.5 pixels). The pulse separation with the low-speed system is chosen considering the out-of-plane loss-of-correlation factor (Keane and Adrian 1992). A cross-correlation analysis with multigrid image deformation (Scarano and Riethmuller 2000) is employed. A typical recording of particle images is shown in Fig. 6 for both experiments. The more controlled environmental conditions in the indoor experiment result in a more uniform dispersion of the tracers and PIV images with homogeneous concentration. Achieving a uniform seeding distribution in the outdoor experiment is hampered by the effect of wind gusts. From the raw PIV images, the cyclist's crank angle at the moment of the passage through the laser sheet is determined with an accuracy of ±10°.
Data processing
The recorded images are analyzed with the LaVision DaVis 8 software. The pre-processing removes background light by subtracting the minimum intensity over time at each pixel. The recordings from the indoor experiment are analyzed with dual-frame cross-correlation, with the time separation between frames set to 3 ms. A sliding sum-of-correlation algorithm (Sciacchitano et al. 2012) is employed for the outdoor experiment. For the latter, the analysis averages the correlation maps from seven pairs of frames over a sliding time interval of 3.5 ms, with the time separation between frames set to 2 ms. To quantify the range of resolvable velocity scales, the dynamic velocity range (DVR) is determined as the ratio between the maximum velocity in the near wake of the cyclist and the standard deviation of the velocity distribution in the quiescent flow prior to the cyclist's passage. Details of the image processing parameters and estimates of the measurement dynamic range are summarized in Table 3. The drag force evaluation after one passage of the cyclist is obtained via Eq. (2). The velocity field prior to the passage of the cyclist is significantly weaker than in his wake. Averaging the measurements before the passage over a short time interval (1.25 s and 0.1 s for the indoor and outdoor experiment, respectively) reduces the effect of measurement noise in the determination of u env . To further reduce the measurement noise in the drag estimate, a wake-contouring approach is applied, which isolates the cyclist's wake from the outer flow region. The wake is defined as the flow region whose velocity is below a certain fixed percentage (5% in the present case) of the minimum velocity in the flow field. Such region is then dilated by two adjacent vectors to also include the shear layers, thus obtaining the outlet surface S 2 of Eq. (2). The inlet surface S 1 is obtained by shrinking the contour of S 2 in all directions up to the point that the conservation of mass is satisfied.
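A sketch of the wake-contouring step under one plausible reading of the criterion, working in the laboratory frame, where the wake is the region of air dragged along by the cyclist; the threshold, dilation width, and toy velocity field are assumptions. The dilation uses SciPy's binary_dilation to grow the mask by a couple of vectors so that the shear layers are included.

```python
import numpy as np
from scipy import ndimage

def wake_mask(u_x_lab, threshold=0.05, dilate=2):
    """Keep cells whose lab-frame streamwise velocity exceeds a fixed
    fraction of the peak wake velocity, then dilate the region."""
    mask = u_x_lab > threshold * np.nanmax(u_x_lab)
    return ndimage.binary_dilation(mask, iterations=dilate)

# Toy plane: a Gaussian wake with 2 m/s peak velocity on a 100 x 100 grid
y, z = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
u_x_lab = 2.0 * np.exp(-(y**2 + z**2) / 0.1)
print(int(wake_mask(u_x_lab).sum()), "cells retained as wake")
```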
The cyclist's speed is monitored measuring the bicycle transit time across the light sheet. In the indoor experiment, a magnetic sensor provides the cyclist speed in real time, additionally.
The wake past the cyclist exhibits unsteady behavior. Consequently, also the evaluation of the drag force yields temporal variations. A statistically significant estimate of the cyclist's average drag is produced by ensemble averaging (Eq. 3) the velocity field obtained from 10 and 28 repeated measurements for the outdoor and indoor conditions, respectively.
Two main repeatability issues are identified that require a specific treatment of the instantaneous data to retrieve ensemble-average flow fields: (1) since the cyclist crosses the measurement plane at a different Y coordinate on every passage, the measured velocity field is relocated in the Y-direction to compensate for this shift; (2) the relative distance between the cyclist and the measured wake planes is not exactly the same among different passages; the exact streamwise relocation is obtained by examining the position of the cyclist when he crosses the measurement plane. For the latter problem, the high-speed PIV system resolves the motion of the cyclist to within a few millimeters in the streamwise direction; therefore, any error associated with variations of the relative distance between cyclist and wake planes can be neglected.
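The lateral relocation and ensemble averaging can be sketched as follows; the data container, the assumption of a uniform lateral grid and the availability of a per-run crossing position are all hypothetical, and the wrap-around of np.roll is assumed negligible at the domain edges.

```python
import numpy as np

def ensemble_average(runs, y, y_cross):
    """Align repeated passages in Y and ensemble-average them (illustrative).
    runs: list of 2D velocity fields sampled on the same (z, y) grid;
    y: 1D lateral coordinate [m]; y_cross: lateral crossing position per run."""
    dy = y[1] - y[0]                                 # uniform lateral spacing assumed
    aligned = []
    for u, yc in zip(runs, y_cross):
        shift = int(round(yc / dy))                  # lateral shift in grid points
        aligned.append(np.roll(u, -shift, axis=1))   # recenter each run on the cyclist
    return np.mean(aligned, axis=0)                  # Eq. (3)-style ensemble average
```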
The results are presented in the coordinate system shown in Figs. 3 and 4, with t = 0 defined as the instant when the rearmost point of the saddle crosses the laser sheet. To make the comparison between the results of both experiments possible, the flow field variables and time are made dimensionless. The dimensionless streamwise velocity u*x is written in the frame of reference of the cyclist, meaning that when u*x = 0, the velocity deficit is equal to the cyclist velocity, and when u*x = 1 there is no velocity deficit (equivalent to freestream conditions).
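A plausible form of the scaling, consistent with the stated limiting behaviour (with $U_c$ the cyclist speed and $l_{\mathrm{ref}}$ an assumed reference length), is

$$ u_x^* = \frac{U_c - u_x}{U_c}, \qquad t^* = \frac{t\,U_c}{l_{\mathrm{ref}}}, $$

so that air fully dragged along at the cyclist speed ($u_x = U_c$) gives $u_x^* = 0$, while undisturbed air ($u_x = 0$) gives $u_x^* = 1$.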
The uncertainty of the estimated CdA values is analyzed a posteriori, based on the standard deviation of the instantaneous drag area estimates and the number of independent samples (considering both the number of passages of the cyclist and the number of independent flow measurements in the wake during one passage). A detailed analysis of the measurement uncertainty and drag resolution of the Ring of Fire system for small-scale applications is reported in a recent work of Terra et al. (2018), where the effect of simplifications in the conservation-of-momentum equation is considered.
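In practice, such an a posteriori estimate amounts to the standard error of the mean,

$$ U_{C_dA} \approx k\,\frac{\sigma_{C_dA}}{\sqrt{N}}, \qquad k \simeq 2 \ \text{at 95\% confidence}, $$

where $\sigma_{C_dA}$ is the standard deviation of the instantaneous drag-area samples and $N$ the number of statistically independent samples (passages times independent wake measurements per passage).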
Measurement procedure
Before the passage of the athlete, the duct curtains are closed and the HFSB accumulate for approximately 2 min. Atmospheric wind requires continuous operation of the seeding generator in the outdoor experiment. In the indoor experiment, instead, bubble production is paused prior to the passage of the cyclist, and the momentum disturbance introduced by the micro-jets of the seeding rake decays.
The cyclist starts from the same predefined distance and crank angle for each passage, so as to have a well-matching athlete posture (leg position) in the measured area between passages. In the indoor experiment, the image acquisition is triggered by a photoelectric sensor, while in the outdoor experiment the user triggers the acquisition manually. Transferring the acquired images to mass storage requires 5 min with the high-speed PIV system, whereas typically 40 image pairs are recorded with the low-speed PIV system, allowing the experiment to be repeated within 1 min.
Air flow conditions before cyclist transit
The conditions before the passage of the cyclist rarely feature fully quiescent air. The environmental flow exhibits a velocity u_env which is in general non-zero, non-uniform and non-stationary, mostly due to external conditions and the seeding injection. An instantaneous flow field before the cyclist's passage is illustrated in Fig. 7 for both the indoor (left) and outdoor (right) experiments. To reduce the noise in the data, the velocity is averaged in time over 1.25 s (indoor) and 0.1 s (outdoor) before the passage of the athlete.
The indoor experiment was performed in a closed, and thus quieter, environment, whereas during the outdoor experiment the presence of moderate wind (0.5-1 m/s) could only be partly attenuated by the walls of the tunnel. This is clearly visible in Fig. 7, where the environment velocity is of the order of 5 cm/s in the indoor experiment and attains 30 cm/s outdoors.
The velocity distribution prior to the passage is taken into account in the drag computation via Eq. (2), as it contributes to the overall momentum budget, as also discussed by Terra et al. (2018). Furthermore, unsteady effects may influence the interaction of the wake with the initial velocity field, resulting in variations of the measured drag. The latter effects, however, are neglected, as they cannot be directly observed with the current experimental apparatus.
Velocity field in the cyclist wake
The flow fields in the wake of the cyclist are discussed for the indoor upright and time-trial configurations as well as for the outdoor time-trial configuration. Figure 8 shows a comparison of the instantaneous streamwise velocity u*x at t* = 3. Note that the cyclist contours in Fig. 8 indicate the general cross section of the athlete and do not reproduce the exact position of the legs. The development of an instantaneous wake over time, both indoors and outdoors, is available online as supplementary material. First, a comparison between the time-trial (Fig. 8-left) and upright positions (Fig. 8-middle) for the indoor experiment is given. The magnitude and location of the peak momentum deficit are similar in both cases. The out-of-plane velocity contour of the wake (u*x = 0.95), however, is clearly wider for the upright case. Interestingly, it has the same height in the time-trial position as in the upright position, despite the greater height of the cyclist in the upright position.
Next, the time-trial position is compared between the indoor (Fig. 8-middle) and outdoor experiments (Fig. 8-right). The wake observed in the outdoor experiment is wider and shows a slightly higher peak momentum deficit. Although the heights of both cyclists in the time-trial position were very similar, the u*x = 0.95 contour is consistently higher in the indoor experiment (see also Fig. 9). A possible reason is the different inclination angle of the torso of the two cyclists, generating a different amount of downwash over the back.
The temporal development of the ensemble-average streamwise velocity field (u*x) past the cyclist is shown in Fig. 9. The ensemble average is obtained from 28 and 10 individual runs for the indoor and outdoor experiments, respectively. The maximum deficit in the wake (~45%) is observed at the shortest time delay after the passage. The deficit is not uniformly distributed and attains its maximum behind the legs. Turbulent diffusion causes a rapid redevelopment of the flow in the wake, as is seen in the individual runs as well. Taking the wake boundary as the contour where the streamwise velocity attains 95% of the undisturbed value, one observes that flow entrainment smoothens the fine details of the streamwise velocity distribution, while within the wake the peak velocity deficit decreases. The diffusion process causes the wake to exceed the measurement region, with consequences for the uncertainty of the drag estimate. This occurs earlier in the outdoor experiment (t* ~ 9) than in the indoor experiment (t* ~ 13), which is ascribed to the higher intensity of velocity fluctuations in the surrounding environment. The higher acquisition frequency of the outdoor experiment provides a more detailed view of the temporal development of the wake, at the cost, however, of a lower accuracy and a larger number of erroneous vectors.
Next to the out-of-plane velocity, the similarity between the flow fields is also assessed by looking at the in-plane streamlines. The primary features are consistent throughout Fig. 9: close to the cyclist, a strong downwash exists near the vertical centreline. It can be reasoned that this characteristic is responsible for the downward movement of the wake structure over time. Furthermore, a strong inwash between 0.8 m and 1.2 m from the floor is induced by the main hip vortices in both experiments, further increased by the head vortices seen in Fig. 10. Over time, the hip/thigh vortex structure appears to outlast the smaller vortex structures, which means that the former will dominate the behavior of the far wake. There, the induced inwash causes a narrowing of the upper wake, while the broadening of the lower wake structure can be attributed to the outwash induced by the vortex pair, as well as to the ground, which constrains the downwash.
The analysis of the wake in terms of vorticity elucidates some characteristic aspects of the flow developing around and past the cyclist. Figure 10 illustrates and compares the distribution of streamwise vortices as measured indoors (upright and time trial) and outdoors (time trial). Positive vorticity corresponds to counter-clockwise rotating vortices, negative vorticity to clockwise ones. The flow structures characterizing the upright and time-trial wakes from the indoor experiment are compared in Fig. 10-left and -middle. There is substantial equivalence in the strength and position of the vortex structures, with the exception of the hip-thigh and the head vortices. For the former, the upright position shows higher vorticity on both sides. For the latter, the helmet vortices are negligible structures in the upright position.
Moreover, the upright posture shows new large-scale structures, namely the shoulder vortex and the arm vortex couple. It is hypothesized that one outer vortex is shed from each shoulder. Its generation mechanism is characteristic of what has been termed a 3D separation. In fact, on both sides these vortices co-rotate with the hip vortices. This structure arises as a consequence of the very low pressure on the upper back of the cyclist. The arm vortex couples consist of a vortex counter-rotating with respect to the shoulder vortex on the outside of the arm and a co-rotating one on the inside of the arm. They are assumed to originate from the arms extended forward towards the brake hoods.
The vorticity fields of the indoor and outdoor time-trial positions exhibit overall agreement, although some details are not exactly reproduced. This may be ascribed to the torso angle, which was not fully repeated between the indoor and outdoor experiments. The vorticity structure presented in Fig. 10 also shows a good similarity with that reported in the studies of Crouch et al. (2014, 2016).
Ensemble average drag area
Following the authoritative review article by Crouch et al. (2017), the drag results are presented as drag area (CdA). Indeed, the overall aerodynamic efficiency of the cyclist is governed both by the frontal area of cyclist and bike and by the drag coefficient (shape of the cyclist and bike). Based on Eq. (2) and on the procedures described in Sect. 3.3, the instantaneous drag area is computed for each passage as a function of the dimensionless time. In Fig. 11, the drag area evaluation is given for five passages with the cyclist in the upright posture. Half a crank cycle spans ∆t* ≈ 4 in the outdoor experiment, versus ∆t* ≈ 7.5 in the indoor experiment.
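For reference, the drag area lumps the drag coefficient and the frontal area into a single quantity through the standard relation

$$ D = \tfrac{1}{2}\,\rho\,U^{2}\,C_dA, $$

with $D$ the drag force, $\rho$ the air density and $U$ the cyclist speed, so that $C_dA$ is measured directly in m² without requiring a separate estimate of the frontal area.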
For t* ≤ 5, the drag area computed via Eq. (2) is underestimated, as the contribution of the static pressure in the measurement plane is wrongly estimated too close to the cyclist due to the larger velocity gradients (the in-plane gradients are modulated by the limited spatial resolution; the out-of-plane ones are completely neglected because stereo-PIV is used). In the case of the outdoor experiment, a CdA plateau persists until approximately t* ≈ 10, when a sudden drop in the drag area occurs. This can be related to part of the wake moving out of the measurement domain in several runs; the external atmospheric conditions and the narrower field of view of the outdoor experiment cause this problem. Moreover, the outdoor experiment generally exhibits larger fluctuations, especially in the near wake, which indicate a poorer control and repeatability of the experimental conditions.
The comparison between two distinct postures of the athlete is shown in Fig. 12 to illustrate the overall sensitivity of the Ring of Fire system to macroscopic variations of the drag area. Together with the time-average CdA, a shaded band of width twice the standard uncertainty of CdA represents the experimental uncertainty at the 95% confidence level. Interestingly, although both experiments were designed to obtain phase-locked average data, no clear cyclic trend depending on the crank angle is visible. This result differs from the findings of Crouch et al. (2014), who highlighted a 20% drag area variation with the crank angle at fixed t*. This outcome indicates that wake diffusion and turbulent mixing are the main phenomena affecting the streamwise wake trend.
Time ensemble average drag area
The measured drag area values for several configurations are summarized in Fig. 13. The interval 6 ≤ t* ≤ 9 is considered, where systematic errors due to the pressure term and to the wake exiting the measurement region can be neglected.
The drag area of the cyclist in the outdoor experiment is higher for both the time-trial and upright positions. These results are in agreement with the wake contours in Figs. 8 and 9, where a wider contour with a higher peak momentum deficit is observed for the outdoor cyclist. A relative difference between 20 and 35% is measured between the time-trial and upright positions, which is in agreement with the literature. The bigger difference between the two experiments is observed when comparing the mean drag area in the upright position, with the outdoor experiment returning a higher CdA value. It is hypothesized that this is due to a larger difference in frontal area between the upright postures compared to the difference between the time-trial postures.
Finally, the current results are compared to data collected from the literature. The results of aerodynamic research in cycling exhibit a large scatter due to differences in riders, bicycle models, postures, garments and general experimental conditions. Figure 14 (top) and (bottom) compare drag areas versus velocity, measured in the time-trial and upright positions, respectively, for different experiments and some computer simulations. Measurements during races are obtained at velocities between 12 and 16 m/s; in our experiments, the limited space for accelerating and braking led to a lower test velocity (5-8 m/s). However, Grappe (2009) showed that in the range of 5-20 m/s the drag area of a cyclist remains approximately constant. The results from the current experiments fall within this large cloud of data and correlate favorably with wind tunnel and on-site experiments. In contrast, results from CFD simulations yield systematically lower values of drag area.
Conclusions
Large-scale stereo-PIV measurements are conducted to determine the aerodynamic drag of a moving cyclist in indoor and outdoor on-site conditions using the control volume approach. The flow is measured in the wake of a cyclist moving at 5 m/s and 8 m/s for the indoor and outdoor experiments, respectively. Instantaneous as well as ensemble-average streamwise velocity fields have been obtained. Despite the differences between the two experiments in cyclist geometry, bike model and cycling speed, the flow fields in the near wake of the riders compare well between both experiments and the literature. The instantaneous and ensemble-average aerodynamic drag is evaluated via a control volume approach along the wake.
"year": 2019,
"sha1": "3fab4eca47481b8074ad406e0c6c9d0b08c2952f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00348-019-2737-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "3fab4eca47481b8074ad406e0c6c9d0b08c2952f",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
Entire solutions to 4-dimensional Ginzburg-Landau equations and codimension 2 minimal submanifolds
We consider the magnetic Ginzburg-Landau equations in $\mathbb{R}^4$ $$ \begin{cases} -\varepsilon^2(\nabla-iA)^2u = \frac{1}{2}(1-|u|^{2})u,\\ \varepsilon^2 d^*dA = \langle(\nabla-iA)u,iu\rangle \end{cases} $$ formally corresponding to the Euler-Lagrange equations for the energy functional $$ E(u,A)=\frac{1}{2}\int_{\mathbb{R}^4}|(\nabla-iA)u|^{2}+\varepsilon^2|dA|^{2}+\frac{1}{4\varepsilon^2}(1-|u|^{2})^{2}. $$ Here $u:\mathbb{R}^4\to \mathbb{C}$, $A: \mathbb{R}^4\to\mathbb{R}^4$ and $d$ denotes the exterior derivative acting on the one-form dual to $A$. Given a 2-dimensional minimal surface $M$ in $\mathbb{R}^3$ with finite total curvature and non-degenerate, we construct a solution $(u_\varepsilon,A_\varepsilon)$ which has a zero set consisting of a smooth 2-dimensional surface close to $M\times \{0\}\subset \mathbb{R}^4$. Away from the latter surface we have $|u_\varepsilon| \to 1$ and $$ u_\varepsilon(x)\, \to\, \frac {z}{|z|},\quad A_\varepsilon(x)\, \to\, \frac 1{|z|^2} ( -z_2 \nu(y) + z_1 {\textbf{e}}_4), \quad x = y + z_1 \nu(y) + z_2 {\textbf{e}}_4 $$ for all sufficiently small $z\ne 0$. Here $y\in M$ and $\nu(y)$ is a unit normal vector field to $M$ in $\mathbb{R}^3$.
Introduction
We consider the magnetic Ginzburg-Landau energy in $\mathbb{R}^n$
$$ E(u, A)=\frac{1}{2}\int_{\mathbb{R}^n}|(\nabla-iA)u|^{2}+\varepsilon^2|dA|^{2}+\frac{1}{4\varepsilon^2}(1-|u|^{2})^{2} \qquad (1.1) $$
whose argument is a pair $(u, A)$, where $u : \mathbb{R}^n\to\mathbb{C}$ is an order parameter and $A : \mathbb{R}^n\to\mathbb{R}^n$ is a vector field which we also regard as a one-form $A(x) = A_j(x)\,dx^j$ in $\mathbb{R}^n$, so that its exterior derivative is
$$ dA = \sum_{j<k}(\partial_j A_k - \partial_k A_j)\, dx^j\wedge dx^k $$
and $|dA|^2$ in (1.1) is given by
$$ |dA|^2 = \sum_{j<k}(\partial_j A_k - \partial_k A_j)^2. $$
Energy (1.1) arises in the classical Ginzburg-Landau theory of superconductivity; the quantity $|u|^2$ measures the density of Cooper pairs of superconducting electrons and $A$ is the induced magnetic potential. The zero set of $u$ is interpreted as the set of defects of the underlying material where superconductivity is lost. The functional $E$ arises from a $U(1)$ gauge theory, meaning that it is invariant under the gauge transformations $(u, A)\mapsto(ue^{i\gamma}, A+\nabla\gamma)$ for any $\gamma : \mathbb{R}^n\to\mathbb{R}$.
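The invariance is immediate to verify: one computes

$$ \big(\nabla - i(A+\nabla\gamma)\big)\big(ue^{i\gamma}\big) = e^{i\gamma}(\nabla - iA)u, \qquad d(A + d\gamma) = dA, $$

so each of the three terms in (1.1) is unchanged under the transformation.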
The associated Euler-Lagrange equations for $E$ are given by
$$ \begin{cases} -\varepsilon^2(\nabla-iA)^2u = \frac{1}{2}(1-|u|^{2})u,\\ \varepsilon^2 d^*dA = \langle(\nabla-iA)u,iu\rangle. \end{cases} $$
In the above expression the brackets $\langle\cdot,\cdot\rangle$ represent the standard inner product of complex numbers. The operator on the left-hand side of the second equation reads, in Euclidean coordinates,
$$ (d^*dA)_j = \partial_j(\operatorname{div} A) - \Delta A_j. $$
The term $\langle(\nabla-iA)u,iu\rangle$ is dual to a gauge-invariant real-valued 1-form called the superconducting current.
This type of connection between solutions of semilinear PDEs and minimal submanifolds is well understood in the Allen-Cahn case
$$ \varepsilon^2\Delta u + (1-u^{2})u = 0 $$
for real-valued functions $u$. Solutions that concentrate along minimal hypersurfaces have been broadly studied after the pioneering work by Modica-Mortola [26] and De Giorgi's conjecture [9]. In general one expects the existence of solutions $u_\varepsilon(x)$ whose zero set lies close to a given minimal hypersurface $M$ and $u_\varepsilon(x)\approx\tanh(z/\sqrt{2}\,\varepsilon)$, where $z$ is a normal coordinate to $M$. This is the principle behind various constructions, including the works by Pacard and Ritoré [32] in compact manifolds and by the second author, M. Kowalczyk and J. Wei [12,13] for entire solutions in $\mathbb{R}^n$.
In [13] such a construction has been achieved for n = 3 in (1.8) and for each given embedded minimal surface $M$ in $\mathbb{R}^3$ with finite total curvature which is non-degenerate in a suitable sense, the simplest example being the catenoid. The purpose of this paper is to construct entire solutions to (1.6) with the asymptotic behaviour (1.7) around the class of codimension 2 minimal submanifolds in $\mathbb{R}^4$ corresponding to the embedding $M\times\{0\}\subset\mathbb{R}^4$ of the class of minimal surfaces $M$ in $\mathbb{R}^3$ considered in [13].
Thus, we look for entire solutions $(u_\varepsilon, A_\varepsilon)$ of system (1.6) such that the zero set of $u_\varepsilon$ is a codimension 2 manifold close to $M = M\times\{0\}$ and $|u_\varepsilon|\to 1$ as $\varepsilon\to 0$ on compact subsets of $\mathbb{R}^4\setminus M$. Next we recall some major features of this celebrated class of minimal surfaces.
Complete minimal surfaces of finite total curvature in $\mathbb{R}^3$. A surface $M$ embedded in $\mathbb{R}^3$ is said to be a minimal surface if it is a critical point of the area functional, or equivalently if its mean curvature $k_1 + k_2$ vanishes identically on $M$, where $k_1, k_2$ are the principal curvatures. Denoting by $K = k_1k_2 \le 0$ the Gauss curvature, we say that $M$ has finite total curvature if $\int_M |K| < \infty$.
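A standard example illustrating both notions is the catenoid,

$$ X(s,\theta) = (\cosh s\,\cos\theta,\ \cosh s\,\sin\theta,\ s), \qquad K = -\frac{1}{\cosh^4 s}, \qquad dA = \cosh^2 s\,ds\,d\theta, $$

for which $k_1 + k_2 = 0$ and $\int_M |K|\,dA = \int_0^{2\pi}\!\!\int_{-\infty}^{+\infty}\operatorname{sech}^2 s\,ds\,d\theta = 4\pi < \infty$.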
The theory of embedded, complete minimal surfaces $M$ with finite total curvature in $\mathbb{R}^3$ has reached a notable development in the last four decades. For more than a century, the plane and the catenoid were the only two known examples of such surfaces. While a general theory for those surfaces had long been available, only in 1981 did Costa discover a first non-trivial example, embedded, orientable, with genus 1, later generalised by Hoffman and Meeks to arbitrary genus, see [8,17]. These surfaces have three ends, two catenoidal and one planar. Many other examples of multiple-end embedded minimal surfaces have been found since; see for instance [20,48] and references therein. Outside a large cylinder, a general manifold $M$ in this class decomposes into the disjoint union of $m$ unbounded connected components $M_1,\ldots,M_m$, called its ends, which are asymptotic to either catenoids or planes, all of them with parallel axes, see [31,19,41]. After a rotation, we can choose coordinates $x = (x_1, x_2, x_3) = (x', x_3)$ in $\mathbb{R}^3$ and a large number $R_0$ such that each end can asymptotically be represented as a graph
$$ M_k = \{\, x\in\mathbb{R}^3 :\ |x'| > R_0,\ x_3 = F_k(x')\,\}, $$
where, for suitable constants $a_k$, $b_k$, $b_{ik}$, the function $F_k$ satisfies
$$ F_k(x') = a_k\log|x'| + b_k + b_{ik}\,\frac{x_i}{|x'|^2} + O(|x'|^{-3}). \qquad (1.9) $$
The coefficients $a_k$ are ordered and balanced, in the sense that
$$ a_1 \le a_2 \le \cdots \le a_m, \qquad a_1+\cdots+a_m = 0. \qquad (1.10) $$
The second variation of the area functional corresponds to the Jacobi operator of $M$,
$$ J(h) = \Delta_M h + |A_M|^2 h, $$
where $|A_M|^2 = -K$ is the square norm of the second fundamental form. For later purposes, we record the curvature decay along the ends for $r$ large,
$$ |A_M(y)|^2 \le \frac{C}{1+r(y)^{4}}, \qquad (1.12) $$
which is a consequence of the expansion (1.9), see [13].
We say that $M$ is non-degenerate if the second variation of the area functional has no bounded kernel other than that induced by rigid motions, that is,
$$ J(h) = 0,\ \ h\in L^\infty(M) \quad\Longrightarrow\quad h\in\operatorname{span}\{z_0, z_1, z_2, z_3\}, \qquad (1.13) $$
where the $z_j$ denote the Jacobi fields associated with rigid motions. The assumption of non-degeneracy is known to hold in some notable cases of embedded minimal surfaces in $\mathbb{R}^3$, like the catenoid or the Costa-Hoffman-Meeks surface of any genus, see [29,30,28].
Main results.
In what follows, $M$ designates a complete minimal surface with finite total curvature embedded in $\mathbb{R}^3$, which is also non-degenerate in the sense of (1.13). For the moment we assume that the ordering of the ends in (1.10) is strict, namely
$$ a_1 < a_2 < \cdots < a_m. \qquad (1.14) $$
Our main result states the existence of a solution of Problem (1.6) with a profile as in (1.7), with $M = M\times\{0\}$ and $(\nu_1, \nu_2) = (\nu, \mathbf{e}_4)$, where $\nu$ is a unit normal vector field on $M$. We will identify $M\times\{0\}$ with $M$.
From (1.15), the asymptotic behavior of the predicted solution near $M$ for small $z\neq 0$ follows. The proof yields that the corresponding bounds in the statement of the theorem can also be obtained for derivatives of any order. In fact, the contraction mapping principle, the invertibility of $D_z u_0(0)$ and the implicit function theorem yield that the set of values $x$ with $u_\varepsilon(x) = 0$ can be described as a smooth surface of the form $x = y + h_1^1(y)\nu(y) + h_1^2(y)\mathbf{e}_4 + O(\varepsilon^2)$, $y\in M$. It is possible to generalise Theorem 1 by removing the hypothesis (1.14) of non-parallel ends. In this case, we choose a balanced, ordered vector of real numbers (1.17). We will prove the existence of a solution $(u_\varepsilon, A_\varepsilon)$ such that $u_\varepsilon^{-1}(0)$ lies uniformly close to $M$ for $r(x) = O(1)$, and its $k$-th end, $k = 1,\ldots,m$, lies at uniformly bounded distance from the graph (1.18). As in [13], we need to make a further geometric requirement on the $\lambda_k$'s. If two ends are parallel, say $a_k = a_{k+1}$, then we will need $\lambda_{k+1} > \lambda_k$, for otherwise the graphs (1.18) would eventually intersect. For some $\sigma\in(0,1)$, we require the gap condition (1.19). With these assumptions we can state the general result.
The connection between critical points of functionals of Allen-Cahn or Ginzburg-Landau types and, respectively, codimension 1 and 2 minimal submanifolds has long been known, and a large literature has been dedicated to that subject. Among other works, in the Allen-Cahn case we can mention [25,42,21,47,38,12,13,14,32]. Limits of critical points of min-max type to build codimension 1 minimal surfaces on compact manifolds have recently been used in [6,2,15] as a PDE alternative to the Almgren-Pitts min-max approach [35,24,40].
As for complex-valued Ginzburg-Landau type equations, point vortex concentration in the two-dimensional case has been analyzed in many works, among them the classical books [4,39,33]. In higher dimensions, limits towards geodesics in three-dimensional space, or more generally codimension 2 minimal surfaces, have been analyzed from the viewpoint of the calculus of variations in [37,7,22,5,27,18,34].
In the self-dual magnetic Ginzburg-Landau context of energy (1.1) in a compact manifold, Pigati and Stern [34] proved that critical points with uniformly bounded energies approach the weight measure of a stationary, codimension 2 integral varifold. Our result can be interpreted as a form of converse of their statement in the (non-compact) entire space for a special class of minimal submanifolds. In the recent work [23] Liu, Ma, Wei and Wu have found a different class of entire solutions to (1.6) connected with the so-called saddle solution of Allen-Cahn equation and a special minimal surface built by Arezzo and Pacard [1]. Symmetries allow them to reduce the problem to one in two variables.
After completing this paper, we became aware of the very interesting work by De Philippis and Pigati [10]. They have established a result that complements the findings in [34] for the scenario of a non-degenerate codimension 2 minimal submanifold. Their method, based on variational techniques, does not provide detailed asymptotic information. However, they have successfully resolved the more challenging case of Ginzburg-Landau equations where no induced magnetic field is present. Our techniques do not extend to cover that particular case.
Preliminaries
In this section we discuss relevant features of the model that will be used in the proof of Theorems 1 and 2. We examine gauge invariance and its effect on the linearised operator, from which the important decomposition (2.10) below follows. Moreover, we set the notation that will be used throughout the paper.
Here and in what follows we define the dot product between pairs as
$$ (\phi_1, \omega_1)\cdot(\phi_2, \omega_2) = \langle \phi_1, \phi_2\rangle + \omega_1\cdot\omega_2. $$
For a pair $W = (u, A)$, we define its gauge transforms through the family of maps
$$ G_\gamma[W] := (ue^{i\gamma},\ A + d\gamma), $$
where $G_\gamma$ is defined in this way for any real-valued map $\gamma$. One immediate consequence of this fact is that the global minimiser $(1, 0)$ is moved into other global minimisers by gauge transformations, providing directly a collection of solutions to $S(W) = 0$ given by $(e^{i\gamma}, d\gamma)$.
where Ω is the domain considered.
2.2. The linearised operator. The Fréchet derivative $S'(W)$ around $W$ is given by (2.5), from which we infer that $\Phi$ is orthogonal to the gauge-kernel if and only if (2.7) holds. The linearised operator (2.5) can be expressed in a more useful form through the decomposition (2.8), where $-\Delta\omega := (d^*d + dd^*)\,\omega$ is the Hodge Laplacian on forms. Defining the operators $\Theta_W$ and $\Theta_W^*$, we can write (2.8) as (2.10) and obtain a much clearer insight into the linearised operator. Indeed, for increments satisfying (2.7) we have $S'(W)[\Phi] = L_W[\Phi]$, where $L_W$ is an elliptic operator which behaves at infinity like $-\Delta + \mathrm{Id}$ for all finite energy configurations $W$. For this reason, we refer to $L_W$ as the gauge-corrected linearised operator.
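A natural form for these operators, consistent with the gauge orbit $\gamma\mapsto(ue^{i\gamma}, A+d\gamma)$ (and stated here as an assumption), is

$$ \Theta_W[\gamma] = (iu\gamma,\ d\gamma), \qquad \Theta_W^*[(\phi,\omega)] = \langle \phi, iu\rangle + d^*\omega, $$

so that the orthogonality condition (2.7) reads $\Theta_W^*[\Phi] = 0$.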
The natural space where the operator $L_W$ is defined and continuous is the space $H^1_W(M)$ of functions $\Phi = (\phi, \omega)$ for which the covariant Sobolev norm is finite, where $\nabla$ is the Levi-Civita connection on $M$ and $A$ is the 1-form of $W$. We introduce a notation that allows us to write the gauge-corrected linearised operator $L_W$ in a compact way, defining a gradient-like operator $\nabla_W$, where again $W = (u, A)$. Observe that applying the operator $d^* + d$ to a $p$-form one obtains the formal sum of a $(p-1)$-form and a $(p+1)$-form. In what follows this will not play a role; we only use it in order to define the Hodge operator as the "square" of a first-order operator. If in particular $\omega$ is a 1-form, then we define its contraction with any vector $v$. Composing $\nabla_W$ with its formal adjoint, we obtain a Laplacian-like operator. With this notation, the linearised operator can be written compactly; the point of this expression is that it highlights the behaviour at infinity of $L_W$, namely $L_W \sim -\Delta_W + \mathrm{Id}$. This is due to the fact that $T_W$ vanishes at infinity whenever $W$ is a finite energy configuration.

2.3. The planar case. Suppose we have a solution $W_0 = (u_0, A_0)$ to $S(W_0) = 0$ in the Euclidean space $\mathbb{R}^n$. As pointed out in [43], in this case there is an $n$-parameter family of isometries of the space, namely the translations, which generate elements of the kernel of $S'(W_0)$. Direct differentiation along the $j$-th direction produces elements $V_j = (\partial_j u_0, \partial_j A_0)$ which are not in $L^2$ and in general have non-vanishing projection along the gauge-kernel. On the other hand, a suitable correction amounts to projecting $V_j$ onto the orthogonal complement of the gauge-kernel, in the sense of the corresponding identities, which hold for every $j\in\{1,\ldots,n\}$.
As already mentioned, one case of particular interest for us is the planar case $\mathbb{R}^2$ with the single-vortex solution centred at the origin, that is, the solution $U_0 = (u_0, A_0)$ introduced in (1.3). The gauge-corrected linearised operator around $U_0$, denoted by $L$, is non-degenerate and stable; it is known from [43] and [16] that its kernel is generated only by the elements (2.14), which in this case are given by (2.16), where $f$ and $a$ are the solutions to system (1.4). We claim that these elements can be computed explicitly. Indeed, using that $d^*A_0 = 0$, we obtain (2.17). Let $\alpha = 1$; then, using the first-order system (1.5), and arguing similarly for the case $\alpha = 2$, the claimed expressions follow.
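As a numerical illustration, the profiles $f, a$ of the degree-one vortex can be computed by shooting. The sketch below assumes that the first-order system (1.5) takes the standard self-dual form $f' = (1-a)f/r$, $a' = (r/2)(1-f^2)$ with $f(0)=a(0)=0$ and $f, a\to 1$ at infinity; this specific form, like every identifier in the code, is an assumption for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed self-dual form of the first-order vortex system:
#   f'(r) = (1 - a) f / r,   a'(r) = (r / 2) (1 - f^2),
# with f(0) = a(0) = 0 and f, a -> 1 as r -> infinity.
def rhs(r, y):
    f, a = y
    return [(1.0 - a) * f / r, 0.5 * r * (1.0 - f * f)]

def f_at_rmax(c, r_max=15.0):
    # Near r = 0 the degree-one profile behaves like f ~ c r, a ~ r^2 / 4.
    r0 = 1e-6
    sol = solve_ivp(rhs, (r0, r_max), [c * r0, 0.25 * r0**2],
                    rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

# Bisection on the shooting parameter c: too small and f collapses to 0,
# too large and f overshoots past 1.
lo, hi = 0.1, 2.0
for _ in range(50):
    c = 0.5 * (lo + hi)
    if f_at_rmax(c) < 1.0:
        lo = c
    else:
        hi = c
print("shooting parameter c ~", 0.5 * (lo + hi))
```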
Let us denote $Z_{U_0} := \ker L = \operatorname{span}\{V_1, V_2\}$. As shown in [43], the coercivity estimate (2.18) holds on $Z_{U_0}^{\perp}$ for some $c > 0$. Lastly, we remark that the coercivity estimate (2.18) directly yields, via the Lax-Milgram theorem, a solvability theory for the equation $L[\Phi] = \Psi$ for a right-hand side $\Psi\in Z_{U_0}^{\perp}$. Precisely, the following result holds.
where $V_\alpha$ is as in (2.16). Then there exists a unique solution, which satisfies the estimate (2.21) for some $C > 0$.
Outline of the proof
In this section we sketch the proof of Theorems 1 and 2 and explain the main ideas. We follow the lines of [11,12,13,14,32,33,46]: we find a precise global approximation starting from a non-degenerate lower-dimensional solution, and then build an actual solution as a perturbation of this approximation. A major difference in this case is that the equation for the perturbation is a fixed point formulation whose main term has an infinite-dimensional kernel due to gauge invariance.
Firstly, we define the approximate solution, which in our case will be a pair denoted $W$. This alone will encode the local behaviour (1.15) around the vortex set. By embeddedness, any point sufficiently close to $M$ can be written in Fermi coordinates
$$ x = X(y, z) = y + z_1\nu_1(y) + z_2\nu_2(y), \qquad y\in M,\ |z| < \delta. $$
We choose as a local approximation the pair $W_0$ defined below, where we leave implicit the composition with $X$ and $h : M\to\mathbb{R}^2$ is a parameter which depends on $\varepsilon$. A precise calculation of the error of approximation $S(W_0)$, performed in §5.3, suggests that a better approximation is obtained by setting $W_1 = W_0 + \varepsilon^2\Lambda$, where $\Lambda$ is a decaying term. Using the minimality of $M$, we compute the error, where $k_1, k_2$ are the principal curvatures of $M$ and $\mathrm{Rem} = O(\varepsilon^3 e^{-\sigma|t|})$. Here $t = z/\varepsilon - h(y)$ and $J$ is the Jacobi operator of $M$, namely the second variation of the area functional. This approximation will be good enough for our purposes. It remains to extend it beyond the support of the Fermi coordinates. For a suitable cut-off function $\zeta$ supported in a neighbourhood of $M$, we define $W$ by interpolating between $W_1$ and a pure gauge $\Psi$, carefully chosen in order to create only very small extra terms in the global error $S(W)$.
Next, we set up the perturbation scheme. We seek a true solution of the form $W + \Phi$, which is equivalent to solving $S(W + \Phi) = 0$, where $S$ is given by (2.1). We write this equivalently as an equation for $\Phi$, where $E = S(W)$ is the error of approximation, $S'(W)$ is the linearised operator of (2.1), and $N(\Phi)$ is formally a quadratic operator in $\Phi$. Starting from (3.3), it is evident that finding an inverse of the linearised operator $S'(W)$, bounded in a suitable topology, would allow us to rephrase the equation for $\Phi$ as a fixed point problem. However, as already mentioned, gauge invariance makes the invertibility of $S'(W)$ challenging. Instead, we develop an invertibility theory for $S'(W) + \Theta_W\Theta_W^* = L_W$, the "gauge-corrected" linearised operator. More precisely, the key equation to be solved is (3.4). For such $\Phi$, the original problem is then solved up to the term $-\Theta_W\Theta_W^*[\Phi]$ on the right-hand side of (3.2); this is the price to pay for the gauge correction. Actually, as we will see shortly, the natural degeneracies of the manifold due to ambient isometries prevent, in principle, the possibility of solving (3.4) as it stands. Again, the equation is solvable up to a correction term on the right-hand side, which will be added to the one already present in (3.5). Lastly, we will show a posteriori that the natural symmetries of the constructed solution make all these correction terms vanish, thus producing a true solution.
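In display form, the scheme just described reads

$$ S(W+\Phi) = 0 \quad\Longleftrightarrow\quad E + S'(W)[\Phi] + N(\Phi) = 0, $$

and the gauge-corrected equation actually solved first is

$$ L_W[\Phi] = S'(W)[\Phi] + \Theta_W\Theta_W^*[\Phi] = -E - N(\Phi), $$

whose solution solves the original problem up to the extra term $-\Theta_W\Theta_W^*[\Phi]$ on the right-hand side.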
The main technical result of the paper is the invertibility theory for $L_W$. The proof is based on the fact that, close to the manifold, $L_W$ decomposes into a leading part plus a first-order operator $D$; here $L$ is the 2-dimensional linearised operator given by (2.15), which is independent of the $y$ variable. From this expression one can see the role of the Jacobi operator in the resolution of (3.4), which we can formally rewrite accordingly. We can obtain an improvement of the approximation by solving (3.7). Using (3.1), (3.7) becomes an equation of a form that can indeed be solved, using the non-degeneracy hypothesis on the Jacobi operator, up to adding correction terms proportional to the Jacobi fields (1.12).
To sum up, the proof is divided into the following steps:
• Define an approximation to a solution $W_0$, locally around $M$, using the canonical profile $U_0$.
• Add a correction term of order $\varepsilon^2$ to $W_0$, obtaining an improved approximation $W_1$.
• Glue $W_1$ to a suitable pure gauge defined away from $M$, to get a global approximation $W$.
• After formulating the problem as a fixed point, solve it using the invertibility theory of the gauge-corrected linearised operator, up to a correction term that depends on $h$.
• Use the non-degeneracy hypothesis for the Jacobi operator to find a suitable perturbation $h$ that cancels the correction term above. We can do so up to adding new correction terms.
• Show that the corrections automatically vanish.
Norms
In this section we introduce several norms which will be used throughout the rest of the paper. For $0 < \gamma < 1$ we denote the usual Hölder seminorm over a box, and the $C^{k,\gamma}$ norm is defined accordingly. In what follows we will need to measure the size and decay of three types of functions, namely those defined on $M$, those defined on the product $M\times\mathbb{R}^2$ and those defined on $\mathbb{R}^4$.
Norms on $M$. Let $\phi$ be a function defined on $M$. We want to define a norm accounting for the decay of $\phi$ along the manifold, through the map defined below. Norms on $M\times\mathbb{R}^2$. We will use a weighted version of (4.3). For $\mu\ge 0$ and $\sigma\in(0,1)$, we define the weighted norm (4.5) and, for any $k\ge 1$, the norm (4.6). We will use the norms (4.5) and (4.6) interchangeably for functions $\varphi = \varphi(y, x)$ defined on $M\times\mathbb{R}^2$ and for functions $\varphi = \varphi(X_h(y, x))$, possibly defined in the whole space $\mathbb{R}^4$ but supported in a region where Fermi coordinates are defined, and hence understood as defined on $M\times\mathbb{R}^2$.
Norms on $\mathbb{R}^4$. Finally, we introduce a standard weighted norm in the whole space $\mathbb{R}^4$, for functions $\psi$ defined on $\mathbb{R}^4$ and $\mu\ge 0$.
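A convention consistent with the weighted norms used in the rest of the paper (the exact form here is an assumption) would be

$$ \|\psi\|_{C^{k,\gamma}_\mu(\mathbb{R}^4)} := \sup_{x\in\mathbb{R}^4}\,(1+|x|)^{\mu}\,\|\psi\|_{C^{k,\gamma}(B_1(x))}. $$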
The approximate solution
Now we start carrying out the details of the proof, beginning with the construction of the first, local approximation $W_0$. In what follows, $M\subset\mathbb{R}^4$ is a two-dimensional complete minimal surface with finite total curvature, which is non-degenerate in the sense of (1.13). We consider the case in which $M$ is embedded in $\mathbb{R}^3$ and the subsequent immersion in $\mathbb{R}^4$ is the canonical one.
In the following calculations we adopt the following convention unless otherwise specified: Latin letters classically used for indexing ($i, j, k, \ldots$) will be used for coordinates tangential to the manifold, while Greek letters ($\alpha, \beta, \gamma, \ldots$) will denote coordinates in the normal directions. If needed, we will use the first letters of the Latin alphabet ($a, b, c, \ldots$) to indicate all coordinates at once. 5.1. First local approximation. We parametrise a neighbourhood of $M$ in $\mathbb{R}^4$ as follows: let $(\nu_1, \nu_2) := (\nu, \mathbf{e}_4)$ and let $X : O\to N$ be the Fermi coordinates around $M$, with $O$ chosen through a function $\tau : M\to\mathbb{R}_+$. Let now $h = (h_1, h_2)$ be a vector-valued function defined on $M$ and consider the change of coordinates $z = \varepsilon(t + h(y))$, which generates a new parametrisation $X_h : O_h\to N$. Observe that, in principle, with this choice of $\tau$ the map $X$ is not necessarily one-to-one. On the other hand, in §5.2 below we will make a precise assumption on $h$ that will guarantee injectivity for $X_h$. We define the first approximation to be (5.3), leaving implicit the composition with the chart $X_h$.
Precise assumptions on h.
Here we state precisely what the assumptions on the vertical perturbation $h$ are. As we have seen in Section 3, the step of making the projections vanish will amount to solving a system whose principal part is the Jacobi operator acting on $h$, and this system will be solved via a fixed point argument in a space of functions whose size is small in $\varepsilon$. This procedure cannot be carried out in a straightforward way, as the right-hand side of the fixed point problem is not automatically small.
This issue has already been addressed in [13, §3.3] and is due to the fact that if two consecutive ends of M are parallel, the error of approximation created in the region between the two ends is very small but does not decay with the distance from the origin, and eventually dominates the whole right-hand side.
This issue can be solved by adjusting the parameter $h$. Consider any $m$-tuple of real numbers $(\lambda_1,\ldots,\lambda_m)$ satisfying (5.6). We have the following result, the proof of which is postponed to Section 12.
Lemma 2. For any real numbers λ 1 , . . . , λ m satisfying (5.6) there exists a smooth function We remark that the gap condition (1.19) is not necessary for the proof of Lemma 2 but will be useful later to obtain estimates.
At this point we make the following assumption: we decompose $h$ as $h = h_0 + h_1$. For any vector $\lambda$ satisfying (5.6), let $h_0^1$ be the function predicted by Lemma 2 and let $h_0 := (h_0^1, 0)$. As explained in [13, §3.3], such a choice of $h_0^1$ will let the ends of the $h$-perturbed manifold drift apart fast enough to prevent the creation of the non-decaying error term. On the other hand, we assume that $h_1$ satisfies a smallness bound, for some $C > 0$.

5.3. The size of the error $S(W_0)$. In order to measure how good an approximation $W_0$ is, we compute the error of approximation $S(W_0)$ in $N$, where $S$ is given by (2.1). We begin by expressing the differential operators, namely the connection Laplacian and $d^*d$, in the coordinates $X_h$.
Given a 1-form $\omega = \omega_a dx^a$, the general coordinate expression for $-\Delta\omega$ is the one below, where $\partial^\omega_a = \partial_a - i\omega_a$ are the components of the connection gradient $\nabla^\omega$. Similarly, the operator $d^*d$ acting on 1-forms can be written in the following way: let $\omega_{ab} := \partial_a\omega_b - \partial_b\omega_a$. We will first write the operators in the coordinates $X$. First, remark that the Euclidean metric on $N$ can be expressed as a function of the Fermi coordinates as a block matrix in which $I_2$ is the $2\times 2$ identity matrix. This is a consequence of the well-known corresponding expression in codimension 1 (see, for instance, [14, Lemma 11.1]) and of our choice of embedding in $\mathbb{R}^4$. In particular, we can write the metric in terms of $g^z_{ij}$, the metric restricted to the manifold. We introduce the functions below and let $H^\beta_z$ be the mean curvature in the direction of $\nu_\beta$. Observe that, by the flatness of the immersion, $H^2_z\equiv 0$. The mean curvature vector $H_z$ can be expanded in terms of the principal curvatures of $M$, see [12]. We will use a truncated expansion, where we use the minimality of $M$; see for instance [12]. Let us consider local coordinates for $M$ around a generic point $p$. Using these, we end up with a coordinate expression for the operators; we recall that all coefficients are evaluated at $(\xi, z)$. Here we are again abusing notation, leaving implicit the composition with the chart. We can also expand these expressions further. Next we analyse how these expansions transform when we switch to $X_h$. In what follows we will denote $h^\beta_j = \partial_j h^\beta$, $h^\beta_{ij} = \partial_{ij} h^\beta$ and so on. Changing coordinates yields corresponding expressions for a function $\varphi = \varphi(\xi, t)$ and for a 1-form, where all coefficients above have to be evaluated either at $\xi$ or at $(\xi, t + h)$. A useful remark which applies to our case is that if the 1-form $\omega$ is purely orthogonal to $M$, in the sense that $\omega = \omega_\alpha(t)dt^\alpha$, then only the terms of the form $\omega_{\beta\gamma}$ do not vanish. Evaluating $S(W_0)$ using the formulas just found and the fact that $U_0$ solves (1.2), we obtain an expression in which $V_\beta(t)$ is as in (2.16), with $\nabla_{U_0}$ as in (2.12). Now, let $\sigma\in(0,1)$. Using (5.10) and (5.13), we can write an expansion of the first error of approximation, where $\nabla_{\beta\gamma,U_0}$ is as in (5.15).
Improvement of approximation. The term
appearing in (5.16) is the largest term of the error not contributing to the projections. The next step will be to improve the approximation $W_0$ in order to eliminate (5.17). While it may not be strictly necessary, doing so makes the expansions of both $h$ and the remainder $\Phi$ more precise: the solvability conditions are indeed automatically satisfied in the terms quadratic in $\varepsilon$.
Thus we improve the approximation by setting $W_1 = W_0 + \varepsilon^2\Lambda$. We remark that the terms $W_0$, $W_1$ and $\Lambda$ are all defined only in a region close to $M$, as they are all defined through Fermi coordinates. The corresponding error can be written in expanded form; we recall that $L$ is the two-dimensional linearised operator around $U_0$ given by (2.15). If we choose $\Lambda$ suitably, the term (5.17) in the error of approximation is erased. It is natural to look for a solution in separated form; the existence of such solutions is given by Lemma 1, provided the right-hand sides satisfy the orthogonality conditions. Also, since the right-hand sides in (5.19) are both $O(e^{-|t|})$ for $|t|$ large, a standard barrier argument, along with the fact that $L \sim -\Delta + \mathrm{Id}$ at infinity, ensures that
$$ \sup_{t\in\mathbb{R}^2} e^{\sigma|t|}\,|\Lambda(y,t)| < \infty, \qquad \forall\, y\in M, $$
for any $0 < \sigma < 1$. In this way, the largest part of $S(W_0)$ not contributing to the projections is erased. We can estimate the error created in $N$; thus, the error $S(W_1)$ can be written as (5.20), where
$$ |R_1(y, t, \iota, \varsigma)| + |\partial_\iota R_1(y, t, \iota, \varsigma)| + |\partial_\varsigma R_1(y, t, \iota, \varsigma)| \le C\varepsilon^3 (1 + r(y))^{-4} e^{-\sigma|t|}. $$
and we used that $J[h_0] = 0$. Using (1.11) we can estimate the error (5.20) term by term. Similar calculations can be carried out for all the other terms in the expression of $S(W_1)$; altogether we obtain the desired bound. Now that we know precisely the size of all the terms involved in the error (5.20), all that is left to obtain an actual approximation is to extend it beyond the support of the Fermi coordinates.
5.5. The global approximation. The approximation obtained so far is sufficient for our purposes when considered in a neighbourhood of $M$. The next step is to extend $W_1$ beyond the support of the Fermi coordinates in a way that keeps the error $S$ small in a norm that accounts for decay along the ends. We begin by considering an extension of the previously defined set $N$, on which the Fermi coordinates are still well-defined and the error of approximation maintains the same size, with $\delta$ a small positive number. Remark that $\varepsilon|t + h_1| = |z - \varepsilon h_0|$, where $z$ is the normal coordinate to the manifold; this allows us to define, via $X_h$, the set $M_0$ of (5.23). We will define the approximate global solution by gluing $W_1$ with a pair $\Psi = (\psi, d\psi/i\psi)$, a pure gauge of the form (2.4), for some $S^1$-valued function $\psi$ which is smooth on the support of $(1 - \zeta_\delta)$. This ensures that away from the manifold $W$ is a solution; precisely,
$$ S(W) = S(\Psi) = 0 \quad\text{on } \{\zeta_\delta = 0\}, \qquad (5.25) $$
independently of the choice of $\psi$, as long as it is regular enough. On the other hand, we need to choose the function $\psi$ so that $\Psi$ glues well with $W_1$ on the set $\{0 < \zeta_\delta < 1\}$, to avoid the creation of a large error. Let us recall that $(\rho, \theta)$ are the polar coordinates relative to $(t_1, t_2)$. As $\rho = |t|$ grows large, $u_0$ approaches $e^{i\theta}$, and therefore it is natural to choose $\psi = e^{i\theta}$ (and consequently $d\psi/i\psi = d\theta$) in $N'\setminus M_0$. In such a case, for $\varepsilon$ sufficiently small, exponentially decaying terms in $|t|$ gain extra decay in $r$ and exponential smallness in $\varepsilon$ when considered in $\{0 < \zeta_\delta < 1\}$. We claim that the interpolation error term is exactly of this type. A lengthy but straightforward calculation, using (5.26), the fact that $f', a' = O(e^{-\rho})$, and (5.27), shows that the interpolation error has the required smallness.
Finally, we define the pure gauge $\Psi$ in the following way: consider the signed distance function $\bar d$ to $M_0$, where $M_0$ is given by (5.23), taken positive in the region towards which $\nu$ points and negative otherwise. Let now $d$ be a suitable smooth modification of $\bar d$. In this way, the pair $\Psi = (\psi, d\psi/i\psi)$ satisfies our requirements, in the sense that it glues well with $W_1$ and extends smoothly on $\{\zeta_\delta = 0\}$.
Proof of main result
In the previous section we built an approximate solution $W$ by gluing $W_1$ with a pure gauge. We now look for a solution of the form $W + \Phi$, where $\Phi$ will be small in a suitable topology.
In other words, we want to solve
$$ S(W + \Phi) = 0. \qquad (6.1) $$
We can rephrase (6.1), using (2.10), as (6.2). As explained in Section 3, we will solve (6.2) by first finding a solution $\Phi$ to (6.3), which we will be able to do up to corrections (see Proposition 1 below), and then showing that all corrections (including $-\Theta_W\Theta_W^*[\Phi]$) vanish thanks to the symmetries of the solution found.
Rather than (6.3), we will solve the more general problem where ζ 2 is a cut-off function supported close to M (defined below in (7.2)), V α are as in (2.16) and b α are functions defined on M which are unknowns of the problem. This correction is necessary to obtain good estimates for the solution. The principal operator in (6.4), L W , has an approximate kernel given by V α (t), α = 1, 2, suitably cut-off away from the manifold M . The correction ζ 2 b α (y)V α (t) has the role of ensuring that the right-hand side of the equation satisfies an orthogonality condition to the approximate kernel, yielding the a priori estimates.
We solve (6.4) by first proving the following result for its linear version. The adjustment on the right-hand side provides unique solvability in terms of $\Phi$ and $b = (b_1, b_2)$, in the sense of the following proposition, which will be proved in Section 9.
Using Proposition 1 we can write (6.4) as a fixed point problem on a suitable function space. Hence, enlarging $A$ if necessary, by the contraction mapping principle we can find a unique $\Phi\in C^{2,\gamma}_4(\mathbb{R}^4)$ with
$$ \|\Phi\|_{C^{2,\gamma}_4(\mathbb{R}^4)} \le A\varepsilon^3 \qquad (6.7) $$
such that $W + \Phi$ is a solution of the corrected problem (6.4). An important fact about such $\Phi$, which will be proved in Section 12, is its Lipschitz dependence on $h_1$. It only remains to make this correction vanish, and to do so we will adjust the parameter $h_1$.
Adjusting h 1 to make the projection vanish
In this section we prove that there is a suitable choice of the function $h_1$ such that the quantities $b_\alpha(h_1)$ in (6.4) vanish, up to a further correction term accounting for the degeneracies of the Jacobi operator. So far we have solved (7.1), where $\zeta_2$ is defined as follows: let $\zeta$ be a cut-off function such that $\zeta(s) = 1$ if $s < 1$ and $\zeta(s) = 0$ if $s > 2$; for every positive integer $m$, $\zeta_m$ is defined through (7.2). If we multiply (7.1) by $\zeta_4(y, t)V_\alpha(t)$ and integrate over $\mathbb{R}^2$, we find an expression for $b_\alpha(y)$. Now we recall that inside the support of $\zeta_4$ the quantity $S(W)$ equals the local error of approximation $S(W_1)$, which has the expression (5.20). We also split the remaining operator, with $B$ defined (inside the support of $\zeta_4$) accordingly; we also set (7.4). With this notation, the system $b_1 = b_2 = 0$ can be rephrased as a fixed point problem (7.5), where $J$ is the Jacobi operator (5.5) and, as we will see, $G$ is small and satisfies a suitable Lipschitz property (see Lemma 3 below); equation (7.5) can also be written out explicitly in components. We recall that the assumption of non-degeneracy of the manifold $M$ implies that all bounded Jacobi fields are linear combinations of those generated by the rigid motions of the manifold. In our case the Jacobi operator is decoupled; this gives 5 independent Jacobi fields, 4 of which come from those of the immersion into $\mathbb{R}^3$, namely $z_j = z^0_j$, $j = 0, 1, 2, 3$. The existence of a non-trivial kernel requires the presence of a correction in an invertibility theory for the Jacobi equation (7.5), of the form (7.6). Indeed, a suitable choice of the $c_j$'s ensures that the right-hand side is orthogonal to the kernel, allowing for an invertibility theory that carries a priori estimates, in the sense of the following result, the proof of which is postponed to Section 11.
Proposition 2. Let $f = (f_1, f_2)$ be a vector-valued function defined on $M$ such that $\|f\|_{C^{0,\gamma}_4(M)} < +\infty$. Then there exist constants $c_0,\ldots,c_4$ such that system (7.6) admits a solution.

Remark 7.1. It might happen, for instance in the case where $M$ is a catenoid, that the Jacobi field $z_0$ associated with rotation invariance is 0. In this case the orthogonality condition is automatically satisfied and no extra correction term is needed.
Proposition 2 allows us to rewrite problem (7.6) as a fixed point problem In order to find such an h 1 it suffices to show that the right-hand side is a contraction mapping. This follows from the following Lemma, which will be proved in Section 12.
By the contraction mapping principle, combining the estimates provided by Proposition 2 and Lemma 3, (7.7) admits a unique solution in the appropriate space, and hence we find a solution of the corrected problem (7.6).
In the next Section we will show that the above corrections, together with the one coming from gauge invariance, are actually zero.
Conclusion of the proof of Theorem 2
So far we have constructed a solution $U = W + \Phi$ of the corrected problem, where we managed to adjust the parameter $h$ appropriately. Let us define $q = \varepsilon^2\zeta_4 q_2^{-1} q_4$ and set $\gamma$ accordingly, so that $U$ satisfies the corrected equation. Our claim is that the coefficient vector $c = (c_0,\ldots,c_4)$ and the function $\gamma$ are automatically zero, which will make $U$ the sought solution and conclude the proof. Consider the quantities defined below; we claim that they vanish. Indeed, recall that in a region close to the manifold the solution $U$ so found satisfies the local equation, for some function $\Phi$ satisfying the corresponding bounds in this region.
The next lemma follows from gauge invariance, the invariance of $M$ under rigid motions, and the balancing condition (5.6); the proof is postponed to Section 12. To prove that the vector $(c, \gamma)$ vanishes, we will show that it is mapped to zero by a positive linear operator. Using Lemma 4 and (8.1), and then the expansions (8.2)-(8.3), we obtain the corresponding identities. Moreover (recalling that $U = W + \Phi$), denoting by $w$ and $\varphi$ respectively the first components of $W$ and $\Phi$, and using the decay of $\gamma$ to justify the integration by parts, we obtain a further identity. Observe that since $\Phi$ is small (in $\varepsilon$) with respect to $W$, the resulting operator is positive for $\varepsilon$ small enough. Lastly, using the decay of $\gamma$ and of $|A_M|^2$, together with the fact that $\Theta^*_{U_0}[V_\alpha] = 0$, it is straightforward to see that the remaining terms vanish. These computations show that an equation of the form $L(c, \gamma) = 0$ holds, where $L$ is a linear operator which can be expressed as a small perturbation of the positive operator
$$ (c, \gamma)\ \mapsto\ \big(\, \|V_\alpha\|^2_{L^2}\, c,\ \ (-\varepsilon^2\Delta + |w|^2 + \langle w, \varphi\rangle)\,\gamma \,\big), $$
and, as a consequence, we have $(c, \gamma) = 0$. We have thus found a solution to the system (1.6) with the properties required by Theorem 2.
Invertibility theory for the gauge-corrected linearised
In this Section we prove Proposition 1, of which we recall the statement.
for some C > 0.
To prove Proposition 1 we develop an invertibility theory for the operator $L_W$ on a space of decaying functions. We will use the fact that in a region close to the manifold the linearised operator $L_W$ can be approximated by $L_{U_0}$, namely the linearised operator on $M\times\mathbb{R}^2$ around the building block $U_0(y, t) := U_0(t)$, while outside this region $L_W$ behaves like a positive operator. We aim to solve (9.1) for a right-hand side $\Lambda = \Lambda(x)$ defined on $\mathbb{R}^4$. To this end we look for a solution of the form $\zeta_2\Phi + \Psi$, where $\Phi$ is a function defined on $M\times\mathbb{R}^2$ and $\Psi$ is defined on $\mathbb{R}^4$. We will develop an invertibility theory for $L_{U_0}$ and then apply it to find the inner function $\Phi$; as we will see, to do so we need a correction on the right-hand side. This correction is necessary for a general solvability theory and is due to the fact that $L_{U_0}$ has a non-trivial kernel, precisely the one generated by $\{V_\alpha(t)\}_{\alpha=1,2}$. We will invert $L_{U_0}$ for a family of right-hand sides satisfying an orthogonality condition with $V_\alpha$, $\alpha = 1, 2$, at every fixed point of $M$. A way to obtain this condition for a general right-hand side $\Lambda$ is to replace $\Lambda$ in (9.1) by a corrected right-hand side involving smooth functions $b_\alpha$ defined on $M$. We observe that $\zeta_2 b_\alpha V_\alpha$ is a well-defined function on $\mathbb{R}^4$, since it is understood to be 0 outside of $N$. In what follows, the following definition will be useful: if $\Phi = (\phi, \omega)$ and $U = (u, A)$, we define the operators as below.
Using (9.3), we infer that such an equation is solved if the pair $(\Phi, \Psi)$ solves the system
where we used the fact that $\zeta_2\zeta_1 = \zeta_1$. It is important to notice that the term $T_W$ does not contain any derivatives. We begin by solving (9.6) with the following lemma, whose proof is postponed to Section 12.
Recalling that $\|h_1\|_\infty \le C\varepsilon$, we have, choosing $\varepsilon$ sufficiently small, $|t| > \delta/\varepsilon$ on $\operatorname{supp}\nabla\zeta_2$. Now, using Lemma 5, we find a solution $\Psi = \Psi_1 + \Psi_2$ of (9.6), satisfying moreover the corresponding estimates. This allows us to reduce the system (9.5)-(9.6) to a single equation depending on $\Phi$ alone. First, we extend the resulting equation to a problem on the entire $M\times\mathbb{R}^2$. We define $\tilde B$ accordingly, and it is straightforward to check that $\tilde B$ is a linear operator of size $O(\varepsilon)$. With this notation, (9.5) will be solved if we find a solution to (9.9). To solve (9.9) we use the following result, which will be proved in Section 10.
Using Proposition 3, we can rewrite (9.9) as a linear fixed point problem, whose terms are chosen to highlight the dependence on $\Phi$ and $\Lambda$ arising from (9.7). Remark that we used $\zeta_1 T_W \sim e^{-|x|}\chi_{\{\zeta_1>0\}}(x)$ to control the exponential decay in the weighted norm. Therefore, choosing $\varepsilon$ sufficiently small, we find a unique solution to (9.11), from which the existence of a unique solution $(\Phi, \Psi)$ to the system (9.5)-(9.6) follows. Hence, $\zeta_2\Phi + \Psi$ solves (9.2), the desired estimate follows directly, and the proof of Proposition 1 is concluded.
Invertibility theory on M × R 2
The aim of this Section is to prove Proposition 3, which we now recall.
Recall that the operator $L_{U_0}$ is given by the expression above. We look for a solution to (10.2) in a space of functions defined on $M\times\mathbb{R}^2$ with suitable decay. This cannot be done for every choice of right-hand side $\Psi$; hence, instead of (10.2), we aim to solve the projected problem (10.3). This variation provides unique solvability, in the sense of Proposition 3. To prove the result, we will first prove the existence of an inverse of the same operator in the Euclidean space $\mathbb{R}^4 = \mathbb{R}^2\times\mathbb{R}^2$ and use this to solve the problem locally, up to a small operator bounded by the size $\varepsilon$ of the dilation. Subsequently, we find an actual solution by gluing and fixed point techniques.
10.1. Solvability theory for the linearised operator in the flat space. To indicate a point in $\mathbb{R}^4$ we will use two coordinates $(y, t)\in\mathbb{R}^2\times\mathbb{R}^2$, so that the centre of the linearisation is $U_0 = U_0(t)$. Our aim is to solve (10.4). The following result holds.
Moreover, the estimate holds for some $C > 0$; if, in addition, $\Psi\in C^{0,\gamma}(\mathbb{R}^4)$, then the corresponding Hölder bound holds as well. Let us first prove existence. We observe that if $\Phi = (\phi, \omega)$ and $\Psi = (\psi, \eta)$, then we can split $\omega = \omega_t\,dt + \omega_y\,dy$ and $\eta = \eta_t\,dt + \eta_y\,dy$. In this way system (10.4) splits so that the last equation is not coupled with the first two. Therefore, we need to solve two separate problems: the first one, (10.10), where $\omega = \omega(x, y)\,dx$, and the second one, (10.11).

10.1.1. Solving system (10.10). We apply the Fourier transform in the $y$ variable to the whole system (10.10), obtaining (after dropping the subscript $t$ from $\omega_t$ for simplicity) a system in which $\xi$ is the Fourier variable, and which can be written in the compact form (10.12), where $L$ is the gauge-orthogonal, two-dimensional linearised operator around $U_0(t)$ given by (2.15) and $\hat b_\alpha$ are the transformed coefficients. We now observe that (10.12) can be solved, since the associated bilinear form is coercive, being the sum of a coercive operator (see (2.18)) and a positive one. Precisely, choosing the $\hat b_\alpha$'s as in (10.13) ensures that the right-hand side belongs to $Z^\perp_{U_0}$, which in turn yields, by the Lax-Milgram theorem, the existence of a unique solution $\hat\Phi$ to problem (10.12). Besides, choosing $\delta > 0$ sufficiently small, we get the corresponding bound for some $C > 0$. Now, applying the inverse Fourier transform and Plancherel's theorem, we find a solution $\Phi\in H^1_{U_0}(\mathbb{R}^4)\times L^2(\mathbb{R}^2)$ to (10.10) satisfying (10.6). This proves the existence part for (10.10).
10.1.2. Solving equation (10.11). Here we mimic the procedure of §10.1.1 and obtain a solution ϕ of (10.11) with the corresponding bound, finding in this way a solution to (10.4) satisfying (10.7). The bilinear form related to the operator B is coercive in the following sense.
Lemma 6. The operator B is coercive on H^1(R^2), i.e., there exists a λ > 0 such that

B[φ, φ] ≥ λ ‖φ‖^2_{H^1(R^2)}.

We postpone the proof of Lemma 6 to Section 12. From the results of §10.1.1 and §10.1.2 we find a solution to (10.4) satisfying the estimates (10.7). Next, we proceed with the proof of the Hölder estimates (10.8). The first step is to prove the following L^∞ estimate in a cylinder.
10.1.3. L^∞-estimates. Let Φ = T(Ψ) be a solution to (10.4) with Ψ ∈ L^∞(R^4). We claim that the estimate (10.15) holds for every p ∈ R^2. First, we prove (10.16). Indeed, for such a given p we define the function ϱ : R^2 → R given by ϱ(y) = 1 + δ^2 |y − p|^2 and let Φ̃ denote the corresponding rescaled function. The equation for Φ̃ then acquires correspondingly modified terms. By the estimates (10.7) we obtain a first bound; therefore, choosing δ sufficiently small, we find (10.17). Next we observe that, if ν is sufficiently large, two further estimates hold. From the last two estimates and (10.17), taking the supremum over p ∈ R^2, (10.16) immediately follows. Now, to prove (10.15), we write the solution in a suitably decomposed form; using (10.16), (10.15) follows.
10.1.4. Hölder estimates. Now we prove the Hölder estimates (10.8). Elliptic interior regularity yields, for every point p = (y, t) ∈ R^2 × R^2, a local Schauder bound on Φ in B(y,1) × B(t,1) in terms of ‖Ψ‖_{C^{0,γ}(B(y,1)×B(t,1))}. On the other hand, using the L^∞ estimates found in §10.1.3, and since C does not depend on the point p, taking the supremum we find (10.8), which is the sought estimate. The proof of Proposition 4 is concluded.
10.2. Proof of Proposition 3. We now use the theory developed in §10.1 to prove the invertibility of the linearised operator L_{U_0} on M × R^2. For each point p ∈ M we can find a local parametrization Y_p onto a neighbourhood U_p of p in M, written in terms of a function θ_p; we may assume that θ_p is smooth, that θ_p(0) = 0, and that the associated bounds hold with C independent of p. We represent the Laplace-Beltrami operator in these coordinates, where F_k is given by (1.9). Let now r(ξ) = |ξ|. According to (1.9), the metric on the end M_k admits an explicit expansion, from which we can compute the coefficients of the operator; thus, we conclude that the coefficients b_{ij}, b_i and their derivatives are uniformly bounded.
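For reference, in any local coordinates ξ with metric g = (g_{ij}) the Laplace-Beltrami operator has the classical expression (a standard fact; identifying its lower-order coefficients with the b_{ij}, b_i above is our reading of the notation):

\[
\Delta_M u \;=\; \frac{1}{\sqrt{\det g}}\,\partial_i\!\left(\sqrt{\det g}\; g^{ij}\,\partial_j u\right)
\;=\; \Delta_\xi u + \left(g^{ij}-\delta^{ij}\right)\partial^2_{ij}u + b^i\,\partial_i u,
\qquad
b^i = \frac{1}{\sqrt{\det g}}\,\partial_j\!\left(\sqrt{\det g}\,g^{ji}\right),
\]

so once the metric on each end is a bounded perturbation of the flat one with bounded derivatives, the coefficients and their derivatives are uniformly bounded, as claimed.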
Proof of Proposition 3 -existence.
Let us now fix a small δ > 0. We choose a sequence of points (p_j)_{j∈N} ⊂ M such that, defining the sets V_k accordingly, M is covered by the union of the V_k, and each V_j intersects at most a finite, uniform number of the V_k with k ≠ j. Consider now a smooth cut-off function η such that η(s) = 1 for s < 1 and η(s) = 0 for s > 2. We define the following set of cut-off functions on M:

η^k_m(y) = η(|ξ|/(mδ)),  for y = Y_{p_k}(ξ),

which are supported in U_{k,m} := Y_{p_k}({ξ ∈ R^2 : |ξ| ≤ mδ}) and extended as 0 outside of U_{k,m}. Remark that η^k_1 η^k_2 = η^k_1, since {η^k_2 = 1} ⊇ U_{k,1}. Now, our choice of {V_k} and the fact that V_k ⊂ {η^k_1 = 1} guarantee that there is a constant C > 0 such that the corresponding overlap bound holds. Also, we remark the estimate (10.20). At this point, we look for a solution of (10.3) of the form Φ = Φ_0 + η^k_1 Φ_k, where we use the Einstein summation convention. With this ansatz, (10.2) can be written as a system in which R is as in (9.4); the equation is satisfied if we solve the resulting coupled system. To this end, we need the following result, which will be proved in Section 12.
Lemma 7. There exists a constant C > 0, independent of ε, such that for every pair H ∈ C^{0,γ}(M × R^2) there exists a solution Φ = Φ(H) to the equation, defining a linear operator in H, such that the stated estimate holds.
where we used (10.20). We can plug such a Φ_0 into (10.21), writing it in coordinates, and apply the inversion operator T of Proposition 4 to arrive at the fixed point problem (10.25), where S is defined accordingly. We readily check the required smallness of S. Taking the supremum over k, and choosing δ, ε sufficiently small, we find a unique solution to (10.25), from which we then find Φ = Φ_0 + η^k_1 Φ_k. Moreover, looking back at how the solution was built, we infer the stated estimate, and the proof is concluded.
Proof of Proposition 3 -weighted estimates.
Recall that we defined, for µ ≥ 0 and 0 < γ, σ < 1, the weighted norm used in the statement. Let Φ̃ = ϱΦ. In terms of Φ̃, equation (10.3) reads as the modified equation (10.26), where R_{U_0} is as in (9.4) and the modified coefficients are denoted with a tilde. Remark that, for some C > 0, these coefficients are uniformly controlled. Now observe that, up to reducing σ and ε, we can solve (10.26) by fixed point arguments with the inversion theory provided by Proposition 3, with the corresponding estimates for Φ, namely (10.27); on the other hand, we have (10.28). Thus, putting together (10.27) and (10.28), we obtain the sought estimates and the proof is complete.
Invertibility theory for the Jacobi operator
In this section we prove Proposition 2, namely we solve system (11.1) for a right-hand side f = (f_1, f_2)^T of class C^{0,γ}_4(M). We recall the precise statement.
Proposition 2. Let f ∈ C^{0,γ}_4(M, R^2). Then there exist constants c_0, ..., c_4 such that system (11.1) admits a solution h = H(f) satisfying the corresponding estimates. We begin with an invertibility theory for the Laplace-Beltrami operator, in a suitable space H and for a right-hand side having mean zero. The following result holds.
Moreover, if ‖g‖_{C^{0,γ}_4(M)} < ∞, then such a ψ also satisfies, for some C > 0, the corresponding pointwise bound. Proof. We consider the weak formulation of the problem. We claim that B is coercive on H, namely that (11.6) holds for some c > 0. Let us accept this for a moment; then ℓ_g is continuous on H and, by the Riesz theorem, a solution exists. Now, to solve the actual weak problem (11.5), we observe that any φ ∈ C^∞_c(M) can be written as φ = φ̃ + α for some constant α and φ̃ ∈ H. Condition (11.3) then implies that ψ is the sought solution. We now prove the coercivity of B. Notice that (11.8) and (11.9) imply that, passing to a subsequence, φ_n → a in L^2_loc(M) for some a ∈ R. We split the relevant integral into the parts inside and outside a ball of radius R > 0 sufficiently big, and we claim that the left-hand side of (11.10) converges to 0. First, we notice that one part vanishes because of the L^2_loc-convergence and the Cauchy-Schwarz inequality; secondly, we use the triangle inequality. Since φ_n ∈ H implies ∫_M |A_M|^2 φ_n = 0, it must be a = 0. We will show that this is incompatible with (11.8) and (11.9). Recall that outside of a large cylinder the manifold M decomposes into its ends M_k, each of which resembles a plane. We claim that the L^2 mass of φ_n on the ends vanishes as n → ∞. This, together with the fact that φ_n → 0 in L^2_loc(M), contradicts (11.9), thus proving (11.6).
A direct calculation with the change of variable ξ = y/|y|^2 yields a bound with a constant C > 0 independent of n. From this argument it follows that φ_n → a in L^2(B_δ) for some constant a ∈ R. On the other hand, on every ball B' ⊂ B_δ not containing the origin we already know, from the fact that φ_n → 0 in L^2_loc, that φ_n → 0; thus a = 0, and the claim follows. The claim is proved.
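The change of variable used here is the planar inversion, i.e., the two-dimensional Kelvin transform; for reference (a standard fact in R^2):

\[
\tilde\varphi(\xi) := \varphi\!\left(\frac{\xi}{|\xi|^{2}}\right)
\quad\Longrightarrow\quad
\Delta\tilde\varphi(\xi) = \frac{1}{|\xi|^{4}}\,(\Delta\varphi)\!\left(\frac{\xi}{|\xi|^{2}}\right),
\qquad \xi \neq 0,
\]

so, since inversion is conformal in two dimensions, the behaviour of φ_n at infinity on a planar end is converted into local behaviour near the origin of B_δ, where standard L^2 estimates apply.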
The only thing left to do is to prove that the solution ψ so found satisfies the stated decay. The claim is easily proved on compact sets by using local L^2 elliptic estimates and Sobolev embedding, along with (11.7), so we go on to prove it on the ends of M. Recall that |A_M| = O(r^{-2}) for r large and that on each end M_k the expression (10.18) for ∆_M holds, where ξ are the Euclidean coordinates mapped onto the surface via the chart ϕ_k. Since the manifold M resembles a plane on its ends, we use again the Kelvin transform, now for x ∈ B(0, δ) with δ sufficiently small. It is straightforward to verify the existence of coefficients c_{ij} and c_j, defined in B(0, δ), such that (11.14) holds and such that c_{ij} = O(|x|^2), c_j = O(|x|) for |x| small.
Thus

‖ψ‖_H + ‖ψ‖_∞ ≤ C ‖ |A_M|^{-1} g ‖_{L^2(M)},

and the first part of the proof is concluded. Next we consider the case where ‖g‖_{C^{0,γ}_4(M)} < ∞. We claim that

‖ψ‖_* ≤ C ( ‖g‖_{C^{0,γ}_4(M)} + ‖ψ‖_∞ );

that is, h is a bounded Jacobi field. By the hypothesis of non-degeneracy, it must hold that h = Σ_{k=0}^{3} α_k ẑ_k, and since h ∈ H_♯ we conclude that all the α_k vanish; hence h = 0. Finally, to prove (3) we consider a sequence of functions {h_n} such that sup_n ‖h_n‖_♯ ≤ 1, and we show that {g_n} = {T(h_n)} has a convergent subsequence in H_♯. Local elliptic estimates imply uniform bounds for g_n in the C^{0,γ} norm and in turn, by the Arzelà-Ascoli theorem, the existence of a subsequence converging uniformly over compact subsets to a limit g. We claim that g_n → g also in H_♯. By completeness and pointwise convergence, it will suffice to show that for every ǫ > 0

‖g_n − g_m‖_♯ < ǫ   (11.18)

up to taking n, m sufficiently large. Let us consider a sufficiently large ball B_R = M ∩ {r < R}. We have convergence on B_R as m, n → ∞ by local uniform convergence. On the other hand, the contribution outside B_R is small up to enlarging R. This proves (3) and hence the existence of a solution to (11.17) by the Fredholm alternative. The estimates follow from the formulation (11.17) and (11.4).
Proofs of Lemmas 2-7
This section contains the proofs of all the lemmas that have been postponed so far.
12.1. Lemma 2. For any real numbers λ_1, ..., λ_m satisfying Σ_{j=1}^{m} λ_j = 0, there exists a smooth function h^1_0, defined on M, such that ∆_M h^1_0 + |A_M|^2 h^1_0 = 0 on M and such that on each end M_j, h^1_0(y) = (−1)^j λ_j log r + η. Thus, (12.2) holds also for i = 0 and the claim is proved. This concludes the proof.
Proof. Consider first G_1: we claim that (12.4) holds. To see this, take for instance G^1_1; the required bound follows directly, and (12.4) follows. The Lipschitz dependence (12.5) can be checked directly from the explicit expression of, for instance, G^1_1, and the other terms are handled similarly. Now let us consider G_2; for ease of notation, we introduce the corresponding shorthand. The Lipschitz behaviour is shown by calculations analogous to those for G_1. The estimates from Proposition 1 yield the needed smallness, and hence, up to choosing ε sufficiently small, we have (12.9). Lastly, we observe that (12.7) is a direct consequence of the smallness in ε of all the coefficients of the second order operator B, while (12.8) is a consequence of the mild Lipschitz dependence on h_1 of such coefficients, for instance a^{ij}_{1,δ}(y, ε(t_δ + h^0_δ + h^1_δ)). Finally, we prove that the remaining term is Lipschitz with a constant that is exponentially small in ε. This is straightforward using the self-adjointness of L_{U_0} and the fact that V_α ∈ ker L_{U_0}. Indeed, integrating by parts, using the orthogonality between Φ and V_α, the exponential decay of V_α, and the usual argument with the support of ζ_4, we obtain the claimed bound. Putting all the estimates just found together, we obtain that G has an O(ε)-Lipschitz constant.
Let us consider a cylinder C_R.
Using the decay of γ, we see that the last quantity vanishes as R → +∞, proving the claim. Now we prove the second part of the lemma. Recall that, if U = (u, A)^T, the two relevant quantities are defined for i = 1, ..., 4. We will prove that the integration on C_R of S(U) against both of these quantities produces boundary terms that, together, vanish as R → +∞. For instance, consider i = 3. An integration by parts gives a boundary identity, where we used the invariance of the cylinder C_R in the z_3-direction. Next, a further computation and an integration by parts as above, using the gauge invariance of the energy E, give the remaining identity. In all, we are reduced to the claim

lim_{R→+∞} ∫_{∂C_R} (∇_U U · r̂) · ∇_{x_3,U} U = 0.   (12.10)

We begin by expanding (∇_U U · r̂) ∇_{x_3,U} U = |V_1|^2 (ν_1 · e_3) ν_1 − (−1)^k ελ_k r̂ r̂ · r̂ + O(ε^2 r^{−2}), and by observing that, since the k-th end of the manifold is the graph x_3 = F_k(x_1, x_2), the normal vector satisfies

(−1)^k ν_1 = (1 + |∇F_k|^2)^{−1/2} (∇F_k, −1) = a_k r^{−1} r̂ − e_3 + O(r^{−2}).
On the portion of C_R near this end, remark that t_1 = (x_3 − F_k(x_1, x_2) − ελ_k log r + O(1))(1 + O(R^{−2})), and hence the corresponding expansion follows. This relies on the fact that on ∂C_R the distance between ends is greater than ρ := 2γ log R. Now, we use the fact that for large r the covariant gradient of U is exponentially small in the distance from each end of the manifold; precisely, (12.11) holds. This fact can be proved precisely as in Lemma 9.4 in [13], namely through the characterization (8.1) of U and barriers, using the fact that the main order of the linearised operator behaves roughly as −∆ + 1 on each component. Using (12.11), we see that the contribution far from the ends of the manifold is negligible. Finally, since Σ_{k=1}^{m} a_k = Σ_{k=1}^{m} λ_k = 0, we get (12.10) and the proof is complete for i = 3. For i = 4 the proof is almost identical, the only difference being that the terms λ_k do not appear. The case i = 2 is similar, but the integration of S(U) · ∇_{x_2} U on C_R produces an extra boundary term

(ε^2/2) ∫_{∂C_R} ( |∇_A u|^2 + ε^2 |dA|^2 + (1/(4ε^2))(1 − |u|^2)^2 ) n_2,

due to the fact that the domain is bounded in the x_2 direction; here n_2 = x_2/r. By the expansion of the gradient ∇_U U, and since ∫_{r=R} n_2 = 0, we obtain

lim_{R→+∞} ∫_{∂C_R} ( |∇_A u|^2 + ε^2 |dA|^2 ) n_2 = 0.

The other boundary terms are treated exactly as before, and of course the same proof also holds for i = 1. We are only left with the case i = 0, for which it is convenient to switch to cylindrical coordinates. Just as in the previous cases, an integration by parts produces a vanishing term (due to the rotational symmetry of the energy) plus a boundary term; from the expansion of the gradient, letting R → +∞, we obtain the claim also for i = 0. The proof is concluded.
Proof. By contradiction, suppose that we can find a collection of functions {φ_j} such that ‖φ_j‖_{H^1} = 1 for every j and

B[φ_j, φ_j] → 0   (12.12)

as j → ∞. Extracting a subsequence, we have that φ_j ⇀ φ_* in H^1, and by the weak lower semicontinuity of B, using (12.12), we infer B[φ_*, φ_*] ≤ 0, which in turn implies φ_* = 0. Now, given any ǫ > 0, we can find r(ǫ) > 0 such that the contribution from outside B_{r(ǫ)} is smaller than ǫ. On the other hand, by the Rellich lemma, φ_j → 0 in L^2(B_{r(ǫ)}), which means that, by choosing j big enough, the local contribution is small as well. Finally, from equation (12.13), we find a lower bound from which we draw a contradiction using (12.12).
12.5. Lemma 7. There exists a constant C > 0, independent of ε, such that for every pair H ∈ C^{0,γ}(M × R^2) there exists a solution Φ = Φ(H) to the equation, defining a linear operator in H, such that the stated estimate holds. Proof. Observe that we can embed M × R^2 ⊂ R^5. Let B_R be the ball of radius R centred at the origin of R^5 and consider, for k = 1, 2, ..., the problem (12.14). For every k, problem (12.14) admits a weak solution Φ_k. This follows from Riesz's theorem, along with the fact that the corresponding linear operator defines a positive, symmetric bilinear form on H^1_{U_0,0}((M × R^2) ∩ B_k), namely the closure of the set of C^∞_c((M × R^2) ∩ B_k) pairs under the H^1_{U_0} norm. Moreover, the linear operator in (12.14) satisfies the maximum principle on each component; thus, using ‖H‖_∞ as a barrier, we obtain the uniform bound. Using elliptic estimates, we then obtain a subsequence of the Φ_k that converges uniformly over compact sets, as k → ∞, to a limit Φ that solves the equation, with a bound holding for some constant C > 0 independent of ε.
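To make the barrier step explicit, here is a minimal sketch, assuming (consistently with the earlier remark that the linearisation behaves roughly like −∆ + 1 on each component) that the operator acts componentwise as −∆ + c with c ≥ c_0 > 0:

\[
\bar\Phi := c_0^{-1}\,\|H\|_{\infty}
\quad\Longrightarrow\quad
(-\Delta + c)\,\bar\Phi = c\,\bar\Phi \;\ge\; \|H\|_{\infty} \;\ge\; |H|,
\]

so ±Φ_k − \bar\Phi is a subsolution vanishing on ∂B_k, and the maximum principle gives sup_k ‖Φ_k‖_∞ ≤ c_0^{-1} ‖H‖_∞, the uniform bound used above.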
Proof. This proof follows the same lines as that of Lemma 7 above. We again use a barrier, exploiting the fact that the operator satisfies the maximum principle. The weighted estimates follow from the same argument used in the proof of Proposition 3.
"year": 2022,
"sha1": "1000e0357adc50db75b00c516ca8838c2d1772f3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1000e0357adc50db75b00c516ca8838c2d1772f3",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Independent determinants of prolonged emergency department length of stay in a tertiary care centre: a prospective cohort study
Background: Emergency department (ED) overcrowding is a potential threat to patient safety. We searched for independent determinants of prolonged ED length of stay (LOS) with the aim of identifying factors that can be targeted to reduce ED LOS, which may help in preventing overcrowding.
Methods: This prospective cohort study included consecutive ED patients in a Dutch tertiary care centre. Multivariable logistic regression analysis was used to identify independent determinants of ED LOS > 4 h, including patient characteristics (demographics, referral type, acuity, (number of) presenting complaints and comorbidity), treating specialty, diagnostic testing, consultations, number of patients in the ED and disposition. Furthermore, we quantified the absolute time delays (measured in real time) associated with the most important independent determinants of prolonged ED LOS.
Results: In 1434 included patients, independent determinants of prolonged ED LOS were number and type of presenting complaints, specialty, laboratory/radiology testing and consultations, and ICU admission. Modifiable determinants with the largest impact were blood testing (adjusted odds ratio (AOR, 95% CI): 3.45 (1.95-6.11)), urine testing (1.79 (1.21-2.63)), radiology imaging (3.02 (2.13-4.30)) and consultation (5.90 (4.08-8.54)). Combined with the laboratory/radiology testing and/or consultations (requested in 1123 (78%) patients), the decision-making and discharge process consumed between 74 (42%) and 117 (66%) minutes of the total ED LOS of 177 (IQR: 129-225) minutes.
Conclusions: In tertiary care EDs, ED LOS can be reduced if the process of laboratory/radiology testing and consulting is optimized and the decision-making and discharge procedures are accelerated.
Introduction
A prolonged ED length of stay (LOS) keeps doctors and nurses occupied longer with one patient and decreases the effective capacity, which contributes to overcrowding. Emergency department (ED) overcrowding is associated with worse patient outcomes and lower satisfaction among health workers and patients [1][2][3]. Crowding is also a problem in the Netherlands; 68% of ED managers experienced crowding several times a week or even daily [4]. In the UK the 4-h rule was introduced to restrict the ED work-up time [5]. In our hospital in the Netherlands, too, crowding is an issue affecting the satisfaction of patients and medical personnel. Therefore, an ED LOS of 4 h was adopted as an important cut-off point to indicate prolonged LOS.
Reduction of ED LOS may contribute to reduction of ED overcrowding [2,6]. Although numerous ED and patient characteristics have been associated with prolonged ED LOS, a recent systematic review demonstrated that previous studies were not suitable for deciding which factors need to be targeted to optimize ED logistics [7]. Several issues arise in these studies. First, almost all have been retrospective, while real-time measurements are especially important when studying time delays in ED logistics. Secondly, often only a few factors have been measured, whereas all potential bottlenecks should be taken into account to understand which factors should be targeted and can actually be modified in clinical practice. For example, age has frequently been associated with ED LOS [7][8][9]. However, this could be caused by age-related differences in comorbidity and disease severity. Comorbidity has been associated with a higher number of ED visits, hospital admissions, readmissions and health care costs [10][11][12][13]. In combination with multiple presenting complaints, comorbidity increases the complexity of care and could contribute to a prolonged ED LOS. Typically, patients of a tertiary care centre have more comorbidities and presenting complaints.
Finally, previous studies mostly originate from the USA or Canada [7]. The European ED setting is quite different with regard to the number of ED visits and ED LOS. In addition, EDs are staffed by both ED physicians and other specialists (as opposed to only ED physicians), and the general practitioner has an important role as gatekeeper in the referral of patients to the hospital [14].
Importance
Successful solutions for reduction of ED LOS in the European health-care setting can only be developed if ED patient flows are prospectively studied and independent modifiable 'bottlenecks' are identified by using multivariable prediction modelling [15].
Aim of the study
The purpose of the present study was therefore twofold: first, to identify independent determinants of prolonged ED LOS (including all important patient, doctor and ED (management) factors); and second, to measure (in real time) the absolute time delays associated with the modifiable independent determinants of prolonged ED LOS, and the delays associated with decision-making and ED discharge.
Study design and setting
This was a prospective observational cohort study including ED patients of the Leiden University Medical Centre (LUMC), a Dutch tertiary care centre with approximately 26,000 ED visits each year. Data were collected from 16 December 2014 to 11 February 2015.
In the Netherlands approximately 84 hospitals have EDs, with the highest concentration in the western part of the country. Patients are either referred to the ED by a general practitioner (GP) or are self-referred (which is accepted). Furthermore, the Netherlands is divided into regional ambulance services. In the region of the LUMC, the ambulance service "Hollands Midden" is responsible for most of the ambulance transports. This district covers 875 km² with 760,000 citizens and is responsible for approximately 60,000 transports per year. In 95% of the rides, ambulances arrive at the scene within 15 min of the dispatch call. The Dutch ED setting is characterized by the presence of both ED physicians and other specialists in the ED. The ED is staffed 24/7 by ED physicians, who are responsible for self-referred patients, trauma and critical care, and patients who are directly referred to an ED physician.
The study was approved by the medical ethics committee of the LUMC, who waived the need for individual informed consent because of the purely observational character of the study (Protocol number P14.288).
Participants
All consecutive patients presenting to the ED between 10 a.m. and 10 p.m. were included during 30 randomly chosen days, including weekends, in a 9-week time period [16]. Approximately 70% of all ED patients per 24 h arrive in the selected time period. It is important to note that this way of sampling does not create selection bias [17].
Data collection and measurements
In Fig. 1 the patient flow through the ED is depicted schematically.
Demographic variables, referral status, type of arrival (by ambulance or own transport), triage category, treating specialty, comorbidity, type and number of presenting complaints, diagnostic testing, consultations of other specialties in the ED, number of patients present in the ED at the time of ED registration and final disposition, were prospectively registered by an observer (DvdV) in a standardized case report file in SPSS (SPSS V.23.0, IBM, New York, USA). Transit times (ED bed placement time, ED departure time and time of physician arrival, consultation request and arrival of the consulted physician, time of physically leaving the ED) were measured in real-time.
From the digital hospital information system (Chipsoft, Amsterdam), information about the ED registration time, the day of discharge and the times of diagnostic testing (times when laboratory analysis was started or radiology imaging was requested, and times when these were finished) was obtained. In the final conclusion of the medical file, the total number of presenting complaints or problems was quantified. The triage category and triage complaint were registered according to the Manchester Triage System (MTS).
Comorbidity was assessed in two ways: the Charlson comorbidity index (CCI) was calculated and comorbidities were registered by an organ-based method [18]. For exact definitions and scoring system for the comorbidities see online Additional file 1.
ED LOS was calculated by subtracting the ED registration time from the time the patient physically left the ED. If the ED departure time was after 10 p.m., the time registered in the digital hospital information system was used. Waiting time was calculated by subtracting the ED registration time from the time a patient was placed in an ED treatment room. Time until seen by a physician was calculated by subtracting the ED room entrance time from the time the physician arrived.
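Purely as an illustration of these derived intervals (not the authors' code), the following sketch computes them with pandas; the file name and column names (reg_time, bed_time, md_time, depart_time) are hypothetical.

```python
# Minimal sketch (assumed column names) of the interval definitions above.
import pandas as pd

df = pd.read_csv("ed_visits.csv", parse_dates=["reg_time", "bed_time",
                                               "md_time", "depart_time"])

minutes = lambda s: s.dt.total_seconds() / 60

df["ed_los_min"] = minutes(df["depart_time"] - df["reg_time"])  # total ED LOS
df["waiting_min"] = minutes(df["bed_time"] - df["reg_time"])    # time to treatment room
df["to_md_min"] = minutes(df["md_time"] - df["bed_time"])       # time until seen by physician
df["prolonged_los"] = df["ed_los_min"] > 240                    # primary outcome: LOS > 4 h
```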
Outcome measure
The primary outcome measure of the present study was a total ED LOS of more than 4 h.
Data analysis
Sample size estimation
For the multivariable logistic regression analysis we used the rule of thumb that approximately 10 events per covariate were needed to prevent overfitting. Because we wanted to put 27 variables in the model, 270 events were needed.
Descriptive statistics
Continuous data were presented as mean (standard deviation: SD) if normally distributed and as median (interquartile range: IQR) if the data were right-skewed. Categorical data were presented as number (%). Differences between continuous data were analyzed with Student's t-tests or Mann-Whitney U-tests as appropriate. Furthermore, chi-square tests were used for analyzing descriptive categorical data.
Main statistical analysis
Multivariable binary logistic regression analysis with backward entry of arrival type (self-referral or by ambulance), age, triage category, treating specialty, diagnostic testing, consultations, disposition (hospital admission or discharge home), number of comorbidities, types and number of presenting complaints, and number of patients present in the ED at the time of ED registration was used to identify the independent determinants of ED LOS longer than 4 h. Before entering variables into the model, we examined whether each variable had a linear relation with the outcome; if not, we categorized the variable and created dummy variables.
We calculated ED LOS for the total cohort and for patients with diagnostic testing or consultations. The Hosmer-Lemeshow test was used to assess goodness of fit. The c-statistic was used as a measure of the discriminative performance of the prediction model. Variance inflation factors (VIF) were calculated to assess whether multicollinearity was a problem; multicollinearity was not considered a problem if the VIF was below 3.
The odds ratios (ORs) with 95% confidence intervals (CI) were reported. P-values < 0.05 were considered significant. All data were analyzed using SPSS Statistics 23.0.0 software (IBM, New York, USA).
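The analysis itself was done in SPSS; as a hedged illustration of the same pipeline in Python (hypothetical column list candidate_columns, a simplified backward elimination, and the df from the sketch above), the reported quantities, adjusted ORs with 95% CIs, the c-statistic and the VIFs, could be computed as follows.

```python
# Illustrative re-implementation of the reported analysis (not the authors' code).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.metrics import roc_auc_score

y = df["prolonged_los"].astype(int)          # outcome: ED LOS > 4 h
X = sm.add_constant(df[candidate_columns])   # candidate_columns: dummy-coded covariates (hypothetical)

# Simplified backward elimination: refit, dropping the least significant
# covariate until every remaining covariate has p < 0.05.
while True:
    fit = sm.Logit(y, X).fit(disp=0)
    pvals = fit.pvalues.drop("const")
    if pvals.max() < 0.05:
        break
    X = X.drop(columns=pvals.idxmax())

# Adjusted odds ratios with 95% confidence intervals.
ci = fit.conf_int()
or_table = pd.DataFrame({"AOR": np.exp(fit.params),
                         "CI_low": np.exp(ci[0]),
                         "CI_high": np.exp(ci[1])})

c_statistic = roc_auc_score(y, fit.predict(X))  # discriminative performance

# Variance inflation factors (constant column skipped).
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
```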
Independent determinants of ED LOS > 4 h
In Table 2, the results of the uni- and multivariable analyses of determinants of ED LOS > 4 h are presented.
Time components of ED LOS
The several time components that contribute to the total ED LOS are illustrated in Fig. 1.
The absolute time delays of diagnostic testing and consultations are shown in Table 3. In the total patient cohort the median ED LOS was 156 min (98-225). In 1123 patients (78%) at least one consultation or diagnostic test was requested, which increased the total ED LOS to 177 min (129-242). The median time between patient registration and requesting diagnostic testing or consultations ranged from 23 (13%) to 82 (46%) minutes, and the median time between requesting and obtaining the results ranged from 26 (15%) to 69 (39%) minutes of the total ED LOS. The median time between finishing diagnostic testing or consultation and ED discharge was 48 (27%) minutes of the total ED LOS.
Discussion
The main conclusion of this study is that laboratory/radiology testing and consultations are necessary in 78% of all ED patients and are the most important independent determinants of prolonged ED LOS. More importantly, between 42% and 66% of the total ED LOS is spent on laboratory/radiology testing and consultations and on the decision-making and discharge process thereafter.
In a recent systematic review it has been suggested that, despite many studies investigating factors associated with ED LOS, accurate conclusions cannot be drawn about how to reduce ED LOS. Reasons are that studies were mostly retrospective and that some important factors, like specific presenting complaints or comorbidities, were not assessed, nor were multivariable prediction models developed to correct for potential confounding [7]. In the present study we therefore aimed to prospectively investigate all important ED and patient factors affecting ED LOS. This yielded some important insight into factors affecting ED logistics. Firstly, in contrast to previous studies, age per se was not independently associated with prolonged ED LOS [7][8][9]. Our study suggests that this is probably explained by a higher percentage of diagnostic testing and consultations in older people [16]. Secondly, in previous studies the association between comorbidity and ED LOS has been insufficiently examined [7]. In addition, the large variability in comorbidity scoring methods makes it difficult to compare our findings with previous literature [19]. In an attempt to increase the comparability, we examined the influence of comorbidity in two ways: the number of comorbidities and the CCI were analysed separately. Neither was an independent determinant of prolonged ED LOS, probably because the larger number of diagnostic tests and/or consultations results in longer ED LOS in patients with more comorbidity, rather than the number of comorbidities per se.
Thirdly, in our study an association between arrival by ambulance and ED LOS was not found, possibly because arrival by ambulance is a measure of disease severity and/or complexity, which was quantified in our study by triage category, number of comorbidities, and number of presenting complaints and problems. The variable "arrival by ambulance" is probably eliminated from the multivariable regression model because the larger number of diagnostic tests and consultations associated with these other measures of disease severity and/or complexity are the independent determinants of prolonged ED LOS.
Notes to Table 2: Uni- and multivariable binary logistic regression analyses were performed with backward entry of all variables. Data are presented as odds ratio (OR (95% CI)). The Hosmer-Lemeshow test had a p-value of 0.905. The area under the curve (c-statistic) was 0.850 (0.827 to 0.873). The VIFs varied between 1.00 and 1.47, never above 3. N = 1434. A "-" indicates that the variable was eliminated from the model and was not an independent determinant in the multivariable regression analysis. (a) The CCI and number of comorbidities were analyzed separately; neither was associated with ED LOS > 4 h. (b) Other specialties were pediatrics, ophthalmology, dermatology, otorhinolaryngology, psychiatry and gynecology. Abbreviations: ED, emergency department; Ref, reference; OR, odds ratio; CI, confidence interval.
Finally, although in many studies hospital admission was associated with prolonged ED LOS [7], our study found the opposite. One reason is that we discriminated between hospital admission to a normal ward and admission to an MCU/ICU. Admission to a normal ward was not associated with prolonged ED LOS, and ICU admission was even associated with a short ED LOS, probably because patients who need ICU admission are often taken care of by a team with multiple areas of expertise in the shock room. During this team approach, diagnostic testing and consultations are accelerated and performed simultaneously because of the acuity.
Hospitalized patients are more ill and complex, which could explain the association with prolonged ED LOS in previous studies. In our study, however, this complexity is reflected by the variables "number of presenting complaints and comorbidities" and not by the variable "hospital admission," explaining the lack of an association between hospital admission and prolonged ED LOS. Thus, it is not the hospitalization per se that results in prolonged ED LOS but the associated complexity and the diagnostic tests and consultations.
Our study has several implications for clinical practice and offers suggestions for improvement. The time delays caused by waiting time and time until seen by a physician are short, which implies that time savings could mainly be achieved by reducing treatment time.
First, our study suggests that advanced triage, i.e. early initiation of diagnostic testing at the time of triage [20], would reduce ED LOS. In our study, merely 24% of the patients with diagnostic testing were selected for advanced triage. Moreover, reducing the number of additional blood analyses could shorten the time between requesting diagnostic tests and obtaining their results. For example, erythrocyte sedimentation rate and C-reactive protein may not be useful for clinical decision making in the ED [21]. Troponins often need to be assessed multiple times.
Secondly, in patients with multiple comorbidities, delays caused by diagnostic testing and consultations could be prevented if they are immediately hospitalized once a clear indication for hospital admission exists, e.g. the need for supplemental oxygen or intravenous medication. Awaiting test and consultation results in a clinical decision unit would be a suitable option for these patients, provided that patient safety is not jeopardized. In patients who do not need hospitalization but who do require outpatient follow-up, some diagnostic tests and consultations might be done in the outpatient setting.
Thirdly, reduction of ED LOS could be achieved by clear agreements on who admits patients with multiple comorbidities and presenting complaints, and by increasing the presence of staff members in the ED, as has been suggested in a recent study [22]. This is expected to reduce ED LOS because, in the Netherlands, referred patients are mostly seen by residents of the treating specialty who, before making decisions, need to discuss patients with their supervising staff members, who are often not present in the ED because of obligations elsewhere, e.g. in the operating theatre, outpatient department or ward.
Finally, if hospital admission is indicated, ED physicians have to consult the resident of the admitting specialty. If, instead, hospital admission could be discussed directly with the consultant of the admitting specialty, a further reduction of ED LOS would be possible, since consultations are associated with a large time delay.
Limitations
Although our study has several strengths, like the prospective design (with real-time measurements of time delays), it also has some limitations. Firstly, ED crowding was not measured according to a validated crowding score because existing crowding scores are not validated in the Dutch ED setting. However, the number of patients present in the ED was measured in real time and is a major factor related to overcrowding [23]. Secondly, our results may not be applicable to all hospitals because tertiary care centres usually treat more complex patients. As a result, the impact of consultations on ED LOS could be overestimated for urban hospitals, because of the higher consultation rate in tertiary care centres [24]. Thirdly, although our sampling method should not have introduced selection bias [17], it is theoretically possible that during night hours laboratory and radiology services are more short-staffed, possibly delaying diagnostic testing results. In addition, staff members are less available for supervision. Therefore, the impact of consultations and diagnostic testing on ED LOS could have been underestimated.
Finally, although in our hospital the number of patients per month is fairly stable throughout the year, there could be some impact of season. However, given the low impact of the number of patients present in the ED on ED LOS (Fig. 3), it is not likely that seasonal variation in the number of ED presentations affects the independent determinants of prolonged ED LOS. It is possible that the type of patients varies with season, but we adjusted for that in the multivariable regression analysis.
Conclusions
Reduction of ED LOS is possible by optimizing the process of laboratory/radiology testing and consultations and by facilitating the decision-making and discharge procedures. Future studies should investigate whether an accelerated hospital admission protocol will reduce ED LOS.
Notes to Table 3: Time components of total ED length of stay are presented as median (IQR) and categorical data are presented as frequency (%). The ED LOS of each subgroup of the total population is shown. In total, 796 (56%) patients had diagnostic tests, of which 193 (24%) patients had advanced triage, i.e., starting diagnostic testing before entering the ED room. In 116 patients, the median time between the start of the blood analysis and entering the ED room was 23 min (IQR 9-54). Likewise, urine analysis started in 8 patients and radiology imaging started in 78 patients before entering the ED room; the median times (IQR) were, respectively, 12 min (2-22) and 20 min (11-40). The urine analysis was not finished in 9 patients before admission and the blood analysis was not finished in 68 patients before admission; in these patients the times between finishing diagnostic testing and ED discharge were negative. Abbreviations: ED, Emergency Department; LOS, Length of Stay; HIS, Hospital Information System.
"year": 2018,
"sha1": "00b31a0fd82d81caf643c6bfbdb35e6ebd372c61",
"oa_license": "CCBY",
"oa_url": "https://sjtrem.biomedcentral.com/track/pdf/10.1186/s13049-018-0547-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "00b31a0fd82d81caf643c6bfbdb35e6ebd372c61",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
. The purpose of the article is to study the directions of development of the green economy in Ukraine as a factor of ensuring the transition to sustainable development of the country. Methodology . One of the main elements of the methodology for conducting research on the green economy as a factor of ensuring sustainable development are evaluation methods. Comparative, balance, graphic, economic-mathematical and other methods of economic justification are used in the work. The methods of systematisation and generalisation were used to study the principles of further ecologically balanced development, economic-statistical, structural-logical and analytical – for the development of methods and indicators for the analysis of the system of directions of the green economy, graphic – for the visual presentation of the dynamics of indicators of sustainable development. The results of the work show that the growth of income and employment in the green economy is guaranteed at the expense of public and private investments aimed at increasing energy efficiency, reducing the negative impacts of economic activity and increasing the diversity and productivity of the biosphere, in the interests of the entire population, especially the poorest. It is important to emphasise that the concept of the green economy does not replace the concept of sustainable development, but develops it and is a means of putting it into practice. Green development can only be ensured if environmental and economic policies are integrated in such a way that social progress, economic growth and improvement in the quality of life of the population take place against a background of reduced threats to the surrounding natural environment. Practical implications. The concept of green growth emphasises the importance of integrating environmental and economic policies in order to identify new potential sources of economic growth without placing an "unsustainable" burden on the quantity and quality of natural resources. The transition to a green economy requires the application of a wide range of measures, including economic instruments (taxes, subsidies, emissions trading schemes), government regulatory measures (setting standards) and non-economic measures (voluntary initiatives, provision of information). Value/originality. Important economic indicators of sustainable environmental and economic development are a nature-intensive economy and a structural indicator that reflects the specific weight of products and investments in the natural resource-based sectors of the economy.
Introduction
The existence of humanity today requires an amount of resources that exceeds the Earth's capacity. In the last quarter of a century, world GDP has quadrupled, but economic growth has been achieved mainly through the consumption of natural resources. If humanity's demand for natural resources continues to grow at the current rate, the equivalent of two current planets will be needed to sustain human life in 2030, and 2.8 planets in 2050 ("Green" economy, old.livingplanet.org.ua). Threats of depletion of limited natural resources and climate change, resulting from the accelerated growth of the world population and of the economies of emerging countries and accompanied by negative impacts on the environment, have led to widespread recognition of the need to introduce new approaches to ensuring economic growth and development, which include minimising the burden on the natural resource base and on the environmental living conditions of the population through the use of additional sources of growth (Makovoz, Perederii, 2018). It is obvious that fundamentally new steps are needed: a transition to a concept of development that makes it possible to solve social, financial, fuel and climate problems comprehensively.
According to scientists, such a solution is the concept of the green economy ("Green" economy, old.livingplanet.org.ua). In Ukraine, the need to implement the concept of the green economy is associated with a difficult socio-economic situation, low quality of the natural environment in most regions, dependence on foreign markets for resources and energy, low energy efficiency of national production (Borovyk, Yelagin, Polyakova, 2020), and deterioration of the nation's health and quality of life of the population.
Today, the institutional foundations of green growth in Ukraine have not been finalised; therefore, it is very important to determine the priority areas of innovative development of state policy, based on the priority of implementing international and European standards, which will make it possible to use the experience and achievements of developed countries in the "greening" of the national economy (Galushkina, Musina, Potapenko, 2017).
The purpose of the study is to examine the directions of development of the green economy in Ukraine as a factor of ensuring the transition to sustainable development of the country. In order to achieve this goal, the following tasks were solved: a wide range of tools for the transition to a green economy was proposed; the principles of further ecologically balanced development were studied; the system of directions of a green economy was presented; green growth was characterised as a means of stimulating economic development.
The theoretical and methodological basis of the research consisted of the dialectical method of cognition, the basic provisions of economic theory and the management of organisations, and the scientific works of domestic and foreign scientists on the problem of the organisational component of the institutional mechanism of the green economy.
In the course of the research the following methods were used: cause-and-effect analysis (to identify institutional obstacles to the development of the green economy); statistical and economic (to search and process statistical data and to research indicators of the green economy); logical generalisation (to form conclusions); graphic (to create graphs and drawings for the visual representation of statistical data and the conclusions formed regarding the correctness of determining the directions of transformation of the nature management system on the basis of the concept of the green economy in accordance with innovative waves).
This study of the main aspects of the effectiveness of the green economy, which strengthens the relationship between environmental and economic interests, serves as a guideline for achieving sustainable environmental and economic development at the regional level. The main features of the green economy are: recognition of the value of natural capital as a source of social well-being; the need to invest in natural capital; reducing inequality and overcoming poverty; creating jobs and ensuring social justice; using renewable energy sources and low-carbon technologies; efficient use of resources and energy; and creating sustainable cities (eco-cities) using green technologies.
Green Economy as a Foundation for Social Welfare
The term "green" economy was first used in 1989 in a report prepared for the UK government by a group of environmental economists as part of a consultation on how to ensure sustainable development and how to measure it (Pearce, Markandya, Barbier, 1979).
The green economy is an economic model that strives for sustainable and profitable development, seeking situations that bring economic, social and environmental benefits. In this context, the green economy claims that social welfare can be achieved by reducing environmental risks and threats (Green economy, uk.economy-pedia.com; Tomashuk, Baldynyuk, 2023). Therefore, a green economy consists of a long-term vision in which companies, markets and investors strive for sustainable development that guarantees long-term profitability.
The theory of the green economy is based on 3 axioms: 1. It is impossible to expand the sphere of influence infinitely in a limited space.
2. It is impossible to demand the satisfaction of endlessly growing needs in conditions of limited resources.
3. Everything on the earth's surface is interconnected (Green economy, 4ua.co.ua). Figure 1 shows a functional scheme with a positive feedback loop linked to the deterioration of the quality of natural resources.
The concept of "green economy" incorporates the ideas of many other schools of economic thought and philosophy (feminist economics, postmodernism, ecological economics, environmental economics, anti-globalisation, international relations theory, etc.) related to the problems of sustainable development (Green economy, 4ua.co.ua).
The main goals of the green economy are:
- improving social welfare, fighting for social justice, combating deficits and reducing environmental threats;
- resource efficiency, carbon reduction and social responsibility;
- increased public funding to tackle carbon emissions and create green jobs;
- strong commitment to energy efficiency and biodiversity (Green economy, uk.economy-pedia.com).
Today, the survival and development of humanity requires a transition to a green economy, i.e., a system of economic activities related to the production, distribution and consumption of goods and services that in the long term will lead to an increase in human well-being, while not exposing future generations to the consequences of significant environmental risks or environmental deficits (Green economy, 4ua.co.ua). In Figure 2, the directions of transformation of the system of nature management based on the concept of green economy are presented in accordance with innovative waves.
In the business world, the concept of a green economy is at the centre of attention. Financial funds, venture capitalists, governments of advanced countries, businessmen and consumers are already building a green economy (Green economy, 4ua.co.ua). Investments in energy-efficient technologies and natural infrastructure are already yielding reasonable returns.
A wide range of tools is available to help the transition to a green economy:
- pricing in accordance with the principles of sustainability, including the elimination of inefficient subsidies, the valuation of natural resources in monetary terms and the introduction of taxes on things that harm the environment;
- a public procurement policy that encourages the production of environmentally friendly products and the use of production methods that comply with the principles of sustainable development;
- reforming the system of "environmental" taxation, which involves shifting the emphasis from labour taxes to taxes on environmental pollution;
- increasing public investment in sustainable infrastructure (including public transport, renewable energy and energy-efficient buildings) and in natural capital to restore, preserve and, where possible, enhance natural capital;
- targeted government support for research and development related to the creation of environmentally friendly technologies;
- social strategies designed to ensure coherence between social goals and existing or proposed economic strategies (Green economy, 4ua.co.ua).
According to the forecasts of the Organisation for Economic Co-operation and Development (OECD), by 2050, with the current method of production and level of resource consumption, the world will lose 61-72% of flora and fauna compared to 2000, and natural territories will be irreversibly disturbed over an area of 7.5 million km² (Borovyk, Yelagin, Polyakova, 2020). In this context, one of the incentives for the development of the green economy is trade liberalisation, which can lead to an increase in trade flows of goods and services for environmental protection. This, in turn, will accelerate the replacement of old technologies and contribute to reducing the level of pollution and environmental damage caused by waste.
Source: (Galushkina, Musina, Potapenko, 2017)
Principles for Further Environmentally Sustainable Development
The course of greening the economy led by the European Union is based on the principles of the concept of sustainable development. Accordingly, the scope of cooperation activities between the EU and Ukraine in the field of environmental protection and greening of the economy of Ukraine is defined by the desire to minimise environmental externalities for the full existence of future generations ("Green solution" of business -unity for sustainable development, www.gs.dp.ua).
In the Declaration of the United Nations Conference on Environmental Problems (zakononline.com. ua), signed as a result of the Conference on Environmental Problems held in Stockholm from 5 to 16 June 1972, 26 international legal principles for the further ecologically balanced development of society were defined for the first time (Table 1).
In addition, the Green Economy Coalition also proposes a number of principles for the formation of a green economy in the context of globalisation (Table 2).
At the heart of the green economy are green technologies that address the causes, rather than the effects, of environmental problems by radically changing approaches, products and, no less importantly, consumer behaviour. These include: energy efficiency and alternative energies, electricity management systems, ecological transport, waste management, air and water emissions. These technologies will make it possible to achieve the clear objectives set by the modern world economy, namely: 1. Reducing pollution and improving resource efficiency in construction, manufacturing, agriculture and infrastructure.
2. Mitigating adverse climate change through the transition to greener, cleaner energy (wind, solar, geothermal, tidal, hydro, bio, waste-to-energy, hydrogen) and low-carbon end-use processes (electric or hybrid engines).
3. Reducing vulnerability and adapting to climate change by developing early warning systems and technologies resistant to temperature anomalies; improving management of biodiversity and forest resources.
4. Increased well-being as a result of more productive and sustainable use of biodiversity resources, including natural cosmetics and pharmaceuticals (Solosych, Podlisnyuk, 2013).
Table 1
List of principles for further environmentally sustainable development
№ Characteristics of the principles
1 -Freedom, equality and favourable living conditions for people in the environment;
2 -Protecting natural resources for the benefit of present and future generations;
3 -Supporting, restoring and enhancing the country's natural resources;
4 -Prioritising the protection of the natural environment when planning economic development;
5 -Careful and maximum beneficial use of the Earth's non-renewable resources;
6 -Reducing greenhouse and other harmful emissions;
7 -Prevention of marine pollution;
8 -Economic and social development to improve the quality of life;
9 -Financial and technical assistance to developing countries to cope with environmental and natural disasters;
10 -Stability of commodity prices in developing countries;
11 -Setting international environmental standards that can be met by countries at different levels of economic development;
12 -Providing financial and technical assistance (where necessary) to countries to ensure the availability and conservation of resources;
13 -Comprehensive planning of countries' development to ensure rational management of resources;
14 -Rational planning aimed at achieving a balance between the needs of development and the protection of the environment;
15 -Planning the urbanisation of settlements to avoid negative impacts on the environment;
16 -Controlling the demographic situation;
17 -Planning, management and regulation of the quality of natural resources;
18 -Use of science and technology to prevent and combat environmental risks and solve environmental problems;
19 -Environmental education and public access to information;
20 -Stimulating scientific research in the field of the environment at national and international levels;
21 -Not causing environmental damage to other States in the course of organising activities within their own jurisdiction;
22 -The development of international law in relation to the determination of responsibility and compensation for damage to victims of pollution;
23 -Consistency between international and national standards;
24 -International cooperation within the framework of multilateral and bilateral agreements for effective control, prevention, reduction and elimination of negative impacts on the natural environment;
25 -The coordinating role of international organisations in the field of environmental protection and improvement;
26 -Non-use of nuclear weapons and all other means of mass destruction.
Table 2
List of basic principles of the green economy
№ Characteristics of the principles
І -The principle of ensuring sustainable development is embodied in the unity of environmental, social and economic components.
ІІ -The principle of equality and justice aims at equalising countries and eliminating social differences within national borders, respecting human rights and gender equality.
ІІІ -Respect for the dignity of the individual manifests itself in the reduction of poverty through the transformation of "traditional" jobs and the active creation of new ("green") jobs, the development of human potential, improved access to social services and the promotion of the right to development.
IV -The principle of frugality of the green economy is implemented by minimising the impact on the environment, taking into account ecological limits and ensuring economic activity within them, preliminary assessment of the potential impact of new technologies on the environment, and optimal and rational use of natural resources.
V -The principle of participation is based on the combination of transparency and openness of the activities of all interested parties (citizens, businesses, state institutions), providing the possibility of effective participation of citizens in the process of making management decisions at all levels.
VI -Governance is implemented through regulation based on consultation with all stakeholders, the development of standards to assess progress, the development of international cooperation and international responsibility for damage.
VII -The sustainability of the green economy is manifested in the development of social and environmental protection systems, the support of different green economic models that can be applied to different ecologically oriented economic models.
VIII -The principle of efficiency requires that the pricing of goods and services takes into account social and environmental costs, the life cycle of the product, the relationship between the dynamics of production and consumption, and possible negative social and environmental impacts.
IX -The intergenerational principle of the green economy is embedded in long-term decision-making, attracting financial support for the development of different models of sustainable development and supporting the production of green goods and services.
Source: (Chmyr, Zakharkevich, 2013)
World society therefore needs a new concept of development that will enable it to solve social, financial, fuel, climate and other problems in a comprehensive manner and to achieve not only quantitative growth but also significant qualitative and real improvements.
System of Green Economy Directions
The transition to a green economic model can be ensured by annual investments of 2% of world GDP (about 1.3 trillion USD) over the period 2012-2050. The green investment scenario will, within 5-10 years, provide higher annual growth rates than investments in conventional development. The "greening" of the economy is a way to eradicate poverty ("Green" economy, old.livingplanet.org.ua). There is a direct link between poverty eradication and the rational management of natural resources and ecosystems, as the poor benefit directly from the increase in natural capital. Table 3 presents the system of directions of the green economy.
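A quick check of the orders of magnitude (assuming a world GDP baseline of roughly 65 trillion USD, approximately its level when this scenario was formulated; the baseline is not stated in the source):

\[
2\% \times 65\ \text{trillion USD} \approx 1.3\ \text{trillion USD per year},
\]

consistent with the figure quoted above.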
The main strategic document of state policy in the environmental sphere is the Basic Principles (Strategy) of the State Environmental Policy of Ukraine for the period up to 2030, approved by the Law of Ukraine of 28 February 2019 (Gula, 2021). The Strategy defines the goal of national environmental policy as achieving a satisfactory state of the environment by introducing an ecosystem approach to all spheres of social and economic development of Ukraine, in order to ensure the constitutional right of every citizen of Ukraine to a clean and safe environment, to introduce balanced nature management, and to preserve and restore natural ecosystems (Gula, 2021).
Increasing the use of energy from renewable sources and alternative fuels is considered an important part of Ukraine's strategy to preserve traditional fuel and energy resources and reduce the associated negative impact on the environment (Gula, 2021). Figure 3 provides information on biodiesel production in EU countries in 2021-2022.
For Ukraine, which faces many problems and imbalances in the development of its labour market, unlocking the social potential of the green economy is of great practical interest, as it will contribute to the study of ways to overcome unemployment through promising innovative mechanisms that have so far received too little attention (Perga, 2012). The creation of green jobs depends, of course, on the prospects for the development of the green market.
Agriculture, which is highly exposed to climate change, can now safely be described as the largest consumer and polluter of water and a cause of deforestation and biodiversity loss. At the same time, this sector is a potentially great source of additional (including green) jobs and a way of solving related social problems, especially considering the 1.3 billion people currently employed in it and the new trends of global development (Perga, 2012; Tomashuk, 2017). The introduction of economic incentives affects the greening of investments and the production of goods and services in general.

Table 3. System of directions of the "green" economy
I - Implementation of renewable energy sources: according to environmentalists, more than half of all fossil fuels should remain unexploited to avoid significant climate change on the planet.
II - Improvement of the waste management system: at present, 1-3 kg of solid household waste is produced per capita per day in the developed countries of the world, and in the USA alone this amount increases by 10% every 10 years. In Ukraine, the total area of landfills is more than 42 thousand km².
III - Improvement of the water management system: today, one in six people on the planet suffers from a lack of fresh drinking water.
IV - Development of clean (sustainable, green) transport: UNEP is working on ways to reduce the demand for transport, especially private cars, without compromising overall mobility.
V - Organic farming in agriculture: presupposes the refusal to use herbicides, pesticides, toxic chemicals and fertilisers of artificial origin. Organic products do not contain genetically modified organisms, are processed without the use of e-ingredients and are stored away from contact with artificial substances.
VI - Energy efficiency in the housing and utilities sector: residential complexes with inefficient thermal insulation structures and heat supply systems cause significant heat losses.
VII - Protecting and effectively managing ecosystems: all the diversity of human activities in the biosphere leads to changes whose direction and extent are commonly referred to as an environmental crisis.
Source: (Borovyk, Yelagin, Polyakova, 2020)
Green Growth as a Means of Stimulating Economic Development
The growth of the world economy under the existing model of production can create a situation in which the damage caused by pollution and the destruction of the natural environment begins to exceed the income received. Overcoming this situation is possible only through the introduction of innovations for the reproduction of natural resources (Makovoz, Perederii, 2018).
Green growth is a means of stimulating economic growth and development that ensures that natural assets continue to provide the resources and environmental services on which people depend for their well-being. For this, it should serve as a catalyst for investment and innovation, which will be the basis for sustainable growth and lead to the emergence of new economic opportunities ("Green" economy, minpriroda.by; Mazur, Tomashuk, 2019). Figure 4 shows the dynamics of financing by developed countries to improve climate conditions in developing countries.
In order for a green economy to emerge, a number of conditions need to be in place. These include legislation that supports this type of economy, increased public investment and private enterprise in the so-called green sectors, and public administration policies that promote the green economy. Many banks promote investment in environmental projects, known as green banks (Green economy, uk.economy-pedia.com).
In addition, the instruments of the new tax system include the provision of tax incentives for companies engaged in economic activities related to the processing of waste up to the disposal stage; the use of secondary raw materials for further production; the use of environmentally friendly packaging materials and their reuse; the introduction of low-waste, resource- and energy-saving technologies; investments in the development of green production and "green" products; the introduction of the latest technologies; the restoration of landscape areas to their original state, etc.
Thus, the introduction of environmental taxes in Ukraine will also stimulate the introduction of the green economy in the context of the transition to sustainable development (Solosych, Podlisnyuk, 2013). The main budgetary source of financing environmental protection in Ukraine today is the environmental tax, which is a compulsory payment based on the actual volume of various emissions, discharges and waste dumping into the environment. To develop a green economy in Ukraine, it is necessary to create a reliable source of funding for environmental protection activities and to stimulate the introduction of resource-saving and environmentally friendly technologies into production, for which the current system of environmental taxation needs to be improved by introducing the following:
- preferential taxation for companies that reduce emissions, discharges and waste disposal;
- taxation of environmentally hazardous products that cause damage to the environment (e.g., fertilisers, electrical and electronic equipment);
- taxation of the harmful effects of physical and biological factors on the environment and humans (noise, electromagnetic radiation);
- fines for environmental violations;
- gradual approximation of environmental tax rates in Ukraine to European rates, in line with the requirements of the EU-Ukraine Association Agreement;
- enshrining in the Budget Code of Ukraine the requirement that environmental tax revenues be used exclusively for environmental purposes (Gula, 2021).
Today, Ukrainian enterprises are looking for new ways to achieve ecological cleanliness of production, which opens new ways of applying the green economy, allowing them not only to preserve the environment but also to improve their competitiveness on foreign and domestic markets through the modernisation of the production process (Makovoz, Perederii, 2018).

Figure 3. Biodiesel production in the EU, million tonnes, 2021-2022. Source: (Stepanenko, 2022)
Reconciling economic growth and environmental safety, finding tools for decarbonisation and choosing effective measures for adapting to climate change: these are the tasks facing scientists, experts, public organisations, business representatives and politicians who are not indifferent to the modern problems facing humanity. In this context, the Ukrainian Wind Energy Association (UWEA) has submitted its proposals for the National Renewable Energy Action Plan (NREAP) 2030, which include the following measures:
- stimulating electricity production from renewable energy sources on a market basis;
- stimulating the production of electricity from RES on a market basis without government support (corporate PPA);
- including in the Energy Strategy of Ukraine until 2050 a scenario for the construction of manoeuvrable capacities and energy storage facilities;
- laying the groundwork for the development of hybrid renewable energy plants in Ukraine;
- promoting the development of offshore wind energy and hydrogen technologies (Green economy: how to strike a balance, 2021).
In the international practice of ecological economics, one competitive strategy is ecological tax reform, which makes it possible to create jobs and preserve the environment at the same time, as it shifts the tax base from income and the wage fund to the consumption of natural resources and harmful emissions (Solosych, Podlisnyuk, 2013). It raises wages in line with economic development, stimulates investment in innovative technologies, and reduces the cost of natural resources by reducing the material intensity of production and energy consumption, thereby significantly reducing harmful emissions.
Economic growth has contributed to the revision of approaches aimed at harmonising the principles of income maximisation with the intensive use of resources and the principles of rational management of nature, taking into account the needs of future generations (Solosych, Podlisnyuk, 2013). In fact, the green economy should be considered as a way to sustainable development. Figure 5 shows the main processes in the system of nature management that influence the formation of sustainable development. The transition to a green economy must take into account the opportunities and conditions of each country, its level of development, political situation and public preferences. Over the past decade, the concept of a green economy has become a strategic priority for many governments and intergovernmental organisations. Around 100 countries have embarked on the path towards an inclusive green economy and green growth strategies. By transforming their economies into engines of sustainability, these countries are poised to address the major challenges of the XXI century, from urbanisation and resource scarcity to climate change and economic instability.
Advantages of Applying the Green Economy
The implementation of the green economic model involves increasing the role of the state and intergovernmental bodies in economic regulation, creating conditions for business development based on new environmental standards and cleaner production technologies, and greening industrial sectors of the economy ("Green" economy, old.livingplanet.org.ua). Ukraine, as part of the European family, cannot remain aloof from these processes. Therefore, today it is necessary to look ahead and propose concrete solutions so that Ukraine becomes an integral part of these global changes. Table 4 provides an analysis of the benefits of applying the green economy for the state and business entities.
In the conditions of Ukraine's resource and energy dependence, created by a situation in which ecologically harmful technologies are used in outdated, energy-inefficient enterprises, it is the gradual replacement of the "brown" industrial economy by a new "green" one, as a strategic development priority, that offers a chance to ensure the national security of the state in the coming decades ("Green" economy, old.livingplanet.org.ua). Table 5 presents indicators of the green economy progress index.
It is important to emphasise that the concept of the green economy does not replace the concept of sustainable development, but develops it and is a means of putting it into practice. "Green" development can only be ensured if environmental and economic policies are integrated in such a way that social progress, economic growth and improvement in the quality of life of the population take place against a background of reduced threats to the surrounding natural environment.

Table 4. Analysis of the benefits of the green economy for the state and for business entities
For the state:
- reducing the economy's dependence on external supplies of raw materials and price fluctuations;
- implementation of energy- and resource-saving technologies;
- access to new markets through clean technology;
- attracting foreign direct investment;
- improving the environmental situation and preserving natural resources;
- creating a positive "green" image.
For business entities:
- reducing the specific costs of resource consumption;
- modernisation of production;
- generating additional income through the use of available resources (e.g., through waste recycling);
- improving product quality and competitiveness;
- the possibility of receiving state benefits;
- diversifying the asset structure and reducing strategic risks associated with traditional production.
Source: (Makovoz, Perederii, 2018)
Findings
Green growth and an improved version of sustainable development are interpreted by some scientists as a new economic engine capable of solving a number of acute problems of modern socioeconomic development, including the threat of environmental degradation, the depletion of reserves of basic natural resources, an increase in the frequency of weather anomalies and climate change. In this context, special attention should be paid to solving the most acute social problems, namely: poverty, lack of food for a large part of the world population, low level of medical care, deepening of social stratification, lack of access to basic infrastructure services (Goncharenko, Parkhomenko, Luchyn, 2020). Fig. 6 shows a diagram of the directions of economic influence on the formation of balanced relations between man and the environment.
The environment and its quality are increasingly seen as a value in themselves, a consumer good, and society should be prepared to pay for this by recognising the priority of environmental interests. In this context, it is important for rational nature management to consider the environment not so much as a resource base but as natural capital, part of a single whole: capital. Table 6 shows the production stages of the transition to a green economy.
Thus, sustainable development strategies of countries with developed economies are complex long-term documents that, firstly, provide for the adoption of balanced decisions regarding the rates of economic growth, social development and the maintenance of ecosystems in a satisfactory state; secondly, they pay considerable attention to technological development and innovation as important factors in increasing the efficiency of resource use and environmental protection; thirdly, they are based on the involvement of business and the public in their discussion and adoption (Ivashura, 2022).
The goal of the green economy is to ensure the cooperation of the three main directions of development, namely social welfare, economic growth and environmental protection. This means that the green economy requires the well-established and effective functioning of the three main factors of sustainable development: social, economic and environmental. It should be emphasised that the efforts of all stakeholders in the transition to a green economy must combine the need for short- and medium-term profit with long-term systemic transformation. Economic growth is key to providing the resources and ensuring the social protection and equity needed to finance the actions and build the capacity to transition to a green economy.
Conclusions
The green economy is the basis for implementing the concept of sustainable development based on more efficient use of resources and energy, reduction of CO2 emissions, reduction of harmful effects on the environment and development of a socially integrated society. The concept of a green economy provides a solution to various crises: financial and economic, food, climate, fuel, water and biodiversity. The strategy for the transition of the European Community to a green economy by 2050 indicates that such an economy should be identified with a system that unites ecosystems (natural capital), the economy (physical capital) and society.
Progress in supporting the stable mutual development of the economy and the natural environment is primarily linked to government policy. It is proposed to develop a global innovative breakthrough strategy to move towards a future integrated society based on the partnership of civilisations. The main task of this strategy is to ensure the harmonious co-evolution of society and nature by creating economic levers for resource conservation and large-scale shifts in the economic structure and production technologies (a significant reduction of the share of nature-consuming industries and sectors alongside an increase in the share of nature-reproducing industries, widespread use of recycling processes for natural substances, and development of more advanced disposal technologies).

Table 6. Production stages of the transition to a green economy
Product design:
- environmental friendliness of the product, its suitability for repair, reuse and recycling;
- analysis of materials and identification of alternatives;
- eco-packaging: packaging design and strategies to comply with the basic principles of packaging;
- life cycle assessment and the so-called environmental product declaration.
Production:
- determining the environmental impact of enterprises and processes (screening);
- optimising energy, water and resource consumption;
- waste minimisation and management, utilisation of by-products;
- transparent supply chains;
- promoting the company's corporate principles of environmental friendliness and sustainable development.
Business operations:
- business screening in the green economy;
- production impact on the environment;
- transitioning from a product to a product as a service (turning data into profit, ongoing customer relationships, after-sales service);
- setting up reverse logistics and return schemes;
- creating road maps for CO2 emissions;
- introducing a system of consumer responsibility (extended consumer responsibility); this method of economic regulation obliges manufacturers and importers to recycle products at the end of their life cycle.
Source: (Ivashura, 2022)
"year": 2023,
"sha1": "bbfa818848ef7caaf28a07d29bd176c65a628670",
"oa_license": "CCBYNC",
"oa_url": "http://baltijapublishing.lv/index.php/issue/article/download/2145/2150",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "724ade97310e6fe18113c6c088458d7af769972a",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
Patient Engagement and Multidisciplinary Involvement Has an Impact on Clinical Guideline Development and Decisions: A Comparison of Two Irritable Bowel Syndrome Guidelines Using the Same Data
Abstract. Background and Aim: The value of a multidisciplinary group and patient engagement in guideline groups is uncertain. We compared the recommendations of two guidelines that used the same data during the same time frame but with different participants to obtain a "real world" perspective on the influence of the composition of guideline groups. Methods: The Canadian Association of Gastroenterology (CAG) and the American College of Gastroenterology (ACG) recently updated their clinical practice guidelines for the management of irritable bowel syndrome (IBS). Both the CAG and the ACG used the same methodology and methodologist and were presented with the same data for interpretation. The ACG group consisted predominantly of academic gastroenterologists, while the CAG group also included general practitioners, a psychiatrist, a psychologist and a patient representative. The CAG group members were also asked which components of the group were valuable. Results: There were 14 statements with the same or similar recommendations. There were 10 statements in the CAG guideline not addressed by the ACG guideline and five recommendations where the opposite was the case. There was one statement that both groups addressed but on which each group came to different conclusions. CAG members were in 100% agreement that involving a patient and having a multidisciplinary team was valuable and may have played a role in these differing interpretations of the same data in an IBS guideline. Conclusions: There has been little uptake of patient involvement and multidisciplinary teams in guideline groups. However, this study provides a unique example of added benefit through broader group representation.
Irritable bowel syndrome (IBS) is a common constellation of symptoms that is experienced by 10% to 20% of the population (1). Irritable bowel syndrome imposes a significant burden on the health care system and reduces quality of life (2). In Canada, over 5 million people live with IBS, and $8 billion is attributed to lost productivity each year (3). Given the cost of IBS to the patient and the health service, it is important that clinicians are given evidence-based guidance on the optimum management of IBS. Clinical practice guidelines are informed by a systematic review of evidence and follow a transparent methodology to translate the best evidence into clinical practice for advancing patient outcomes. There is debate on the ideal composition of a guideline group (4). There is some evidence that a multidisciplinary group (5) and the involvement of patient representatives (6) lead to different questions being asked and to some changes in the conclusions reached, but there is a paucity of "real world" data on this topic (7).
Recently, both the Canadian Association of Gastroenterology (CAG) (8) and the American College of Gastroenterology (ACG) (9) updated their clinical practice guidelines for the management of IBS. The ACG Task Force consisted of predominantly academic and clinical gastroenterologists, while the CAG Consensus Group included a broader group of health care professionals including academic gastroenterologists, general practitioners, psychiatrists, and psychologists. Furthermore, the CAG Consensus Group also included a patient representative from the IMAGINE (Inflammation, Microbiome, and Alimentation: Gastro-Intestinal and Neuropsychiatric Effects) Network, a CIHR Strategy for Patient Oriented Research (SPOR) chronic disease network. We compared the two guidelines to see if there was any variation in the statements being evaluated and the conclusions of guidelines that might be attributable to the different compositions of the ACG and CAG groups.
METHODS
Both the CAG and the ACG struck a core group to decide on the scope of the guideline and the statements that would be evaluated. The ACG core group consisted of academic gastroenterologists, while the CAG core group also had input from psychiatry and the patient representative. Author PM was the lead methodologist for both guidelines and conducted a series of systematic reviews on interventions in IBS (10-12) using Cochrane methodology (13) to support the guidelines. Both the ACG and the CAG guideline groups were presented with identical data from systematic reviews to inform the statements they were evaluating. Both groups used a modified Delphi approach (14) to reach consensus, and the in-person meetings for each group were held within one week of one another. The lead methodologist (PM) was common to both guideline groups, but all other members were different, and PM kept the discussion from each group strictly confidential so that neither the ACG nor the CAG group knew the content of the other group's discussion. Both the ACG and CAG groups, however, were aware that PM was the methodologist for both guidelines and that the same data were being presented. Both groups used the GRADE approach (15) to evaluate the quality of the evidence and the strength of recommendations. In both cases, the quality of evidence was graded from high to very low by two independent methodologists. There were 12 participants in the CAG guideline group, including six gastroenterologists, two general practitioners, one psychologist, one psychiatrist, one methodologist and one patient representative. The patient representative was a full participant throughout the clinical guideline development process, contributing to all stages of the guideline according to standard recommendations (16), including development of statements, the prevoting process, the group discussion and voting. The ACG guideline group comprised 10 academic and community gastroenterologists.
The scope of the statements was compared between the two guidelines. We also compared the direction of voting and the strength of recommendation made by each group when statements were similar. Finally, we evaluated how the CAG group viewed the multidisciplinary nature of the group and the input of the patient representative using a five-point adjectival scale, and the authors were asked open-ended questions to obtain qualitative feedback on the usefulness of the multidisciplinary team and the value of group decision-making.
RESULTS
Overall, there were 14 statements that were similar between the two groups, with similar or the same recommendations according to GRADE criteria (15) (Table 1). There was one statement in the ACG guideline (9) giving a conditional recommendation for general psychological therapies in IBS. The CAG guideline split this into four separate statements, giving conditional recommendations for cognitive behavioural therapy techniques and hypnotherapy but making no recommendation (either for or against) for psychodynamic psychotherapy or relaxation therapy (8). There were five statements in the ACG guideline not addressed by the CAG guideline and 10 recommendations where the opposite was the case (Table 2). One statement, on rifaximin in nonconstipated IBS, had differing conclusions between the two groups despite both receiving the same data. The ACG guideline gave a conditional recommendation for the use of rifaximin in nonconstipated IBS (9), whereas the CAG guideline did not make a recommendation either for or against rifaximin in this patient population (8).
The survey conducted with the CAG Consensus Group members on the guideline development process was completed by 10 of 11 participants (PM did not vote as the methodologist common to both guideline groups). All participants agreed or strongly agreed that the multidisciplinary nature of the group helped form their opinion (Figure 1). The descriptive words used to respond to the open-ended questions are outlined in Box 1. The most commonly used words, spontaneously provided by at least 25% of the participants, were 'professionals', 'experience', 'primary care', 'helpful', 'patient preference' and 'patient-centric'. According to one of the CAG Consensus Group members, "the broad spectrum of clinical practice settings and expertise made the consensus meeting a very fruitful one. We all see different aspects of the same disease, therefore having a broader consensus group and patient involved in the process generated a much fuller and richer picture." The patient representative commented that involving the lived experience perspective and wide-ranging clinical viewpoints can "be assuring to end users and adds greater credibility to the guideline document."
DISCUSSION
We present one of the few examples in the literature of a "real world" setting in which two guideline groups were given the same data and the same methodology for evaluating the quality of the data and reaching consensus. Both groups contained an optimum number of participants (18). Uptake of patient involvement in guideline groups has nevertheless remained limited in the five years since these recommendations were published. A greater proportion of organizations ask for patient organization input once the guideline is written, but this is an inadequate approach to ensuring that the patient voice is heard, because patients can then only react to what is written in front of them rather than be involved in the process from conception to completion (16). This is particularly important because one component of the GRADE approach (15) is to capture patients' values and preferences in making recommendations. This clearly cannot be done adequately if a patient is not involved in the process. The reasons for this slow uptake are multifactorial but may relate to the lack of evidence for the value of this approach. A systematic review of patient engagement in guidelines (19) identified 71 articles that reported on the value of patient involvement in the guideline process. Most were qualitative and focused on how engaging patients improves the incorporation of values and preferences in the guideline (19). None had a comparison guideline where patient engagement was not used and the same data were evaluated at approximately the same time. Our study is a rare example of this type of comparison and suggests that patient engagement can add value. This is particularly true of the rifaximin statement, where no recommendation was made in the CAG guideline but the ACG gave a conditional recommendation for this drug in nonconstipated IBS. Both groups struggled with the modest efficacy of the drug, the expense of the product and concerns around antibiotic resistance. The patient (MM) involved in the CAG guideline pointed out in the in-person meeting that patients view antibiotics as being used to treat infection, and the clinician would be signalling to their IBS patient that they had an 'infectious disease', which does not fit with the paradigm that they have for their problem. This concern resonated with the general practitioners in the group and some of the other panel members as well. It is also supported by research suggesting that patients need a consistent paradigm for why they have their IBS symptoms; if there is no clear message on what is causing their disease and no treatments that fit within that paradigm, then patient satisfaction is lower and outcomes may be poorer (20). These issues were not raised in the ACG group, and this may be one reason why a different conclusion was reached. The challenge in interpreting this qualitative information is that this is just one example, and it is difficult to obtain quantitative comparative information from a large number of guidelines to provide robust evidence. Nevertheless, our observation is consistent with another study that also found that patient engagement in a guideline on amyloid positron emission tomography in dementia led to different recommendations when evaluating the same data (21). This study also found that most recommendations were very similar, but there was value in adding the patient perspective. On occasion, this did lead to different recommendations, which were seen as more patient-centric. This is the only other example in the literature that we could identify, but it is reassuring that it gives a similar message.
Differences in the two IBS guidelines do not relate only to patient involvement. Research suggests that whether a guideline group should be homogeneous or heterogeneous depends on the disease being studied (7). When the disease being evaluated is highly specialized with little impact on other disciplines, then a guideline group consisting of specialists (and a patient representative) may well be appropriate. However, IBS is a disease primarily managed in primary care and with a strong overlap of psychological comorbidity, where psychological interventions have been shown to be effective. It is therefore logical to include these groups in the guideline panel; they were also important in shaping the guideline. As stated previously, the involvement of primary care was an important factor in the decision regarding rifaximin. The CAG guideline also divided psychological interventions into four categories, while the ACG guideline considered them all as one intervention. The CAG position on psychological interventions is therefore more nuanced, emphasizing that the data are more robust for some psychological interventions than for others.

Figure 1. Opinion of the CAG guideline consensus group on the value of each discipline involved in the IBS guideline.
Box 1. Words used to describe the experience of working in the CAG guideline group.
There are some limitations to this work. Any research in this area faces the challenge that it is difficult to know with certainty what the 'correct' recommendations should be. This work shows that, although many recommendations are the same, there are some important differences. It does not say which guideline is the best reflection of the evidence or which should be followed by clinicians. This is only one disease and two groups, and different results may be obtained for different diseases and in different settings. Finally, not all differences between the guidelines are attributable to differences in group composition. For example, the Canadian guideline evaluated prucalopride, which is not available in the United States, while the opposite is true of alosetron. Thus, local availability of interventions is likely to explain some of the differences observed.
In conclusion, we present a "real world" example of the same data being evaluated by two guideline groups: one consisting of a single specialty and the other being multidisciplinary with patient involvement. The differences between the two guidelines emphasize that the composition of the guideline group needs to be chosen carefully: it should represent all types of health care professionals working in the disease of interest and should include a patient with the disease so that this valuable perspective can be captured.
"year": 2019,
"sha1": "9f8f716b30ccecf2433aade7927893d8c8464a39",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jcag/article-pdf/2/1/30/28514176/gwy072.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9f8f716b30ccecf2433aade7927893d8c8464a39",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Estimation of Bladder Pressure and Volume from the Neural Activity of Lumbosacral Dorsal Horn Using a Long-Short-Term-Memory-based Deep Neural Network
In this paper, we propose a deep recurrent neural network (DRNN) for the estimation of bladder pressure and volume from neural activity recorded directly from spinal cord gray matter neurons. The model was based on the Long Short-Term Memory (LSTM) architecture, which has emerged as a general and effective model for capturing long-term temporal dependencies with good generalization performance. In this way, training the network with the data recorded from one rat could lead to estimating the bladder status of different rats. We combined modeling of spiking and local field potential (LFP) activity into a unified framework to estimate the pressure and volume of the bladder. Moreover, we investigated the effect of two-electrode recording on decoding performance. The results show that the two-electrode recordings significantly improve the decoding performance compared to single-electrode recordings. The proposed framework could estimate bladder pressure and volume with an average normalized root-mean-squared (NRMS) error of 14.9 ± 4.8% and 19.7 ± 4.7% and a correlation coefficient (CC) of 83.2 ± 3.2% and 74.2 ± 6.2%, respectively. This work represents a promising approach to the real-time estimation of bladder pressure/volume in the closed-loop control of bladder function using functional electrical stimulation.
Continuous stimulation also carries the risk of implanted electrode corrosion and may lead to habituation of the spinal reflexes. Moreover, continuous stimulation requires high power consumption compared with conditional stimulation.
To overcome the limitations of continuous stimulation, conditional stimulation can be employed, in which stimulation is applied when an impending bladder contraction is about to occur or the bladder requires voiding. To inhibit an impending bladder contraction by conditional stimulation, it is necessary to detect the onset of nascent hyperreflexive contractions.
Several methods for detecting bladder contractions and triggering stimulation have been reported. One common approach is to monitor intravesical pressure using artificial sensors 17-23. However, there are a number of challenges facing artificial sensors that need to be addressed before clinical trials can be considered. These challenges are primarily related to invasiveness, artifacts from patient movement, abdominal pressure changes 24, material biocompatibility 25, and the decreasing reliability of related instruments over time 26,27. Several alternative approaches have also been proposed to measure either the intravesical pressure or volume, using electromyography (EMG) of the external urethral sphincter 6,28-30, electroneurography (ENG) of the pudendal nerve trunk using cuff electrodes 5,31, pudendal nerve activity using penetrating intrafascicular electrodes 32, and ENG of the pelvic nerve 33 and sacral nerve roots 33,34. However, several issues, such as a high degree of invasiveness, motion artifacts caused by organ movement, and low signal-to-noise ratios of electroneurograms, limit the chronic monitoring of intravesical pressure/volume. Moreover, ENG recordings from either the pudendal nerve 5,31, sacral nerve roots 33,34 or pelvic nerves 33 using cuff electrodes provide a whole-nerve action potential composite of several units in the nerve that also carries information from other sources.
Multi-channel recordings from dorsal root ganglia (DRG) 27,35 or dissected intradural dorsal rootlets 36 have also been considered as a potential alternative for pressure/volume estimation. The DRG is a cluster of sensory neurons that conveys information from the skin, muscles, and joints of the limbs and trunk to the spinal cord. Several studies have also demonstrated the feasibility of deriving limb-state estimates from the firing rates of primary afferent neurons recorded in the DRG 37-45. An important challenge facing DRG recordings is long-term chronic recording from DRGs. Recently, it has been demonstrated that the longest continuously tracked bladder afferent lasted for 23 days 46.
Recently, neural activity recorded from the dorsal horn of the spinal cord has been used to demonstrate the feasibility of estimating the pressure 26 and volume 47 of the bladder. For this purpose, a microelectrode array was implanted in the spinal dorsal horn of the L6 to the S1 segment. It was demonstrated that the firing rates of the sorted neurons can be used to estimate the pressure 26 and volume 47 during the filling of the bladder.
One important issue in decoding continuous action from neural recordings is the decoding model itself. The most common approaches used in the past to monitor the bladder have been linear regression 26,27,36,47 and Kalman filtering 27. Despite the existence of efficient linear decoding models, linear regression only captures the linear relationship between the mean of the dependent variable and the independent variables, while neural signals originate from highly nonlinear and multidimensional systems. Moreover, linear approaches are relatively sensitive to outliers. To improve the performance of the decoding model, some studies used nonlinear regression methods, such as nonlinear autoregressive moving average (NARMA) models 27 and support vector regression (SVR) 26. In 27, it was demonstrated that the NARMA model provided the most accurate bladder pressure estimate (based on the normalized root-mean-squared error) compared to Kalman filtering and linear regression. Moreover, it was shown that the decoding accuracy achieved by SVR was significantly greater than that achieved by the linear filter 26. However, the nonlinear regression used in 26 is a static structure in which there is no feedback, and the outputs are calculated directly from the inputs (i.e., firing rates). A static decoding structure is not an efficient way to represent the nonlinear dynamic relationships between neural signals and bladder states. Moreover, it has difficulty capturing global system behaviors in the identification of nonlinear systems that combine long- and short-term dynamics.
In this paper, we propose a deep recurrent neural network (DRNN) for the estimation of bladder pressure and volume from extracellular neural activity recorded directly from spinal cord gray matter neurons. The model is based on the Long Short-Term Memory (LSTM) architecture, which has emerged as a general and effective model for capturing long-term temporal dependencies 48. The LSTM is a state-based RNN whose output depends not only on the current input but also explicitly takes into account long-term information from the past.
In addition to spiking activity, it has been demonstrated that the local field potential (LFP) activity recorded from the lumbosacral dorsal horn also carries information about intravesical pressure 26 . Neural information processing at each level of observation (spiking, LFP, electroencephalogram) entails the interaction of both evoked (input-driven or stimulus-related) and induced (background processes or intrinsic dynamics) factors. In this paper, we combine modeling of spiking activity and LFP activity into a unified framework to estimate the pressure and volume of the bladder.
Afferents innervating the bladder project to the lumbosacral (L6-S1) segments of the rat spinal cord. The question arises as to which segment provides more information about the bladder. To answer this question, we investigated single-electrode recordings from the L6 and S1 segments using mutual information and decoding performance.
Another important issue in the closed-loop control of the bladder and in developing an efficient therapeutic approach for individuals with spinal cord injury or neurological disorders is the simultaneous measurement of both the pressure and the volume of the bladder. To date, researchers have focused on estimating either the intravesical pressure 26,27 or volume 36,47 from neural signals. In this paper, we also investigate the simultaneous estimation of both the pressure and volume of the bladder from single-electrode lumbosacral spinal recording using the proposed decoding model.
Materials and Methods
Animal preparation and surgery. The experiments were conducted on fifteen intact adult male Wistar rats (180-400 g). All surgical procedures and experimental protocols involving animal models described in this paper were approved by the Institutional Animal Care and Ethics Committee of the Iran Neural Technology Research Center, Iran University of Science and Technology. All protocols and methods were performed in accordance with the recommendations and relevant guidelines for the care and use of laboratory animals. The animals were anesthetized with urethane (1.5 g/kg, intraperitoneally) and remained sedated with ketamine (3 mg/kg). The fur around the T13 to L4 vertebrae was removed. Then, the skin was incised along the vertebrae, and the muscle tissue was removed until the vertebrae were visible. A laminectomy was performed on the lumbar vertebrae (L1 to L2) to expose the L6 to S1 spinal cord segments. To expose the bladder, a ventral midline incision was made. A sterile polyethylene (PE) 50 tube (0.5 mm ID and 0.9 mm OD) (AD Instruments Ltd, Australia) was inserted into the bladder wall through the dome and secured with a purse-string suture. The PE tube was attached to a stopcock connected to a pressure transducer (NovaTrans transducer system, MX860, Smiths Medical ASD, Inc.) and a syringe pump (SN-50C6, Sino Medical-Device Technology Co., Ltd., China) for recording the intravesical bladder pressure and infusing saline into the bladder, respectively. The pressure signals were amplified (900×) and sampled at 50 Hz (the maximum sampling rate of the pressure sensor is 1 kHz). A digital scale (GF-300, A&D Instruments Ltd., UK) was positioned under the rat to measure the voided volume. The volume signals were sampled at 5 Hz. The experimental setup is illustrated in Fig. 1. The bladder was filled with saline solution at three different rates (i.e., 6, 8, and 10 ml/h), and the neural signals and the pressure and volume of the bladder were recorded simultaneously during filling. Each trial started with the saline infusion and continued until several voiding contractions and bladder leakages had occurred. The duration of each trial was between 450 and 850 seconds. At the end of each trial, the bladder was emptied with a syringe, and the rat was allowed to rest for 20 minutes.

Figure 1. Experimental setup. (a) A catheter was inserted into the bladder wall through the dome and secured with a purse-string suture; one end of the catheter was connected to the pressure transducer and the other to the infusion pump. A digital scale was positioned under the rat to measure the voided volume. One single-needle electrode was inserted into the L6 and another into the S1 spinal cord segment for neural activity recording. Bladder filling was performed with 0.9% saline at room temperature using an infusion pump at different rates (6, 8, and 10 ml/h). (b) A typical histological image from the S1 spinal cord segment of rat 8. Staining was performed with hematoxylin and eosin (H&E).
Electrode implantation and neural signal acquisition.
Recording electrodes were made from epoxylite-insulated tungsten with a 75 μm shank diameter, a 5°-10° tapered tip with 120 μm of exposed length, and 300-500 kΩ resistance (FHC Inc., Bowdoin, ME, USA). The microelectrodes were mounted on a micromanipulator (SM-15, Narishige Group Product, Japan) that could control the three-dimensional positioning of the electrode with a minimum graduation of 10 μm. The electrodes were positioned at locations within the L6 and S1 dorsal horn, approximately 0-1 mm lateral from the midline and between 100 and 200 μm in depth (Fig. 1(b)). To determine the best electrode position within the dorsal horn, the electrode was advanced vertically through the spinal cord dorsoventrally. Then, the electrode was withdrawn and moved 100 µm mediolaterally and/or rostrocaudally to an adjacent location, while the correlation of the neural activity with the bladder pressure was visually inspected on the monitor of the recording system. The positions that produced the highest correlations were selected. The recording process was performed within a custom-made Faraday cage to increase the signal-to-noise ratio. Neural signals were amplified (programmable gain, 1000×) and recorded at a 20 kHz sampling rate using a data acquisition system (USB-ME64 system, Multi Channel Systems, Reutlingen, Germany).
Preprocessing and feature extraction. The recorded neural signals were bandpass filtered between 300 and 3000 Hz using fourth-order elliptic low-pass and high-pass filters. Spikes were detected using the Wave_Clus program, which is publicly available online (http://www2.le.ac.uk/centres/csn/research-2/spike-sorting). The threshold for spike detection was set to four times the standard deviation of the noise estimated from the filtered signal, and a spike event was identified each time the signal exceeded this threshold. The feature set was formed from the continuous firing rate (FR) and the LFP. All data analyses and decoding models were implemented as customized algorithms written in MATLAB.
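As a concrete illustration of this preprocessing step, the sketch below shows a minimal Python/SciPy stand-in (the original pipeline used MATLAB and Wave_Clus). The single fourth-order bandpass filter in place of the cascaded low-pass/high-pass pair, the ripple/attenuation settings, and the median-based noise estimate are assumptions made for illustration; the latter is the common Wave_Clus convention rather than something stated in the text.

```python
import numpy as np
from scipy.signal import ellip, filtfilt

def detect_spikes(raw, fs=20000.0, thresh_mult=4.0):
    """Bandpass filter (300-3000 Hz) and threshold-crossing spike detection."""
    # Fourth-order elliptic bandpass; 0.1 dB ripple and 40 dB attenuation
    # are assumed values, not taken from the paper.
    b, a = ellip(4, 0.1, 40, [300 / (fs / 2), 3000 / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, raw)

    # Robust estimate of the noise SD from the filtered trace (assumed
    # convention: median(|x|)/0.6745, as in Wave_Clus).
    sigma = np.median(np.abs(filtered)) / 0.6745
    thresh = thresh_mult * sigma  # 4x the noise SD, as in the text

    # Spike events: upward crossings of the threshold by |signal|.
    above = np.abs(filtered) > thresh
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return filtered, crossings / fs  # filtered trace, spike times (s)
```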
Continuous FR. The continuous FR was computed by taking a Gaussian window with a duration of 50 ms and counting the number of spikes within the window at each time point. The FR was then smoothed using a causal moving average filter with a span of 20 samples.
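A sketch of this firing-rate computation is shown below, assuming the spike train is first binned at 1 kHz; the binning rate and the exact mapping from the 50 ms window duration to a Gaussian sigma are not stated in the text and are assumptions here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def continuous_firing_rate(spike_times, duration_s, bin_rate=1000.0,
                           gauss_ms=50.0, ma_span=20):
    """Gaussian-smoothed spike counts followed by a causal moving average."""
    n_bins = int(np.ceil(duration_s * bin_rate))
    counts = np.bincount((np.asarray(spike_times) * bin_rate).astype(int),
                         minlength=n_bins)[:n_bins].astype(float)

    # Gaussian smoothing; taking sigma as ~window/6 is an assumption.
    sigma_bins = (gauss_ms / 1000.0) * bin_rate / 6.0
    fr = gaussian_filter1d(counts, sigma=sigma_bins) * bin_rate  # spikes/s

    # Causal moving average over the previous 20 samples, as in the text.
    kernel = np.ones(ma_span) / ma_span
    return np.convolve(fr, kernel)[:n_bins]
```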
Local field potential. The power of the LFP subband components constituted the second feature set. The LFP signals were downsampled to 2000 Hz. A short-time Fourier transform (STFT) with a 500 ms Hanning window and 80% overlap was used to generate power spectra over time. The average power in five frequency bands (1-2.9 Hz, 3-8.8 Hz, 9.1-26.7 Hz, 27.7-81 Hz, 83.9-256 Hz) was used as the neural signal feature for decoding the pressure and the volume of the bladder. The average power in each band was smoothed using a moving average filter with a span of 10 samples. The frequency bands of the LFP signals were selected according to 26.
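Before turning to the decoder itself, the LFP band-power features described above can be sketched as follows, again as an illustrative Python/SciPy stand-in for the original MATLAB implementation.

```python
import numpy as np
from scipy.signal import stft

# Frequency bands from the text (Hz).
BANDS = [(1.0, 2.9), (3.0, 8.8), (9.1, 26.7), (27.7, 81.0), (83.9, 256.0)]

def lfp_band_power(lfp, fs=2000.0, win_s=0.5, overlap=0.8, ma_span=10):
    """Average STFT power in five LFP bands, smoothed by a moving average."""
    nperseg = int(win_s * fs)  # 500 ms Hanning window, as in the text
    f, t, Z = stft(lfp, fs=fs, window="hann", nperseg=nperseg,
                   noverlap=int(overlap * nperseg))  # 80% overlap
    power = np.abs(Z) ** 2

    kernel = np.ones(ma_span) / ma_span  # span-10 moving average
    feats = [np.convolve(power[(f >= lo) & (f <= hi)].mean(axis=0),
                         kernel)[:t.size]
             for lo, hi in BANDS]
    return t, np.vstack(feats)  # frame times, (5, n_frames) features
```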
Decoding model. The architecture of the proposed decoding model is shown in Fig. 2. The model is based on an autoencoder (AE) 49 and a recurrent neural network with LSTM 50,51. The autoencoder is a deep-architecture model used to learn low-dimensional features from a high-dimensional input vector 49. LSTM is a type of RNN that allows the learning of long-term temporal dependencies. In principle, an RNN involves dynamic elements in the form of feedback loops, which feed the lagged outputs of the neurons back to the inputs of the neurons 52. The feedback loops enable the network to perform dynamic mapping and learn tasks that extend over time. However, conventional RNNs encounter challenges in modeling long-term dependencies, such as the vanishing and exploding gradient problems 48,53. Recurrent neural networks with LSTM have been shown to be an efficient way to overcome these problems. By taking advantage of an AE to extract effective features and an RNN with LSTM to take into account long-term information from the past, a deep learning structure is proposed for estimating the pressure and volume of the bladder.

Autoencoder. The basic architecture of an AE network consists of three layers: one input layer, one hidden layer with fewer dimensions than the input layer, and an output layer with the same dimensions as the input layer 49. The AE network transforms the high-dimensional input data into a feature space of lower dimension, while the decoder network can reconstruct the input data from the feature space. The AE attempts to make the network output close to the input by minimizing the reconstruction error. After training the AE, the output layer, together with its connections from the hidden layer, is removed, and the hidden layer is taken as the extracted feature. The numbers of neurons in the input layer and hidden layer are 41 and 30, respectively.

Deep LSTM. A schematic diagram of the RNN with LSTM is shown in Fig. 3. Each LSTM layer consists of three gates (input, output, forget), a block input, a block output, a memory cell, and peephole connections. The output of the LSTM layer is recurrently connected back to the input layer and to all of the gates of the LSTM layer. The input gate determines the amount of new information entered into the block, the forget gate determines when to forget content regarding the internal state, and the output gate controls the amount of information going to the output. The input, forget, and output gates have a sigmoid activation function, σ, and a hyperbolic tangent, h, is usually selected as the activation function of the block input and output. The dynamic behavior of the network can then be described by the following equations:

$$z(t) = h\left(W_{xz}\,x(t) + W_{yz}\,y(t-1) + b_z\right)$$
$$i(t) = \sigma\left(W_{xi}\,x(t) + W_{yi}\,y(t-1) + w_{ci}\odot c(t-1) + b_i\right)$$
$$f(t) = \sigma\left(W_{xf}\,x(t) + W_{yf}\,y(t-1) + w_{cf}\odot c(t-1) + b_f\right)$$
$$c(t) = f(t)\odot c(t-1) + i(t)\odot z(t)$$
$$o(t) = \sigma\left(W_{xo}\,x(t) + W_{yo}\,y(t-1) + w_{co}\odot c(t) + b_o\right)$$
$$y(t) = o(t)\odot h\left(c(t)\right)$$

where x is the input vector to the LSTM; W_xi, W_xz, W_xf, and W_xo are the connection weights from the input to the input gate, block input, forget gate, and output gate, respectively; W_yi, W_yz, W_yf, and W_yo are the recurrent connection weights from the output to the input gate, block input, forget gate, and output gate, respectively; and w_ci, w_cf, and w_co are the peephole connection weights from the memory cell to the input gate, forget gate, and output gate, respectively. The vectors b_i, b_f, b_z, and b_o are the bias weights determined during training, and the symbol ⊙ represents pointwise multiplication. Three LSTM layers are used, and each LSTM layer contains 10 units. The number of units and the number of LSTM layers were selected heuristically to achieve the best performance.
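To make the gate equations concrete, here is a minimal NumPy sketch of a single peephole-LSTM time step using the notation above. It is illustrative only; in the paper, three such layers of 10 units each are stacked, and the AE output (30 features) forms the input to the first layer.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def lstm_step(x, y_prev, c_prev, p):
    """One peephole-LSTM time step. `p` holds the parameters: W_x* of shape
    (n_units, n_in), W_y* of shape (n_units, n_units), and w_c*, b_* of
    shape (n_units,)."""
    z = np.tanh(p["W_xz"] @ x + p["W_yz"] @ y_prev + p["b_z"])   # block input
    i = sigmoid(p["W_xi"] @ x + p["W_yi"] @ y_prev
                + p["w_ci"] * c_prev + p["b_i"])                 # input gate
    f = sigmoid(p["W_xf"] @ x + p["W_yf"] @ y_prev
                + p["w_cf"] * c_prev + p["b_f"])                 # forget gate
    c = f * c_prev + i * z                                       # memory cell
    o = sigmoid(p["W_xo"] @ x + p["W_yo"] @ y_prev
                + p["w_co"] * c + p["b_o"])                      # output gate
    y = o * np.tanh(c)                                           # block output
    return y, c
```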
The AE and the LSTM-based decoder were trained using the Levenberg-Marquardt method and stochastic gradient descent, respectively. The Levenberg-Marquardt method is a compromise between gradient descent, which has guaranteed convergence given a proper choice of step size, and Newton's method, which converges quickly near a local or global minimum.
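For readers who prefer a framework-level view, the sketch below shows an assumed PyTorch analogue of the overall decoder: a 41-to-30 encoder standing in for the trained AE, three stacked LSTM layers of 10 units, and a linear readout for pressure and volume. Note that PyTorch's built-in nn.LSTM omits the peephole connections of the formulation above, so this is an approximation, and the SGD settings shown are placeholders rather than the paper's values.

```python
import torch
import torch.nn as nn

class BladderDecoder(nn.Module):
    def __init__(self, n_in=41, n_enc=30, n_hidden=10, n_out=2):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_enc)  # stands in for the trained AE
        self.lstm = nn.LSTM(n_enc, n_hidden, num_layers=3, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)  # pressure and volume

    def forward(self, x):            # x: (batch, time, 41)
        h = torch.sigmoid(self.encoder(x))
        y, _ = self.lstm(h)
        return self.readout(y)       # (batch, time, 2)

model = BladderDecoder()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)  # placeholder lr
criterion = nn.MSELoss()
```

A training step would then minimize criterion(model(features), targets) over the recorded trials.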
Data analysis methods.
To assess the performance of the proposed method in estimating bladder pressure and volume, the normalized root-mean-square (NRMS) error of the estimate and the correlation coefficient (CC) were used, defined as

$$\mathrm{NRMS} = \frac{\sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(y_d(t)-y(t)\right)^2}}{\max(y_d)-\min(y_d)} \times 100\%$$

$$\mathrm{CC} = \frac{\sum_{t=1}^{N}\left(y_d(t)-\bar{y}_d\right)\left(y(t)-\bar{y}\right)}{\sqrt{\sum_{t=1}^{N}\left(y_d(t)-\bar{y}_d\right)^2\,\sum_{t=1}^{N}\left(y(t)-\bar{y}\right)^2}}$$

where y_d(t) is the desired pressure or volume, y(t) is the estimated value of the desired pressure or volume, N is the number of samples, and the bars denote time averages. Moreover, to evaluate the information content of the neural signals recorded from the different vertebral segments, the mutual information (MI) was calculated between the neural signal feature of interest (firing rate or LFP band power spectrum) of each vertebral segment and the bladder parameter of interest (pressure or volume). Mutual information was calculated based on an adaptive partitioning of the observation space 54. Two-way analysis of variance (ANOVA) was used to assess the statistical significance of the results and differences, and a confidence level of 95% (p < 0.05) was chosen to indicate a significant difference.

Three approaches were used for evaluating the proposed method: inside-trial, trial-by-trial, and rat-by-rat. For the inside-trial evaluation, the deep LSTM was trained and tested with the data obtained during each trial of the experiment; 70% of the data were used for training and the remaining 30% for testing. For the trial-by-trial evaluation, training was performed on the data obtained during one trial of the experiment, and testing was performed on the remaining trials. For the rat-by-rat evaluation, the deep LSTM was trained with one trial of the experiments on one rat and tested with the data obtained from each rat. In total, 53 trials were conducted on 15 rats, and the duration of each trial was between 450 and 850 seconds.
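A minimal sketch of these two metrics is given below; range normalization for the NRMS is assumed, consistent with the percentage values reported.

```python
import numpy as np

def nrms_error(y_d, y):
    """RMS error normalized by the range of the desired signal, in percent."""
    y_d, y = np.asarray(y_d, float), np.asarray(y, float)
    rms = np.sqrt(np.mean((y_d - y) ** 2))
    return 100.0 * rms / (np.max(y_d) - np.min(y_d))

def corr_coef(y_d, y):
    """Pearson correlation coefficient, in percent."""
    y_d, y = np.asarray(y_d, float), np.asarray(y, float)
    return 100.0 * np.corrcoef(y_d, y)[0, 1]
```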
Results

Figure 4 shows the recorded infused volume, residual volume, voided volume, bladder pressure, bandpass-filtered neural signal (0.3-3 kHz), continuous firing rate, intraspinal LFP activity and the corresponding time-frequency analysis during a typical trial from the experiment (rat 5, trial 3, infusion rate = 10 ml/h). It is observed that both the bladder pressure and the firing rate increase monotonically with increasing bladder volume. When the bladder pressure suddenly increases, bladder leakage occurs.
The time-frequency analysis of the intraspinal LFP shows that spectral patterns could clearly reflect the status of the bladder. Figure 5 shows the typical frequency bands of the intraspinal LFP signal during bladder filling. The power of the defined frequency bands suddenly increases as the bladder pressure increases, and the local maxima of the LFP frequency bands show the instant when bladder leakage occurs.
To assess the information content in the frequency bands of the intraspinal LFP signals and in the continuous firing rate with respect to bladder pressure and volume, the averages of the CC and mutual information were computed over 40 trials of experiments on 10 rats (Fig. 6).
The results show that the firing rate provides more information about bladder pressure and volume than the LFP frequency bands (p < 0.05, based on both CC and MI; the comparison was performed between the mutual information and CC obtained from the frequency bands and those obtained from the FR). It can be seen that the neural signals recorded from S1 contain significantly more information about bladder pressure than those recorded from L6 (p = 0.013 for CC and p = 0.01 for MI), but there was no significant difference in the information between S1 and L6 with respect to volume (p = 0.0836 for MI and p = 0.429 for CC). Moreover, the results of the statistical tests show that the high-frequency components of the LFP signals provide a higher correlation and more information than the low-frequency components (p < 0.05) with respect to both bladder pressure and volume. Comparisons were performed between the low frequency bands (FB1, FB2, FB3, and FB4) and the high frequency band (FB5) using the CC as well as the MI.
Single-electrode decoding. In this section, the performance in decoding bladder pressure/volume using signals recorded from the L6 and S1 segments is presented. Figure 7 shows a typical decoding of the pressure/volume during one trial of the experiment (rat 5, trial 3, infusion rate = 10 ml/h) using FR, LFP band power spectra, and a combination of FR and LFP band power spectra. Excellent decoding performance is obtained using the combination of FR and LFP band power spectra: the decoding errors are 7.7% and 15.4% for pressure and volume, respectively. Table 1 summarizes the average inside-trial decoding performance obtained using the neural signals recorded from the L6 and S1 segments for 40 trials of experiments on 10 rats. For both pressure and volume estimation, the decoding performance obtained using the combined FR and LFP data was significantly better than that obtained using FR or LFP alone (p < 0.001 for both NRMS and CC). Moreover, the CC shows that the S1 signals provided significantly better decoding performance than those of the L6 (p < 0.0013).

Figure 8(a) shows the result of a typical decoding of pressure and volume using the trial-by-trial validation approach. Training was performed with rat 5, trial 1, with an infusion rate of 8 ml/h, and testing was performed on the same rat, trial 3, with an infusion rate of 10 ml/h. The NRMS decoding errors for pressure decoding were 19.6%, 12.6%, and 7.6% using LFP, FR, and both LFP and FR, respectively, and 30.8%, 17.3%, and 10.9%, respectively, for volume decoding. The CC values for pressure decoding were 87.2%, 90.1%, and 97.8% using LFP, FR, and combined LFP and FR, respectively, and 95.2%, 84.0%, and 97.5%, respectively, for volume decoding. The results indicate that the combination of LFP and FR provided better performance than LFP or FR alone in terms of both the NRMS and CC indices. The decoding performance for single-electrode recording from the S1 segments of 10 rats using trial-by-trial validation is summarized in Table 2. The average pressure decoding errors were 26.5 ± 5.0%, 22.3 ± 5.2%, and 17.8 ± 4.8% using LFP, FR, and the combination of LFP and FR, respectively, and 30.0 ± 6.3%, 25.4 ± 3.5%, and 21.3 ± 4.5%, respectively, for volume decoding. Additionally, the average CCs for pressure decoding were 66.0 ± 8.0%, 74.2 ± 6.5%, and 80.3 ± 7.3% using LFP, FR, and the combination of LFP and FR, respectively, and 60.9 ± 13.1%, 65.3 ± 7.8%, and 70.5 ± 9.9%, respectively, for volume decoding. The results for pressure decoding show that the combined LFP and FR provided significantly better performance than LFP or FR alone (p < 0.001 for NRMS and CC). The decoding error obtained for volume using the combined LFP and FR was significantly less than that obtained using only LFP or FR (p = 0.0011 for NRMS). Moreover, the CC shows that the decoding performance obtained using the combined LFP and FR was better than that obtained using only LFP or FR, but the difference was not statistically significant (p = 0.1169).

Figure 8(b) shows an example of decoding using rat-by-rat validation. Training was performed on rat 7, trial 1, with an infusion rate of 8 ml/h, and testing was performed on rat 5, trial 3, with an infusion rate of 10 ml/h.
The NRMS decoding errors obtained for pressure were 22.4%, 21.7%, and 14.1% using LFP, FR, and both LFP and FR, respectively, and 20.0%, 25.1%, and 18.9%, respectively, for volume decoding. The CC values for pressure decoding were 93.2%, 60.0%, and 96.3% using LFP, FR, and combined LFP and FR, respectively, and 97.3%, 83.7%, and 96.8%, respectively, for volume decoding. The results show that the decoding model is robust with respect to the training rat. The results of the decoding performance using the rat-by-rat validation approach are summarized in Table 3. In this approach, the data obtained during one trial from one rat were used to train the model, and the data obtained during all trials from all rats were used to test the model. The results indicate that using the combined FR and LFP provides significantly better performance than using only FR or LFP for both pressure and volume (p = 0.0033 for pressure and p = 0.0015 for volume using NRMS). The results also show that the choice of training rat has no significant effect on the decoding performance (p = 0.3140 for pressure and p = 0.2596 for volume using NRMS).

Two-electrode decoding. In two-electrode decoding, two electrodes were implanted into the S1 and L6 spinal cord segments, and the compound signals recorded from both segments were used for decoding. Figure 9 shows a typical pressure and volume decoding using the signal recorded from S1, L6, or the combined signal from S1 and L6 during two-electrode recording. Figure 9(a) illustrates the inside-trial decoding results. The NRMS values obtained for pressure decoding were 15.4%, 17.1%, and 6.3% using S1, L6, and the combined signals from S1 and L6, respectively, and 20.0%, 28.1%, and 14.2%, respectively, for volume decoding. The CC values for pressure decoding were 79.6%, 73.0%, and 96.8% using S1, L6, and the combined signals from L6 and S1, respectively, and 78.9%, 50.4%, and 83.9%, respectively, for volume decoding. Figure 9(b) shows the trial-by-trial decoding results. The NRMS pressure decoding errors obtained were 12.5%, 16.7%, and 6.6% using S1, L6, and the combined signals recorded from S1 and L6, respectively. Compared with single-electrode decoding, two-electrode decoding yielded improvements of 47.2% and 60.5% relative to using S1 and L6 alone, respectively. The NRMS volume decoding errors obtained were 15.3%, 16.2%, and 8.7% using the signals recorded from S1, L6, and the combined signal from S1 and L6, respectively. Using two-electrode decoding, an improvement of 43.1% and 46.3% was achieved when compared with single-electrode decoding using the signals from S1 and L6, respectively. Table 4 summarizes the decoding performance for five rats during single-electrode and two-electrode decoding using the combined LFP and FR signals; in this analysis, the trial-by-trial validation approach was used.

Figure 7. An example of decoding the bladder pressure (a) and volume (b) using an inside-trial validation approach (training with 70% of trial 3 and testing with the remaining 30% of the same trial) on rat 5 (infusion rate = 10 ml/h). Decoding was performed using the signal recorded from S1 for different extracted feature sets: firing rate (upper trace), FB5 subband power spectrum of the LFP (middle trace), and the combined FR and FB5 subband power spectrum (lower trace). The NRMS and CC values given refer to the test period.

Table 1. Average and standard deviation of decoding performance across 10 rats using the inside-trial validation approach with single-electrode recording from the S1 and L6 segments.
Figure 8. An example of decoding the bladder pressure (left) and volume (right) using trial-by-trial (a) and rat-by-rat (b) validation approaches. Decoding was performed using different extracted feature sets: firing rate (upper trace), FB5 subband power spectrum of the LFP (middle trace), and the combination of FR and FB5 subband power spectrum (lower trace). In this example, during the trial-by-trial validation approach, the network was trained with trial 1 from rat 5 (infusion rate = 8 ml/h) and tested with trial 3 from the same rat (infusion rate = 10 ml/h). During the rat-by-rat validation approach, the network was trained with rat 7 (trial 1, infusion rate = 8 ml/h) and tested with rat 5 (trial 3, infusion rate = 10 ml/h).

The results show that the average NRMS decoding errors for pressure and volume were 17.6 ± 2.7% and 22.7 ± 3.4%, respectively, when using the S1 signal; 19.8 ± 2.4% and 27.2 ± 4.6%, respectively, when using the L6 signal; and 14.9 ± 4.5% and 19.7 ± 4.7%, respectively, when the combined signal recorded from S1 and L6 was used for decoding. On average, two-electrode decoding achieved an improvement of 21.5% and 20.4% for pressure and volume estimation, respectively, over single-electrode decoding. Almost the same results were observed for the CC. The results of the statistical test show that two-electrode decoding achieved a lower NRMS error than single-electrode decoding (p = 0.0318 for pressure and p = 0.0082 for volume).
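The improvement percentages quoted in this section are relative reductions in NRMS decoding error,

$$\text{improvement}=\frac{\mathrm{NRMS}_{\text{single}}-\mathrm{NRMS}_{\text{two}}}{\mathrm{NRMS}_{\text{single}}}\times 100\%.$$

For the trial shown in Figure 9(b), for pressure: (12.5 − 6.6)/12.5 = 47.2% against S1 and (16.7 − 6.6)/16.7 ≈ 60.5% against L6; for volume: (15.3 − 8.7)/15.3 ≈ 43.1% and (16.2 − 8.7)/16.2 ≈ 46.3%, matching the values reported above.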
Signal stability. Figure 10 shows the decoding performance for each rat using the trial-by-trial validation approach with single-electrode recording. The standard deviation for each rat is very small, which indicates that the decoding is stable across trials.
Discussion
In this paper, we propose a deep recurrent neural network (DRNN) for the estimation of bladder pressure and volume from neural activity recorded directly from spinal cord gray matter neurons. The proposed DRNN consists of an autoencoder and a deep LSTM-based decoder. The autoencoder has a deep structure that extracts deep features from the input data in an unsupervised manner 49. The decoding model is based on the LSTM, which has emerged as a general and effective model for capturing long-term temporal dependencies 48. The LSTM consists of memory cells whose inputs and outputs are controlled by nonlinear gates; its output depends not only on the current information but also explicitly takes into account long-term information from the past.
Since LFP reflects a spatial averaging of synaptic activity in the vicinity of the electrode and carries information that is distinct from spikes, in this paper, we combined modeling of both spiking activity and LFP activity in a unified framework to estimate the pressure and volume of the bladder. The results show that using a combination of FR and LFP can improve decoding performance with respect to using FR or LFP alone.
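A minimal sketch of the decoder side of such a model is given below, assuming PyTorch and an input feature vector that concatenates the firing rate with five LFP band powers. The layer sizes are illustrative, and the paper's autoencoder front end is omitted; this is not the study's exact architecture.

```python
import torch
import torch.nn as nn

class BladderDecoder(nn.Module):
    """Sketch of a deep LSTM decoder mapping neural features to bladder state.
    Hyperparameters here are illustrative placeholders."""
    def __init__(self, n_features: int = 6, hidden: int = 64, layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # simultaneous [pressure, volume]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)              # x: (batch, time, features)
        return self.head(out)              # (batch, time, 2)

# Toy usage: firing rate + five LFP band powers as the input feature vector.
model = BladderDecoder(n_features=6)
features = torch.randn(1, 500, 6)          # one trial, 500 time bins
pressure_volume = model(features)          # (1, 500, 2)
```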
An important issue in estimating bladder status using neural signals recorded from the spinal cord is the selection of the spinal segment that provides the most information about the bladder. Afferents innervating the bladder project to the lumbosacral (L6-S1) segments of the rat spinal cord. To address this issue, single-electrode recordings from the L6 and S1 segments were compared using mutual information and decoding performance. The results show that the S1 segment provides more information about bladder status than the L6 segment.
Another issue investigated in this study was the effect of two-electrode recording on decoding performance. For this purpose, a combined signal recorded simultaneously from S1 and L6 was used for decoding, and the results were compared with single-electrode decoding. The results show that the combined signal from both segments provided significantly better decoding performance than single-electrode decoding. Using an array of electrodes implanted along the L6-S1 segments may improve the decoding performance further. Our results show that average CCs of 78.8 ± 2.3% and 83.2 ± 3.2% were obtained for pressure estimation using single-electrode and two-electrode decoding, respectively. In contrast, the results of pressure estimation presented in ref. 26 show that increasing the number of recording channels did not increase the decoding performance: the CC reported for one channel was approximately 75.0 ± 15.3%, while for 8 channels it was 75.5 ± 14.6%, using SVR with 30 filter taps. This lack of enhancement may be related to the non-optimal design of the filter and improper positioning of the electrode tips. Their results show that a CC of approximately 81.8 ± 10.0% was obtained for bladder pressure estimation using SVR with 4 channels and 100 filter taps. Regardless of decoding performance, to obtain chronic stability from spinal recording, it will be necessary to use a multi-electrode array rather than a one- or two-electrode recording system.
An important contribution of the current study is the simultaneous estimation of both the pressure and the volume of the bladder. Simultaneously estimating both quantities is a critical issue in the closed-loop control of the bladder using functional electrical stimulation. The results illustrate that, using the proposed deep neural network, simultaneous estimation of bladder volume and pressure is possible, which had not been investigated in previous studies. Another issue investigated in this paper is the estimation of pressure/volume for different infusion rates. While the individual is conscious, the bladder filling rate may vary according to different conditions, and different infusion rates lead to different rise times of the bladder pressure (slower or faster). Hence, the decoding model should be generalizable to new conditions. The results of this study show that the proposed deep model is able to estimate the pressure/volume of the bladder for different infusion rates.

Table 2. Average (mean ± standard deviation) of the decoding performance across 10 rats with single-electrode recording from S1 using the trial-by-trial validation approach.

Table 3. Average (mean ± standard deviation) of the decoding performance across 10 rats with single-electrode recording from S1 using the rat-by-rat validation approach.

Table 4. Average (mean ± standard deviation) of NRMS and CC from two-electrode recording using the trial-by-trial validation approach.

Figure 9. An example of decoding the bladder pressure (left) and volume (right) during two-electrode recording using inside-trial (a) and trial-by-trial (b) validation approaches (rat 13, trial 2, infusion rate = 6 ml/h). Decoding was performed using both FR and the FB5 subband power spectrum extracted from neural signals recorded from S1 (upper trace), L6 (middle trace), and both S1 and L6 (lower trace). During the trial-by-trial validation approach, the network was trained with trial 1 (rat 13, infusion rate = 6 ml/h) and tested with trial 2 on the same rat (infusion rate = 6 ml/h).
The proposed method has provided very promising results in animal models, but a number of challenges will have to be addressed to establish its clinical use. One limitation of the current study is the invasiveness of the measurements, which presents challenges in translating the intraspinal recordings from animal studies to clinical practice. Implanting microelectrodes into the spinal cord can carry the risk of infection and spinal compression. Another challenge facing intraspinal recording is the fabrication and implantation of intraspinal microelectrodes. However, recent progress in microelectrode arrays 55-62 and knowledge gained from implantable electrodes in animals can be applied to the development of a chronically implanted prosthetic device. Nevertheless, before clinical use of spinal recording, further tests will be required to assess the long-term mechanical stability of chronically implanted electrodes and the longevity of the chronic recordings. In addition, accurate placement of electrodes in the spinal cord is challenging, even using intraoperative guidance for electrode implantation.
Another critical issue in estimating bladder status using neural signals recorded from the spinal cord is interference from non-bladder signals. It is expected that the decoding model learns to extract the relevant information about bladder pressure/volume from the neural activity and to filter out non-bladder signals. However, we did not verify this filtering in this study; this analysis could be considered in future work. Additionally, further studies should be performed to test the performance of the proposed decoding model in real time for closed-loop control of the bladder.
Data availability
All datasets generated during the current study are available from the corresponding author upon request. | 2019-12-02T16:01:27.750Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "0554a14922c27f764da3de72c9a7695374d0818b",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-54144-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0554a14922c27f764da3de72c9a7695374d0818b",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
53562923 | pes2o/s2orc | v3-fos-license | Identification of Novel Antisense-Mediated Exon Skipping Targets in DYSF for Therapeutic Treatment of Dysferlinopathy
Dysferlinopathy is a progressive myopathy caused by mutations in the dysferlin (DYSF) gene. Dysferlin protein plays a major role in plasma-membrane resealing. Some patients with DYSF deletion mutations exhibit mild symptoms, suggesting some regions of DYSF can be removed without significantly impacting protein function. Antisense-mediated exon-skipping therapy uses synthetic molecules called antisense oligonucleotides to modulate splicing, allowing exons harboring or near genetic mutations to be removed and the open reading frame corrected. Previous studies have focused on DYSF exon 32 skipping as a potential therapeutic approach, based on the association of a mild phenotype with the in-frame deletion of exon 32. To date, no other DYSF exon-skipping targets have been identified, and the relationship between DYSF exon deletion pattern and protein function remains largely uncharacterized. In this study, we utilized a membrane-wounding assay to evaluate the ability of plasmid constructs carrying mutant DYSF, as well as antisense oligonucleotides, to rescue membrane resealing in patient cells. We report that multi-exon skipping of DYSF exons 26–27 and 28–29 rescues plasma-membrane resealing. Successful translation of these findings into the development of clinical antisense drugs would establish new therapeutic approaches that would be applicable to ∼5%–7% (exons 26–27 skipping) and ∼8% (exons 28–29 skipping) of dysferlinopathy patients worldwide.
INTRODUCTION
The dysferlinopathies are a heterogeneous group of recessive myopathies caused by mutations in the dysferlin (DYSF) gene. 1,2 Characterized by progressive muscle weakness that typically begins during the second decade of life, the dysferlinopathies can be clinically divided into at least three types: Miyoshi myopathy (MM), limb-girdle muscular dystrophy type 2B (LGMD2B), and distal myopathy with anterior tibial onset (DMAT). [1][2][3] The dysferlinopathies are clinically distinguished based on the initial pattern of muscle weakness, originating in either the proximal (shoulder and pelvic girdle; LGMD2B) or distal musculature (gastrocnemius and soleus; MM). The disease can also present initially and often advances to include both the proximal and distal muscles. [4][5][6] While most patients experience a gradual decline over decades, several atypical phenotypes have been reported, including rapid loss of ambulation in less than 5 years. 6 Variable age of onset has also been reported, ranging from 10 to 73 years. 4,6,7 The DYSF gene codes for dysferlin protein, which is a large ($240 kDa) type II transmembrane protein containing seven lipidand protein-binding C2 domains, multiple Dysf and Fer domains, and a C-terminal transmembrane domain [8][9][10][11][12][13][14][15][16] (Figure 1). Dysferlin is ubiquitously expressed but is most abundant in skeletal and cardiac muscle. 15,17 Dysferlin is predominantly localized to the plasma membrane but is also observed in cytoplasmic vesicles and is associated with the t-tubule network. 15,[17][18][19][20] Dysferlin protein plays an essential role in plasma membrane repair, and loss of dysferlin results in compromised membrane resealing and deterioration of muscle fibers. 15,21,22 There is currently no cure for dysferlinopathy. Existing disease management consists mainly of physical therapy, orthopaedic surgery, and use of mechanical and respiratory aids. 5,23 A promising therapeutic strategy that has gained traction in recent years, especially with respect to neuromuscular disorders, is antisense-mediated exon skipping. Exon skipping utilizes synthetic nucleic acids called antisense oligonucleotides (AOs) to modulate pre-mRNA splicing and can be used to remove mutation-carrying exons and flanking exons to maintain an open reading frame. 24,25 In 2016, the United States Food and Drug Administration (FDA) approved eteplirsen (Exondys 51), the first-ever antisense drug for the treatment of Duchenne muscular dystrophy (DMD). 26 Eteplirsen facilitates in-frame skipping of dystrophin exon 51 and utilizes the phosphorodiamidate morpholino oligomer (PMO) antisense chemistry. The rationale for exon skipping in dysferlinopathy is supported by reports of mildly affected patients harboring in-frame DYSF deletion mutations, such as exon 32. 27 Even large deletions of DYSF have been associated with a milder disease course. 28 Exon-skipping progress in dysferlinopathy has been limited to the in vitro skipping of DYSF exon 32 in human dysferlinopathy patient cells. 29 To date, no other therapeutic exon-skipping targets have been identified for dysferlinopathy.
In this study, we undertook to identify novel exon-skipping targets in DYSF by characterizing the relationship between exon-deletion pattern and plasma-membrane-resealing ability. We created GFP-conjugated DYSF plasmid constructs lacking certain exons and transfected these into dysferlinopathy patient cells, then subjected the cells to a membrane-wounding assay. We demonstrate that DYSF exon combinations 19-21, 20-21, and 46-48 are required for proper plasma-membrane resealing, while exons 26-27 and 28-29 are dispensable for membrane resealing. After identifying which exons were not required for proper membrane resealing, we designed PMOs using a predictive software algorithm and tested the ability of PMO cocktails to facilitate multi-exon skipping and restore membrane-resealing ability in patient cells. We show that a PMO cocktail targeting DYSF exons 28-29 restores membrane resealing in patient cells. Our results provide a foundation for future in vivo investigations and possible clinical translation of DYSF exons 26-27 and 28-29 skipping approaches for treating dysferlinopathy.
RESULTS

Mutation Analysis and Exon-Skipping Approach for Dysferlinopathy Cell Lines
We first assessed the mutation patterns present in two dysferlinopathy patient cell lines. The first cell line, MM-Pt1, was originally collected from a patient with MM. Sanger sequencing confirmed a homozygous missense mutation in DYSF exon 46 (Figure 2A). Removing the mutated exon while maintaining the reading frame would involve multi-exon skipping of DYSF exons 46-48 (Figure 2A). While exons 46 and 47 do not code for any known protein domain, exon 48 codes for a portion of the C2F domain (Figure 2A). The second cell line, MM-Pt2, was also collected from a patient with MM. Sanger sequencing confirmed that this cell line is trans-heterozygous, carrying a frameshift-causing point mutation in exon 21 on one allele and a missense mutation in exon 28 on the other allele (Figure 2B). Removing the mutant exons while maintaining the reading frame would involve double exon skipping of exons 20 and 21, or of exons 28 and 29 (Figure 2B). Exons 20 and 21 do not code for any known protein domain of dysferlin, while exons 28 and 29 code for portions of the Dysf-N and Dysf-C domains (Figure 2B).
Determining the Feasibility of Exon-Skipping Approaches in Dysferlin through Transfection of Exon-Deleted Plasmid Constructs
Before proceeding with the application of an exon-skipping approach for our patient cell lines, we undertook to identify which DYSF exons could be removed without negatively impacting protein function. To do this, we mimicked the effect of exon skipping by using site-directed mutagenesis to generate DYSF plasmid constructs lacking exons corresponding to an exon-skipping approach amenable to each cell line. We generated GFP-fused DYSF plasmid constructs lacking DYSF exons 20-21, 28-29, and 46-48, all of which are in-frame deletions. We also generated an exon 19-21 deletion construct, as this pattern of exon skipping would also restore the reading frame in MM-Pt2 (Figure 2B). Additionally, we generated an exon 26-27 deletion construct, based on our later procurement of another dysferlinopathy patient cell line harboring a c.2875C > T missense mutation in exon 27; the deletion of DYSF exons 26-27 is in-frame (Figure 2B). As a control, we generated a GFP-only plasmid. To identify whether the exons deleted in the above constructs have an impact on protein function, we transfected the constructs into cell line MM-Pt1, which exhibited impaired plasma-membrane-resealing ability compared to healthy cells (Figure 3), and observed whether any of these plasmids could rescue membrane-resealing ability. To assess membrane-resealing ability, we incubated cells in FM4-64 dye and generated focal lesions in the plasma membranes of GFP-positive cells using a two-photon laser microscope, as previously described. 21 FM4-64 dye quickly fluoresces within the cytoplasm upon entering the cell through the lesion, and membrane resealing prevents entry of additional dye. We quantified the relative fluorescence values of FM dye within the cytoplasm over time. 30 Our results show that Δ26-27 and Δ28-29 were able to rescue membrane-resealing ability to a degree similar to that of healthy cells and cells transfected with the full-length DYSF plasmid, as measured by changes in relative fluorescence intensity over time (Figures 3A and 3B). In contrast, Δ19-21, Δ20-21, and Δ46-48 were not able to rescue membrane-resealing ability (Figures 3A and 3B).
We next considered that, while great care is taken to ensure that all plasma-membrane-wounding parameters are consistent between experiments (e.g., laser power, wavelength, number of iterations), there can still be some disparity in the degree of membrane damage following laser ablation. This might generate some small variation in quantification, in terms of the amount of fluorescent dye that infiltrates the cell. We therefore utilized an additional measure of membrane-resealing ability, which we termed "time to steady state," defining steady state as the point at which raw fluorescence values peak following laser wounding without any significant increase over time (Figure 3C). Time to steady state is calculated by subtracting the time prior to laser wounding from the time point at which fluorescence values peak. We observed that the full-length as well as the Δ26-27 and Δ28-29 constructs were able to rescue membrane resealing, as demonstrated by significantly shorter times to steady state when compared with the GFP-only control; the Δ19-21, Δ20-21, and Δ46-48 plasmid constructs were not able to rescue membrane resealing, as their times to steady state were not statistically different from the GFP-only control (Figure 3D). Taken together, these results show that DYSF exons 26-27 and 28-29 are dispensable for plasma-membrane resealing, suggesting that these exons are promising therapeutic targets for exon skipping, while exons 19-21, 20-21, and 46-48 are required for proper membrane resealing and are therefore not promising exon-skipping targets.
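A minimal sketch of the time-to-steady-state computation, assuming a sampled fluorescence trace with wounding at t = 25 s; reaching within 1% of the peak value is used here as a simple proxy for "peaks without significant further increase."

```python
import numpy as np

def time_to_steady_state(fluorescence, times, wound_time=25.0, tol=0.01):
    """Time from laser wounding until fluorescence first comes within `tol`
    of its peak value (a proxy for reaching steady state)."""
    peak = fluorescence.max()
    steady_idx = int(np.argmax(fluorescence >= (1 - tol) * peak))
    return float(times[steady_idx] - wound_time)

# Illustrative trace: 100 a.u. baseline, saturating dye influx after wounding.
times = np.arange(0.0, 300.0, 5.0)
trace = np.where(times < 25, 100.0,
                 100.0 + 50.0 * (1 - np.exp(-(times - 25) / 30.0)))
print(f"time to steady state ~ {time_to_steady_state(trace, times):.0f} s")
```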
In Silico and In Vitro Screening of PMOs for DYSF

Our exon-skipping predictive algorithm 31 projected the expected exon-skipping efficiencies for 191 PMO sequences of 30-mer length, covering all possible target sites for exons 28 and 29. From these, the top three PMO sequences with the highest predicted exon-skipping efficiency that also met synthesis criteria (e.g., GC content, melting temperature [Tm], self-complementarity) were selected and produced by Gene Tools (Philomath, Oregon) (Table 1). As measured by RT-PCR, all nine possible combinations of PMO cocktails were able to efficiently skip DYSF exons 28 and 29 at 10 µM each oligo in MM-Pt2 cells, suggesting that substantive multiple exon skipping can be achieved in DYSF through the use of PMO cocktails (Figure 4A). To examine rescue of dysferlin protein, we performed western blot analysis and found no detectable change in protein levels between the control and 10 µM each oligo treatment groups, while mutant cells also expressed detectable levels of dysferlin (Figure 4B).
Rescue of Plasma-Membrane Resealing in PMO Cocktail-Treated Dysferlinopathy Patient Cells
We selected PMO cocktail Ac11+32, which showed a high degree of DYSF exon 28-29 skipping as measured by RT-PCR (Figure 4A), for transfection into patient cells (MM-Pt2). We observed that Ac11+32 was able to rescue plasma-membrane resealing in dysferlinopathy patient fibroblasts, as measured by changes in relative fluorescence intensity over time (Figures 5A and 5B). Ac11+32 also significantly reduced the time to membrane resealing, as measured by time to steady state (Figure 5C). These results show that functional recovery from membrane wounding in vitro is possible through antisense-mediated skipping of DYSF exons 28-29 via a PMO cocktail, suggesting that these PMOs might be promising therapeutic agents for treating patients with mutations amenable to DYSF exon 28-29 skipping.
DISCUSSION
The first identification of a potential therapeutic exon-skipping target in dysferlinopathy was DYSF exon 32, described by Sinnreich et al. 27 in 2006. Since then, no new therapeutic exon-skipping targets have been described for DYSF. While some groups have attempted to identify redundant protein domains for the purpose of mini- or nano-dysferlin delivery via AAV vector, 32 the relationship between exon-deletion pattern and protein functionality has gone largely uncharacterized. In this report, we described not only the first-ever success of multiple exon skipping in DYSF, but also the identification of two novel potential therapeutic exon-skipping targets for treating dysferlinopathy: DYSF exons 26-27 and 28-29 skipping. Notably, DYSF exons 26-29 were present in previously used nano-dysferlin constructs but not in mini-dysferlin constructs, both of which were able to ameliorate pathology and membrane-resealing defects in dysferlinopathy mouse models. 32,33 Successful translation of these findings into the development of clinical AO drugs would establish new therapeutic approaches that, combined, would be applicable to approximately 5%-7% (exon 26-27 skipping) and 8% (exon 28-29 skipping) of dysferlinopathy patients worldwide, according to reported variant data in the Leiden Open Variation Database (LOVD) (http://www.dmd.nl/) and the Universal Mutation Database (UMD) (http://www.umd.be/DYSF/) (Figure 6).
This work further supports the use of dysferlinopathy patient fibroblasts in screening novel AO sequences for the identification of therapeutic exon-skipping drugs, as fibroblasts expressed readily detectable amounts of DYSF mRNA at levels sufficient for in vitro assessment of exon-skipping efficiencies. While our AO sequences were able to facilitate robust exon skipping in fibroblasts, it remains to be seen whether the same sequences will be comparably effective when transfected into muscle cells and in vivo. Our study also further validates the use of dysferlinopathy fibroblasts as an effective alternative to myoblasts or myotubes for the purpose of assessing plasma-membrane repair. 30,34 Here, healthy and dysferlinopathy patient fibroblasts displayed significant differences in their ability to reseal plasma membranes following two-photon laser wounding. Furthermore, this study highlights how the membrane-wounding assay can be used to validate the in vitro effectiveness of newly designed AOs at rescuing dysferlin protein function.
Since as little as 10% of wild-type protein levels has been associated with very mild pathology in dysferlinopathy, 27 and our cell line MM-Pt2 expresses dysferlin protein at somewhere between 25% and 50% of wild-type levels (Figure 4B), it is reasonable to assume that while the endogenous missense mutation may not affect protein stability, there is an appreciable effect on protein functionality, as evidenced here by the significant difference in membrane-resealing ability between healthy and patient cells. The observation that our PMO cocktail was able to rescue plasma-membrane resealing in patient cells despite no difference in protein expression between treated and non-treated cells is consistent with the idea that the proteins produced here via exon skipping are more functional than the non-treated proteins. Future studies aimed at characterizing the intracellular differences between native and exon-skipped proteins, such as their respective subcellular localization and interactions with other proteins, will help shed light on this issue. It would also be beneficial to test our PMO cocktail in a DYSF-null cell line with a mutation pattern amenable to exon 28-29 skipping in order to determine the degree of protein rescue following transfection.
The human dysferlin Dysf domains contain multiple positively charged and aromatic residues that exhibit a high degree of conservation in comparison to the ferlin homologs myoferlin and fer-1. 16 The secondary structure of the human inner Dysf domain consists of two antiparallel β strands, one at each terminus. 35 This secondary structure is conserved in the inner Dysf domain of myoferlin. 36 The solution structure of the myoferlin Dysf domain indicates the presence of stacked tryptophan/arginine (W/R) motifs, and mutations in this region are predicted to result in unfolding and protein degradation. 36 Like that of myoferlin, the dysferlin inner Dysf domain is also held together by stacking of arginine and aromatic side chains, and disruption of this region is likewise predicted to result in instability and unfolding. 35 Notably, the majority of residues of this flat domain region contribute to the surface, suggesting that the Dysf domain may be involved in protein-protein interactions. 35 While this region is evidently susceptible to deleterious mutations according to the LOVD database, 37 our demonstration that removal of exons 26-27 and 28-29 does not impact dysferlin protein function suggests that removal of portions of the Dysf domains is a possible therapeutic strategy. The hypothesis that this region of dysferlin may be superfluous is supported by the use of a mini-dysferlin that does not contain the Dysf domain but is able to rescue membrane resealing and dysferlin expression in a mouse model of dysferlinopathy. 33 While dysferlin is known to have several binding partners, 38-41 the most likely interacting partner at the Dysf domains is caveolin-3 (CAV-3). 13,42 Mutations in CAV-3 are implicated in several forms of muscular dystrophy. 43-45 CAV-3 is the only caveolin family member expressed in striated muscle and belongs to the dystrophin-glycoprotein complex, forming scaffolding along with t-tubules and caveolar membranes. 46-48 CAV-3 is expressed primarily in muscle, where it plays a role in regulating sarcolemma stability, vesicular trafficking, signal transduction, and regulation of nitric-oxide-dependent functions. 49-52 Importantly, there exist two putative CAV-3 binding sites within the Dysf domain region of dysferlin (Figure 1). 53 An additional CAV-3 binding site is located in a region that corresponds to exon 6. Our analyses were performed in fibroblasts, ruling out interactions with CAV-3 as influencing membrane resealing in this setting; however, it may be worth investigating possible Dysf domain-CAV-3 interactions in muscle cells.
It is also worth noting that patients with the exon 28 c.2997G > T missense mutation are reported to have significantly later disease onset (32.2 ± 4.8 years), and a patient homozygous for the c.2997G > T mutation did not use a cane at age 46. 54,55 Our results further underscore the therapeutic potential of a DYSF exon 28-29 skipping approach and suggest that whatever unknown function(s) the dysferlin Dysf domains serve, they are either not directly related to the process of plasma-membrane resealing or their function is redundant.
In conclusion, this study represents a significant achievement in the development of novel therapeutic strategies for treating dysferlinopathy. There are currently no ongoing or planned clinical trials involving exon skipping for dysferlinopathy, despite the successful translation of exon-skipping therapy into several clinical trials for other forms of muscular dystrophy, such as DMD (see ClinicalTrials.gov: NCT02958202, NCT02667483, NCT03375255, and NCT03167255). Our identification of two novel exon-skipping targets in vitro paves the way for future in vivo work that will help establish a foundation for the future clinical implementation of antisense-mediated exon skipping in dysferlinopathy.
MATERIALS AND METHODS

Plasmid Construct Design and Transfection
A plasmid construct containing N-terminally GFP-tagged DYSF (termed the full-length DYSF plasmid) was generously provided by Dr. Katherine Bushby (Newcastle University). All other DYSF plasmid constructs investigated in this study (Δ19-21, Δ20-21, Δ26-27, Δ28-29, Δ46-48, GFP-only) were generated using a Q5 Site-Directed Mutagenesis Kit (New England Biolabs, Ipswich, MA) according to the manufacturer's instructions (for primers, see Table 2). Plasmid constructs were transfected into cells using a Lipofectamine LTX with Plus Reagent kit (Thermo Fisher) according to the manufacturer's instructions. In brief, fibroblast cells were seeded into 35-mm collagen-coated glass-bottom dishes (MatTek, Ashland, MA) and cultured in 2 mL of growth media for 24 hr. After incubation, the media was changed to 2.3 mL of Opti-MEM (Thermo Fisher) containing 3 µL of Lipofectamine LTX Transfection Reagent, 3.5 µL of PLUS Reagent, and 3.5 µg of plasmid. Cells were then incubated for 24 hr, after which the media was changed to fresh growth media and cells were incubated an additional 24 hr before imaging and membrane-wounding analysis.
PMO Design and Transfection
PMO sequences targeting DYSF exons were designed using a predictive software tool developed by our group that estimates expected exon-skipping efficiency. 31 The top three PMOs per exon with the highest predicted exon-skipping ability that also met technical criteria for synthesis (e.g., GC content, Tm, self-complementarity) were supplied by Gene Tools (Philomath, Oregon). PMO sequences are shown in Table 1. For molecular analysis, PMO cocktails were transfected into 70%-80% confluent patient fibroblast cells using 6 µM Endo-Porter (Gene Tools) transfection reagent at a final concentration of 10 µM each PMO in DMEM (Invitrogen). Cells were incubated with the PMOs for 48 hr before harvesting. For the two-photon membrane-wounding assay, 40%-50% confluent patient fibroblast cells were incubated with PMO in growth media for 48 hr before being subjected to the wounding assay.
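The predictive design tool itself is described in ref. 31 and is not reproduced here; the sketch below only illustrates the enumeration-and-filter step, tiling a target region with 30-mer antisense windows and applying a GC-content filter. The thresholds, the example target sequence, and the helper functions are illustrative assumptions, not the study's actual criteria, and the Tm and self-complementarity checks are omitted.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def antisense(seq: str) -> str:
    """Reverse complement of a DNA target gives the antisense sequence
    (shown in the DNA alphabet; actual PMOs use morpholino chemistry)."""
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def gc_content(seq: str) -> float:
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def candidate_pmos(target: str, length: int = 30, gc_min: float = 40.0,
                   gc_max: float = 60.0):
    """Enumerate every 30-mer antisense window over a target region and keep
    those within a GC window; thresholds are illustrative placeholders."""
    for i in range(len(target) - length + 1):
        ao = antisense(target[i:i + length])
        if gc_min <= gc_content(ao) <= gc_max:
            yield i, ao

# Example: screen a hypothetical 60-nt target region.
target = "ATGGCTGAGCTGCAGGTGTCCCTGAAGGTGGACATGTTCCAGCGGAAGGTCCTGGAGAAG"
for pos, ao in candidate_pmos(target):
    print(pos, ao)
```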
RT-PCR Analysis
Total RNA was extracted from cells using Trizol (Invitrogen), and 200 ng of total RNA was used for assessing exon-skipping efficiency via SuperScript III One-Step RT-PCR System with Platinum Taq DNA Polymerase (Invitrogen). Primers are shown in Table 2.
Membrane-Wounding Assay
Human fibroblast plasma membranes were subjected to laser-induced injury using two-photon laser microscopy as described previously. 30 In brief, cells in glass-bottom dishes were prepared for wounding by rinsing once with Tyrode's salts solution (MilliporeSigma), followed by the addition of 1 mL of Tyrode's salts containing 2.5 µM FM4-64 dye (Invitrogen). Only GFP-positive cells were selected for membrane-wounding experiments, and a minimum of 15 cells were wounded per treatment. Using a Zeiss LSM 710 inverted confocal laser scanning microscope and Zeiss ZEN software, a 0.2 µm × 2 µm target was placed at the edge of the cell membrane. A 5-min time series of sequential image scans was performed, with cells imaged every 5 s. Cells were ablated 25 s after the beginning of the time series using a two-photon laser set to 820 nm, at 15% laser power with 10 iterations. Fluorescence values at sites of injury were quantified using Zeiss ZEN software, and for each time point relative fluorescence values were determined by subtracting the background value (the mean of t = 0-25 s) and dividing the net increase in fluorescence by the background fluorescence value.
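The normalization described above corresponds to (F(t) − F_bg)/F_bg. A minimal sketch, assuming NumPy arrays of raw fluorescence values and time stamps, with synthetic example data:

```python
import numpy as np

def relative_fluorescence(raw, times, baseline_end=25.0):
    """(F(t) - F_bg) / F_bg, with F_bg the mean fluorescence before laser
    wounding (t = 0-25 s), matching the normalization described above."""
    background = raw[times <= baseline_end].mean()
    return (raw - background) / background

# Synthetic example: baseline of 100 a.u., dye influx after wounding at 25 s.
times = np.arange(0.0, 300.0, 5.0)
raw = np.where(times < 25, 100.0, 100.0 + 0.5 * (times - 25))
rel = relative_fluorescence(raw, times)   # 0 during baseline, rising afterwards
```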
Statistical Analysis
All data were reported as mean values ± SE. The statistical differences between treatment groups were assessed by one-way ANOVA with a Tukey-Kramer multiple comparison test. p < 0.05 was considered statistically significant.
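A minimal sketch of this analysis in Python, using SciPy for the one-way ANOVA and statsmodels for the Tukey(-Kramer) post hoc test; the group labels and values below are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical times to steady state (s) for three treatment groups.
groups = {"GFP-only": [220, 240, 210, 230],
          "full-length": [90, 85, 100, 95],
          "Δ28-29": [100, 110, 95, 105]}

f_stat, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

# Tukey-Kramer pairwise comparisons at alpha = 0.05.
values = np.concatenate([np.asarray(v, dtype=float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```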
CONFLICTS OF INTEREST
The authors declare no competing interests. | 2018-11-18T16:16:28.694Z | 2018-10-11T00:00:00.000 | {
"year": 2018,
"sha1": "bd9b30fec7313704fdcb29eb74c5bbaecadb7b93",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2162253118302750/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd9b30fec7313704fdcb29eb74c5bbaecadb7b93",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
4890853 | pes2o/s2orc | v3-fos-license | Broad Antiviral Activity of Carbohydrate-Binding Agents Against Dengue Virus Infection
Introduction

Origin and epidemiology
Dengue virus (DENV) is a member of the Flavivirus genus within the family Flaviviridae and causes the most common mosquito-borne viral disease. Flaviviruses derive from a common viral ancestor that existed some 10,000 years ago. DENV has a relatively recent evolutionary history, originating about 1,000 years ago, and has sustained transmission in humans for only a few hundred years. There is strong evidence that DENV was originally a monkey virus circulating in non-human primates in Africa and Asia. Cross-species transmission to humans has occurred independently for all four DENV serotypes [1,2]. The serotypes share around 65% of the genome and, despite their differences, each serotype causes nearly identical syndromes in humans and circulates in the same ecological niche [3]. The first clinical descriptions compatible with dengue infection date from the 10th century, although it is not certain that these were dengue epidemics. The first large dengue epidemics occurred in 1779 in Asia, Africa and North America. The first reported epidemic of dengue hemorrhagic fever (DHF) was in Manila, Philippines, in 1953, after World War II; it has been suggested that the movement of troops during the war contributed to the spread of the virus. During the 19th and 20th centuries the virus became widespread in the tropics and subtropics, where nowadays 3.6 billion people are at risk of becoming infected with DENV (Figure 1). Every year, 50 million infections occur, including 500,000 hospitalizations for DHF, mainly among children, with a case fatality rate exceeding 15% in some areas [4,5]. Within 40 years, DENV became endemic in more than 100 countries because of the growth of the human population, international transport and the lack of vector control.
Transmission
DENV is transmitted only by the bite of an infected female mosquito, Aedes aegypti or Aedes albopictus. Aedes aegypti originated, and is still present, in the rainforests of Africa, feeding on non-human primates (sylvatic cycle, Figure 2). DENV infection in non-human primates occurs asymptomatically. However, the mosquito became domesticated due to massive deforestation and now breeds in artificial water holdings, such as automobile tires, discarded bottles and buckets that collect rainwater [4]. On one hand, Aedes aegypti is not an efficient vector because it has a low susceptibility to oral infection with virus in human blood. Since mosquitoes ingest about 1 µl of blood, the virus titer in human blood has to reach 10⁵-10⁷ per ml for transmission to be sustained. After 7-14 days the virus has passed from the intestinal tract to the salivary glands and can be transmitted by the infected mosquito to a new host. On the other hand, Aedes aegypti is an efficient vector because it has adapted to humans and repeatedly feeds in daylight on different hosts. After a blood meal, oviposition can be stimulated and the virus can be passed transovarially to the next generation of mosquitoes (vertical transmission, Figure 2) [6].
The tiger mosquito Aedes albopictus is expanding its range from Asia to Europe and the United States of America (USA). In the 1980s, infected Aedes albopictus larvae were transported in truck tires from Asia to the United States, and dengue viruses were thereby introduced into port cities, resulting in major epidemics [6].
Because no vaccine is available, the only efficient way to prevent DENV infection is eradication of the mosquito. In the 1950s and 1960s there was a successful vector control program in the Americas organized by the Pan American Health Organization, which eradicated the mosquito from 19 countries. Unfortunately, the program was stopped in 1972 because governments no longer considered DENV important [2,4]. This resulted in a re-emergence of the mosquito and of DENV infections in the USA. Both demographic and ecological changes contributed to the worldwide spread of DENV infections. Very recently, another approach to attack the vector has been documented [8,9]: mosquitoes were made resistant to DENV infection after transinfection with the endosymbiotic Wolbachia bacterium, which can infect many insect species. A certain strain of Wolbachia transinfected into Aedes mosquitoes was reported to inhibit the replication and dissemination of several RNA viruses, including DENV. Embryos of a Wolbachia-uninfected female die if the female has bred with a Wolbachia-infected male, which means that Wolbachia-infected mosquitoes can take over the natural population. This was recently tested in Australia, where dengue is endemic, and after 2 months the Wolbachia-infected, DENV-resistant mosquitoes had taken over the natural mosquito population. This marks the beginning of a new era in vector control efforts with a high potential to succeed.
Pathogenesis
Although DENV infections have a high prevalence, the pathogenesis of the disease is not well understood. The disease spectrum ranges from an asymptomatic or flu-like illness to a lethal disease. After the bite of an infected mosquito, there is an incubation period of 3 to 8 days, followed by an acute onset of fever (≥ 39°C) accompanied by nonspecific symptoms such as severe headache, nausea, vomiting, and muscle and joint pain (dengue fever). Clinical findings alone are not sufficient to distinguish dengue fever from other febrile illnesses such as malaria or measles. Half of infected patients report a rash, most commonly seen on the trunk and the insides of the arms and thighs. Skin hemorrhages, including petechiae and purpura, are very common. Liver enzyme levels of alanine aminotransferase and aspartate aminotransferase can be elevated. Dengue fever is generally self-limiting and is rarely fatal [5,10,11].
The disease can escalate into dengue hemorrhagic fever (DHF) or dengue shock syndrome (DSS). DHF is primarily a children's disease and is characterized as an acute febrile illness with thrombocytopenia (≤ 100,000 cells/mm³). This is accompanied by increased vascular permeability and plasma leakage from the blood vessels into the tissue. Plasma leakage is documented by an increased hematocrit and a progressive decrease in platelet count. Petechiae and subcutaneous bleedings are very common [12].
DSS is defined when the plasma leakage becomes critical, resulting in circulatory failure, a weak pulse and hypotension. Plasma volume studies have shown a reduction of more than 20% in severe cases. A progressively decreasing platelet count, a rising hematocrit, sustained abdominal pain, persistent vomiting, restlessness and lethargy may all be signs of DSS. Prevention of shock can only be established by volume replacement with intravenous fluids [5,11]. When experienced clinicians and nursing staff are available in endemic areas, the case fatality rate is < 1%. DHF and DSS occur during a secondary infection with a heterologous serotype. A first infection with one of the four serotypes provides lifelong immunity to the homologous virus, but during a second infection with a heterologous serotype, non-neutralizing IgG antibodies can enhance disease severity. This phenomenon is called antibody-dependent enhancement (ADE). The pre-existing non-neutralizing heterotypic antibodies form a complex with DENV and enhance access to Fc-receptor-bearing cells such as monocytes and macrophages [13,14] (Figure 3). This leads to an increased viral load and more severe disease. These non-neutralizing antibodies can cross-react with all four virus serotypes, as well as with other flaviviruses. This phenomenon explains why young infants born to dengue-immune mothers often experience more severe disease, owing to transplacental transfer of DENV-specific antibodies [15]. Further support for ADE comes from the observation of increased viremia in non-human primates that received passive immunization with antibodies against DENV [16].
A second mechanism proposed to explain ADE of flaviviruses involves the complement system: it has been shown that monoclonal antibodies against complement receptor 3 inhibit ADE of West Nile virus in vitro [14]. However, Fc-receptor-dependent ADE is believed to be the most common mechanism of ADE.
Entry process
The infectious entry of DENV into its target cells, mainly dendritic cells [17], monocytes and macrophages, is mediated by the viral envelope glycoprotein E via receptor-mediated endocytosis [18]. The E-glycoprotein is the major component (53 kDa) of the virion surface and is arranged as 90 homodimers in mature virions [19]. Recent reports also demonstrated that DENV enters its host cell via clathrin-mediated endocytosis [20,21], as observed with other flaviviruses [22,23]. Evidence for flavivirus entry via this pathway is based on the use of inhibitors of clathrin-mediated uptake, such as chlorpromazine. However, DENV entry via a non-classical endocytic pathway independent of clathrin has also been described [24]. The entry pathway chosen by DENV appears to be highly dependent on the cell type and viral strain. In the classical endocytic pathway, the receptor-bound virus is taken up by clathrin-coated vesicles, which fuse with early endosomes to deliver the viral RNA into the cytoplasm. The E-protein responds to the reduced pH of the endosome with a large conformational rearrangement [25,26]. The low pH triggers dissociation of the E-homodimer, which leads to insertion of the fusion peptide into the target cell membrane, forming a bridge between the virus and the host. Next, a stable trimer of the E-protein folds into a hairpin-like structure and forces the target membrane to bend towards the viral membrane, until eventually fusion takes place [25,27,28]. Fusion results in the release of viral RNA into the cytoplasm for initiation of replication and translation (Figure 4).
The DENV envelope
The DENV E-glycoprotein induces protective immunity, and flavivirus serological classification is based on its antigenic variation. During replication the virion assumes three conformational states: the immature, mature and fusion-activated forms. In the immature state, the E-protein is arranged as a heterodimer and generates a "spiky" surface because the premembrane protein (prM) covers the fusion peptide. In the Golgi apparatus, the virion matures after a rearrangement of the E-protein: the E-heterodimer transforms into an E-homodimer, resulting in a "smooth" virion surface. After furin cleavage of prM into pr and M, the virion is fully matured and can be released from the host cell. Upon fusion, the low endosomal pH triggers the rearrangement of the E-homodimer into a trimer [29].
The E-protein monomer is composed of β-barrels organized into three structural domains (Figure 5). The central domain I contains the amino terminus and two disulfide bridges. Domain II is an extended finger-like domain that bears the fusion peptide and stabilizes the dimer; its sequence contains three disulfide bridges and is rich in glycine. Between domain I and domain II is a binding pocket that can interact with a hydrophobic ligand, the detergent β-N-octyl-glucoside. This pocket is an important target for antiviral therapy because mutations in this region can alter virulence and the pH necessary for the induction of conformational changes. The immunoglobulin-like domain III contains the receptor-binding motif, the C-terminal domain and one disulfide bond [30,31]. Monoclonal antibodies recognizing domain III are the most efficient at blocking DENV [32,33], and this domain is therefore an interesting target for antiviral therapy.
Because dendritic cell-specific intercellular adhesion molecule 3-grabbing non-integrin (DC-SIGN) (see 1.3.1) has been identified as an important receptor for DENV on primary DC in the skin, and DC-SIGN recognizes high-mannose sugars, the carbohydrates present on the E-protein of DENV could be important for viral attachment. The E-protein has two potential glycosylation sites: asparagine 67 (Asn67) and Asn153. Glycosylation at Asn153 is conserved in flaviviruses, with the exception of Kunjin virus, a subtype of West Nile virus [34], and is located near the fusion peptide in domain II [30,31] (Figure 5). Glycosylation at Asn67 is unique to DENV [31].
Role of DC-SIGN in DENV infection
Prior to fusion, DENV needs to attach to specific cellular receptors. Because DENV can infect a variety of cell types isolated from different hosts (human, insect, monkey and even hamster), the virus must interact with a wide variety of cellular receptors. In the last decade, several candidate attachment factors/receptors have been identified, and DC-SIGN is described as the most important human cellular receptor for DENV.
Since 1977, monocytes have been considered permissive for DENV infection [35]. More recently, phenotyping of peripheral blood mononuclear cells (PBMCs) from pediatric DF and DHF cases identified monocytes as DENV target cells [36].
At first, monocytes were believed to be important during secondary DENV infections through the ADE process, because of their Fc-receptor expression: the complex formed between a non-neutralizing antibody and the virus can bind to Fc-receptors and enhance infection of neighboring susceptible cells [14,18,37]. However, in vitro, monocytes isolated from PBMCs apparently have a very low susceptibility to DENV infection, for reasons that remain to be elucidated.
More detailed observation of natural DENV infection has changed the idea that monocytes are the first target cells. Following intradermal injection of DENV-2 in mice, representing the bite of an infected mosquito, DENV appears to replicate in the skin [38]. The primary DENV target cells in the skin are believed to be immature dendritic cells (DC) or Langerhans cells [17,39-41]. Immature DC are very efficient in capturing pathogens, whereas mature DC are relatively resistant to infection. The search for cellular receptors responsible for DENV capture led to the identification of the cell-surface C-type lectin DC-specific intercellular adhesion molecule 3-grabbing non-integrin (DC-SIGN; CD209) [42-45]. DC-SIGN is mainly expressed by immature DC, but alveolar macrophages and interstitial DC in the lungs, intestine, placenta and lymph nodes also express DC-SIGN [46]. DC-SIGN is a tetrameric transmembrane receptor and a member of the calcium-dependent C-type lectin family. The receptor is composed of four domains: a cytoplasmic domain responsible for signaling and internalization due to the presence of a dileucine motif, a transmembrane domain, seven to eight extracellular neck repeats implicated in the tetramerization of DC-SIGN, and a carbohydrate recognition domain (CRD) (Figure 6) [47]. Alen et al. [42] investigated the importance of the DC-SIGN receptor in DENV infection using DC-SIGN-transfected Raji cells versus Raji/0 cells. A strong contrast in DENV susceptibility was observed between Raji/DC-SIGN+ cells and Raji/0 cells: DC-SIGN expression renders cells susceptible to DENV infection. Also in other cell lines, the T-cell line CEM and the astroglioma cell line U87, expression of DC-SIGN rendered the cells permissive for DENV infection. To evaluate the importance of DC-SIGN, Raji/DC-SIGN+ cells were incubated with a specific anti-DC-SIGN antibody prior to DENV infection. This resulted in an inhibition of DENV replication by ~90%, indicating that DC-SIGN is indeed an important receptor for DENV. In addition, 2 mg/ml of mannan inhibited DENV infection of Raji/DC-SIGN+ cells by more than 80%. These data indicate that the interaction between DC-SIGN and DENV is dependent on mannose-containing N-glycans present on the DENV envelope [42].
Thus, the CRD of DC-SIGN recognizes high-mannose N-glycans and also fucose-containing blood group antigens [48,49].Importantly, DC-SIGN can bind a variety of pathogens like human immunodeficiency virus (HIV) [50], hepatitis C virus (HCV) [51], ebola virus [52] and several bacteria, parasites and yeasts [46].Many of these pathogens have developed strategies to manipulate DC-SIGN signaling to escape from an immune response [46].Following antigen capture in the periphery, DC maturate by up regulation of the co-stimulatory molecules and down regulation of DC-SIGN.By the interaction with ICAM-2 on the vascular endothelial cells, DC can migrate to secondary lymphoid organs [53].Next, the activated DC interact with ICAM-3 on naïve T-cells.This results in the stimulation of the Tcells and subsequently in the production of cytokines and chemokines [54].Inhibition of the initial interaction between DENV and DC could prevent an immune response.DC-SIGN could be considered as a target for antiviral therapy by interrupting the viral entry process.But caution must be taken into account as the DC-SIGN receptor has also an important role in the activation of protective immune responses instead of promoting the viral dissemination.However, several DC-SIGN antagonists have been developed such as small interfering RNAs (siRNA) silencing DC-SIGN expression [55], specific anti-DC-SIGN antibodies [56] and glycomimetics interacting with DC-SIGN [57].The in vivo effects of DC-SIGN antagonists remain to be elucidated.
Besides DC, macrophages play a key role in the immune pathogenesis of DENV infection as a source of immune modulatory cytokines [58].Recently, Miller et al. showed that the mannose receptor (MR; CD206) mediates DENV infection in macrophages by recognition of the glycoproteins on the viral envelope [59].Monocyte-derived DC (MDDC) can be generated out of monocytes isolated from fresh donor blood incubated IL-4 and GM-CSF.After a differentiation process MDDC were generated highly expressing DC-SIGN (Figure 7A, B) and showing a significantly decrease in CD14 expression in contrast to monocytes [59,60].Again, DC-SIGN expression on MDDC renders cells susceptible for DENV in contrast to monocytes (Figure 7A, B).MR is also present on monocyte-derived DC (MDDC) and anti-MR antibodies can inhibit DENV infection, although to a lesser extent than anti-DC-SIGN antibodies do (Figure 7C) [61].Furthermore, the combination of anti-DC-SIGN and anti-MR antibodies was even more effective in inhibiting DENV infection.Yet, complete inhibition of DENV infection was not achieved, indicating that other entry pathways are potentially involved.Two other receptors on DC reported to be responsible for HIV attachment are syndecan-3 (a member of the heparan sulfate proteoglycan family) [62] and the DC immune receptor [63].Since DENV interacts with heparan sulfate, syndecan-3 may be a possible (co)-receptor on DC.It has been hypothesized that DENV needs DC-SIGN for attachment and enhancing infection of DC in cis and needs MR for internalization [59].In fact, cells expressing mutant DC-SIGN, lacking the internalization domain, are still susceptible for DENV infection because DC-SIGN can capture the pathogen [43].
Another C-type lectin, CLEC5A (C-type lectin domain family 5, member A) expressed by human macrophages can also interact with DENV and acts as a signaling receptor for the release of proinflammatory cytokines [64].However, whereas the DC-SIGN-DENV interaction is calcium-dependent, CLEC5A binding to its ligand is not dependent on calcium.Mannan and fucose can inhibit the interaction between CLEC5A and DENV, indicating that the interaction is carbohydrate-dependent [64].However, a glycan array demonstrated no binding signal between CLEC5A and N-glycans of mammals or insects [65].The molecular interaction between CLEC5A and DENV remains to be elucidated.Immune cells, in particular dendritic cells, are the most relevant cells in the discovery of specific antiviral drugs against dengue virus, but the isolation of these cells and the characterization is unfortunately labour intensive and time consuming.
Liver/lymph node-specific ICAM-3 grabbing non-integrin (L-SIGN) is a DC-SIGN related transmembrane C-type lectin expressed on endothelial cells in liver, lymph nodes and placenta [66,67].Similar to DC-SIGN, L-SIGN is a calcium-dependent carbohydrate-binding protein and can interact with HIV [67], HCV [51], Ebola virus [52], West Nile virus [68] and DENV [45].Zellweger et al. observed that during antibody-dependent enhancement in a mouse model that liver sinusoidal endothelial cells (LSEC) are highly permissive for antibody-dependent DENV infection [69].Given the fact that LSEC express L-SIGN, it is interesting to focus on the role of L-SIGN in DENV infection.L-SIGN expression on LSEC has probably an important role in ADE in vivo and therefore it is interesting to find antiviral agents interrupting the DENV-L-SIGN interaction and subsequently prevent the progression to the more severe and lethal disease DHF/DSS.Although endothelial cells [70] and liver endothelial cells [71] are permissive for DENV and L-SIGN-expression renders cells susceptible for DENV infection, the in vivo role for L-SIGN in DENV entry remains to be established.
Broad Antiviral Activity of Carbohydrate-Binding Agents Against Dengue Virus Infection 171
Antiviral therapy
At present, diagnosis of dengue virus infection is largely clinical, treatment is supportive through hydration and disease control is limited by eradication of the mosquito.Many efforts have been made in the search for an effective vaccine, but the lack of a suitable animal model, the need for a high immunogenicity vaccine and a low reactogenicity are posing huge challenges in the dengue vaccine development [7,72].There are five conditions for a dengue vaccine to be effective: (i) the vaccine needs to be protective against all four serotypes without reactogenicity, because of the risk of ADE, (ii) it has to be safe for children, because severe dengue virus infections often affects young children, (iii) the vaccine has to be economical with minimal or no repeat immunizations, because dengue is endemic in many developing countries, (iv) the induction of a long-lasting protective immune response is necessary and finally (v) the vaccine may not infect mosquitoes by the oral route [7,73].
As there is no vaccine available until now, the search for antiviral products is imperative.The traditional antiviral approach often attacks viral enzymes, such as proteases and polymerases [74,75].Because human cells lack RNA-dependent polymerase, this enzyme is very attractive as antiviral target without cytotoxicity issues.Nucleoside analogues and non-nucleoside compounds have previously been shown to be very effective in anti-HIV therapy and antihepatitis B virus therapy.The protease activity is required for polyprotein processing which is necessary for the assembly of the viral replication complex.Thereby, the protease is an interesting target for antiviral therapy.However, the host cellular system has similar protease activities thus cytotoxic effects form a major recurrent problem.Very recently, many efforts have been made in the development of polymerase and protease inhibitors of DENV, but until today, any antiviral product has reached clinical trials.This chapter is focusing on a different step in the virus replication cycle, namely, the viral entry process.In the past few years, progression has been made in unraveling the host cell pathways upon DENV infection.It is proposed that viral epitopes on the surface of DENV can trigger cellular immune responses and subsequently the development of a severe disease.Therefore, these epitopes are potential targets for the development of a new class of antiviral products, DENV entry inhibitors.Inhibition of virus attachment is a valuable antiviral strategy because it forms the first barrier to block infection.Several fusion inhibitors, glycosidase inhibitors and heparin mimetics have been described to inhibit DENV entry in the host cell.Here, specific molecules, the carbohydrate-binding agents (CBAs), preventing the interaction between the host and the Nglycans present on the DENV envelope are discussed.
Carbohydrate-binding agents (CBAs)
The CBAs form a large group of natural proteins, peptides and even synthetic agents that can interact with glycosylated proteins.CBAs can be isolated from different organisms: algae, prokaryotes, fungi, plants, invertebrates and vertebrates (such as DC-SIGN and L-SIGN) [76,77].Each CBA will interact in a specific way with monosaccharides, such as mannose, fucose, glucose, N-acetylglucosamine, galactose, N-acetylgalactosamine or sialic acid residues present in the backbone of N-glycan structures.Because a lot of enveloped viruses are glycosylated at the viral surface, such as HIV, HCV and DENV (Figure 5), CBAs could interact with the glycosylated envelope of the virus and subsequently prevent viral entry into the host cell [78,79].Previously, antiviral activity against HIV and HCV [ [78][79][80] was demonstrated of several CBAs isolated from plants (plant lectins) and algae specifically binding mannose and N-acetylglucosamine residues.
Here, we focus on the antiviral activity of three plant lectins, Hippeastrum hybrid agglutinin (HHA), Galanthus nivalis agglutinin (GNA) and Urtica dioica (UDA) isolated from the amaryllis, the snow drop and the stinging nettle, respectively.In general, plant lectins form a large diverse group of proteins, exhibiting a wide variety of monosaccharide-binding properties which can be isolated from different sites within the plant, such as the bulbs, leaves or roots.HHA (50 kDa) and GNA (50 kDa) isolated from the bulbs are tetrameric proteins.For GNA, each monomer contains two carbohydrate-binding sites and a third site is created once if tetramerization had occurred, resulting in a total of 12 carbohydratebinding sites (Figure 8).HHA specifically interacts with α1-3 and α1-6 mannose residues and GNA only recognizes α1-3 mannose residues.UDA, isolated from the rhizomes of the nettle, is active as a monomer containing 2 carbohydrate-binding sites recognizing Nacetylglucosamine residues (Figure 8).In 1984, UDA was isolated for the first time and with its molecular weight of 8.7 kDa, UDA is the smallest plant lectin ever reported [81].The plant lectins have been shown to possess both antifungal and insecticidal activities playing a role in plant defense mechanisms.Here, the antiviral activity of the plant lectins against DENV will be further highlighted and discussed in detail.UDA is isolated from the stinging nettle and is a monomeric protein composed out of hevein domains.
Previously, concanavalin A (Con A), isolated from the Jack bean, binding to mannose residues and wheat germ agglutinin binding to N-acetylglucosamine (Glc-NAc) residues, were shown to reduce DENV infection in vitro.A competition assay, using mannose, proved that the inhibitory effect of Con A was due to binding -mannose residues on the viral protein, because mannose successfully competed with Con A [82].Together with the fact that HHA, GNA and UDA act inhibitory against HIV and HCV, we hypothesized that these plant lectins had antiviral activity against DENV, because DENV has two N-glycosylation sites on the viral envelope and uses DC-SIGN as a cellular receptor to enter DC.The antiviral activity of the three plant lectins was investigated in DC-SIGN + and L-SIGN + cells and the infection was analyzed by flow cytometry, RT-qPCR and confocal microscopy.
Broad antiviral activity of CBAs against DENV
Because DC-SIGN interacts carbohydrate-dependent with DENV, the antiviral activity of the three plant lectins, HHA, GNA and UDA, recognizing monosaccharides present in the backbone of N-glycans on the DENV E-protein, was evaluated.A consistent dose-dependent antiviral activity was observed in DC-SIGN transfected Raji cells against DENV-2 analyzed by flow cytometry (detecting the presence of DENV Ag) and RT-qPCR (detecting viral RNA in the supernatants) [42].
Next, the antiviral potency of the three plant lectins was determined against all four serotypes of DENV, of which DENV-1 and DENV-4 are low-passage clinical virus isolates, in both Raji/DC-SIGN + cells and in primary immature MDDC.The use of MDDC has much more clinical relevance than using a transfected cell line.MDDC resemble DC in the skin [83] and mimic an in vivo DENV infection after a mosquito bite.Moreover, cells of the hematopoietic origin, such as DC, have been shown to play a key role for DENV pathogenesis in a mouse model [84].A dose-dependent and a DENV serotype-independent antiviral activity of HHA, GNA and UDA in MDDC was demonstrated as analyzed by flow cytometry (Figure 9).These CBAs proved about 100-fold more effective in inhibiting DENV infection in primary MDDC compared to the transfected Raji/DC-SIGN + cell line.When DENV is captured by DC, a maturation and activation process occurs.DC require downregulation of C-type lectin receptors [85], upregulation of costimulatory molecules, chemokine receptors and enhancement of their APC function to migrate to the nodal Tcell areas and to activate the immune system [86].Cytokines implicated in vascular leakage are produced, the complement system becomes activated and virus-induced antibodies can cause DHF via binding to Fc-receptors.Several research groups demonstrated maturation of DC induced by DENV infection [87,88].Some groups made segregation in the DC population after DENV infection, the infected DC and the uninfected bystander cells.They found that bystander DC, in contrast to infected DC, upregulate the cell surface expression of costimulatory molecules, HLA and maturation molecules.This activation is induced by TNF-α and IFN-α secreted by DENV-infected DC [40,89,90].Instead, Alen et al. 
observed an upregulation of the costimulatory molecules CD80 and CD86 and a downregulation of DC-SIGN and MR on the total (uninfected and infected) DC population following DENV infection [61].This could indicate that the DC are activated and can interact with naive T-cells and subsequently activate the immune system resulting in increased vascular permeability and fever.When the effect of the CBAs was examined on the expression level of the cell surface markers of the total DC population, it was shown that the CBAs are able to inhibit the activation of all DC caused by DENV and can keep the DC in an immature state.Furthermore, DC do not express costimulatory molecules and thus do not interact or significantly activate T-cells.An approach to inhibit DENV-induced activation of DC may prevent the immunopathological component of DENV disease.However, since plant lectins are expensive to produce and not orally bioavailable, the search for non-peptidic small molecules is necessary.PRM-S, a highly soluble non-peptidic small-size carbohydrate-binding antibiotic is a potential new lead compound in HIV therapy, since PRM-S efficiently inhibits HIV replication and prevents capture of HIV to DC-SIGN + cells [91].PRM-S also inhibited dose-dependently DENV-2 replication in MDDC but had only a weak antiviral activity in Raji/DC-SIGN + cells [61].Actinohivin (AH), a small prokaryotic peptidic lectin containing 114 amino acids, exhibits also anti-HIV-1 activity by recognizing high-mannose-type glycans on the viral envelope [92].Although DENV has high mannose-type glycans on the E-protein, there was no antiviral activity of AH against DENV infection.Other CBAs such as microvirin, griffithsin and Banlec have been shown to exhibit potent activity against HIV replication [93][94][95], but these CBAs did not show antiviral activity against DENV.Previously, it has been shown that the CBAs HHA, GNA and UDA also target the N-glycans of other viruses, such as HIV, HCV [79] and HCMV [80].This indicates that the CBAs can be used as broadspectrum antiviral agents against various classes of glycosylated enveloped viruses.Although, the three plant lectins did not act inhibitory against parainfluenza-3, vesicular stomatitis virus, respiratory syncytial virus or herpes simplex virus [79].Together, these data indicate a unique carbohydrate-specificity, and thus also a specific profile of antiviral activity of the CBAs.
Molecular target of the CBAs on DENV
It was demonstrated in time of drug addition assays that the mannose binding lectin HHA prevents DENV-2 binding to the host cell and acts less efficiently when the virus had already attached to the host cell.It was shown that HHA interacts with DENV and not with cellular membrane proteins such as DC-SIGN on the target cell.The potency of HHA to inhibit attachment of DENV to Raji/DC-SIGN + cells is comparable to its inhibitory activity of the capture of HIV and HCV to Raji/DC-SIGN + cells [79].CBAs could thus be considered as unique prophylactic agents of DENV infection.
To identify the molecular target of the CBAs on DENV, a resistant DENV to HHA was generated in the mosquito cell line C6/36 by Alen et al. (HHA res DENV).Compared with the WT DENV, two highly prevalent mutations were found, namely N67D and T155I, present in 80% of all clones sequenced.Similar mutational patterns destroying both glycosylation motifs (T69I or T69A each in combination with T155I) were present in another 10% of the clones analyzed.The N-glycosylation motif 153N-D-T155 is conserved among the majority of all flaviviruses, while a second N-glycosylation motif 67N-T-T69 is unique among DENV [96].In the HHA res virus both N-glycosylation motifs were mutated either directly at the actual N-glycan accepting a residue of the first site (Asn67) or at the C-proximal Thr155 being an essential part of the second N-glycosylation site [97], thus both N-glycosylation sites on the viral envelope protein can be considered to be deleted.This indicates that HHA directly targets the N-glycans on the viral E-protein.In fact, all clones sequenced showed the deletion of the N-glycan at Asn153.However, 10% of the clones sequenced had no mutation at the glycosylation motif 67N-T-T69, indicating that this glycosylation motif [96] has a higher genetic barrier compared to 153N-D-T155.Though there are multiple escape pathways to become resistant to HHA, it seems not to be possible to fully escape the selective pressure of favoring a deglycosylation of the viral E-protein.In addition, there were no mutations found either apart from the N-glycosylation sites of the E-protein or in any of the five WT DENV-2 clones passaged in parallel.This is not fully unexpected as flaviviruses replicate with reasonable fidelity and DENV does not necessarily exist as a highly diverse quasispecies neither in vitro nor in vivo [98,99].
There are some contradictions in terms of necessity of glycosylation of Asn67 and Asn153 during DENV viral progeny.Johnson et al. postulated that DENV-1 and DENV-3 have both sites glycosylated and that DENV-2 and DENV-4 have only one N-glycan at Asn-67 [100].In contrast, a study comparing the number of glycans in multiple isolates of DENV belonging to all four serotypes led to the consensus that all DENV strains have two N-glycans on the E-protein [101].However, mutant DENV lacking the glycosylation at Asn153 can replicate in mammalian and insect cells, indicating that this glycosylation is not essential for viral replication [96,102].There is a change in phenotype because ablation of glycosylation at Asn153 in DENV is associated with the induction of smaller plaques in comparison to the wild type virus [96].Asn153 is proximal to the fusion peptide and therefore deglycosylation at Asn153 showed also an altered pH-dependent fusion activity and displays a lower stability [103,104].In contrast, Alen et al. showed that the mutant virus, HHA res lacking both N-glycosylation sites, had a similar plaque phenotype in BHK cells (manuscript submitted).It has been shown that DENV lacking the glycosylation at Asn67 resulted in a replicationdefective phenotype, because this virus infects mammalian cells weakly and there is a reduced secretion of DENV E-protein.Replication in mosquito cells was not affected, because the mosquito cells restore the N-glycosylation at Asn67 with a compensatory sitemutation (K64N) generating a new glycosylation site [96,105].These data are in contrast with other published results, where was demonstrated that DENV lacking the Asn67-linked glycosylation can grow efficiently in mammalian cells, depending on the viral strain and the amino acid substitution abolishing the glycosylation process [102].A compensatory mutation was detected (N124S) to repair the growth defect without creating a new glycosylation site.Thus, the glycan at Asn67 is not necessary for virus growth, but a critical role for this glycan in virion release from mosquito cells was demonstrated [102].However, HHA resistant virus was found to replicate efficiently in mosquito and insect cells indicating an efficient carbohydrate-independent viral replication in these cell lines.A possible explanation for the differences between our data and data from previous studies could be that the mutant virus has been generated in mosquito C6/36 cells (during replication under antiviral drug pressure) and not in mammalian cells (after introducing the mutations by site-directed mutagenesis).In addition, in previous studies, other amino acid substitutions were generated, resulting in different virus genotypes and subsequently resulting in poorly to predict virus phenotypes.
The glycosylation at Asn67 is demonstrated to be essential for infection of monocyte-derived dendritic cells (MDDC), indicating an interaction between DC-SIGN and the glycan at Asn67 [96,106].Also the HHA res DENV was not able to infect efficiently DC-SIGN + cells or cells that express the DC-SIGN-related liver-specific receptor L-SIGN.Interestingly, MDDC are also not susceptible for HHA res DENV infection, indicating the importance of the DC-SIGN-mediated DENV infection in MDDC.Moreover, cells of the hematopoietic origin, such as DC, are described to be necessary for DENV pathogenesis [84].If the CBA resistant DENV in not able to infect DC anymore, it can be stated that the CBAs interfere with a physiologically highly relevant target.DC-SIGN is postulated as the most important DENV entry receptor until now.The entry process of DENV in Vero, Huh-7, BHK-21 and C6/36 cell lines is DC-SIGN-independent and also carbohydrate-independent.Indeed, HHA res DENV can efficiently enter and replicate in these cell lines.HHA res DENV lacking both N-glycans on the envelope E-glycoprotein is able to replicate efficiently in mammalian cells, with the exception of DC-SIGN + cells.
The HHA res virus was used as a tool to identify the antiviral target of other classes of compounds as it could replicate in human liver Huh-7 cells.The use of Huh-7 cells has much more clinical relevance than using monkey (Vero) or hamster (BHK) kidney cells.The HHA res DENV was found cross-resistant to GNA, that recognizes like HHA, -1,3 mannose residues.UDA, which recognizes mainly the N-acetylglucosamine residues of the Nglycans, also lacked antiviral activity against HHA res DENV in Huh-7 cells.This indicates that the entire backbone of the N-glycan is deleted.Likewise, pradimicin-S (PRM-S), a smallsize -1,2-mannose-specific CBA, was also unable to inhibit HHA res DENV.This demonstrates that PRM-S targets also the N-glycans on the DENV envelope.In contrast, ribavirin (RBV), a nucleoside analogue and inhibitor of cellular purine synthesis [74], retained as expected wild-type antiviral activity against HHA res DENV.This argues against that there would be compensatory mutations in the non-structural proteins of DENV which are responsible for an overall enhanced replication of the viral genome [107,108].SA-17, a novel doxorubicine analogue that inhibits the DENV entry process [109], was equipotent against WT and HHA res DENV.The SA-17 compound is predicted to interact with the hydrophobic binding pocket of the E-glycoprotein which is independent from the Nglycosylation state of the E-glycoprotein [109].These data confirm the molecular docking experiments of SA-17.
Generally, the function of glycosylation on surface proteins is proper folding of the protein, trafficking in the endoplasmic reticulum, interaction with receptors and influencing virus immunogenicity [110].Virions produced in the mosquito vector and human host may have structurally different N-linked glycans, because the glycosylation patterns are fundamentally different [101,111].N-glycosylation in mammalian cells is often of the complex-type because a lot of different processing enzymes could add a diversity of monosaccharides.Glycans produced in insect cells are far less complex, because of less diversity in processing enzymes and usually contain more high-mannose and paucimannose-type glycans.DC-SIGN can distinguish between mosquito-and mammalian cellderived alphaviruses [112] and West Nile virus [68], resulting in a more efficient infection by a mosquito-derived virus, but this was not the case for DENV [101].
Although the CBAs HHA and GNA are not mitogenic and not toxic to mice when administered intravenously [113], caution must be taken in the development of the CBAs to use as antiviral drug in the clinic.First, the natural plant lectins are expensive to produce and hard to scale-up, but efforts have been made to express CBAs in commensal bacteria which provide an easy production process of this class of agents.Second, there can be a systemic reaction against the lectins such as in food allergies against peanut lectin or banana lectin [114,115].Third, the CBAs can recognize aspecifically cellular glycans and could interfere with host cellular processes.But, DENV glycosylation is of the high-mannose or pauci-mannose type, which is only rare on mammalian proteins.The synthetic production of small non-peptidic molecules, such as PRM-S, with CBA-like activity, could overcome the pharmacological problems of the plant lectins.Therefore, PRM-S forms a potential lead candidate in the development of more potent and specific DENV entry inhibitors.
Conclusion
In conclusion, besides active vector control in tropical and subtropical regions, there is an urgent need for antiviral treatment to protect half of the world's population against severe DENV infections.DC-SIGN is thought to be the most important DENV receptor and that the DC-SIGN-DENV envelope protein interaction is an excellent target for viral entry inhibitors such as the CBAs.Resistance against HHA forces the virus to delete its N-glycans and subsequently this mutant virus is not able anymore to infect its most important target cells.
Thus the CBAs act in two different ways: prevention of viral entry by directly binding Nglycans on the viral envelope and indirectly forcing the virus to delete its N-glycans and loose the capability to infect DC.The plant lectins provided more insight into the entry pathway of the virus into the host cell.Hopefully some of these future derivatives with a comparable mode of action will reach clinical trials in the near future.
Figure 1 .
Figure 1.Global distribution of dengue virus infections in 2011.Contour lines represent the areas at risk (Source: WHO, 2012).
Figure 3 .
Figure 3. Mechanism of antibody-dependent enhancement (ADE).During a secondary infection caused by a heterologous virus, the pre-existing heterotypic antibodies can cross-react with the other DENV serotypes.The non-neutralizing antibody-virus complex can interact with the Fc-receptor on monocytes or macrophages.This will lead to an increased viral load and a more severe disease.Figure derived from Whitehead et al. [7].
Figure 4 .Figure 5 .
Figure 4. Schematic overview of the DENV membrane fusion process.(A) Pre-fusion conformation of the E-protein consists of homodimers on the virus surface.(B) Low endosomal pH triggers dissociation of the E-dimers into monomers which leads to the insertion of the fusion peptide with the endosomal target membrane.(C) A stable E-protein trimer is folded in a hairpin-like structure.(D) Hemifusion intermediate in which only the outer leaflets of viral and target cellular membranes have fused.(E) Formation of the post-fusion E-protein trimer and opening of the fusion pore allows the release of the viral RNA into the cytoplasm.Modified from Stiasny et al. [26].
Figure 6 .
Figure 6.Structure of DC-SIGN.DC-SIGN, mainly expressed by human dendritic cells in the skin, is composed out of four domains: (A) cytoplasmic domain containing internalization signals, (B) transmembrane domain, (C) 7 or 8 extracellular neck repeats implicated in the oligomerization of DC-SIGN and (D) carbohydrate recognition domain which can interact calcium-dependent with a variety of pathogens.
Figure 7 .
Figure 7. Infection of MDDC by DENV.Monocytes isolated from PBMCs were untreated (A) or treated with 25 ng/ml IL-4 and 50 ng/ml GM-CSF (B) for 5 days prior to DENV-2 infection.Two days after infection the cells were permeabilized and analyzed for DC-SIGN expression and DENV infection by confocal microscopy and flow cytometry.Uninfected cells were stained with a PE-labeled monoclonal DC-SIGN-antibody (red).Infected cells were stained with a mixture of antibodies recognizing DENV-2 E-protein and PrM protein (green).Nuclei were stained with DAPI (blue).Infected monocytes (A) and MDDC (B) were analyzed by flow cytometry to detect DENV-2 positive cells.The values indicated in each dot plot represent the % of DENV-2 positive cells.(C) MDDC were preincubated with 10 µg/ml of isotype control IgG2a, anti-DC-SIGN or anti-MR antibody for 30 minutes before DENV-2 infection.Viral replication was analyzed by flow cytometry.% Inhibition of viral replication SEM of 4 different blood donors is shown.
Figure 8 .
Figure 8. Structure of GNA and UDA.GNA is isolated from the snow drop and is a tetrameric protein.UDA is isolated from the stinging nettle and is a monomeric protein composed out of hevein domains.
Figure 9 .
Figure 9. Dose-dependent antiviral activity of HHA, GNA and UDA in DENV-infected MDDC.MDDC were infected with the four serotypes of DENV in the presence or absence of various concentrations of HHA, GNA and UDA.DENV infection was analyzed by flow cytometry using an anti-PrM antibody recognizing all four DENV serotypes (clone 2H2).% of infected cells compared to the positive virus control (VC) SEM of 4 to 12 different blood donors is shown.(Adapted from Alen et al. [61]). | 2018-04-15T10:46:58.787Z | 2012-11-21T00:00:00.000 | {
"year": 2012,
"sha1": "5acc5890469ef9c4f4c0f5447c9063aa7c241999",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/41108",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "5acc5890469ef9c4f4c0f5447c9063aa7c241999",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
258291602 | pes2o/s2orc | v3-fos-license | Factored Neural Representation for Scene Understanding
A long-standing goal in scene understanding is to obtain interpretable and editable representations that can be directly constructed from a raw monocular RGB-D video, without requiring specialized hardware setup or priors. The problem is significantly more challenging in the presence of multiple moving and/or deforming objects. Traditional methods have approached the setup with a mix of simplifications, scene priors, pretrained templates, or known deformation models. The advent of neural representations, especially neural implicit representations and radiance fields, opens the possibility of end-to-end optimization to collectively capture geometry, appearance, and object motion. However, current approaches produce global scene encoding, assume multiview capture with limited or no motion in the scenes, and do not facilitate easy manipulation beyond novel view synthesis. In this work, we introduce a factored neural scene representation that can directly be learned from a monocular RGB-D video to produce object-level neural presentations with an explicit encoding of object movement (e.g., rigid trajectory) and/or deformations (e.g., nonrigid movement). We evaluate ours against a set of neural approaches on both synthetic and real data to demonstrate that the representation is efficient, interpretable, and editable (e.g., change object trajectory). Code and data are available at http://geometry.cs.ucl.ac.uk/projects/2023/factorednerf .
Introduction
Scene understanding from video capture has a long history in content creation.It subsequently enables editing by replaying the content from novel viewpoints and allowing object-level modifications.The task is particularly challenging in the dynamic context of moving and deforming objects when observed through a moving (monocular) camera.Traditional approaches make simplifications by assuming the scene to be static [CL96], or requiring access to a variety of priors in the form of object templates [CC13,RPMR13] and investigate applications in the context of generative models [KRWM22a].However, such representations often lack interpretability, require multiview input, fail to provide scene understanding, and do not provide object-level factorization or enable object-level scene manipulation.
We introduce factored neural representation.This object-level scene representation supports interpretability and editability while capturing geometric and appearance details under object movement and viewpoint changes.Our approach does not require any object template, deformation prior, or pretraining object NeRFs.Starting from an RGB-D monocular video of a dynamic scene, we demonstrate how such a factored neural representation can be robustly extracted via joint optimization by leveraging off-the-shelf image-space segmentation and tracking information.Factorization is provided through object-level neural representations and object trajectory and/or deformations.
Technically, we formulate a global optimization to simultaneously build and track per-object neural representations along with a background model while solving for object trajectories and camera path.Further, we model deformable bodies (e.g., a moving human) by adapting the learned neural representation over time.Our proposed representation combines the advantages of object-centric representations and motion tracking, thereby allowing per object manipulation, without having to pay the overhead of separately building object priors or requiring 3D supervision, and naturally integrates information from a monocular input over time across the neural representations to recover from occlusion.For example, Figure 1 shows a factored representation obtained by our method by operating on a monocular RGB-D sequence [BXP * 22] of 60 frames, along with some edits.
We evaluate on both synthetic and real scenes.We compare to competing methods and show that ours can produce better object representations and camera/object trajectories.Note that prior methods often focus on only rigid motion and separated optimization [WLNM21 .We relax many of these restrictions and demonstrate that our factorized representation naturally enables edits involving object-level manipulations.In summary, we introduce a neural factored scene representation and develop an end-to-end algorithm involving a joint optimization formulation to factorize monocular RGB-D videos directly.
Related Work
Scene reconstruction using traditional methods.Aggregating raw scans while simultaneously estimating and accounting for underlying camera motion is an established way of acquiring largescale geometry of rigid scenes (e.g., KinectFusion [IKH * 11], Vox-elHash [NZIS13]).This paradigm has been extended for dynamic scenes by simultaneously segmenting and tracking multiple (rigid) objects (e.g., CoFusion [RA17], MaskFusion [RBA18], MidFusion [XLT * 19], EmFusion [SS19], RigidFusion [WLNM21]) or, decoupling the handling of objects and human motion (e.g., Mixed-Fusion [ZX17]).These methods explicitly track and represent geometry, without or with textured colors, do not support joint optimization, and need special handling for multiple objects.Neural representations through image guidance.In the context of joint material and geometry representation, differential rendering directly optimizes neural implicit representations using only RGB images for supervision.This is achieved by either ray tracing or volumetric rendering based approaches.Ray tracing accounts for explicit surface intersection and calculates gradient on the surface using implicit differentiation [NMOG20, YKM * 20], max pooling [LSCL19], or unfolding sphere tracing [KJJ * 21,LZP * 20,JJHZ20].However, for objects with complex topologies, these methods suffer from hard-topropagate local gradients.In contrast, volumetric rendering [Max95,HMR19], leading to Neural Radiance Fields (NeRF) [MST * 20], integrates density and color samples along rays by modeling a radiance field and employs a coarse-to-fine sampling scheme to focus on surface density, without explicitly distilling the underlying geometry.When converted from the density field, the learned implicit geometry is usually noisy and inaccurate.Again these approaches focus on isolated objects.[TLV21], neural fusion fields [TLLV22].We also develop a dynamic NeRF representation that can be extracted, without requiring pre-training and parametric templates, simultaneously with reconstruction for each object instead of only supporting novel-view rendering with a single dynamic element.Further, we aggregate information across views to recover from occlusion.Once trained, our factored representation can be viewed from novel camera paths and used to make changes to object trajectories and placements.
Image Formation Model
Before introducing the optimization formulation in Section 4, we present our image formation model to produce a rendered image from a factored neural representation.
where δ i is the depth distance between two adjacent samples and σ i is the predicted point density.Recall that point opacity α i represents the opacity of the point position p(s i ), while the transmittance T i indicates the cumulative transmittance before a ray hits the i-th sample point.Looping over all the image pixels {uv}, we obtain where we use a shorthand ψ j := ψ(p(s j )) for the j-th sample and Φ is the sigmoid function.Here, we represent the rendering function as , where the function f θ again can be probed } and any query ray r from the current camera, we first intersect each objects' bounding box B i to obtain a sampling range and then compute a uniform sampling for each of the intervals.For each such sample p, we lookup feature attributes by re-indexing using local coordinate T −1 i p, resort the samples across the different objects based on (sample) depth values, and then volume render to get a rendered attribute.Background is modeled as the 0-th object.See Section 3 for details.For objects with active nonrigid flag, we also invoke the corresponding deformation block (see Section 4 and Figure 5).The neural representations and the volume rendering functions are jointly trained.
Starting from a monocular RGB-D sequence {I(t)}, we extract a factored neural representation F that contains separate neural models for the background and each of the moving objects along with their trajectories.For any object tagged as nonrigid, we also optimize a corresponding deformation block (e.g., human).First, in an initialization phase, we assume access to keyframe annotation (segmentation and AABBs) over time, propagate the annotation to neighboring frames via dense visual tracking and optical flow, and estimate object trajectories.Then, we propose a joint optimization formulation to perform end-to-end optimization using a customized neural volume rendering block.The factored representation enables various applications involving novel view synthesis and object manipulations.Please refer to the supplemental showing reconstruction quality and applications.
to produce only view-dependent color samples f θ (p(s i ), Π) := c i and ψ represents the learned SDF function.
Attributes rendering.By replacing point color c i with any other attribute a i , such as depth [XHKK21, ZPL * 22] or semantic labels [ZSM * 21], volumetric rendering can be generalized to render depth or semantic segmentation, respectively.Specifically, for any attribute a i and ray r, we simply compute an attribute as Volume rendering with factored neural representation.Our proposed factored representation } for a background model f 0 and the foreground objects { f i , i ∈ [1, k]}, which can be probed to output density and color attributes.Each model, the background or any foreground object, can be probed to output color attributes with corresponding AABB (axis aligned bounding boxes) {B i }, transformations {T i } to map the AABB local coordinates to the global coordinate system, and implicit SDF functions {ψ i } to produce density samples.We now define the rendering function I := R(Π, F, {ruv}) using our factored representation F, with background and superscripts i ∈ [1, k] denoting the k foreground objects.Figure 2 illustrates the process.For each ray r, for each intersected model, computed using its AABB B i , we obtain SDF density values using uniform samples and perform inverse transform sampling to generate 128 samples per ray.For the background (i = 0) or foreground (i ∈ [1, k]) samples, we obtain f i (T −1 i p i (s j ), Π) := c i j and density σ i j using Equation 2 using ψ i using the remapped samples T −1 i p i (s j ), expressed in the local coordinate systems of the objects.We collect the samples across the background and all the intersecting objects, sort the samples based on their depth values, and volume render the colors/attributes as described earlier (see Equation 1).
Algorithm
As input, we take in RGB-D frames, denoted by {I(t) := (Ct , Dt )} with color Ct and depth Dt frames at time t, of scenes with one or more moving objects, where objects can be moving rigidly or nonrigidly (e.g., humans).We assume access to keyframe annotation over time, containing instance segmentation, axis-aligned bounding boxes (AABBs), and rigid or nonrigid flags.In an initialization step, this information is used to extract initial camera and object trajectories and instance masks over time in the camera space.As output, we produce a factored neural representation F of the scene, where for each object we produce a neural representation along with its estimated object trajectory, and for a nonrigid object also an associated deformation function.In Section 5, we use these inferred factored representations to directly render novel view synthesis or perform object-level manipulations.
To obtain such a factored representation, we have to address several challenges.First, the extracted segmentation information from the RGB-D frames is imperfect; hence, any information or supervision (e.g., segmentation loss) derived from them leads to error accumulation.Second, we must recover from artifacts in initial pose estimation, especially in scenes with insufficient textures to guide the camera calibration stage.Auto-focus, color correction, and error accumulation in real captures pose further challenges.Third, since we only use monocular input, the input provides partial information in the presence of occlusion, both in shape and appearance.Without priors, we have to recover from the missing information by fusing information across the (available) frames.Finally, we allow objects to exhibit nonrigid motion (e.g., human walking) and have to factorize object deformation from object motion.In the following, we present how to set up a joint optimization, with suitable initialization and regularizers, involving object tracking, neural representations, and volume rendering to solve these challenges.
Initialization.We use an off-the-shelf visual tracker [WZB * 19] with keyframe annotation, including instance segmentation and AABBs, to propagate the keyframe segmentation across the frames.
To get an initial registration, we run an optical flow network [TD20] to find initial correspondences and solve for frame-to-frame rigid alignment using iterated closest point (ICP) approach.The registration information across frames provides object trajectory {T i (t)} estimates.
Joint optimization.We now introduce the main loss terms to capture reconstruction quality and additional regularizers to get a desired factored representation.
Reconstruction loss: We render color and depth images using current (multi-object) neural factored representation as described in Section 3. Note that the object trajectories T i are indexed by frame times, i.e., T i (t).We compare the sampled color C and depth D attributes in a set of minibatch samples P against the estimated attributes using the L1 reconstruction loss, i.e., We render the current background and foreground neural objects to produce RGB and depth attributes and sum them up over the individual frames.
Free-space loss: One approach to check the factorization quality is to compare the predicted object segmentation, computed using the current re-projection of objects' transmission, against the input segmentation.However, this approach leads to poor results as segmentation estimates are noisy.Instead, we focus on the complement space and define a free-space loss (cf., [XHKK21]) to penalize density values in regions indicated to be free according to the raw depth information.For any point sampled from any of the objects, we want identify free-space samples using depth D(r).Specifically, we constrain the integrated weights of each free-space sample p ∈ P free , before reaching the object point (i.e., zero-isosurface of ψ i ), to be zero using L1 loss.We found this loss to be better than a crossentropy segmentation loss in the joint-training setting.Specifically, where P free = {p(s, r)|s < Dt (r)} . (4) Non-rigid deformation: In order to handle non-rigid objects, we additionally incorporate a deformation block, for objects marked with flag nonrigid.Specifically, we adopt a state-of-the-art bijective deformation network proposed by Cai et al.
[CFF * 22], which consists of three sub-networks, each predicting a low-dimensional deformation.Given an input 3D point, each sub-network selects one axis, predicts a 1D displacement, and infers a 2D translation and rotation for the other axes.These sub-networks are sequentially invoked in the XYZ axis order.Note that this block gets directly optimized via the reconstruction loss and is not supervised with ground truth deformation.
Surface regularizers: In order to regularize our network to output a canonical model, we employ auxiliary losses to constrain our geometry models to be actual surfaces by penalizing the implicit functions ψ i to (i) be a true signed distance field (i.e., using Eikonal loss) ; (ii) requiring the surface points (i.e., points within ±ε of the zero level set of the SDFs denoted by Ωε(ψ i )) to have normals in the direction of normals n(x) estimated from the input RGB-D [GKOM18]; and (iii) surface points to have zero implicit values.These auxiliary losses does not slow down the optimization since they can be directly calculated without performing volumetric rendering.Putting them together we get, L surface (F) := 1 where P Bi and P Ωi denote the randomly sampled spatial points and surface samples in the object bounding box B i , respectively.Finally, we arrive at the full optimization problem as, min We use λ 1 = 0.1, λ 2 = 1.0, and λ 3 = 0.1 in our experiments where λ 1 < 1 due to noisy depth input.Recall that the factored representation } maintains a specialized model for the background and each of the object trajectories T i (t) being time dependent.
Evaluation
We evaluated Factored Neural Representations on a variety of synthetic and real scenes, in the presence of rigid and nonrigid objects.In each case, we start with only RGB-D sequences, without access to any object template.
Dataset.We tested on two types of datasets, synthetic and real.As synthetic dataset, we propose a new dataset using public available CAD models [CFG * 15, GBB * 22] and render RGB-D sequences using Blender [GBB * 22, Com18] with simulated sensor noises [HWMD14].To inject motion, we manually edit camera motion, rigid object motion, and combine non-rigid motion from the DeformingThings4D [LTT * 21] dataset.As representative examples, we present three sequences, SYN-SCENE A, B, and C, each spanning for 90-100 frames and containing multiple dynamic objects.For these synthetic sequences, we have access to ground truth data (e.g., object trajectory, object segmentation, deformation model).This new dataset will be made publicly available on publication.
As real dataset, we use the BEHAVE [BXP * 22] dataset, which provides human object interaction RGB-D videos with keyframe annotation.We crop and evaluate the first non-occlusion sequence in each scene to avoid the object re-identification issue.Figure 4 shows some representative frames.Model size and implementation details.We report the model size of our methods and comparisons.IMAP uses 0.9MB (FG/BG); NICESLAM uses 76MB (FG) and 135MB (BG) with 32 3 +64 3 grid resolutions for foreground and 32 3 +80 3 for background.In contrast, our model takes 5.7MB for the whole scene.We train all methods using our training framework on a single Nvidia RTX 3090 GPU.We do not use input depth to guide ray sampling for any of the methods as we observed that this reduces models' generalization ability.Instead, at each training iteration, we perform inverse transform sampling and sample 256 rays with 128 points per ray.
Comparison.We compare our approach against different competing alternatives.Existing monocular approaches can be categorized as either employing an MLP (e.g., IMAP [SLOD21]), or using multiresolution feature grids (e.g., NICESLAM [ZPL * 22]).Since these competing methods do not support joint optimizing multiple objects, we additionally provide the segmentation and poses generated from our initialization step (see Section 4) and manually run them multiple times to reconstruct background and dynamic objects.Note that we modified the ray sample step of IMAP and NICESLAM to accept object segmentation input, and we use L1 segmentation loss when training foreground models.For both IMAP and NICESLAM, we employ the open-source network implementation [ZPL * 22] in our training framework instead of their multi-threads SLAM framework, which contains several optimizations (e.g., view-purging) for real-time applications.Note that IMAP, with provided background and object segmentation information, can be seen as an upper bound for performance of a method like RigidFusion [WLNM21].
Evaluation metrics.We compare different methods across a range of metrics.We evaluate novel view rendering quality using PSNR, SSIM, and L1 for reconstruction quality in Table 1 and Table 2.We also qualitatively evaluate resynthesis quality under the authoring of updated object trajectory as well for addition or deletion of objects from the factored scenes in Figure 8.
Qualitative evaluation.In Figure 6, 7, and 8, we qualitatively compare our method against alternative approaches (IMAP and NICES-LAM).Note that although the comparison approaches jointly learn for scene geometry and appearance, they assume the scenes to be static.In other words, these methods provide only partial factorization into scene models and camera trajectories.Thus, we run them multiple times with the same segmentation and pose initialization to reconstruct background and dynamic objects.Please check the supplemental webpage for result comparisons.and NICESLAM [ZPL * 22] on our synthetic sequences using the validation cameras.See Table 1 for quantitative evaluation.Note that the other methods fail to produce any reconstruction for the nonrigidly moving human.Further, our results are higher in quality and capture finer geometric (e.g., the handle of the green bag) and appearance details (e.g., shading on the yellow monkey face).Please note that Scene C shows a challenging validation frame where the human undergoes a strong deformation.Handling it requires more regularization to force the network to learn the non-rigid motion. 2 for quantitative evaluation.
Table 2: Reconstruction error on BEHAVE.We report total scene reconstruction errors using the training camera k0 due to the lack of validation views and per-frame annotation.Our method consistently produces better reconstruction quality benefiting from the proposed joint optimization and the deformation module.See Figure 7 for qualitative evaluation.Our method produces better quality on both synthetic and realworld scans, both appearance and geometry.Figure 6 demonstrates our joint optimization scheme improves the object segmentation leading to clearer geometry.Figure 7 shows the comparison of scene reconstruction on the BEHAVE sequences with large missing depth areas.Our method generalizes better than the comparison.In Figure 8, we also present the extracted object motion trajectories in R 3 as recovered by our initialization step.For the synthetic example, we add groundtruth trajectories (gray colored) for comparison.Note that since we do not perform any loop closure, the trajectory estimates degrade over a longer distance due to error accumulation.
Quantitative evaluation.We present a quantitative comparison in Table 1 for reconstruction quality using the validation cameras, sepa-rately for RGB and depth channels.Notably, our method consistently outputs better reconstruction than others (IMAP and NICESLAM), indicating that our sampling scheme extracts a proper factorization and hence avoids overfitting to training views.In the absence of ground truth and validation views, we cannot run quantitative evaluation for real sequences (Figure 7).
Ablation study.In Table 3 and Figure 10, we conduct an ablation study using our synthetic dataset.While the commonly employed segmentation loss [WLL * 21, YGKL21, CFF * 22] can constrain the object shape through the rendered mask (weights of each sampled ray), it blocks the foreground reconstruction in joint optimization.The surface regularizers can stabilize the geometry models and improve both color and depth reconstruction.Our final setting (with surface regularizers and freespace loss) has the best full-scene reconstruction quality.The segmentation loss fights with thee reconstruction loss in the joint training setting, and the implicit networks fail to learn object surface.Therefore, we replace the segmentation loss with freespace loss allowing the network to optimize all objects and learn correct object geometry.
Model size and reconstruction quality.We conduct another ablation study to examine the effect of model size using our synthetic dataset.We adjust the hidden dimension size and set up three models: small, medium, and big, with 370K, 1381K, 5342K parameters, respectively.We trained all models for 100K iterations and observed that the reconstruction quality increased linearly when more parameters were used.The result color PSNR values are 22.9 (small), 23.1 (medium), and 23.18 (big); and the depth L1 errors are 0.35 (small), 0.33 (medium), and 0.32 (big).Reconstruction quality and enabled applications on two synthetic scenes.Please refer to the supplemental videos.Here we show frames for the output RGB, depth, and underlying recovered geometries (extracted by running Marching cubes on the estimated implicit representations ψ i ).We also show the recovered trajectories, along with corresponding ground truth trajectories.Recall that the 0-th object being the background, and {T 0 (t)} represents the camera path.Any stationary object gets reconstructed in the background layer in our factorization.We observed some artifacts caused by unseen geometry (e.g., move objects examples in the second and third rows) and ambiguous decomposition (e.g., the blue box in the first row), because we only have access to monocular and partially occluded input.Table 3: Ablation study on our synthetic dataset.We evaluate total scene reconstruction errors using the validation cameras on our synthetic dataset.Segment.Loss: supervise the rendered masks (weights of each sampled ray) using the input segmentation [WLL * 21, YGKL21, CFF * 22].Recon.Loss: color and depth reconstruction loss.Surface Reg. and Freespace Loss: the surface regularizer and the loss described in Section 4. We observed that the commonly employed segmentation loss is unsuitable for supervising multiple implicit networks (iv) and causes a large performance drop.Also, the freespace loss is not performed well in single object setting (v) due to the lack of background information.See Figure 10 for the visualization.Our setting (vi) performs well and is able to handle imperfect segmentation input.(ii) object level manipulation by changing one or more object trajectories; (iii) deleting objects by removing them from the factored representations.Note that the scene-specific learned renders are held fixed during any of the edits.While we only train with monocular input, our model can still support editing and output reasonable reconstruction.These edit modes are be applied separately or in parallel, and test the quality of the scene understanding (i.e., factorization) by revealing unseen object parts and configurations.These editing operations are non-trivial because our model is supervised using monocular input containing large motion.Removing the artifacts in Figure 8 will be interesting future work.
Conclusion
We have presented a factored neural representation along with a joint optimization formulation that allows us to separate a monocular RGB-D video into object-level encodings, without requiring access to additional shape or motion priors. We demonstrated how to directly obtain object-level coupled geometry and appearance encodings, along with object trajectories and deformations. The factorized representation directly supports novel view synthesis along with authoring edits on object trajectories. Our work has limitations that we want to address in future work, as discussed next.
Joint camera and object tracking. In our current implementation, we do not optimize the camera poses obtained during the initialization phase. It would be interesting to jointly finetune the initial estimates, possibly by loop closing and locally linearizing the transformation estimates to simplify the resultant optimization.
Inter-object interactions and shading. In this paper, we do not model object-object or object-background effects. For example, we do not explicitly model shadows [WZT * 22], reflections [GKB * 22], transparency [IAKG20], or object interactions arising from human affordance considerations. In the future, these effects could be modeled in the volume rendering step.
Better architecture. At present, we model object functions of the form f θ simply using MLPs. More recent alternatives and localized versions, such as hashing [MESK22] or direct functions (e.g., Relu-Fields [KRWM22b]), could be explored instead. However, the challenge would then be to effectively integrate information across multiple frames to model deformations, possibly by dynamically reindexing the local grid-based representations.
Shape priors. As our method does not rely on any object or motion priors, it cannot recover from significant occlusions. We plan to regularize the problem by incorporating data priors, and possibly reducing the dimensions of the variables by working in a learned latent space. However, even deciding which representation to use to anchor such a learned shape space for arbitrary objects still remains an open research topic.
Modeling dynamic scenes. Recent works have trained global object NeRFs from monocular input [LNSW21, GSKH21], captured dynamic effects by overfitting to a global 4D space-time volume [XHKK21, CJ23, FKMW * 23], and explicitly captured human interactions [JJS * 22, SGF * 22]. Researchers have investigated the effect of segmentation, tracking, and NeRF modeling tasks in other efforts. Notable examples include monocular setups with foreground and background decomposition [MBRS * 21, YLSL21, WZT * 22, SCL * 23], modeling rigid objects with a planar background model [OMT * 21, KGY * 22], and egocentric video segmentation.

Volume rendering. To render an image I from a given camera setup Π, volume rendering [Max95, MST * 20] maps each image pixel to form a camera ray r. Points are sampled on each such ray and sorted based on their depth values to produce a rendered color C(r) as the integration of the sampled point colors {c i } weighted by the corresponding point densities {σ i } and (accumulated) transmittances {T i }. Note that samples along a ray r := (o, d), going through point o along a unit direction d, are parameterized as p(s i ) := o + s i d for increasing scalar depth samples s i ∈ R + . Using the samples, we discretize the continuous formulation using the quadrature approximation as C(r) ≈ Σ i T i (1 − exp(−σ i δ i )) c i , with T i := exp(−Σ j<i σ j δ j ) and δ i := s i+1 − s i , where the function f θ , typically modeled by an MLP [MST * 20], can be probed to produce density and color samples as f θ (p(s i ), d) := (σ i , c i ). Typically, only the color values are view dependent.

Volume rendering with implicit surface. Implicit surface representations, such as occupancy or signed distance fields, can also be used with volume rendering [WLL * 21, OPG21, YGKL21] and provide an inductive bias for modeling surface geometry. We found this more suitable for object-level factored representation as we can easily regularize the optimization to encode object surfaces instead of producing volumetric clouds. Here, we employ the signed distance field formulation proposed by Wang et al. [WLL * 21] and convert the signed distance value ψ to density values by assigning non-zero values near the zero level set of the modeled surface geometry, e.g., via the logistic density s e^(−s ψ) / (1 + e^(−s ψ))^2, whose scale parameter s controls how sharply the density concentrates around the surface.
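As a concrete reference for the compositing step above, here is a minimal NumPy sketch of the standard quadrature rule; the variable names are illustrative and this mirrors the generic formulation rather than this paper's exact implementation.

```python
import numpy as np

def composite_ray(depths, sigmas, colors):
    """Discrete volume rendering along one ray (standard quadrature rule).

    depths: (N,) sorted sample depths s_i along the ray.
    sigmas: (N,) densities at the samples.
    colors: (N, 3) per-sample RGB.
    """
    deltas = np.append(np.diff(depths), 1e10)               # delta_i = s_{i+1} - s_i
    alphas = 1.0 - np.exp(-sigmas * deltas)                 # per-segment opacity
    trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1]))   # transmittance T_i
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)           # rendered color C(r)
    depth = (weights * depths).sum()                        # expected ray depth
    return rgb, depth
```

In the factored setting, samples gathered from all intersected object boxes would be merged and re-sorted by depth before this compositing step (cf. Figure 2).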
Figure 2: Rendering neural factored representation. Given a factored representation F = {( f i , ψ i , B i , T i )} for i = 0, . . . , k and any query ray r from the current camera, we first intersect each object's bounding box B i to obtain a sampling range and then compute a uniform sampling for each of the intervals. For each such sample p, we look up feature attributes by re-indexing using the local coordinate T −1 i p, re-sort the samples across the different objects based on (sample) depth values, and then volume render to get a rendered attribute. The background is modeled as the 0-th object. See Section 3 for details. For objects with an active non-rigid flag, we also invoke the corresponding deformation block (see Section 4 and Figure 5). The neural representations and the volume rendering functions are jointly trained.
Architecture. Figure 5 shows our network architecture. For the geometry network, we use an SDF field with geometric initialization [GYH * 20], weight normalization [SK16], Softplus activations, and a skip-connection MLP. The input coordinates and view directions are lifted to a high-dimensional space using positional encoding [MBRS * 21]. For rigid objects, we use an SE3 representation, i.e., a quaternion and a translation vector. For non-rigid objects, we use bijective deformation blocks [CFF * 22] with Softplus activations. For the color MLP network, we use ReLU activations.
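As a small illustration of the coordinate lift mentioned above, here is a sketch of sinusoidal positional encoding in PyTorch; the number of frequency bands and the octave spacing are illustrative defaults, not the paper's reported hyperparameters.

```python
import torch

def positional_encoding(x, num_freqs=6):
    """Lift coordinates/view directions to a higher-dimensional space.

    x: (..., D) inputs. Returns (..., D * 2 * num_freqs) features.
    """
    freqs = 2.0 ** torch.arange(num_freqs) * torch.pi   # octave-spaced frequencies
    scaled = x[..., None] * freqs                       # (..., D, num_freqs)
    enc = torch.cat([torch.sin(scaled), torch.cos(scaled)], dim=-1)
    return enc.flatten(start_dim=-2)
```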
Figure 5: Our network architecture. We use sine positional encoding as in NeRF [MBRS * 21]. The number of rigid and non-rigid motion blocks depends on the objects' motion labels. We employ a bijective deformation block [CFF * 22] for each non-rigid object. Unlike [PSH * 21, CFF * 22], we do not predict ambient coordinates in the non-rigid motion block.
Figure 6: Comparisons of scene and object reconstruction on our synthetic dataset. Visually comparing ours against IMAP [SLOD21] and NICESLAM [ZPL * 22] on our synthetic sequences using the validation cameras. See Table 1 for quantitative evaluation. Note that the other methods fail to produce any reconstruction for the non-rigidly moving human. Further, our results are higher in quality and capture finer geometric (e.g., the handle of the green bag) and appearance details (e.g., shading on the yellow monkey face). Please note that Scene C shows a challenging validation frame where the human undergoes a strong deformation. Handling it requires more regularization to force the network to learn the non-rigid motion.
Figure 7: Comparisons of scene reconstruction on the BEHAVE dataset. Visually comparing our results against IMAP [SLOD21] and NICESLAM [ZPL * 22] on the BEHAVE sequences. Notably, our method generalizes better when the scene contains large missing depth areas, showing that the learned geometry model is well constrained (see the wall in the training views). See Table 2 for quantitative evaluation.
Applications. We demonstrate three different editing modes in Figure 8: (i) novel view synthesis by changing the extracted camera path; (ii) object-level manipulation by changing one or more object trajectories; and (iii) deleting objects by removing them from the factored representations. Note that the scene-specific learned renderers are held fixed during any of the edits. While we only train with monocular input, our model can still support editing and output reasonable reconstructions. These edit modes can be applied separately or in parallel, and test the quality of the scene understanding (i.e., factorization) by revealing unseen object parts and configurations. These editing operations are non-trivial because our model is supervised using monocular input containing large motion. Removing the artifacts in Figure 8 will be interesting future work.
Figure 8: Reconstruction and Applications. Reconstruction quality and enabled applications on two synthetic scenes. Please refer to the supplemental videos. Here we show frames for the output RGB, depth, and underlying recovered geometries (extracted by running Marching Cubes on the estimated implicit representations ψ i ). We also show the recovered trajectories, along with the corresponding ground truth trajectories. Recall that the 0-th object is the background, and {T 0 (t)} represents the camera path. Any stationary object gets reconstructed in the background layer in our factorization. We observed some artifacts caused by unseen geometry (e.g., the moving-object examples in the second and third rows) and ambiguous decomposition (e.g., the blue box in the first row), because we only have access to monocular and partially occluded input.
Figure 9: Dynamic scene reconstruction. We demonstrate our full scene reconstruction exhibiting non-rigid object deformation across different time indexes. Note how ours can recover plausible movement of the limbs across time.
Figure 10: Ablation study and reconstruction results. The numbers indicate the settings in Table 3. Surface regularizers (iii) force the network to learn geometry. The segmentation loss in the joint setting (iv) performed poorly due to conflicting signals between the networks, which may require per-object rendering during training. | 2023-04-24T01:15:16.297Z | 2023-04-21T00:00:00.000 | {
"year": 2023,
"sha1": "e76698076586c6430f13f9a990d7405fde9a2efc",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/cgf.14911",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "b73eebd8170f764da2620ac6304f4698f7dcbd87",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
14230248 | pes2o/s2orc | v3-fos-license | Equilibrium phase behavior of polydisperse hard spheres
We calculate the phase behavior of hard spheres with size polydispersity, using accurate free energy expressions for the fluid and solid phases. Cloud and shadow curves, which determine the onset of phase coexistence, are found exactly by the moment free energy method, but we also compute the complete phase diagram, taking full account of fractionation effects. In contrast to earlier, simplified treatments we find no point of equal concentration between fluid and solid or re-entrant melting at higher densities. Rather, the fluid cloud curve continues to the largest polydispersity that we study (14%); from the equilibrium phase behavior a terminal polydispersity can thus only be defined for the solid, where we find it to be around 7%. At sufficiently large polydispersity, fractionation into several solid phases can occur, consistent with previous approximate calculations; we find in addition that coexistence of several solids with a fluid phase is also possible.
During the past few decades, a great deal of effort has been devoted to studies of the phase behavior of spherical particles, and in particular of the freezing transition, where the particles arrange themselves into a crystal with long-range translational order. The simplest system for studying this transition is one where the particles act as hard spheres, exhibiting no interaction except for an infinite repulsion on overlap. This scenario can be realized experimentally, using e.g., colloidal latex particles sterically stabilized by a polymer coating [1]. Hard spheres constitute a purely entropic system; the internal energy U vanishes, and F = −T S. Phase transitions are thus entropically driven; nevertheless, monodisperse (i.e., identically sized) hard spheres exhibit a freezing transition, where a fluid with a volume fraction of φ ≈ 50% coexists with a crystalline solid with φ ≈ 55% [2].
For colloidal hard spheres, there is inevitably a spread in the particle diameters σ, which are effectively continuously distributed within some interval. The width of the diameter distribution can be characterized by a polydispersity parameter δ, defined as the standard deviation of the size distribution divided by its mean.
The effect of polydispersity on the phase behavior of hard spheres has been investigated by experiments [1,2], computer simulations [3,4,5,6], density functional theories [7,8], and simplified analytical theories [6,9,10,11,12,13,14]; Ref. [6] has a more detailed bibliography of earlier work. These studies have revealed that, compared to the monodisperse case, polydispersity causes several qualitatively new phenomena. First, it is intuitively clear [9] that significant diameter polydispersity should destabilize the crystal phase, because it is difficult to accommodate a range of diameters in a lattice structure. Experiments indeed show that crystallization is suppressed above a terminal polydispersity of δ t ≈ 12% [1,2]. Theoretical work suggested that this arises from a progressive narrowing of the fluid-solid coexistence region with increasing δ, with the phase boundaries meeting at δ t [8,10] in a point of equal concentration [13]. Bartlett and Warren [13] also found re-entrant melting on the high-density side of this point: for δ just below δ t , they predicted that compressing a crystal could transform it back into a fluid, as sketched in the inset of Fig. 1 below. However, none of these theoretical studies fully accounted for fractionation [15], i.e., the fact that coexisting phases generally have different diameter distributions; in fluid-solid coexistence, one typically finds that the solid contains a higher proportion of the larger particles. Beyond the resulting difference in mean diameter, fractionation implies that coexisting phases can also have different polydispersities δ. Indeed, numerical simulations that allow for fractionation show that a solid with a narrow size distribution can coexist with an essentially arbitrarily polydisperse fluid [5,16], suggesting that the concept of a terminal polydispersity is useful only for the solid but not for the fluid. Fractionation has also been predicted to lead to solid-solid coexistence [11,12], where a broad diameter distribution is split into a number of narrower solid fractions. This occurs because the loss of entropy of mixing is outweighed by the better packing, and therefore higher entropy, of crystals with narrow size distribution; accordingly, as the overall polydispersity of the system grows, the number of coexisting solids is predicted to increase.
Previous work as described above leaves open a number of questions. The drastic and differing approximations for size fractionation used in the studies of re-entrant melting and solid-solid coexistence [11,12,13] leave the relative importance of these two phenomena unclear. In [13] fractionation was allowed, but coexisting phases were implicitly constrained to have the same δ; calculations that account fully for fractionation remain restricted to highly simplified van der Waals free energies [14]. Numerical simulations have been carried out at constant chemical potential distribution [4,16]; in contrast to the experimental situation, the overall particle size distribution can then change dramatically across the phase diagram, limiting the applicability of the results.
Our goal in this letter is to calculate the equilibrium phase behavior of polydisperse hard spheres on the basis of accurate free energy expressions, taking full account of fractionation and going beyond previous work on fluidsolid and solid-solid coexistence. The experimentally observed behavior of hard sphere colloids will of course also depend on non-equilibrium effects, e.g., the presence of a kinetic glass transition [17], anomalously large nucleation barriers [18] or the growth kinetics of polydisperse crystals [19]. Nevertheless, the equilibrium phase behavior needs to be understood as a baseline from which non-equilibrium effects can be properly attributed. Also, more of the equilibrium behavior may be observable under microgravity conditions, where the glass transition is shifted to higher densities or even absent [20].
Our calculations will show that the fluid cloud curve, which locates the onset of phase coexistence coming from low density, continues to large polydispersities δ: the point of equal concentration found in [13] disappears together with the predicted re-entrant melting. Instead of returning to a single-phase fluid at high volume fractions, the system splits into two or more fractionated solids, consistent with the simplified calculations of [11]; coexistence of several solids with a fluid phase appears as a new feature.
In general, the total free energy (density) of a polydisperse system consists of an ideal and an excess part (f ex ). In units where k B T = 1, f = ∫ dσ ρ(σ) [ln ρ(σ) − 1] + f ex . (1) Here ρ(σ) is the density distribution, i.e., ρ(σ) dσ is the number density of particles with diameters between σ and σ + dσ. Equilibrium requires equality of the chemical potentials µ(σ) = δf /δρ(σ) and of the pressure Π = −f + ∫ dσ µ(σ)ρ(σ) among all coexisting phases a = 1 . . . P . Particle conservation adds the condition that, if phase a occupies a fraction v (a) of the system volume, then Σ a v (a) ρ (a) (σ) = ρ (0) (σ), where ρ (0) (σ) is the overall ("parent") density distribution. For the fluid, the most accurate free energy approximation available at present is the BMCSL generalization [21,22] of the monodisperse Carnahan-Starling equation of state. This is truncatable in the sense that the excess free energy only depends on the four moments ρ i = ∫ dσ σ i ρ(σ) (i = 0 . . . 3) of the density distribution [23]; ρ 0 is the total number density, (π/6)ρ 3 = φ the volume fraction, and ρ 1 /ρ 0 = ⟨σ⟩ and ρ 2 /ρ 0 = ⟨σ 2 ⟩ give the mean and mean-square diameter. For the crystalline solid, Bartlett [10,24] assumed that the same truncatable structure holds; an approximate excess free energy (depending only on the same ρ i ) can then be derived from simulation results [25] for bidisperse hard spheres. Implicit in the use of data from [25] is the assumption that the crystal has a substitutionally disordered f.c.c. structure.
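For concreteness, the moment bookkeeping and the standard BMCSL compressibility factor can be sketched in a few lines of Python; the discretization grid and the use of np.trapz are illustrative choices, while the BMCSL expression itself is the standard published form.

```python
import numpy as np

def moments(sigma, rho, orders=(0, 1, 2, 3)):
    """Moments rho_i = integral of sigma^i * rho(sigma) over sigma (discretized)."""
    return np.array([np.trapz(sigma**i * rho, sigma) for i in orders])

def bmcsl_compressibility(rho_i):
    """Z = p / (rho_0 k_B T) from the BMCSL equation of state."""
    xi = (np.pi / 6.0) * rho_i          # scaled moments xi_0 .. xi_3
    d = 1.0 - xi[3]                     # free-volume factor (xi_3 = volume fraction)
    return (1.0 / d
            + 3.0 * xi[1] * xi[2] / (xi[0] * d**2)
            + (3.0 - xi[3]) * xi[2]**3 / (xi[0] * d**3))
```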
We adopt the BMCSL and Bartlett free energies for our calculation; the appropriate branch for a given ρ(σ) is selected by taking the minimum of the fluid and solid free energies. Since the excess free energies depend only on the ρ i , the excess chemical potentials µ ex (σ) take the form µ ex (σ) = Σ i µ i σ i with µ i := ∂f ex /∂ρ i . (2) For the solid, Bartlett [24] derived µ 0 and µ 3 from the small and large σ limits of the Widom insertion principle [26]. However, because of the approximate character of the excess free energy, µ ex (σ) then does not obey the thermodynamic consistency requirement δµ ex (σ)/δρ(σ ′ ) = δµ ex (σ ′ )/δρ(σ). To avoid this, we assign all excess chemical potentials by explicitly carrying out the differentiation in (2).
Our computational approach is based on the moment free energy method [27,28,29], which maps the full free energy (1), with its dependence on all details of ρ(σ) through the ideal part, onto a moment free energy depending only on the moments ρ i . For truncatable free energies this locates exactly the cloud points, i.e., the onset of phase separation coming from either a single-phase fluid or solid, as well as the properties of the coexisting "shadow" phases that appear there. Inside the coexistence region, one in principle needs to solve a set of highly coupled nonlinear equations [15] and the moment free energy method gives only approximate results. However, by retaining extra moments with adaptively chosen weight functions [29,30,31], increasingly accurate solutions can be obtained by iteration. Using these as initial points, we are then able to find full solutions of the exact phase equilibrium equations. Care is taken to check that solutions are globally stable, i.e., that no phase split of lower free energy exists [29]. We are able to calculate coexistence of up to P = 5 phases, which so far has been possible only for much simpler free energies depending on a single density moment (see e.g., [29]).
Below we present results for a symmetric triangular parent density distribution, i.e., ρ (0) (σ) increasing linearly from zero for σ ∈ [1 − w, 1] and decreasing linearly for σ ∈ [1, 1 + w], with w = √6 δ. The mean diameter of 1 fixes our length unit. Other distributions could be considered, but for the moderate values of δ of interest here one expects them to give qualitatively similar results, based on the intuition that for narrow size distributions δ is the key parameter controlling the phase behavior [9]. Fig. 1 shows our results for the cloud and shadow curves. The fluid cloud curve continues throughout the whole range of polydispersities that we can investigate: even at δ = 14%, a hard sphere fluid will eventually split off a solid on compression. Fractionation is key here; as indicated in Fig. 1, the coexisting shadow solid always has a smaller polydispersity, with δ never rising above 6%.

Figure 1: Cloud (thick) and shadow (thin) curves, plotted as polydispersity δ versus volume fraction φ; dashed lines link sample cloud-shadow pairs. The fluid (F) cloud curve continues up to the largest δ that we study. The solid (S) cloud curve has two branches, with onset of F-S and S-S coexistence at low and high volume fractions, respectively. Inset: sketch of the phase diagram of [13], showing re-entrant melting and the point of equal concentration.
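The triangular parent and the relation w = √6 δ can be checked numerically; a minimal sketch, with the grid resolution as an arbitrary choice:

```python
import numpy as np

def triangular_parent(delta, n=2001):
    """Symmetric triangular size distribution with mean diameter 1.

    The standard deviation of a symmetric triangular distribution of
    half-width w is w / sqrt(6), hence w = sqrt(6) * delta.
    """
    w = np.sqrt(6.0) * delta
    sigma = np.linspace(1.0 - w, 1.0 + w, n)
    n_sigma = (1.0 - np.abs(sigma - 1.0) / w) / w   # normalized to unit area
    return sigma, n_sigma

sigma, n_sigma = triangular_parent(delta=0.06)
mean = np.trapz(sigma * n_sigma, sigma)
std = np.sqrt(np.trapz((sigma - mean)**2 * n_sigma, sigma))
print(mean, std / mean)   # approx. 1.0 and approx. 0.06
```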
This fractionation effect prevents the convergence of the solid and fluid phase boundaries, along with the resulting re-entrant melting [13] (Fig. 1, inset). These findings are in qualitative accord with numerical simulations for the simpler case of fixed chemical potentials [5,16]. In particular, the terminal polydispersity δ t cannot be defined as the point beyond which a fluid at equilibrium will no longer phase separate; δ t only makes sense as the maximum polydispersity at which a single solid phase can exist. As in [5] we also find that the coexisting fluid always has a lower volume fraction than the solid, along with (not shown) a lower mean diameter.
Coming from the single-phase solid, decreasing density at low polydispersities leads to conventional fluid-solid phase separation. At higher δ, however, the solid cloud curve acquires a second branch at higher densities. This is broadly analogous to the re-entrant phase boundary found in [13], but with the crucial difference that the system phase separates into two solids rather than a solid and a fluid. The two branches meet at a triple point. Here the solid cloud phase coexists with two shadow phases, one fluid and one solid, as marked by the squares in Fig. 1. From Fig. 1 the triple point, at δ t ≈ 7%, also gives the terminal polydispersity beyond which solids with a triangular diameter distribution are unstable against phase separation. As explained, other distributions should give similar values of δ t .
In Fig. 2 we show the full phase diagram for our triangular parent distribution. In each region the nature of the phase(s) coexisting at equilibrium is indicated. The cloud curves of Fig. 1 reappear as the boundaries between single-phase regions and areas of fluid-solid or solid-solid coexistence. Starting from the latter and increasing density or δ, fractionation into multiple solids occurs. The overall shape of the phase boundaries in this region is in good qualitative agreement with the approximate calculations of [11]. However, the coexisting solids do not necessarily split the diameter range evenly among themselves as assumed in [11]; see the sample plot in Fig. 3 of the normalized diameter distributions n(σ) = ρ(σ)/ρ 0 of four coexisting solids. In fact, plotting δ vs φ for all coexisting solids across the phase diagram, we find points that cluster very closely around the high-density branch of the solid cloud curve in Fig. 1. Coexisting solids with lower volume fraction φ thus tend to have higher polydispersity δ, as in the example in Fig. 3; this conclusion is intuitively appealing since higher compression should disfavor a polydisperse crystalline packing. Note that in Fig. 2, at larger δ than we can tackle numerically, coexistence of P > 4 solids would be expected since each individual solid can only tolerate a finite amount of polydispersity. However, from Fig. 2 such phase splits would occur at increasing densities and eventually be limited by the physical maximum volume fraction φ max ≈ 74%. Also, at higher δ more complicated single-phase crystal structures, with different lattice sites occupied preferentially by (say) smaller and larger spheres, could appear and compete with the substitutionally disordered solids we consider. Finally, a new feature of the phase diagram in Fig. 2 is the coexistence of a fluid with multiple solids. The triple point on the solid cloud curves already indicated the existence of a three-phase F-S-S region; as in the case of solid-solid phase splits, more solid phases then appear with increasing δ. Fig. 4 shows again that the fractionation behavior is non-trivial: while the coexisting fluid is enriched in the smaller particles as expected, it also contains "left over" large spheres that did not fit comfortably into the solid phases and thus ends up having a larger polydispersity (10.4%) than the parent (8%).
In conclusion, we have calculated the phase behavior of polydisperse hard spheres, using accurate free energies for the fluid and solid phases and solving exactly the resulting equilibrium conditions. Fluid-solid coexistence has been identified for fluids with polydispersities up to δ = 14%. This shows clearly that the experimentally observed suppression of crystallization above δ = 12% is a non-equilibrium effect, probably caused by increased nucleation barriers at large δ [18]. For the solid, a terminal polydispersity remains well-defined as the maximal value beyond which instability to phase separation sets in; for triangular diameter distributions this turns out to be δ t ≈ 7%. Instead of the re-entrant melting predicted in an approximate treatment of fractionation effects [13], we find that sufficiently polydisperse solids split into two fractionated solids on compression. At higher volume fractions and polydispersities, multiple solids can coexist; coexistence of a fluid with several solids appears as a new feature. Fractionation effects are nontrivial, with solids splitting the diameter range unevenly among them and coexisting fluids sometimes having larger polydispersities than the parent. Overall, our calculated phase diagram unites, clarifies and extends the previous separate predictions of polydispersity effects on fluid-solid coexistence and solid-solid fractionation. Numerical simulations may offer the best avenue for testing our predictions but will need to be carried out at fixed parent size distribution [32] to detect the complex fractionation phenomena we find. For the future it would be exciting to unify our predictions with those for fluid-fluid demixing, but this will be very challenging since the latter only occurs at polydispersities δ of order 100% [33,34], far outside the range studied here.
It is a pleasure to thank Paul Bartlett for providing his code for the solid free energy. Financial support through EPSRC grant GR/R52121/01 is acknowledged. | 2015-03-21T17:44:09.000Z | 2003-05-09T00:00:00.000 | {
"year": 2003,
"sha1": "c24cbc4b5897cda46add3a347237a6b25538ca18",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0305211",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c24cbc4b5897cda46add3a347237a6b25538ca18",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Physics",
"Medicine"
]
} |
136261154 | pes2o/s2orc | v3-fos-license | Effect of shear span-to-depth ratio on the shear behavior of BFRP-RC deep beams
This study investigates the shear behavior of deep concrete beams reinforced with basalt fiber reinforced polymer (BFRP) bars for flexure without web reinforcements. The experimental testing performed herein consisted of a total of 4 short beams, three of which were reinforced with BFRP and one beam was reinforced with steel bars. The primary test variable was the shear-span-to-effective-depth ratio (a/d) and its influence on the beams’ mid-span deflections, shear capacity, load-deformation relationships and the failure modes.
Introduction
Infrastructure deterioration due to corrosion of steel reinforcement is one of the biggest challenges facing the construction industry. Fiber Reinforced Polymer (FRP) bars are a good candidate to replace steel bars in construction. The number of studies on the response of RC structural elements reinforced with FRP bars is rising. Current research on BFRP bars has proven their efficiency in bond durability [1][2] and in reinforcing slender beams [3][4]. However, the study of short (deep) beams reinforced with BFRP bars is lacking in the literature.
Slender beams and flexural structural elements reinforced longitudinally with FRP bars suffer in practice from large deformations and wide cracks due to the reinforcement's low modulus of elasticity. On the contrary, load transfer in short beams differs greatly from that in slender ones. Tied-arch action is the mechanism by which short beams transfer loads, where the concrete extending from the loading point to the nearest support acts as a diagonal compression strut and the longitudinal reinforcement acts as the tie. Given the high tensile strength of FRP bars compared to regular steel ones, it is believed that FRP bars would greatly enhance the tied-arch mechanism in short beams.
Omeman et al. [5] reported the results of testing twelve short beams reinforced with steel and carbon-FRP (CFRP). The study examined variables such as a/d, reinforcement ratio (ρ), concrete compressive strength (f'c) and beam effective depth (d). It was found that CFRP-beams exhibited higher shear capacities in comparison to their steel counterparts. Also, CFRP-beams had larger crack widths than steel-beams near failure. Abed et al. [6] experimentally investigated the response of nine GFRP-reinforced short beams. The study considered the effect of variables such as a/d ratio, ρ, f'c and d on the beams' shear capacity and overall response. It was noted that GFRP-beams exhibited lower stiffness and larger mid-span deflections when compared to their steel counterparts. This was attributed to the low modulus of elasticity of the GFRP bars compared to regular steel. Moreover, twelve short beams reinforced with GFRP bars were tested and analyzed by Andermatt and Lubell [7]. The study examined the influence of a/d ratio, f'c and d on the short beams' response and on the accuracy of Strut-and-Tie Models (STM). The STM methods adopted by ACI 318-08 [8] and CSA S806-02 [9] were utilized to predict the capacity of the tested beams. It was pointed out that all tested beams developed a tied-arch mechanism. Such an observation was derived from the uniform strains along the reinforcements and the concrete crack width and orientation. Moreover, the CSA S806-02 STM provided good predictions of the tested specimens' shear capacity in comparison to the ACI 318-08 STM.
In the present work, the shear response of short beams reinforced with BFRP is experimentally investigated. The effects of shear span-to-depth ratio (a/d) on the beams' mid-span deflections, shear capacity, load-deformation relationships and failure modes are highlighted.
Experimental program
Three BFRP-reinforced beams were tested in addition to one steel-reinforced beam that served as a reference for the BFRP-beams. The test matrix of the experimental program is shown in Table 1.
Test specimen and setup
The test specimen was 2000 mm long with a rectangular cross section 140 mm wide. The effective depth of the beams was kept the same for all four beams at 260 mm. Different span-to-depth ratios, a/d = 1.15, 1.48, or 1.82, were attained by changing the distance between the loading points. A concrete cover of 45 mm, measured from the soffit of the beam to the center of the reinforcing bars, was maintained in all of the tested beams. The beams were tested under a four-point loading configuration with a clear span of 1000 mm and a 500 mm overhang on each side to provide sufficient anchorage length for the bars (Fig. 1). All beams were instrumented at mid-span with 5 mm strain gauges bonded to the tensile bars. The beams were loaded at a constant rate of 0.6 kN/s in load control until failure occurred, using an Instron actuator of 2000 kN capacity. Crack initiation and propagation were manually recorded for each specimen at each stage of loading. Beam deflections were measured by means of linear variable differential transducers (LVDTs) mounted at the beams' mid-span. A data acquisition system captured the readings of the strain gauges and LVDTs at all stages of loading. The actual diameters of the BFRP bars used in this study were measured by volume displacement as recommended by the ACI 440.3R-04 committee [8]. The 12 mm BFRP bars have a uniform sand coating with shallow spiral indentations spaced at 2.75 mm along their surface. The mechanical properties of these BFRP bars were obtained by conducting tensile tests on 5 specimens. The average ultimate stress and elastic modulus were measured as 1230 MPa and 46.2 GPa, respectively. Shear failure of BFRP-reinforced beam B1-BFRP was accompanied by crushing of the top concrete between the two point loads. In beam B2-BFRP, concrete in compression crushed under the point load that connected the shear failure crack to the support. It is worth noting that beams that showed top concrete crushing exhibited a more ductile failure at ultimate than other beams that failed with only diagonal shear cracks.
Load-deflection relationships
The load-deflection relationships of the tested BFRP-beams and the control steel-beam are shown in Fig. 3. The BFRP-reinforced beams exhibited almost linear load-deflection curves after cracking, unlike the steel beams, which showed elasto-plastic load-deflection curves until failure occurred. It can be noted from Fig. 3 that increasing the a/d ratio increases the midspan deflection measured at any stage of loading. This finding is explained by the tendency of beams having large a/d ratios to behave as flexure-critical members rather than shear-critical ones. Smaller deflections were exhibited by the steel-beam, which is noted from its higher post-cracking stiffness compared to its BFRP counterparts. The small post-cracking stiffness encountered in the BFRP-reinforced beams was attributed to the low modulus of elasticity of the BFRP bars compared to that of the steel bars, which resulted in wider cracks.
All of the BFRP-beams showed an abrupt failure when their ultimate shear capacities were attained, as illustrated in Fig. 3. The brittle failure of the BFRP-specimens can be noticed from the load-deflection response of beam B3-BFRP. The sudden drop at ultimate indicated the collapse of the beams once their ultimate strength was achieved, without showing any residual deformations. On the contrary, B1-BFRP showed a more ductile behavior that was attributed to the concrete crushing in compression between the two point loads prior to collapse. The load-deflection response was also consistent with the strain gauge readings along the bars. It is worth pointing out that the recorded strains showed no sign of anchorage problems in any of the beams. Prior to cracking, the strains in the tensile reinforcement were insignificant. However, after cracking the strains increased significantly as a result of the formed cracks. Strains recorded in the steel-reinforced bars were less than those recorded in their BFRP-reinforced counterparts at all stages of loading.
Strength analysis
The tested beams' shear capacities and maximum deflections are summarized in Table 1.
The results suggest that the BFRP-beams recorded slightly higher ultimate loads than the steel counterpart. It should be emphasized that the deflection at ultimate load of the BFRP-beams was higher than that of their steel counterpart as a result of the low modulus of the BFRP bars compared to steel ones. Fig. 4 correlates the tested beams' shear strength to their a/d ratio. A linear inverse proportionality can be noted between shear strength and a/d ratio. Also, decreasing the a/d ratio by 19%, from 1.82 for specimen B3-BFRP to 1.48 for specimen B2-BFRP, led to an increase in the shear capacity by 100%, from 79.5 to 159 kN, respectively. An additional reduction of the a/d ratio by 22%, to a/d = 1.15 in B1-BFRP, resulted in a 34% capacity increase. These trends can be compared with the concrete shear resistance of FRP-reinforced members in CSA S806-12, V c = 0.05 λ φ c k m k r (f'c)^(1/3) b w d v , with k m = (V f d/M f )^(1/2) ≤ 1.0 and k r = 1 + (E F ρ w )^(1/3). Here λ is the concrete density factor, φ c is the concrete resistance factor, ρ w is the longitudinal reinforcement ratio percentage, d v is the effective shear depth taken as the greater of 0.9d or 0.72h (mm), E F is the modulus of elasticity of the FRP bars (MPa), V f is the factored shear (N), M f is the factored moment (N·mm), d is the effective depth (mm), and the quantity V f d/M f is the inverse of a/d. The correlation between V c and a/d in this equation is consistent with the inverse trend observed experimentally.
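To illustrate how a CSA-style expression scales with a/d, here is a rough numerical sketch; the material factors are set to 1.0 for a laboratory comparison, and the concrete strength and reinforcement ratio are assumed values for illustration, not the paper's reported properties.

```python
import numpy as np

def vc_csa_s806(a_over_d, f_c=40.0, b_w=140.0, d=260.0, E_F=46200.0, rho_w=0.01):
    """Concrete shear resistance V_c (kN) following the CSA S806-12 form above.

    f_c [MPa], b_w and d [mm], E_F [MPa]; lambda and phi_c are taken as 1.0
    (laboratory conditions, normal-density concrete). f_c and rho_w here are
    illustrative assumptions.
    """
    d_v = max(0.9 * d, 0.72 * (d + 45.0))       # effective shear depth, h ~ d + cover
    k_m = min(np.sqrt(1.0 / a_over_d), 1.0)     # V_f d / M_f = 1/(a/d) for point loads
    k_r = 1.0 + (E_F * rho_w) ** (1.0 / 3.0)
    return 0.05 * k_m * k_r * f_c ** (1.0 / 3.0) * b_w * d_v / 1000.0

for ad in (1.15, 1.48, 1.82):
    print(ad, round(vc_csa_s806(ad), 1))
```

As in the tests, the predicted capacity decreases monotonically as a/d grows, although a sectional expression of this kind does not capture the full arch action of deep beams.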
Conclusions
This paper reports on the experimental testing of Basalt-FRP reinforced deep beams and a steel RC counterpart for reference. The main variable under consideration is the shear span to effective depth ratio (a/d). It was found that all beams failed by a diagonal crack that extended from the support to the nearest loading point. Also, higher ductility was noted near failure for beams that showed top concrete crushing in addition to the typical diagonal shear crack. Overall, the BFRP-RC beams exhibited higher mid-span deflections and less stiffness compared to their steel counterparts. This is attributed to the low modulus of elasticity of the BFRP bars in comparison to steel bars. Increasing the a/d ratio increased the mid-span deflection due to the tendency of beams with large a/d ratios to behave as flexure-critical members rather than shear-critical ones. Moreover, a linear correlation between the shear capacities and the cubic root of the a/d ratio was evident from the experimental results, which agrees with the CSA S806-12 code. This correlation is attributed to the inclination of the angle between the diagonal compression struts and the horizontal tie, which improves the beam's arch-action mechanism. | 2019-04-29T13:17:42.411Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "ab434717cf027ca143e888817b8bb0a2e336716a",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/34/matecconf_ascm2017_01012.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8eb7d2ec6f18448902299092abc6f1bd719fb88d",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
153466100 | pes2o/s2orc | v3-fos-license | The Role of Unemployment in the Run of Life Chances in Hungary
This paper studies the connection between health—especially life expectancy—and unemployment in Hungary. Unemployment and health are recognised as being linked, though the relationship is complex. Unemployment affects many people, mainly in periods of crisis, and can be a stressful and depressing time of life. On the one hand, the general state of health of the Hungarian people is worse than justified by the level of economic development. On the other hand, the role of the present economic crisis in the future course of health conditions can be anticipated: it would probably result in health deterioration for those social groups who are most affected by unemployment and poverty. The study consists of two major structural parts. The theoretical part provides an insight into the specific literature, while the empirical chapter examines the link between socioeconomic indicators, unemployment, and life chances with correlation and regression calculations.
Introduction
Health, social welfare, and the economy are notions that are tightly interconnected and complement each other; therefore, the health conditions of human resources play an essential role in economic and social processes. Low educational qualification, an unfavourable labour market position, low income, and unemployment go together with deteriorating living conditions and consequently put people's health at risk. The effects of the credit crunch in the fall of 2008 and those of the fully evolved economic crisis in 2009 were most apparent in financial, economic, and labour market mechanisms. Due to the economic crisis, the shortfalls in investments and industrial production caused a rise in unemployment, mostly affecting poor and vulnerable social groups. Most European countries should be prepared to treat the direct and indirect social, health, and health care consequences of the crisis. Central and Eastern European countries in particular face serious challenges, where already existing healthcare conflicts would reappear and health inequalities would become more acute.
The bad health conditions of the region's population and its shorter life expectancy compared to the Western European average, the crisis factors of health care inherited from socialism, and inadequate financing together pose a problem for health politics, which could not find an efficient solution even 20 years after the transition. Therefore, the role of the present economic crisis in the future course of health conditions must be anticipated. On the one hand, it would probably result in health deterioration for the social groups most affected by unemployment and poverty. On the other hand, decreasing income and reduced consumption would limit the possibilities for a health-conscious lifestyle. Thirdly, health might be considered an asset for keeping one's position on the labour market, but large social differences will appear in prevention and health protection.
Methods and Data
The most important aim of the paper is to interpret the correlation between changes in labour market position and the state of health. This paper intends to analyse the supposed relation between unemployment and the course of life chances with the help of statistical indicators and bibliographical references.
The analysis is based on the approach of defining the social determinants of health. Social determinants of health are the economic and social conditions under which people live and which determine their health. These circumstances are shaped by the distribution of money, power, and resources at global, national, and local levels. The factors of the socioeconomic environment are primarily responsible for the development of life circumstances and social situation. One of the social determinants of health is unemployment. Social determinants of health, including employment/unemployment, have been recognized by researchers and the World Health Organization. According to the so-called Health Field Concept of the Lalonde model, the environmental factors that determine the state of health include qualification, employment, and unemployment. This model was one of those that demonstrated that the population's state of health is determined mostly not by the quality of health care or the development of the health care system, but by the effects of environment and lifestyle [1].
The social determinants of health are mostly responsible for health inequalities: the unfair and avoidable differences in health status seen within and between countries. Responding to increasing concern about these persisting and widening inequities, WHO established the Commission on Social Determinants of Health (CSDH) in 2005 to provide advice on how to reduce them. The definitive work on the social determinants is the 2008 report from WHO, "Closing the Gap in a Generation: Health Equity through Action on the Social Determinants of Health" [2].
In the presentation of life expectancy changes, I used the index of average life expectancy at birth, which reflects life chances in a complex way, since it is derived from death rates. Life expectancy means the average number of years to be lived, calculated from birth or from a particular age; it is thus the average number of years that a newborn is expected to live if current mortality rates continue to apply [3]. The index reflects the overall mortality level of a population: it summarizes the mortality pattern that prevails across all age groups, from children and adolescents to adults and the elderly [4]. Life expectancy is influenced by death rates, so the two together form compound indexes of life chances. Economic circumstances also affect life expectancy. For example, life expectancy in the wealthiest areas is several years longer than in the poorest areas. This may reflect factors such as lifestyle as well as access to medical care.
Mortality rates depend on age distribution: if the population ages, the mortality rate increases. Comparing the mortality of populations with different age distributions is possible by calculating the life expectancy at birth; the mortality level of a population can thus be summarized in a single figure. The better the mortality circumstances are, the higher the average life expectancy at birth.
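A minimal period life-table sketch shows how age-specific mortality rates are converted into life expectancy at birth; the age intervals and rates below are made-up illustrative numbers, not Hungarian data.

```python
import numpy as np

def life_expectancy_at_birth(widths, rates):
    """Period life table: convert central death rates m_x into e_0.

    widths: length of each age interval in years (last interval is open-ended).
    rates:  central death rate m_x for each interval.
    Uses the simple actuarial approximation q_x = n*m / (1 + n*m/2).
    """
    n, m = np.asarray(widths, float), np.asarray(rates, float)
    q = (n * m) / (1.0 + n * m / 2.0)                # probability of dying
    q[-1] = 1.0                                      # all die in the open interval
    l = np.cumprod(np.append(1.0, 1.0 - q[:-1]))     # survivors l_x (l_0 = 1)
    d = l * q                                        # deaths per interval
    L = n * (l - d / 2.0)                            # person-years lived (midpoint)
    L[-1] = l[-1] / m[-1]                            # open interval: l_x / m_x
    return L.sum()                                   # e_0 = T_0 / l_0

widths = np.array([1, 4] + [5] * 16 + [5])           # ages 0, 1-4, 5-9, ..., 85+
rates = np.array([0.006, 0.0004] + list(np.geomspace(0.0002, 0.15, 16)) + [0.25])
print(round(life_expectancy_at_birth(widths, rates), 1))
```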
What tendencies have characterized life expectancy in Hungary during the last decades? How significant are the regional differences in life expectancy in Hungary? Do regional differences unequivocally prove a Western-Eastern split in health inequalities? How did the change of economic regime and the appearance of unemployment influence life chances after 1990? How strong is the connection between unemployment and life chances? Does the present crisis have an influence on life chances at all? Do a favourable socioeconomic environment and low unemployment always mean better life chances?
To answer these questions, I used regional analytical methods. In order to explain cause-and-effect correspondences, I substantiated the link between socioeconomic indicators, unemployment, and life chances with correlation and regression calculations. I took the data from the publications and online database of the Hungarian Central Statistical Office and from databases of the European Union. In order to describe tendencies, I analyzed historical data, while to prove regional differences, I tried to obtain the most recent data, from 2009. The level of examination of the statistical analysis is the county (NUTS level III).
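A minimal sketch of the kind of county-level correlation and regression calculation described here, using made-up unemployment and life-expectancy figures rather than HCSO data:

```python
import numpy as np
from scipy import stats

# Hypothetical county-level (NUTS III) data: unemployment rate (%) and
# life expectancy at birth (years). Values are illustrative, not HCSO data.
unemployment = np.array([4.2, 5.1, 6.3, 7.8, 9.5, 10.4, 12.1, 13.6])
life_exp = np.array([75.1, 74.6, 74.2, 73.5, 73.0, 72.6, 72.1, 71.8])

r, p = stats.pearsonr(unemployment, life_exp)
slope, intercept, r_lin, p_lin, se = stats.linregress(unemployment, life_exp)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print(f"life_exp ~ {intercept:.1f} + {slope:.2f} * unemployment")
```

A strongly negative r on real county data would support the moderately strong inverse relationship reported in the analysis below.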
The significance of the results of the enquiry lies in the regional characteristics of public health processes and the socioeconomic factors determining the state of health. The aim was a multidisciplinary approach, using applied methodology and analysing the configuration of life chances. Indeed, awareness of the correlation between the population's health and illness and its role in the socioeconomic context is indispensable in regional development, socioeconomic planning, and political decision making. Similar regional analyses can be the starting point for a nationwide Health Impact Assessment.
The Impact of Unemployment on Health Inequalities
For the evaluation of risk factors and their prevention, it is of essential importance to analyse the role of the social situation in the state of health and its effect on the health system. Social inequalities related to health are present in every country and mostly depend on macroeconomic conditions. The interpretation of the social factors defining health inequalities (Figure 1) presumes that during a crisis, not only the labour market position and the level of income count from a health point of view, but also the level and growth of already existing social and health inequalities.
The most important question is whether the social safety net protecting the poor and those liable to poverty is adequate, and whether the gap in the availability of health services is widening. Health inequalities are always linked to economic inequalities, the unfairness of the distribution system, bad labour market positions, difficulties in the availability of health care and education, disadvantaged living conditions, and the lack of a chance for a healthy life [5].
Unemployment is the factor that has the strongest influence on health. The situation is worst in the case of middle-aged males who become unemployed for the first time during a crisis period [6]. Indeed, unemployment makes people sick, as it has a negative effect on the individual's identity, emotional world, and self-esteem. It increases the feeling of hopelessness, depressive symptoms, and the risk of suicide [7]. When crisis hits, everyone tries to keep their workplace. Therefore, permanent uncertainty and increased stress result in physical and psychical diseases. Stress caused by unemployment results in the spread of risky behaviour (medicine consumption, alcoholism, excessive smoking). The suicide rate of young or middle-aged groups often correlates with changes in the unemployment rate. "Unemployment does significantly affect suicide rates, but in a way that varies for income: In a positive manner for high-income countries, but in a negative manner for low-income countries" [8].
It was apparent during the 2009 economic crisis that besides growing unemployment, in some economic sectors incomes decreased even for the employed, due for example to compulsory leave and reduced working time. The disadvantages of the income drop were increased by the fact that families and households became indebted and their subsistence was in danger. As a consequence, consumption, mainly the consumption of healthy products and the demand for health-related services, decreased. One possible explanation is that in uncertain periods people tend to neglect their health, thus health care and prevention do not reach the required level. Decreasing revenue makes it more difficult for the sick to access medical services [9].
The effect of the economic crisis on the health system can be seen in the availability of health services [10]. Unequal availability in a geographical sense raises the question of regional inequalities, while in a sociological sense it raises that of the equality of chances. Unequal availability of health services is determined by the individual's sociocultural conditions [11]. Giving access to medical services is one of the most effective ways to reduce poverty and social differences. The health system plays an important role in protecting the labour force, therefore the resources for benefits should not be reduced in case of crisis. Consequently, investments in health and the health care system have an advantageous effect on social stability, and thus on the economy at the same time [12]. In a time of economic crisis, the growth of unemployment acts together with a decrease in revenues from health insurance, therefore during these periods the costs of maintaining a healthcare system increase. At the same time, the WHO's warning is an important message for national health systems, institutions, and political decision makers: "The healthcare system plays an important role in labour force protection. In case of economic crisis, people renounce private medical services, and rather turn to state financed medical services, although in most countries state financed health care is overburdened and underfinanced. The first goal to reach is that governments should maintain their state financed health care in time of economic crisis as well, and they have to take measures to protect vulnerable and poor social levels." [2].
The Central European countries that suffered the most received immediate financial help from the International Monetary Fund. On the one hand, IMF credit conditions control how much governments can spend on healthcare, while the population's health state is worse than in Western European countries; on the other hand, mass unemployment appeared in spring-summer 2009. Economic stimulus programs aimed at eliminating the consequences of the crisis can go together with the development of health care and a boost for the health care supply industry. A condition for all of this is that economic development programs should be able to integrate health development as well.
The effect of the economic world crisis on health and the health care system can be positive and negative at the same time. As for the future, the interpretation of possible positive effects, a "best case scenario", should be considered as a research objective. The mechanisms are complex, but the following assumptions must be kept in mind.
The economic crisis may contribute to the valorization of health in several ways. In a cycle of economic prosperity people work more, the order of priorities changes, and there is a tendency to take less care of one's health. During a crisis, if the unemployed cope with the difficulties in a physically and psychically balanced way, health and health care become important, both because the fast lifestyle and rapidly changing situations are absent and in order to be able to start work again. Crisis can result in reinforcing people's survival instinct. The effects of former world economic crises on health have been examined, and it could not be clearly proven that the number of cardiac diseases and deaths caused by cirrhosis of the liver increased due to the world crisis, or that more people were taken to psychiatric institutions than usual. From the statistics it can be seen that a 1% growth in the rate of unemployment during a crisis decreased the mortality rate by 0.5%, which practically means that out of a hundred thousand people, five more survived compared to usual periods [13].
The recent economic crisis with increased unemployment led to adverse economic and social consequences in some countries of Central Europe, including Hungary. After the economic downturn in these countries there was a lot of speculation about causes and effects, actions and reactions concerning the connection between economic recession and health. Most researchers agree that involuntary job loss increases the risk of psychiatric disorders and their somatic sequelae [14]. But in reality, is there a link between financial crisis and health? The social determinants of health are the circumstances of daily life (the conditions in which people are born, grow, live, work, and age) and the structural drivers of those conditions (the unfair distribution of power, money, and resources). Both the conditions of daily life and the structural drivers will be influenced by the financial and economic crisis [15].
The Hungarian Health Inequalities by Life Expectancy
The marked deterioration in the health status of the Hungarian population has been going on since the middle of the 1960s. The general health status of the Hungarian people is worse than justified by the level of economic development.
The adult mortality rate in Hungary is one of the worst among European countries. Due to the very disadvantageous mortality rate of the middle-aged Hungarian male population [16], Hungary is in a very bad position within Europe. "The mortality situation in Hungary, which had been worsening for decades, developed into an epidemiological crisis by the early 1990s, and it presently hits the whole adult population" [17]. On the other hand, the negative natural population growth rate, the very low birth rate, and the ageing population also turned into a demographic crisis in Hungary at the beginning of the 1990s [18]. Hungary's economy experienced significant transitional difficulties after 1990. Its social effects, such as the problems of unemployment and poverty among low-income population groups, have gone together with a "health recession" of these groups. Jointly, the epidemiological, demographic and new economic crises have produced some unique trends in the Hungarian health indicators over recent years.
Life expectancy in Hungary is among the lowest in Europe. From 1996 onwards there was a trend towards better life chances, but the figures are still a very long way from those of the wealthier Western European countries. Furthermore, large variations in life expectancy can be experienced in different parts of the country. The trend in life expectancy in Hungary has a similar pattern to most other Central and Eastern European countries and shows some characteristic features. The average life expectancy at birth was only 62 years in 1945, but, as in all European states after the Second World War, a downward trend in the mortality rate was seen, which led to a period of increasing life expectancy at birth [19]. This favourable tendency was caused by decreasing maternal, neonatal and infant mortality, thanks to the development of preventive strategies and tools against infectious diseases from the beginning of the 20th century in Europe.
The average life expectancy at birth and its changes continuously depended on the improvement or worsening of the mortality situation in Hungary in the second half of the 20th century. Remarkable improvement was mainly experienced until the beginning of the 1970s. Naturally, the result of this positive trend was advantageous life chances among the Hungarian middle-aged population. However, the substantial improvement was followed by a marked deterioration of life expectancy at the end of the 1970s, because from 1966 the main health indicators changed for the worse. The deterioration of Hungarian life expectancy reached its bottom in 1985, but this could not be followed by a period of upswing, due to the change of regime and the socioeconomic transformation (Figure 4). The transition then caused another low point in 1993. In that year, life expectancy fell to unprecedented levels: 64.5 years for men and 73.8 years for women (69.2 years for both sexes combined). The fall in life expectancy in 1989-1993 was largely due to a sharp rise in premature mortality among the middle-aged male population. As the mortality rate moderated after 1993, life chances could increase again: life expectancy rose above 70 years from the second half of the 1990s, above 71 years from 2000, above 72 years from 2002, and above 73 years from 2006. It now averages 74.0 years in Hungary: 70.1 years for males and 77.9 years for females (2009). According to the latest available data, the average life expectancy at birth in Hungary remains among the lowest in the European Union (Figure 2).
Life expectancy in Hungary shows characteristic regional variations (Figure 3), a feature which is also typical of other indicators of health status. Life expectancy in Hungary has been increasing recently, but in a geographically uneven distribution [20]. Broadly speaking, life chances in the Eastern part of the country are a great deal worse than those of the population in the Western part of Hungary. The difference between the average life expectancy of the counties with the best and the worst values is more than 2.5 years [21]. Life chances and their regional differences within Hungary are influenced by the socioeconomic situation of the counties. The counties' relative positions have not, or have hardly, changed in the past 15 years: the most advantaged and the most disadvantaged counties were the same at the beginning of the 1990s as they are today. The mortality trends have remained disadvantageous for the North Eastern Hungarian counties (Borsod-Abaúj-Zemplén, Szabolcs-Szatmár-Bereg) and for the Southern Transdanubian counties (especially Somogy county). Unemployment in Hungary mainly affects these regions: 45 per cent of those who are unemployed in Hungary today live in these undeveloped rural regions in the Eastern and South Western parts of the country [22]. One of the most interesting aspects of the widening health gap between the Eastern and Western halves of Hungary is that it had already begun to evolve during the 1970s and 1980s, which also suggests common origins of the health trends and the political changes of 1989. Considering the significant mortality and life-chance data, it is impossible to disregard the fact that in the Eastern part of Hungary the number of people in a multiply disadvantaged position is very high, struggling with many economic (e.g., unemployment) and social problems (e.g., those affecting ethnic minority groups). In the Eastern half of Hungary the percentage of people belonging to the upper strata of the social hierarchy is mostly low.
Unemployment and Life Expectancy in Hungary after 1990
The scale of the health differences within Hungary is surprising. The following regional analysis finds a moderately strong relationship between unemployment and life expectancy. This structure of health differences is not confined to differences between the poor and the rest of society; instead, it runs right across society, with every level in the social hierarchy having worse health than the one above it. The main point is that health differences show a typical pattern determined by the socioeconomic spatial position of the Hungarian counties. I also found what I expected: huge gaps in health exist between Eastern and Western counties, in line with the regional inequality of Hungary. With the development of capitalism after 1990, the economic and social differences among the regions of Hungary increased. Economic deterioration became especially intensive in Eastern and rural Hungary [23].
The disadvantaged life expectancy in Hungary presently affects the whole adult population, but its spatial inequalities are shaped by the connection between life expectancy and economic development.
The labour market of the early 1990s in Hungary was characterized by a rapid decline in employment and a fast rise in unemployment. The unemployment rate reached its peak in 1993 (12.1%), together with the highest mortality rate (14.6%) and the shortest life expectancy (69.2 years) (Figures 4 and 5). Improvement started from the middle of the 1990s, and unemployment decreased continually until 2004. During the 15 years following the Second World War, life expectancy at birth grew spectacularly in Hungary; then, for 30 years, while it rose constantly in the European Union, it did not change. Nowadays Hungarians can expect to live four and a half years longer than at the time of the change of regime. The growth of life span was smallest in Northern Hungary after 1996.
Compared with the early 1990s, the root cause of unemployment changed in the present crisis. As a consequence of the change of regime, employment in agriculture and industry contracted, but from the end of the 1990s the proportion of unemployed higher-education graduates has increased. "The lowly qualified were the losers of the transition not only in terms of labour market and life circumstances, but in life chances as well" [24]. The present crisis has significantly affected the highly qualified labour force, the employees of the tertiary and quaternary sectors. Thus, in the last decade, social groups with favourable health indices and better life expectancies have appeared among the unemployed. According to a more optimistic forecast, this is the reason why the growth of life expectancy at birth will stagnate in Hungary as a result of the crisis but will not decrease. Nevertheless, it could still cause problems in the future, as life expectancy in Hungary is some 6-7 years behind the Western European average.
A correlation matrix (Table 1) gives an overview of the connection between health and the economic factors of the middle-aged population. For middle-aged males, GDP per capita shows the closest connection with health and life expectancy. The employment rate is also closely connected with health, but it is significant only for men; thus employment determines men's "healthy" life expectancy more strongly.
This is supported by the fact that men's healthy life expectancy is closely connected with the unemployment rate. Of the economic factors examined, the unemployment rate determines life chances the most significantly, mainly for middle-aged males, as 60% of the unemployed in Hungary are men. In the connection between average life expectancy at birth and economic development, GDP per capita, average income, and unemployment rate are equally determinative [16]. The correlation between county unemployment rates and life expectancy nevertheless shows that it is not true for the whole country that higher unemployment goes together with lower life expectancy (Figure 6). The situation is worst in Borsod-Abaúj-Zemplén county, where mortality from cardiovascular diseases, cerebral diseases, and brain haemorrhage showed a similarly strong correlation with unemployment. The situation was least favourable in the Borsod-Abaúj-Zemplén, Szabolcs-Szatmár-Bereg, and Nógrád counties, where, beside the high rate of unemployment, the mortality rate was also the highest in the country.
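As a minimal sketch of how such a county-level correlation can be computed, the following Python snippet uses hypothetical unemployment and life expectancy figures; the values are invented for illustration and are not the paper's data:

    import numpy as np

    # Hypothetical county-level figures (illustrative only): unemployment
    # rate (%) and average life expectancy at birth (years).
    unemployment = np.array([6.2, 7.5, 10.1, 13.4, 15.8])
    life_exp = np.array([75.1, 74.6, 73.8, 72.9, 72.2])

    # Pearson correlation coefficient between the two series
    r = np.corrcoef(unemployment, life_exp)[0, 1]
    print(f"Pearson r = {r:.3f}")  # strongly negative here, mirroring the claimed link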
The suicide rate in Hungary showed a steady decline from 45.9 per 100,000 in 1984 to 31.7 in 1997, a fall of more than 30%. This decline was greater after 1990, when the rate was 39.9 per 100,000 and when the political and economic changes in Eastern Europe began [26]. Unfortunately, Hungary still has a high suicide rate and has not moved away from ranking fifth in the world over the past five years. In Hungary, since 1990, there have been significant increases in unemployment and poverty, and suicide in Hungary is associated with possible risk factors such as unemployment and socioeconomic decline [27]. Among people who commit suicide, the rate of unemployment is approximately 9-10%. The correlation between suicide and the unemployment rate is 0.4657 (2008). The region of Hungary most unfavourably affected by suicide is the Southern Great Plain (Figure 8).
The average income of the active population living in a given area correlates with health status. Those with higher incomes can spend more on a health-conscious life, have more opportunity for a healthy lifestyle, and have more favourable access to, for example, private health care. There was a typical correlation in the 1990s between monthly average income and the life expectancy of men at the age of 45 (R² = 0.478; 1994-1998 average).
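A minimal sketch of such a simple linear regression, using invented income and life-expectancy values purely for illustration (scipy is assumed to be available):

    import numpy as np
    from scipy import stats

    # Hypothetical data (illustrative only): monthly average income
    # (thousand HUF) vs. male life expectancy at age 45 (years).
    income = np.array([45, 52, 58, 63, 70, 78, 85])
    life_exp_45 = np.array([24.1, 24.6, 25.0, 25.2, 25.9, 26.3, 26.8])

    res = stats.linregress(income, life_exp_45)
    # R^2 is the squared correlation coefficient of the fitted line
    print(f"slope {res.slope:.3f} years per thousand HUF, R^2 = {res.rvalue**2:.3f}")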
Conclusion
The health status of the Hungarian population has been extremely unfavourable for many years. Regarding certain diseases and causes of death, Hungary occupies a conspicuously poor position in international statistics. Hungary has one of the lowest life expectancies at birth among the member states of the European Union (Chenet et al. [28]). The poor ranking of Hungary on the list of life expectancy at birth in the European Union (EU-27) has not changed during the last 35 years, but the size of the deviation from other countries, expressed in years, has changed substantially (Balogh et al. [29]). The low life expectancy is mainly due to the high mortality rate from cardiovascular diseases. In the morbidity pattern, diseases of the circulatory system have a very high share: hypertension is almost an endemic disease in Hungary, and ischemic heart disease is the dominant factor of mortality.
The "Western-Eastern gradient" of the Hungarian economic environment can influence the spatial structure of life expectancy.The concept of socioeconomic health differences refers to the systematic differences in health between people with different positions in the social stratification.Important is that these differences in health are not confined to differences between the highest and the lowest social classes [30].Health follows a social gradient: the higher the position in the social hierarchy, the lower the risk of ill health and premature death (Marmot and Wilkinson [31]).Over the past decades, evidence of a social gradient in life expectancy has accumulated in Hungary.This widening relative gap is mostly due to a faster decline in mortality among people of higher socioeconomic status than the decline among those of lower socioeconomic status.
Figure 7:
Figure 7: The relative position of Hungarian counties by the connection between unemployment rate (%) and average life expectancy at birth (year), 2009. Data source: Hungarian Regional Statistical Yearbook, 2009. | 2019-05-15T14:33:05.595Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "d70fc2186a7399515f590a00f604651460e8f6a6",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/archive/2011/130318.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d70fc2186a7399515f590a00f604651460e8f6a6",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Economics"
]
} |
55468730 | pes2o/s2orc | v3-fos-license | The Ebstorf Map: tradition and contents of a medieval picture of the world
The Ebstorf Map (Wilke, 2001; Kugler, 2007; Wolf, 2004, 2006, 2007, 2009a, b), the largest medieval map of the world whose original has been lost, is not only a geographical map. In the Middle Ages, a map contained mystic, historical and religious motifs. Of central importance is Jesus Christ, who, in the Ebstorf Map, is part of the earth. The Ebstorf Map contains the knowledge of the time of its creation; it can be used for example as an atlas, as a chronicle of the world, or as an illustrated Bible.
Origin
The original of this mappa mundi from the 13th century AD (Wolf, 2006), measuring 3.58 by 3.56 m (= 12.74 m²), was discovered around 1830 at the convent of Ebstorf (Germany, Lower Saxony, in the Lüneburger Heide region) and named after it. The map was created in this monastery, which was first founded as a convent of canons around 1160 and soon after, around 1190, refounded as a convent for Benedictine nuns (Dose, 2012). Opinions differ not only concerning the exact time or period of its creation, but also concerning its authorship, patronage and ultimate purpose.
Rediscovery, first publication, loss, and reproduction
After the rediscovery of the map, an unknown hand cut out pieces from the top right-hand corner; parts missing on the left are due to damage done by mice during storage. Thus the map is incomplete (Fig. 1). The original consisted of 30 single pieces of sheepskin parchment that had been sewn together and rolled up. In 1834 it was taken to the Vaterländisches Archiv in Hanover and later added to the map collection of the Historischer Verein für Niedersachsen there, which was founded in 1835. In 1838 the first measures to preserve the map were taken; in 1888, at the Königliches Kupferstichkabinett in Berlin, it was taken apart, cleaned, smoothed and stretched. It was then kept in single framed pieces in a chest of drawers at the Hauptstaatsarchiv in Hanover.
After a first description by Blumenbach (1834), Sommerbrodt (1891) and Miller (1896) created facsimile editions, the former consisting of 25 phototypes and the latter a lithograph; a coloured version followed in 1898.
Mappa mundi in general
Even centuries before the Ebstorf Map was made, maps of the world existed, mappae mundi, of varying compactness of information. There were, for example, the four-continent map (one continent was yet to be discovered) by Krates of Mallos (2nd century BC) (Seyffert et al., 1956), zonal maps like the five-zone map by Macrobius (late 4th century AD) (Stahl, 1952), and the wheel-shaped tripartite maps by Isidore of Seville (ca. 560-636) (Möller, 2008), Al-Istakhri (around 934) (Harley and Woodward, 1994), and Lambert of St Omer.
Special features of the Ebstorf mappa mundi
The Ebstorf mappa mundi clearly differs from its predecessors, not only in size and in the compactness of information shown by the large number of entries (2345, of which 1500 are texts and 845 pictures: 500 buildings, 160 rivers, lakes, seas and other waterways, 60 islands or mountains, 45 people or mythical persons, and 60 animals), but also in the twofold representation of Christ and numerous religious motifs. This map is orientated towards the east, like all medieval mappae mundi with the exception of the Islamic maps, which are orientated towards the south. In it, the earth forms Christ's body: on top, in the east, there is his head; on the left in the north and on the right in the south there is one hand each, one hand showing a stigma; and at the bottom in the west there are his feet. Sicily, heart-shaped and placed on the right slightly below the centre, can be interpreted as Christ's heart. This means that Jesus Christ keeps the world together and is part of this world, like humankind whom he rules in eternity, and the world is part of Christ as his body. In a gold-framed area in the centre of the map the Resurrection is shown within the walls of the heavenly Jerusalem, a city towering above all the world (thus the text of the caption) as the first of all cities (Fig. 2). Biblical motifs included in the Ebstorf mappa mundi are, for example, Paradise with Adam and Eve, showing the Fall of Man tempted by the serpent, which is shown here as male and bearded (Fig. 3a) 1 , Noah's Ark on Mount Ararat (Fig. 3a), and the Israelites' way through the Red Sea (Fig. 3b), as well as places from the Old Testament like Hebron (Fig. 3d), Mount Sinai (Fig. 3b), Babylon with the Tower of Babel (Fig. 3b), and Sodom and Gomorrah, sunk in the Dead Sea (Fig. 3b), and from the New Testament places like Bethlehem (Fig. 3d), Nazareth (Fig. 3b), Cana (Fig. 3b), Capernaum (Fig. 3b) and Gethsemane, also places related to the life and travels of the apostle Paul, and finally graves of apostles (e.g. Fig. 3a).
Contents of the Ebstorf Map
Elaborated from the writings known and accessible in the 13th century, the Ebstorf Map contains the knowledge of its time, from theology to geography, secular history and the history of salvation, comprising well-founded as well as legendary and mythical knowledge. This spans from Adam and Eve to Alexander the Great, to the origins of the Saxons and the Crusades; only the religious world of Islam is excluded. The most recent source integrated is Gervasius of Tilbury's Otia imperialia (Stiene, 2009), written in 1214 and dedicated to Emperor Otto IV, Gervasius being identified by some as the author of the Ebstorf Map (Wolf, 2004, 2009a). The Ebstorf Map can be viewed under different aspects: geographically as a map, didactically as an encyclopaedic teaching aid, iconographically as a depiction of God's creation of the world, in the context of the history of piety as a devotional image, politically as a symbol of power, and synoptically as a chronicle of the world projected onto parchment; more briefly, as an atlas, as a world chronicle, as an illustrated Bible, but also as a collection of myths and legends, thus a book of anecdotes and entertainment, and even as a zoological handbook, because it mentions and shows animals: mammals (e.g. elephant, giraffe, bear, lion, antelope, horse, elk), reptiles (e.g. snake, chameleon, crocodile), birds (e.g. parrot, pelican, crane, ibis), insects (e.g. ants) and fabulous creatures (e.g. dragons). In contrast to all this animal life, flora is depicted only sparingly and is used only for purposes of decoration. All this is spread over the continents: Asia (on top, in the east), Europe (on the left, in the north) and Africa (on the right, in the south). Asia covers the largest part of the map, followed by Europe; special features of the continents and countries shown have been put into texts and pictures. The Black Sea and the Mediterranean (in a T shape) divide Europe from Asia (Fig. 3c, d); the Mediterranean also divides Europe from Africa (Fig. 3d), and the Red Sea forms the border between Asia and Africa (Fig. 3b, d). Next to the twelve winds in the margin of the map the ocean is outlined; this is a hint of the medieval knowledge of the spherical shape of the earth, which was to be represented as a circle if showing the surface of the earth. Although it is orientated along the usual travel routes, the Ebstorf Map is not geographically accurate; also, it is not possible to identify all its entries beyond any doubt.
In Asia (Fig. 3a, b), depictions of cities (e.g. Antiochia (Fig. 3a), Ephesos (Fig. 3a), Lo Yang (Fig. 3a), Samarkand (Fig. 3a)) alternate with Biblical motifs (Jesus' travels, the grave of St Bartholomew, Fig. 3a), real people (the Chinese, Fig. 3a) and mythical peoples (e.g. cannibals, Fig. 3a, Amazons, Fig. 3a); furthermore, there are the stages of Alexander's expedition against the Persian Empire (334-324 BC), such as Persepolis (Fig. 3b), which he destroyed in 330 BC. Africa (Fig. 3b, d) appears as a largely unknown continent: although Cairo (Fig. 3b), Alexandria (founded by Alexander the Great in 332/331 BC, Fig. 3d), Marrakesh (Fig. 3d), ancient cities like Carthage (Fig. 3d, destroyed by Rome in 146 BC), the Atlas Mountains (Fig. 3d) and the sources of the Nile (Fig. 3b) are shown, pictures of animals and fabulous peoples dominate, such as the "cave dwellers" (trogodytes, Fig. 3d) or "snake eaters" (ophiophagi, Fig. 3d). Apart from the Bible, Asia and Africa also reflect ancient Greek and Roman history; medieval Asia is integrated by including Baghdad (Fig. 3b), the political centre of the Islamic world, Damascus (Fig. 3a) as a trade centre, and further cities situated on trade routes such as the Silk Road.
Europe (Fig. 3a, c, d), shown from the Urals (Fig. 3a) to the Strait of Gibraltar (Fig. 3d) and from the Northern Ocean (Fig. 3a, c) to Sicily (Fig. 3d), including cities (e.g. Novgorod, Riga, Kiev, Antwerp, Paris, Saragossa/Zaragoza (Caesar Augusta), Rome, Athens, Venice), mountains and mountain ranges (e.g. the Pyrenees, Mount Etna), rivers (e.g. Don, Vistula, Danube, Loire, Po) and islands of the Mediterranean, is dominated by the Holy Roman Empire (Fig. 3c). Shown from Frisia to Zurich and from Prague to Aachen, the empire is depicted with its cities and rivers, such as the Rhine and the Main, and with the island of Reichenau in Lake Constance with its three monasteries, which is especially accentuated. Within the empire, Saxony is shown especially large and in detail (Fig. 3c): in addition to the central cities of the Guelphs, Brunswick with its lion and Lüneburg with its banner, the convent of Ebstorf with the graves of the martyrs is depicted, as are cities, not only bishops' sees, such as Bremen, Verden, Essen, Paderborn, Hanover, Hildesheim, Gandersheim, Goslar, Quedlinburg, Halberstadt, Magdeburg, Erfurt and Naumburg; apart from the Weser, the rivers Elbe, Saale, Leine, Aller, Oker, Ilmenau, Ohre, Bode, Fulda and Gera are also shown.
Outside the actual map, in the four corners or spandrels, there are additional marginal texts containing information, for example, on the concept behind the map, the creation of the world represented as a map, the cosmos as the framework behind the map and the macrostructure of the world, but also about animal life (e.g. land-dwelling animals, reptiles, birds) (Fig. 4), about minerals, islands and much more. The writing on the map, majuscules and minuscules in black and red, can be classified as Gothic book hand: in the geographic labels, widely spaced majuscules in red are used for continents and small majuscules in red for countries and regions (some also spaced), further regions being in black, whereas black minuscules are used for cities, mountain ranges and rivers (all the text inserts can be found on the web page given in footnote 1).
Unanswered questions
Despite all attempts of the last few decades to identify the initiator, the author and the patron of this mappa mundi, and also its purpose (a means of education? decoration? devotion?), no decisive conclusions have been reached. It also remains obscure when and why the map was hidden or forgotten at the convent of Ebstorf; perhaps this happened in the context of the Protestant Reformation and in connection with the transformation of the monastery into a Protestant religious institution for ladies.
Figure 1.
Figure 1. The whole Ebstorf mappa mundi (Kugler, 2007). The whole map symbolises Christ with his head on top (east), his hands to the right (south) and left (north), and his feet at the bottom (west). In the middle Jerusalem is depicted with the rising Christ on a blue background. Below is the T-shaped Mediterranean, separating Asia (to the east, top), Europe (to the northwest, lower left), and Africa (to the south, lower right). A high-resolution version (4047x3964 pixel) is available at http://www.landschaftsmuseum.de/Seiten/Museen/Ebstorf1.htm (Abb. 2).
Figure 3a.
Figure 3a. Upper left quadrant of the map. East is towards the top, north to the left. In the middle of the map (in this quadrant in the lower right corner) the heavenly Jerusalem is depicted (Fig. 2). This part of the map shows Asia.
Figure 3b.
Figure 3b. Upper right quadrant of the map (partly overlapping with Fig. 3a). At the bottom left is the heavenly Jerusalem; at the bottom right Christ's left hand can be seen, which points towards the south. This part of the map also depicts Asia.
Figure 3c.
Figure 3c. Lower left quadrant of the map (partly overlapping with Fig. 3a and b). Christ's right hand can be seen in the upper left corner and his right foot in the lower right corner (pointing towards the west). This quadrant depicts Europe. Part of the map was nibbled off by mice.
Figure 3d.
Figure 3d. Lower right quadrant of the map (partly overlapping with the other three figures). The T-shaped Mediterranean with many islands is painted in blue. In the middle, the heart-shaped island of Sicily symbolises Jesus' Sacred Heart. To the right of it Africa is depicted.
Figure 4.
Figure 4. Example of a text insert about the pelican. The English translation of the Latin text reads: the pelican is an Egyptian bird which does, if it is true, kill his young ones and mourn them for three days, and then revive them with his own blood. | 2018-12-15T11:02:14.019Z | 2014-07-11T00:00:00.000 | {
"year": 2014,
"sha1": "86142bdd43bcefe3876055ae233662b0739aa76a",
"oa_license": "CCBY",
"oa_url": "https://hgss.copernicus.org/articles/5/155/2014/hgss-5-155-2014.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "86142bdd43bcefe3876055ae233662b0739aa76a",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"History"
]
} |
233243286 | pes2o/s2orc | v3-fos-license | Breath analysis for the detection of digestive tract malignancies: systematic review
Abstract Background In recent decades there has been growing interest in the use of volatile organic compounds (VOCs) in exhaled breath as biomarkers for the diagnosis of multiple variants of cancer. This review aimed to evaluate the diagnostic accuracy and current status of VOC analysis in exhaled breath for the detection of cancer in the digestive tract. Methods PubMed and the Cochrane Library database were searched for VOC analysis studies in which exhaled air was used to detect gastro-oesophageal, liver, pancreatic, and intestinal cancer in humans. Quality assessment was performed using the QUADAS-2 criteria. Data on diagnostic performance, VOCs with discriminative power, and methodological information were extracted from the included articles. Results Twenty-three articles were included (gastro-oesophageal cancer n = 14, liver cancer n = 1, pancreatic cancer n = 2, colorectal cancer n = 6). Methodological issues included different modalities of patient preparation and sampling and the platform used. The sensitivity and specificity of VOC analysis ranged from 66.7 to 100 per cent and from 48.1 to 97.9 per cent respectively. Owing to heterogeneity of the studies, no pooling of the results could be performed. Of the VOCs found, 32 were identified in more than one study. Nineteen were reported as cancer type-specific, whereas 13 were found in different cancer types. Overall, decanal, nonanal, and acetone were the most frequently identified. Conclusion The literature on VOC analysis has documented a lack of standardization in study designs. Heterogeneity between the studies and insufficient validation of the results make interpretation of the outcomes challenging. To reach clinical applicability, future studies on breath analysis should provide an accurate description of the methodology and validate their findings.
Introduction
Cancer is one of the leading causes of premature death. With an increasing worldwide life expectancy, the prevalence of cancer and its burden on society is growing 1 . Early-stage cancers are often asymptomatic and therefore difficult to detect. Treatment options for cancer, and ultimately their success rates, are greatly dependent on the disease stage at the time of diagnosis. The 5-year survival rate of stage I colorectal cancer is approximately 97.7 per cent, but it drops to 43.9 per cent for stage IV. Similar reductions in 5-year survival rate are seen in other cancer types, including gastro-oesophageal, liver, and pancreatic cancers 2 . These survival rates indicate the importance of screening programmes in detecting cancers in an early stage. Current screening and diagnostic techniques are often invasive and not patient friendly. This review focuses on the detection of digestive tract malignancies, including colorectal, gastro-oesophageal, liver, and pancreatic cancers, by analysis of exhaled air.
Colorectal cancer is one of the largest causes of cancer-related deaths. Screening for colorectal cancer with known tumour markers, such as carcinoembryonic antigen or cancer antigen 19.9, is not ideal owing to low sensitivity and specificity 3,4 . Faecal blood tests, such as the guaiac faecal occult blood test and the more recent faecal immunochemical test (FIT), can be used as screening tools for colorectal cancer 5 . In 2014, a national screening programme was introduced in the Netherlands using the FIT, leading to earlier diagnosis of colorectal cancer. In the event of a positive test, which indicates an increased risk of colorectal cancer, colonoscopy is recommended 6 . Although the FIT is non-invasive and has a sensitivity of over 80 per cent, a malignancy is found during colonoscopy after only about 8 per cent of positive tests 7 .
Next to colorectal cancer, gastric carcinoma is a common digestive tract malignancy with reported late diagnosis and high mortality rates 8 . In high-incidence countries, including Japan, screening programmes using gastroscopy have shown a decrease in mortality 9 . However, a major drawback of this screening programme is the invasive character of the endoscopic procedures used and the risk of complications.
Liver cancer is far less common. Screening using regular ultrasound imaging is performed in patients with underlying risk factors, such as chronic viral hepatitis or alcohol intake 10 . Although non-invasive, ultrasonography has relatively low sensitivity and is operator-dependent 11 .
The same holds for pancreatic cancer. Pancreatic tumours are operable at the time of diagnosis in less than 20 per cent of patients 12 , and screening (using endoscopic ultrasonography or MRI) is currently recommended only for patients with a genetic predisposition 13 . However, the prognosis is poor, symptoms are associated with disease progression, and deaths from the disease are increasing globally 1,14,15 .
There is a general need for improvement in screening techniques for digestive tract malignancies. The sensitivity and specificity of most screening tools are not high enough to reach clinically valuable post-test probabilities in a screening setting. Thus, it remains a challenge within global healthcare to develop more suitable diagnostic tools for tumour detection 16,17 .
In recent years, detection of cancer by analysis of volatile organic compounds (VOCs) in body materials has shown promising results. VOC analysis has a long-standing history in medical research. By 1971, the Nobel Prize winner Linus Pauling 18 had detected 250 different compounds in breath using gas chromatography. Today, the value of VOC analysis in exhaled breath has been examined as a monitoring tool in many diseases 19 , including the heart transplant rejection breath test 20 .
VOCs are carbon-based organic molecules, and their presence in exhaled breath can be divided into exogenously and endogenously derived compounds, according to their origin. Exogenous VOCs originate from environmental factors, such as food and beverage consumption, smoking, or other environmental exposures. Endogenous VOCs are produced as by-products or end-products of human or microbial metabolism. Apart from breath, VOCs can be detected in sweat, blood, tissue samples, urine, and faeces 21,22 . At present, more than 800 different breath VOCs have been registered in the Chemical Abstracts Service system 22 . The composition of VOCs in exhaled breath can be altered by pathological processes such as the presence of cancer. Tumour-associated inflammation leading to enhanced oxidative stress, altered glucose metabolism, and redox regulation in cancer cells can produce different VOC signatures in patients with cancer [23][24][25] . Breath analysis methods might be able to identify 'breath signatures' specific to those with cancer. This could be of value in clinical practice.
Analysis of VOC profiles can be performed using a variety of analytical platforms 26 . Currently, the most common systems in use are gas chromatography mass spectrometry (GC-MS), proton transfer reaction mass spectrometry (PTR-MS), and selected ion flow tube mass spectrometry (SIFT-MS). In addition, pattern recognition sensor systems are emerging that detect total VOC-binding patterns instead of individual VOCs. The latter systems are commonly referred to as an electronic nose or E-nose 26,27 . All systems have their strengths and limitations. Systems that allow selective quantification of VOCs are usually more laborious, require trained personnel, and are expensive in comparison to systems that register unselective VOC binding patterns, such as portable E-nose systems 17 .
The non-invasive nature of breath analysis makes it interesting for clinical use. Despite a long history of breath research, there are currently only a few applications in the clinic. This review provides an overview of the current literature on the identification of digestive tract cancer by means of VOC analysis in exhaled breath. The aim was to examine the diagnostic performance of VOC analysis and also to identify potential pitfalls in order to improve future research in this field.
Search strategy
An electronic search of PubMed and the Cochrane Library was performed in May 2019. Neoplasm, cancer, tumour, electronic nose, volatile organic compounds, VOC, exhaled breath, predictive value of tests, sensitivity, and specificity were used as search terms, and were combined using AND-OR combinations.
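As an illustration only, the following hypothetical Python sketch shows how such a boolean query could be issued against PubMed via Biopython's Entrez module; the exact grouping of the paper's search terms is an assumption, as is the contact e-mail address:

    from Bio import Entrez  # Biopython's NCBI E-utilities wrapper

    Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address
    # Hypothetical reconstruction of the AND-OR combination of the stated terms
    query = ('(neoplasm OR cancer OR tumour) AND '
             '("electronic nose" OR "volatile organic compounds" OR VOC OR "exhaled breath") AND '
             '("predictive value of tests" OR sensitivity OR specificity)')
    handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
    record = Entrez.read(handle)
    print(record["Count"], record["IdList"][:5])  # total hits and first few PMIDs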
Studies of cancer diagnosis that met the following criteria were included: at least two different groups of patients were included in the study, with regard to the presence of cancer; the index test was analysis of endogenous VOCs in exhaled breath; and the disease type was cancer of the digestive tract (oesophagus, stomach, liver, pancreas, and bowel). Studies were excluded if they were published before 2000, were not performed in adult humans, did not analyse malignant diseases, or analysed biofluids (such as breath condensate, urine, blood, and faeces).
The selection of potentially eligible articles was performed according to the PRISMA guidelines 28 . Discrepancies between the selections were resolved in a consensus meeting between the reviewers. The following information was gathered independently from the articles and tabulated by type of cancer: author(s), year of publication, index test, reference test, method of data analysis, comparison groups, sensitivity, specificity, accuracy, and area under the curve (AUC). All VOCs identified in the studies were tabulated.
Quality assessment
The methodological quality of the articles was assessed by means of the Quality Assessment of Diagnostic Accuracy Studies 2 tool (QUADAS-2) 29 ; a modified version was used 30 (Table S1). The assessment was performed by two independent researchers, and discrepancies were resolved by consensus.
Results
A total of 7114 studies were identified by the search in PubMed. After applying the eligibility criteria 21 articles were identified. Two articles were retrieved by manual search, and finally 23 articles were included in the review (Fig. 1).
Quality assessment of the studies
An overview of the results of quality assessment is provided in Table 1 and Fig. 2. The risk of bias was highest for patient selection. The most common reasons for unclear or high risk of bias were unclear specification, or issues regarding the eligibility criteria. For the index test criterion, the most common reason for a high-risk assessment was not having performed a blinded validation of the diagnostic model.
For flow and timing, the most common reason for high risk of bias was not having attempted to limit exogenous and endogenous influences on VOC composition. Regarding the applicability of the studies to the study question, the overall applicability concern was scored tolerantly and assessed as relatively low.
However, in four studies 37,38,40,44 , oesophagogastroduodenoscopy to rule out malignancy was not performed in controls. Only one study 45 included patients with liver cancer (30 patients). The patients had histologically proven stage I-V cancer and were compared with a group of healthy volunteers and with a group of patients with hepatitis B-induced liver cirrhosis. The two control groups did not receive the same reference test as the cancer group. Patients with hepatitis B and cirrhosis were untreated and their disease was confirmed histologically or cytologically. The healthy volunteers, however, who were the patients' relatives and hospital staff with no history of cancer or other chronic disease, did not undergo any reference test.
Two studies 46,47 included patients with histologically proven pancreatic cancer (25 and 65 patients respectively). The control groups consisted of perceived healthy controls in one study 47 , and patients suspected to have pancreatic disease who were scheduled for pancreatic imaging and found to be negative for malignancy in the other 46 .
Six studies 48-53 analysed breath samples from patients with colorectal cancer. The study population ranged from 20 to 65 patients with cancer. Patients with stage I-IV disease were included in all but one study [48][49][50][51][52] ; the other study 53 included only patients with stage I-III tumours. The control group consisted of healthy controls in four studies [49][50][51]53 . One study 52 compared VOCs from patients with colorectal cancer with those from patients with head and neck cancer (squamous cell carcinoma) or breast cancer. The remaining study 48 was a follow-up analysis in which patients with colorectal cancer were compared with those with colorectal cancer from the original study 49 , who meanwhile had been treated and declared tumour-free.
In addition, the follow-up patients were compared with healthy controls. All studies used histologically proven colorectal cancer as reference.
In general, many factors were heterogeneous across the studies. The eligibility criteria were sometimes not described clearly. Some studies included benign disease, whereas this was an exclusion criterion in other studies. There was also no consensus regarding how to deal with co-morbidities, and the timing of the index test compared with the reference test was not always at the same stage of the diagnostic process.
Patient preparation and sample collection
Measures to reduce the influence of ambient air were taken in 21 of 23 studies (Table S2). A lung washout was performed in 10 of 23 studies, and sampling of ambient air as a reference value was performed in 9 of 23. Nineteen of 23 studies described having taken measures to limit the influence of food and/or beverages. The duration of fasting before breath collection ranged from 2 h to more than 24 h. Withholding alcohol consumption and/or smoking before measurement was mentioned explicitly in 10 of 23 studies, and was at least recorded in 17 of 23 studies. Other preparatory measures described were refraining from physical exercise, being in emotional balance, gargling with water before breath collection, and refraining from the use of toothpaste.
The timing of breath collection in the diagnostic process differed between the studies. Sample collection was performed using the following systems: Mylar® bags, Tedlar® bags, syringes, inert steel bags or chambers, Nalophan sampling bags, the BioVOC™ breath sampler, directly into a PTR-MS instrument, and directly into an e-nose. Research groups then stored and analysed the samples themselves or transported them to a laboratory that had access to the required analytical platform.
Analytical platforms and data analysis
A variety of methods were used to analyse VOCs from exhaled breath (Table 2). GC-MS and sensor array systems were most often used to analyse exhaled breath (8 studies) 32-40 ; other platforms included a trichloro(phenethyl)silane field effect transistor (1 study) 37 and IMR-MS (1 study) 47 . Only three studies 34,36,52 used sensor systems for analysis: the AEONOSE (eNose company) (2 studies) and a breath analyser (Figaro, USA) (1 study). Data analysis was performed using a variety of techniques, involving the following methods: principal component analysis (PCA), probabilistic neural networks, partial least squares discriminant analysis (PLS-DA), discriminant function analysis, artificial neural networks, Fisher least discriminant function analysis, least absolute shrinkage and selection operator logistic regression (LLR), Mann-Whitney U test with LLR, predictive probability models, Mann-Whitney U test with a binary logistic regression model, PCA with PLS-DA with a variable importance in the projection model, t test with ANOVA, and PCA with stepwise discriminant analysis. A detailed explanation of these methods is beyond the scope of this review.
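To make the flavour of such pipelines concrete, the following Python sketch combines scaling, PCA and a logistic-regression classifier with cross-validation on synthetic data; scikit-learn is an assumption here, and none of the included studies necessarily used this library or these settings:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # 60 simulated breath samples x 106 VOC abundances; "cases" get a
    # small shift in a few VOCs to mimic cancer-associated elevation.
    X = rng.lognormal(size=(60, 106))
    y = np.repeat([0, 1], 30)
    X[y == 1, :5] *= 1.8

    model = make_pipeline(StandardScaler(), PCA(n_components=5),
                          LogisticRegression(max_iter=1000))
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")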
Diagnostic test performance and validation
A summary of the diagnostic performance of the individual studies is provided in Table 2. The results were divided into four groups based on the cancer type studied. Data on sensitivity, specificity, accuracy, and AUC were retrieved from the articles. Where a study compared the index group with multiple reference groups, the results of these comparisons are also included. Five authors did not report diagnostic performance. The sensitivity ranged from 66.7 to 100 per cent, whereas the specificity ranged from 48.1 to 97.9 per cent. In most studies, the sensitivity and specificity were lower in the validation phase than in the training phase. Owing to heterogeneity of the studies, no meta-analysis could be performed. Internal, external, or cross-validation was performed in one third of the studies (Table 2).
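For reference, sensitivity and specificity can be derived from a confusion matrix as in the following Python sketch; the labels are toy values, purely illustrative:

    import numpy as np
    from sklearn.metrics import confusion_matrix

    y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])  # 1 = cancer confirmed by reference test
    y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 0])  # breath-test classification

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)   # proportion of cancers correctly flagged
    specificity = tn / (tn + fp)   # proportion of non-cancers correctly cleared
    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")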
Both the largest (484 patients) 32 and the smallest (30) 40 studies, including patients and controls, analysed VOCs from patients with gastro-oesophageal cancer.
Reported volatile organic compounds
In total, 106 different VOCs were identified. For most VOCs, there was a statistically significant difference in presence between the groups. Some of the identified VOCs were only significant within a subgroup. Of the VOCs recorded, 32 were identified by more than one study (Table S3). These VOCs were either found to be cancer type-specific in multiple studies (19 VOCs), or were found in different cancer types (13 VOCs) and were therefore more general cancer VOCs. The VOCs that were identified in the most studies (4 studies each) were decanal, nonanal, and acetone.
Thirteen compounds identified in multiple studies were described for different cancer types. Acetone was found to be significantly different in the oesophageal/gastric cancer, pancreatic cancer, and colorectal cancer groups. 2-Methylpentane, 3-methylpentane, 4-methyloctane, dodecane, decanal, and nonanal were found in the oesophageal/gastric cancer and colorectal cancer groups. Pentane, undecane, tetradecane, hexane, ammonia, and 1,2,3-trimethylbenzene were found in the pancreatic cancer and colorectal cancer groups. The remaining 19 VOCs were found only in studies of the same cancer.
Discussion
The diagnostic performance of breath analysis for diagnosing cancer has shown promising results, with good sensitivity and specificity. The potential use of breath analysis as a non-invasive test that can be applied clinically may differ for each specific type of digestive tract malignancy, as it depends on the cancer prevalence and the existing diagnostic alternatives. Breath analysis could be considered as an additional screening tool to supplement faecal blood testing in colorectal cancer, or as a screening tool for gastric cancer in countries with a high incidence, such as Asian countries, including Japan. Another option could be monitoring of patients with Barrett's oesophagus to detect a potential conversion to malignancy. Breath analysis might be of special interest for pancreatic cancer, as its incidence is rising and the prognosis is poor, partly because it is often missed in the early stages 14 . A non-invasive test with the ability to distinguish between benign and malignant masses would be welcome. Despite the amount of research already done, there is currently no breath test in use for the detection of gastrointestinal tract malignancies, and the majority of clinical investigations are proof-of-concept studies. Most of these studies have been performed in small populations using different analytical techniques with poor standardization. VOCs are a product of metabolic processes, and so their presence in exhaled breath greatly depends on the metabolic state of the patient. Alterations in breath profiles can be induced not only by cancer but also by other endogenous and exogenous influences, such as fasting status, the microbiome, smoking, medication, co-morbidities, and exposure to varying ambient air pollutants; all these issues should be taken into consideration when designing a diagnostic study on breath analysis 21 .
Several initiatives are under way to develop protocols for standardization of sampling and analytical measurements in the International Association of Breath Research 54-56 and the European Respiratory Society 57 . In a recent review 30 , a proposed framework for conducting and reporting future studies investigating the role of VOCs in cancer diagnosis was formulated. Applying standardization would contribute to improved quality of individual studies and enhance comparison between studies, leading to faster implementation of this promising diagnostic tool in clinical practice.
Although there is an abundance of possibilities for performing VOC analysis, a disadvantage in most of the currently available studies is possible overestimation of the predictive value and lack of external validation. Prediction models generally perform better on data on which the model was developed than on new data.
Owing to relatively small sample sizes in most of the studies, there is a lack of external validation leading to a possible reduction in reproducibility 58 . According to the TRIPOD statement 59 , it is highly recommended for studies of prediction models to at least perform internal validation of the findings. Truly reliable results will only be generated by also validating the results externally.
There are many different analytical methods in use in studies of VOCs, and a distinction can be made between so-called real-time and offline analysis techniques 22 . The majority of the included studies used an offline combination of GC-MS systems with a sensor array system. An advantage of this approach is that specific discriminative VOCs can be identified and used to develop sensor systems applicable to clinical settings. However, certain conditions must be fulfilled for development of a breath test for use in the clinic. For clinical use, it is most important that the device is easy to carry, gives quick results, is non-invasive, is not susceptible to environmental influences, and has both a high sensitivity and a high specificity.
VOCs that appeared in multiple studies may have the most power to discriminate cancer from non-cancer conditions. Some VOCs, such as acetone, 2-methylpentane, 3-methylpentane, decanal, nonanal, pentane, and tetradecane, were identified in studies of different cancer types. This suggests that VOCs can be cancer type-specific but can also be general markers for cancer. The vast majority of the VOCs, however, were identified only in single studies. Of the VOCs that were identified in multiple studies, including decanal, nonanal, and acetone, not all can be attributed directly to particular (patho)physiological processes. However, it is known that cancers often show metabolic abnormalities, such as dysregulation of glucose, fatty acid, and amino acid metabolism 60 . One should keep in mind that not only cancers but also other metabolic abnormalities may cause alterations in breath profiles. For example, an increase in acetone can be a result of diabetic ketoacidosis. However, acetone is a ketone strongly related to fatty acid oxidation. Fatty acids consist of a carboxyl group and a hydrocarbon chain that can be saturated or unsaturated, and are required for the synthesis of membranes and signalling molecules in cellular proliferation, as seen in cancers [60][61][62] .
Headspace analysis of healthy intestinal epithelial cells and colonic cancer cells has already shown differences in release of VOCs. This indicates that metabolic abnormalities of cancer cells might contribute to the differences in exhaled breath profiles 63 . As the pathophysiological mechanisms that lead to the altered VOC production in patients with cancer have not yet been elaborated sufficiently, it remains difficult to determine the origin of the distinctive VOCs.
More recent studies using sensor systems, such as the Aenose, have shown promising results of exhaled breath analysis for diagnosing malignancies. However, these studies were unable to identify individual compounds as they used sensor measurements that were analysed using pattern recognition techniques 64 . Additionally, they can be criticized for showing poor linear reproducibility of the results and they also seem to be particularly sensitive to exogenous influences, such as humidity 17 .
As for use in clinical practice, it would be of interest to determine whether a breath test could be applied not only to distinguish between healthy patients and those with cancer, but also between similar diseases such as cancer and benign conditions of the same organ 22 . Therefore, one should consider also including patients with benign diseases in breath analysis studies. During the review process, an additional study 65 was published that met the search criteria for the present analysis. Breath analysis was performed using the Aenose for diagnosing colorectal cancer. The final model for distinguishing colorectal cancer from healthy controls showed a sensitivity of 95 per cent and specificity of 64 per cent, with an AUC of 0.84. Benign conditions such as advanced adenoma, non-advanced adenomas or hyperplastic polyps were also taken into account. Although the Aenose was able to distinguish patients with colorectal cancer from healthy controls, it was not able to differentiate colorectal cancer from advanced adenomas, or advanced adenomas from non-advanced adenomas, suggesting that the VOC profiles are too similar 65 . A different study 66 using the Aenose for a known precursor of oesophageal carcinoma, Barrett's oesophagus, had shown promising results, with a sensitivity of 91 per cent and specificity of 74 per cent for differentiating patients with Barrett's oesophagus from healthy controls. These findings demonstrate that exhaled breath analysis may be of use in the early detection of precancerous conditions, enabling better surveillance or earlier treatment. However, as discussed above, a number of steps still need to be taken to develop clinically applicable breath tests.
Currently, multiple systems are used for VOC detection, which have similar diagnostic performance. However, comparison and pooling of the studies proved to be difficult in the present analysis owing to wide heterogeneity between the studies. A consensus on how studies that analyse VOCs in exhaled breath should be performed will greatly advance progress in this field.
The appearance of some of VOCs in multiple studies of the same cancer type, but also different cancer types, suggests that there could be tumour-specific and also general cancer-associated VOCs. Further studies are needed to determine whether such VOCs could be used to improve cancer diagnostics.
Funding
This study was supported by the Dutch Digestive Foundation (MLDS career development grant CDG16-12 to T.L.) | 2021-04-16T06:17:10.265Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "8c285f4680a72d81ff1ce1623ea7015b14159800",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/bjsopen/article-pdf/5/2/zrab013/37061971/zrab013.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43f8386cad0ced99b5077857537b6e95362a8817",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221782097 | pes2o/s2orc | v3-fos-license | Personal hygiene risk factors for contact lens-related microbial keratitis
Objective Microbial keratitis is a sight-threatening complication of contact lens wear, which affects thousands of patients and causes a significant burden on healthcare services. This study aims to identify compliance with contact lens care recommendations and identify personal hygiene risk factors in patients who develop contact lens-related microbial keratitis. Methods and analysis A case–control study was conducted at the University Hospital Southampton Eye Casualty from October to December 2015. Two participant groups were recruited: cases were contact lens wearers presenting with microbial keratitis and controls were contact lens wearers without infection. Participants underwent face-to-face interviews to identify lens wear practices, including lens type, hours of wear, personal hygiene and sleeping and showering in lenses. Univariate and multivariate regression models were used to compare groups. Results 37 cases and 41 controls were identified. Showering in contact lenses was identified as the greatest risk factor (OR, 3.1; 95% CI, 1.2 to 8.5; p=0.03), with showering daily in lenses compared with never, increasing the risk of microbial keratitis by over seven times (OR, 7.1; 95% CI, 2.1 to 24.6; p=0.002). Other risks included sleeping in lenses (OR, 3.1; 95% CI, 1.1 to 8.6; p=0.026), and being aged 25–39 (OR, 6.38; 95% CI, 1.56 to 26.10; p=0.010) and 40–54 (OR, 4.00; 95% CI 0.96 to 16.61; p=0.056). Conclusion The greatest personal hygiene risk factor for contact lens-related microbial keratitis was showering while wearing lenses, with an OR of 3.1, which increased to 7.1 if patients showered daily in lenses. The OR for sleeping in lenses was 3.1, and the most at-risk age group was 25–54.
INTRODUCTION
Contact lenses for visual correction offer many benefits to the 4 million wearers in the UK, yet contact lens-related microbial keratitis (CLMK) is a frequent cause of unilateral visual impairment. [1][2][3] Severe cases can result in permanent vision loss, a need for corneal transplant, or loss of the eye. In all healthcare systems, CLMK poses a significant challenge, as patients require intensive topical antimicrobial therapy and close monitoring of treatment response. [2][3][4] Despite advances in contact lens technology, the incidence of CLMK has remained consistent at around 4 per 10 000 daily contact lens wearers per annum. [5][6][7] Poor contact lens hygiene is a known contributor to microbial keratitis. In a study by Brewitt et al,8 66% of complications observed in contact lens wearers were attributed to poor hygiene practices. There is great variation in contact lens hygiene awareness and recognition of the risks among regular contact lens wearers. Aftercare practices and demographic trends of contact lens wearers have been previously investigated to identify risk factors for microbial keratitis. 2 9-12
Key messages
What is already known about this subject? ► Contact lens-related microbial keratitis (CLMK) causes a significant burden on patients and healthcare services. Previous papers have identified certain risk factors for developing CLMK, such as the type of contact lenses worn, hand hygiene, and overnight wear.
What are the new findings?
► Our case-control trial is unique in that it uses face-to-face interviews not only to identify contact lens hygiene practices but also to capture patient opinions and experiences. We demonstrate for the first time the dose-dependent effect of showering in contact lenses. Showering in contact lenses increases the risk of CLMK (OR 3.1), while showering daily in lenses, compared with never showering in lenses, increases the risk of microbial keratitis by over seven times (OR 7.1). Sleeping in lenses and being aged 25-39 are also significant risks. Our study also shows that, although most contact lens wearers buy their lenses from opticians and have regular follow-up appointments, they continue to perform poor hygiene practices.
How might these results change the focus of research or clinical practice? ► Focusing attention on improving contact lens education of infection and retention of information may help improve compliance with lens wear practices, which may help reduce incidence of CLMK.
This study aims to identify patient demographics and current compliance with contact lens care recommendations by contact lens wearers in the UK. The study also aims to identify modifiable risk factors for patients who develop CLMK, including types of lenses worn, lens wearing habits, aftercare habits, and water exposure. Further aims included analysing patient opinions and experience of contact lens wear and microbial keratitis. Our study is unique in that we used face-to-face interviews to be able to accurately capture patient hygiene practices and experiences of CLMK.
METHODS
In this study, we interviewed contact lens wearers to compare contact lens hygiene practices in lens wearers with CLMK (cases) and lens wearers without infection (controls). Ethics committee approval was obtained from University of Southampton Ethics and Research Governance Online (ERGO reference: 14394).
Participant recruitment
Contact lens wearers attending University Hospital Southampton Eye Casualty between October 2015 and December 2015 were identified. A convenience sampling method was adopted, whereby patients who were identified to be contact lens wearers at triage were approached to take part. Participants were included if they were aged 18-75 and had worn refractive or cosmetic contact lenses for the last 30 days before attendance. Participants with therapeutic lenses, other ocular surface disease, herpes simplex keratitis, significant mental illness or learning disability were excluded.
Cases of CLMK were defined as contact lens wearers with a diagnosis of microbial keratitis made by an ophthalmologist for the first time, or within 1 month before the interview. Only patients with active infection, and who were still being treated or followed up for CLMK, were included. Microbial keratitis was defined as (1) a positive culture from a corneal scrape, or (2) a corneal infiltrate and overlying epithelial defect associated with either (i) the lesion lying within the central 4 mm of the cornea, or (ii) uveitis. Controls were defined as contact lens wearers attending Eye Casualty for non-contact lens-related problems and who had no previous history of corneal or infective complications from contact lens wear.
Data collection and questionnaires
Participants were given a patient information sheet and consent was obtained. A single trained researcher who had a medical background but was external to the eye department, conducted face-to-face interviews in a private room using a standardised questionnaire. The questionnaire was internally validated 13 by the research team, after trialling it in a pilot study with patients.
Both patient groups (CLMK and controls) were asked the same questions about demographics, and risk factors relating to contact lens wear and aftercare. Patients with CLMK were asked additional questions relating to their infection. This is summarised in table 1. Participants were able to withdraw from the study at any time.
Definitions of categories
Frequency of wear and frequency of performing a certain hygiene practice were categorised into daily (7 days a week), a few times a week (once to six times a week), a few times a month (less than once per week, but more than once a month) and a few times a year (less than once a month).
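As a hypothetical illustration of this banding (not from the paper), the mapping could be written as below; the cut-off for the 'few times a year' band is an assumption, read here as once a month or less:

```python
# Illustrative frequency banding; the "daily" and "few times a week"
# thresholds follow the definitions above, the lowest band is assumed.
def frequency_band(times_per_week: float) -> str:
    if times_per_week >= 7:
        return "daily"                 # 7 days a week
    if times_per_week >= 1:
        return "few times a week"      # once to six times a week
    if times_per_week * 4.33 > 1:      # ~4.33 weeks per month
        return "few times a month"     # less than weekly, more than monthly
    return "few times a year"          # assumed: once a month or less

print(frequency_band(7))    # daily
print(frequency_band(2))    # few times a week
print(frequency_band(0.5))  # few times a month
```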
Data analysis
Data analysis was done using SPSS V.22. Mann-Whitney U and Pearson χ² tests were used to compare demographic data. Risk factors were first analysed individually using simple binomial logistic regression to determine ORs, 95% CIs and p values. Risk factors with a significance of p<0.2 were considered in the multivariate model using stepwise multiple logistic regression. Only risk factors with a significance of p<0.05 were included in the final model.
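The same univariate screen can be reproduced outside SPSS; the snippet below is a hypothetical Python/statsmodels equivalent (all column names and data are invented), shown only to make the OR/CI/p pipeline concrete:

```python
# Minimal sketch of the univariate logistic screen; not the study's code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def univariate_or(df: pd.DataFrame, outcome: str, factor: str):
    """Simple binomial logistic regression of one risk factor.
    Returns the odds ratio, its 95% CI and the p value."""
    X = sm.add_constant(df[[factor]].astype(float))
    fit = sm.Logit(df[outcome].astype(float), X).fit(disp=False)
    odds_ratio = np.exp(fit.params[factor])
    lo, hi = np.exp(fit.conf_int().loc[factor])
    return odds_ratio, (lo, hi), fit.pvalues[factor]

# Toy data: factors reaching p < 0.2 on this screen would enter the
# stepwise multivariate model; only those with p < 0.05 stay in the end.
toy = pd.DataFrame({
    "clmk":              [1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1],
    "showers_in_lenses": [1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0],
})
print(univariate_or(toy, "clmk", "showers_in_lenses"))
```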
Patient and public involvement
Patients were involved in the study design. The questionnaire was designed by the research team and was trialled in a pilot study with patients with CLMK. The questionnaire was further improved based on the patient priorities and experiences of CLMK identified during the pilot study. Our study was designed to be conducted via face-to-face interviews, due to patient preference.
Demographics and contact lens types
Seventy-eight participants were recruited into the study (41 controls, 37 cases of CLMK), and no participants dropped out. Patient demographics and baseline characteristics are shown in table 2. Soft monthly disposable contact lenses were the most commonly worn (43%) contact lens type in our cohort. Table 2 also shows the breakdown of contact lens type and frequency of wear.
Multivariate analysis
The multivariate analysis model showed that age, contact lens type and showering in lenses were risk factors which reached statistical significance.

Visual outcomes and attitudes towards contact lenses after CLMK
In our cohort, the majority of patients felt that their infective episode had not resulted in significant visual loss. About 55.6% of patients with CLMK (n=20) felt that their infective episode had affected their quality of life, and of these patients, the breakdown of how their life was affected is shown in figure 1A. Figure 1B shows subjective visual outcomes following CLMK. Most patients (86.5%, n=32) had not considered discontinuing contact lens wear after an infective episode of microbial keratitis. Of the few patients who wished to discontinue contact lens wear (13.5%, n=5), the most common reason was fear of having another infection (n=3), followed by fear of permanent sight loss (n=1) and recurrent memories of symptoms (n=1).
Responsibility of contact lens education
Participants were asked if they were told the risks of infections when first prescribed contact lenses, and nearly half of both the patients with CLMK and the control groups responded with either 'no' or 'not sure'. The responses were not statistically different between controls and patients with CLMK (figure 1C). Participants were asked whom they felt was responsible for providing education about contact lens-related complications. Ninety-two per cent (n=71) of respondents felt that contact lens education was the responsibility of the 'optician', 13.0% (n=10) stated 'self' and 1.3% (n=1) stated 'doctor' (with some participants choosing more than one option). Participants were asked how they thought advice and instructions about contact lens wear should be given. About 54.5% (n=42) of participants felt written information, 68% (n=52) verbal information and 48.1% (n=37) demonstrations would help improve education.
Compliance with annual contact lens aftercare appointments with optician
About 80.8% (n=63) of all participants in the study were compliant with attending appointments at least annually. About 83.7% (n=31) of patients with microbial keratitis and 78% (n=32) of controls reported that they were attending appointments at least annually.
Risk factors for CLMK
Our study is unique in that not only does it investigate risk factors for microbial keratitis, but it also analyses the opinions of patients after corneal infection. This gives useful insight into how contact lens practitioners can improve patient education and compliance. This was only possible with face-to-face interviews, as they allowed a great deal of detail to be gathered from participants and also ensured full completion of the questionnaire. Completing the questionnaire did not lengthen waiting times, which meant that no patients dropped out of the study. The most significant risk factors for CLMK we identified included showering in contact lenses, being aged 25-54 and wearing certain soft contact lenses.
Monthly contact lenses were the most frequently used contact lens type in our patient cohort. All forms of contact lens wear increase the risk of microbial keratitis but monthly and extended wear contact lenses have previously been shown to increase risk of sight loss. [1][2][3] Although monthly disposable lenses also increase the risk of infection, this did not reach statistical significance. In our patient group, 10.8% of patients reported significant sight loss, while 56.8% reported no change in their vision.
Pseudomonas aeruginosa is the most commonly identified pathogen among contact lens wearers followed by
Gram-positive organisms. 3 P. aeruginosa is able to adhere to and colonise contact lens materials during lens wear, survive in contact lens storage cases and has resistance to contact lens disinfectants. 14 Acanthamoebae are free-living, cyst-forming ubiquitous protozoa found in air, dust, soil and fresh water. They are highly resistant to disinfection with chlorine and are thus not eradicated from tap water. 15 16 For this reason, showering, swimming or washing contact lenses in fresh water can be considered risk behaviours. In our study, showering while wearing lenses was identified as a significant independent risk factor for CLMK. The univariate regression model showed the OR for showering in lenses was 3.1 (95% CI, 1.2 to 8.5; p=0.025), with a dose-dependent effect. The OR for showering in lenses daily, compared with never, was 7.1 (95% CI, 2.1 to 24.6; p=0.002). The OR for showering daily in lenses in the multiple regression model was 13.73 (95% CI, 2.35 to 80.07; p=0.004). Equally, our study showed that sleeping in contact lenses increased the risk of microbial keratitis (OR, 3.1; 95% CI, 1.1 to 8.6; p=0.026) in the univariate model, but this was not significant in the multivariate model. The effect of sleeping in lenses replicated previous studies, 9 10 12 but those studies looked at overnight wear, whereas our study looked at sleeping in lenses for different amounts of time. The effects of contact lens-related hypoxia are likely increased in sleeping patients, as oxygen diffusion is compromised when eyes are shut for a long time. Studies have shown that hypoxia can lead to increased binding of Pseudomonas to the cornea when a contact lens is present. 17 Following an episode of CLMK, very few of our patients considered discontinuing contact lens wear. Of those whose quality of life or vision had been affected by the infection, 80% (n=20) wished to continue wearing their lenses, demonstrating the benefits that contact lens wear provides but also the importance of instilling good contact lens hygiene awareness and reinforcing this information when attending eye casualty. A large number of our participants (92.2%, n=72) identified the optician as being responsible for providing information about contact lens-related complications. Nearly half of all participants in both the control and CLMK groups could not recall or were unsure if they were told specifically about the risks of contact lens-related infections when first prescribed their contact lenses (figure 1C).
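For readers who want to see where such figures come from, a univariate OR and its Wald 95% CI can be computed directly from a 2×2 exposure table. The counts in the sketch below are invented for illustration and are not the study's data:

```python
# Worked example: odds ratio with a Wald 95% CI from a 2x2 table.
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se)
    return odds_ratio, lo, hi

# Invented counts for a "showers in lenses" style comparison.
print(odds_ratio_ci(20, 17, 13, 28))  # OR ~ 2.5 with its 95% CI
```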
Under guidance from the College of Optometrists UK, contact lenses can only be fitted and prescribed by optometrists, doctors and contact lens opticians. Dispensers of contact lenses are required to give training and information about lens care, hygiene and wear schedules before lenses can be dispensed. About 89.2% (n=33) of the patients who developed microbial keratitis stated that an optician supplies them with their contact lenses. The British Contact Lens Association (BCLA) recommends contact lens aftercare appointments at least annually. As shown in table 3, non-compliance with annual aftercare appointments was not found to be a risk factor for microbial keratitis. There was a high level of reported compliance in attending annual follow-up appointments in both the cases and the control group. A 2010 Australian study 18 looking at contact lens compliance found similar results.
These findings are rather confusing, as despite regular follow-up with opticians and perceived good concordance with BCLA recommendations, patients' understanding and retention of contact lens hygiene and risk behaviour remains low. As patients are likely to want to continue wearing lenses even after an infective episode, contact lens practitioners should focus efforts on improving patient retention of information about infections and aftercare practices, because persuading patients to stop wearing contact lenses may be ineffective.
Our study demonstrated that all three forms of information (verbal, demonstrations and written) were important for contact lens wearers to improve education about lens wear and complications. A possible way to increase awareness may be to supply printed material with each contact lens box to remind wearers about risks and aftercare practices.
A limitation of the study was that controls were also eye casualty attendees, presenting with other ocular problems, which could have introduced bias into the control group. These patients, however, presented with non-ocular-surface, non-contact lens-related issues, which were typical for any person attending the department. To limit recall bias in the CLMK case group, only patients who were newly diagnosed with CLMK and still had active infection were included in the study. The questionnaire used was developed and validated by the research team, and face-to-face interviews were chosen to obtain data accurately. To limit interviewer bias and avoid influencing participant responses, only one researcher, who was not involved in patient care, conducted the interviews in a standardised manner. A further limitation was that the OR and CI ranges in the multivariate model were large. A larger sample size would be needed to calculate a more precise estimate of effect. Risk factors that could be investigated further include: overall duration (eg, in years) of contact lens wear, smoking history, socioeconomic status, ethnicity and reason for contact lens wear (hyperopia, myopia, presbyopia or cosmetic). A multicentre study with a larger sample size could reduce sample bias, help evaluate risks and demographics further, and could show trends at regional and national levels. Precision and the number of significant results may also be improved. An interesting area for future work would be to further investigate the effect of showering in contact lenses, and to identify which organisms are isolated in patients with CLMK who shower in lenses.
The major personal hygiene risk factors for CLMK include showering, especially daily, in contact lenses and sleeping in lenses. Patients aged 25-54 are the most at-risk group. Despite most contact lens wearers buying their lenses from opticians and having regular follow-up appointments, contact lens wearers continue to perform poor hygiene practices and risk developing microbial keratitis. Focusing attention on improving education about infection and retention of information may help improve compliance with lens wear practices, which may help reduce the incidence of CLMK and associated sight loss.
Contributors AS assisted in study design, collected and analysed the data and is first author. CM assisted with writing the report and mentoring. RK conducted preliminary work in the pilot study. AK assisted with data collection and mentoring. PH led and designed the study.
Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Patient and public involvement Patients and/or the public were involved in the design, or conduct, or reporting or dissemination plans of this research. Refer to the Methods section for further details.
Patient consent for publication Not required.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available upon request.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
ORCID iDs
Anna Stellwagen http://orcid.org/0000-0001-6246-6680
Figure 1 Graphs showing how the recent CLMK episode affected (A) patients' quality of life (more than one option could be chosen for this question) and (B) patients' subjective visual outcomes, and (C) responses to the question: 'Were risks of infections explained when lenses first prescribed?' CLMK, contact lens-related microbial keratitis. | 2020-09-10T10:17:32.852Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "1f71df9921ba987f68a27a4c00c8a1cefe8d4bec",
"oa_license": "CCBYNC",
"oa_url": "https://bmjophth.bmj.com/content/bmjophth/5/1/e000476.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "138cc911097c51d3e14552d84e08e8e9bf5f0edd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1425354 | pes2o/s2orc | v3-fos-license | High degree of efficacy in the treatment of cyclic vomiting syndrome with combined co-enzyme Q10, L-carnitine and amitriptyline, a case series
Background Cyclic vomiting syndrome (CVS), defined by recurrent stereotypical episodes of nausea and vomiting, is a relatively common, disabling and historically difficult-to-treat condition associated with migraine headache and mitochondrial dysfunction. Limited data suggests that the anti-migraine therapies amitriptyline and cyproheptadine, and the mitochondrial-targeted cofactors co-enzyme Q10 and L-carnitine, have efficacy in episode prophylaxis. Methods A retrospective chart review of 42 patients seen by one clinician that met established CVS diagnostic criteria revealed 30 cases with available outcome data. Participants were treated on a loose protocol consisting of fasting avoidance, co-enzyme Q10 and L-carnitine, with the addition of amitriptyline (or cyproheptadine in those < 5 years) in refractory cases. Blood level monitoring of the therapeutic agents featured prominently in management. Results Vomiting episodes resolved in 23 cases, and improved by > 75% and > 50% in three and one additional case, respectively. Among the three treatment failures, two could not tolerate amitriptyline (as was also the case in the child with only > 50% efficacy) and one had multiple congenital gastrointestinal anomalies. Excluding the latter case, substantial efficacy (> 75% response) was 26/29 at the start of treatment, and 26/26 in those able to tolerate the regimen, including high dosages of amitriptyline. Conclusion Our data suggest that a protocol consisting of mitochondrial-targeted cofactors (co-enzyme Q10 and L-carnitine) plus amitriptyline (or possibly cyproheptadine in preschoolers) coupled with blood level monitoring is highly effective in the prevention of vomiting episodes.
Background
Cyclic vomiting syndrome (CVS) is characterized by recurrent identical episodes of nausea and vomiting, with the absence of these symptoms between episodes [1]. CVS is likely common, being present in about 2% of Scottish [2] and Western Australian [3] school children. Prior to the advent of successful therapy, CVS was a disabling condition as episodes are generally severe, usually last for days, and often require intravenous fluid therapy for dehydration [1]. Frequent and prolonged school or work absences lead to academic or work disability. CVS can pose a challenge for clinicians to manage, and it is common for patients to seek help from multiple practitioners because of continued vomiting episodes.
Although the etiology is unknown, substantial parallels with migraine headache [4] have prompted therapeutic trials with anti-migraine therapies. Amitriptyline (Elavil ® ), a tricyclic "antidepressant" frequently used to treat migraine, is the most widely prescribed prophylactic medication used for the treatment of CVS, with response rates varying from 52-73% in open-label and subject recall-based studies in children and adults [reviewed in 5]. In a recent consensus statement, amitriptyline was recommended as the first-line treatment choice for CVS prophylaxis in children and adolescents age 5 years and older, while cyproheptadine is recommended in younger children [1].
Mitochondrial dysfunction is hypothesized to be a factor in the pathogenesis of both CVS and migraine headache based upon decreased respiratory complex enzymology, disease-associated mitochondrial DNA (mtDNA) sequence variants, and preferential maternal inheritance [reviewed in 5]. Physicians and other health care providers are increasingly recommending co-enzyme Q10, also known as ubiquinone, a commonly used dietary supplement that is widely available in retail settings, for the treatment of a wide variety of conditions, including mitochondrial dysfunction. Co-enzyme Q10 serves as the electron shuttle between complexes 1 or 2 and complex 3 of the mitochondrial respiratory chain [6]. In migraine, a randomized controlled trial demonstrated therapeutic efficacy [7]. Recently, the use of co-enzyme Q10 has been gaining in popularity among CVS patient groups. A recent subject recall-based study in CVS suggested equivalent efficacy of co-enzyme Q10 and amitriptyline (~70%), but with superior tolerability of co-enzyme Q10 [5]. There was inadequate data to assess response to combination therapy with both agents.
L-carnitine is also a naturally-occurring dietary supplement that is frequently used in the treatment of mitochondrial dysfunction [6]. L-carnitine is a shuttle of long-chain fatty acids across the inner mitochondrial membrane and thus is required for fat oxidation. In addition, L-carnitine has a "detoxifying" role in shuttling accumulated intermediates of metabolism out of impaired mitochondria. One case series [8] demonstrated efficacy of L-carnitine in CVS prophylaxis.
In the author's clinical experience, episodes of nausea and vomiting diminish markedly in the vast majority of CVS patients treated with a protocol consisting of fasting avoidance, co-enzyme Q10, and L-carnitine, with the addition of amitriptyline or cyproheptadine in refractory participants over and under the age of five years, respectively. One essential aspect of this protocol is dosing based on blood levels. The current study is a retrospective chart review of the 42 CVS patients evaluated over a two-year period by the author to evaluate therapeutic responses.
Methods
A computer-generated report of all clinic patients seen by the author during the two-year period from 7/1/06 to 6/30/08 was reviewed for ICD-9 codes used by the author for CVS patients, including 536.2 and 277.87. A medical record review was performed on all cases so identified. Patients were included as participants in this study if given a diagnosis of CVS by the author based on fulfilling both the NASPGHAN [1] and Rome III [9] criteria. All participants are unrelated. All records were reviewed up until 6/30/10, allowing for a two-year follow-up period to assess medium-term treatment responses. This study was approved by the Children's Hospital Los Angeles Institutional Review Board.
Participants were treated on a clinical basis, and not as part of a prospective study; however, treatment during this period was standardized as based on prior clinical experience and the literature [1]: • Dietary: All subjects were advised to make dietary changes [1], including the "3+3 diet" (3 meals and 3 snacks a day including between meals and at bedtime), and the avoidance of fasting.
• Co-enzyme Q10: Participants were treated with coenzyme Q10 (ubiquinone) in liquid or gel capsule form (from a variety of brands) at a starting dose of 10 mg/kg/day, or 200 mg, divided twice a day, whichever is smaller.
• L-carnitine: Participants were treated with Carnitor brand or generics at a starting dose of 100 mg/kg/ day divided BID, or 2 grams twice a day, whichever is smaller. A small minority of families, all with untreated free carnitine blood levels > 30 micromolar, were not treated.
• Amitriptyline: Participants age 5 years and over with continued vomiting episodes despite the above therapies were treated at a starting dose of 0.5 mg/kg/day given at night. An EKG was performed to examine the QTc interval prior to, and a few weeks following, the start of treatment.
• Cyproheptadine: Participants under the age of 5 years with continued vomiting episodes despite the above therapies were treated at a starting dose of 0.25 mg/kg/day divided twice a day.
• Topiramate: Two participants who were refractory to all of the above measures were started on 25 mg of topiramate twice a day.
Dosages were increased every one to a few months until one of the following occurred (a sketch of this escalation logic follows the list):
• Resolution of vomiting episodes
• Intolerable side effects that failed a reduction in dosage followed by a slow dosage increase
• The following empirically derived maximum was reached:
ο Co-enzyme Q10: blood level > 3.0 mg/L
ο L-carnitine: free carnitine blood level > 40 micromolar
ο Amitriptyline*: amitriptyline + nortriptyline blood level > 150 ng/ml
ο Cyproheptadine: dosage of 0.5 mg/kg/day
ο Topiramate: dosage of 200 mg twice a day (in adolescents and adults)
*Blood levels were not routinely monitored for dosages < 1 mg/kg/day as they were uniformly low in the authors' prior experience.
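The escalation rule above reduces to a few lines of decision logic. The Python sketch below is illustrative only, not clinical software and not from the paper; the thresholds are simply transcribed from the list:

```python
# Empirically derived maxima transcribed from the protocol above.
MAXIMA = {
    "co-enzyme Q10": 3.0,    # blood level, mg/L
    "L-carnitine": 40.0,     # free carnitine blood level, micromolar
    "amitriptyline": 150.0,  # amitriptyline + nortriptyline, ng/ml
    "cyproheptadine": 0.5,   # dosage, mg/kg/day
    "topiramate": 400.0,     # dosage, mg/day (200 mg twice a day)
}

def starting_coq10_dose_mg(weight_kg: float) -> float:
    """10 mg/kg/day or 200 mg, whichever is smaller (divided twice a day)."""
    return min(10.0 * weight_kg, 200.0)

def continue_escalating(agent: str, measured_value: float,
                        episodes_resolved: bool,
                        intolerable_side_effects: bool) -> bool:
    """Raise the dose every one to a few months until episodes resolve,
    side effects become intolerable, or the empirical maximum is reached."""
    if episodes_resolved or intolerable_side_effects:
        return False
    return measured_value < MAXIMA[agent]

print(starting_coq10_dose_mg(15.0))                              # 150.0
print(continue_escalating("amitriptyline", 95.0, False, False))  # True
```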
Efficacy was queried in terms of two parameters: episode frequency and episode duration. The efficacy category was determined by the percent improvement in the parameter demonstrating the greatest response at the time of the most recent clinic visit prior to 6-30-10:
• Resolution (episodes resolved, allowing for one episode a year with an obvious trigger, usually a febrile infection).
Results
A total of 42 participants met the study criteria. Age at the time of chart review varied from 3 to 26 years, with a median of 12 years. The age of the onset of vomiting episodes was 1 week to 15 years, with a median of 4 years. The female:male ratio was 2.2:1 (29 females and 13 males). The race/ethnicity was 28 (67%) Caucasians, 11 (26%) Hispanics, 2 (5%) African-Americans, and 1 (2%) Native-American. Several co-morbid, predominantly 'functional', conditions were common, ranging from zero (in two adults) to 16 per participant, with a median of 5.5 co-morbid conditions (Table 1).
Nine participants were excluded from outcome analyses because they were seen in clinic only once or twice, and no follow-up data was available to determine their response to therapy, including five of the 10 adults (age > 18 years), but only 4 of the 32 children (P = 0.02). Two additional children were excluded because CVS resolved prior to starting therapy. One additional case was excluded because the parents declined prophylactic therapy and chose to continue to abort episodes with lorazepam and diphenhydramine.
Records in the remaining 30 subjects were queried for data related to treatment response (Table 2). This included three participants over the age of 18 years who were included in the study as they are of ages commonly treated by pediatricians, and the physiology of youth in their early to mid 20s is similar to that of adolescents.
The treatment protocol failed in three cases, and was sub-optimal (50-75% response) in another case. In one of the treatment failure cases, episodes completely resolved for several months on amitriptyline alone.
Unfortunately, a prolonged QTc interval was noted, which resolved on discontinuation without adverse events. Episodes then returned, but further therapy and evaluation were complicated by severe non-compliance. Two participants on amitriptyline, co-enzyme Q10, and carnitine had tolerance issues with amitriptyline. One of them (also labeled as treatment failure) demonstrated good efficacy, yet amitriptyline was discontinued because of narcolepsy, and episodes returned. In the other case (labeled as sub-optimal), behavioral and emotional effects have limited treatment to a sub-therapeutic amitriptyline level associated with only partial efficacy. In the final case of treatment failure, no improvement was noted on the same three treatments, as well as with the further addition of cyproheptadine. This latter infant has multiple malformations, including esophageal atresia, tracheoesophageal fistula, imperforate anus, and a tethered spinal cord, as part of VATER association, and thus was excluded from further data analyses.
Six participants reported side effects with amitriptyline. In addition to the three cases discussed above in which side effects necessitated treatment discontinuation or reduction, in three other participants side effects (increased frustration in two, one also with insomnia, dizziness in the other) did not limit treatment. One participant discontinued co-enzyme Q10 because of a pseudoporphyria rash. Such an association has not been reported, and the rash did not reappear on treatment with another brand of co-enzyme Q10. Cyproheptadine caused lethargy in one participant, and two had vague non-specific sensations while on multiple medications both related and unrelated to this study. Urine ketosis was noted in the medical record as positive in 20 out of 20 cases tested during vomiting episodes. Ketosis was not seen at baseline.

Table 1 (fragment): co-morbid conditions, n (%)
Headache (all but one with migraine): 16 (38%)
Abdomen: 15 (36%)
Complex regional pain syndrome: 5 (12%)
Gastrointestinal dysmotility (any): 31 (74%)
Gastroesophageal reflux disease and/or chronic nausea: 23 (55%)
Colonic (irritable bowel syndrome, constipation, diarrhea): (40%)
Other functional or autonomic-related conditions (any)
Other conditions:
Any neuromuscular disorder (non-cognitive): 18 (43%)
Chronic fatigue or exercise intolerance: 23 (55%)
Discussion
This case series demonstrates excellent efficacy of cofactor therapy (co-enzyme Q10, L-carnitine) combined with amitriptyline. Treatment responses were suboptimal in only four cases, three of which could not tolerate adequate dosages of amitriptyline, and never achieved a "therapeutic" blood level (> 80 ng/ml of amitriptyline + nortriptyline). With the removal of the fourth case of the infant with multiple gastrointestinal malformations, substantial efficacy (> 75% response) of this protocol in children and youth > age 5 years was 19/22 at the onset of treatment, and 19/19 in participants able to tolerate amitriptyline. In the author's observations, making treatment decisions contingent on the blood levels of co-enzyme Q10, carnitine and amitriptyline was very helpful in many cases, as children with sub-optimal clinical improvement always demonstrated a low level of at least one of the three agents, and increased dosing was associated with the resolution of episodes. In order to achieve these "therapeutic" blood levels and clinical efficacy, some subjects required higher-than-customary dosages, including up to 25 mg/kg/day (800 mg a day in larger subjects) of co-enzyme Q10 and 2 mg/kg/day of amitriptyline. These dosages were well tolerated.
In participants under age five years, efficacy appears to be good when cofactor therapy is combined with cyproheptadine, although the number of cases reported here is small. Drug treatment varied by age in the present study and in the NASPGHAN recommendations due to expert opinion regarding low tolerability (tachycardia and increased frustration) of amitriptyline in younger children and low efficacy of cyproheptadine in older children [1].
Combining the 22 cases > age 5 years and 4 cases < 5 years, overall substantial efficacy (> 75% response) of this protocol was 23/26 at the start of treatment, and 23/23 in those who could tolerate the regimen.
Clinical [11] and molecular [12] data suggest that CVS in adults, in particular with the adult onset of vomiting episodes [12], is distinct in many ways from CVS in children. However, among the five adult cases with outcome data in the present study, all of which had the adolescent onset of vomiting episodes, two did not tolerate amitriptyline (see footnote 5 in Table 2) and in the three others episodes resolved (two with all three agents, one with amitriptyline alone). Thus, there is inadequate data in this generally young cohort to suggest alternative management based on adult age, although there may be a higher rate of intolerance to amitriptyline in adults than in children over age 5.
The major limitation of this study is that the participants were treated on the basis of best available clinical therapy, not on a prospective clinical trial. The protocol was used as a guideline, not applied on a rigorous basis. For example, participants with severe disease (multiple hospitalizations) were often treated simultaneously with cofactors and medication (amitriptyline or cyproheptadine) at the first visit based on the authors' experience of frequent treatment failures on cofactors alone, while those with milder disease courses were always given a trial of cofactors alone. Some families started the therapies sequentially, and once episodes stopped or greatly diminished would elect not to treat with agents not yet attempted. A few families declined co-enzyme Q10 therapy due to costs, which unlike all the other therapies in this report was rarely covered by insurance. A small number of participants were referred to the author with partial efficacy on amitriptyline or cyproheptadine, and when episodes resolved after increasing the dosage the families chose not to start one or both cofactors. These factors contributed to the complexity of the medical regimens as listed in Table 2. However, this limitation does not diminish the observations herein of very high efficacy in general using these agents in clinical practice, either alone or in combination. The participants in this study include cases diagnosed by the author in a primary care-like setting, tertiary care cases referred by local pediatricians and gastroenterologists, and quaternary care cases from other states that failed multiple previous attempts at therapy. Since most participants were ascertained in the latter two situations, the present cohort is a sicker, more treatment-resistant population of CVS than is likely to be encountered by all but a few practitioners. Since the more mildly affected participants often responded well to cofactor therapy alone, and the side effects of the cofactors are generally much milder than those of the medications [5 and author's experience], a trial of cofactor and dietary therapy alone may be warranted in most CVS patients encountered in clinical practice, with amitriptyline or cyproheptadine added in refractory cases.
Many participants discontinued therapy at some point, and in most the episodes returned, later resolving again on renewed therapy. In the exceptional cases, vomiting episodes evolved into migraine headache, often at the time of puberty, and the same protocol was used successfully in migraine prophylaxis. No participants are known to be off therapy and without both vomiting episodes and migraine in the medium-term follow-up period of this study.
Conclusions
CVS is a disabling, common and difficult-to-treat condition. Our data suggest that a protocol consisting of mitochondrial-targeted cofactors (co-enzyme Q10 and L-carnitine) plus amitriptyline (or possibly cyproheptadine in preschoolers) coupled with fasting avoidance and blood level monitoring is highly effective in the prevention of vomiting episodes. A prospective blinded clinical trial is needed. However, given the suggestion of efficacy and excellent tolerability, health care providers may want to consider combining these cofactors as a low-risk therapeutic option along with the NASPGHAN recommendations of amitriptyline (> 5 years) or cyproheptadine (< 5 years). A trial first of cofactors and fasting avoidance alone may be warranted in cases without a history of multiple hospitalizations for vomiting episodes.
"year": 2011,
"sha1": "f8f9509fa6230b3d0222e24d2ed3e6d1d7b3710d",
"oa_license": "CCBY",
"oa_url": "https://bmcneurol.biomedcentral.com/track/pdf/10.1186/1471-2377-11-102",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4ad275f31d27a92059589989c50e8c11b4087813",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
2589394 | pes2o/s2orc | v3-fos-license | Effect of Insecticide Regimens on Biological Control of the Tarnished Plant Bug, Lygus lineolaris, by Peristenus spp. in New York State Apple Orchards
To improve biological control of Lygus lineolaris (Palisot de Beauvois) (Hemiptera: Miridae), the European parasitoid Peristenus digoneutis Loan (Hymenoptera: Braconidae) was introduced into the US in the 1980's and has become established in forage alfalfa, strawberries and apples. The objective of this study was to determine how four different insecticide management regimes affected parasitism of L. lineolaris by Peristenus spp. During the summers of 2005 and 2006, L. lineolaris nymphs were collected from New York State apple orchards using industry standard, reduced risk, and organically approved insecticides only. A ‘no insecticide’ (abandoned orchard) treatment was also included in 2006. Rates of parasitism of L. lineolaris nymphs were determined using a DNA-based laboratory technique. Results indicated that insecticide treatment had a significant effect on rates of parasitism of L. lineolaris by Peristenus spp. Compared to the industry standard treatment, rates of parasitism were higher in reduced risk orchards and lower in organic orchards. These results suggest that it is difficult to predict a priori the consequences of insecticide programs and point to the need to take into consideration the specific pests and beneficial organisms involved as well as the crop and the specific insecticides being applied.
Introduction
Apple producers in New York State and the northeastern United States are challenged by a formidable array of insect pests and diseases (Agnello et al. 2006). The intense disease and arthropod pressures, combined with the unpredictable and sometimes severe weather of the northeastern United States, create difficult growing conditions for New York State apple producers.
To control insect pests, apple growers utilize a range of chemical regimens. Standard growing practices rely heavily on broadspectrum organophosphate, carbamate, and pyrethroid insecticides (Agnello 2007). These insecticides have long caused concern because of risks to humans, the environment and beneficial arthropods (Eskenazi and Maizlish 1988;Rosenstock et al. 1991;Van Driesche and Bellows 1996;Ruberson et al. 1998;Beane Freeman et al. 2005). Scientific and public concern about organophosphate, carbamate and pyrethroid insecticides led to passage of the Food Quality Protection Act in 1996 (Public Law 104-170) which calls for an eventual elimination of many of these insecticides from use in the United States.
In response to the eventual loss or restricted use of organophosphate, carbamate and pyrethroid insecticides, the Reduced-Risk Management Program, a collaborative effort consisting of scientists, growers, representatives from the pesticide industry, and the U.S. Department of Agriculture, was formed to evaluate newer reduced risk insecticides under realistic field conditions. Reduced risk insecticides have been marketed as good alternative products because they are purported to be safer for the environment and more pest specific, and they conform to the Food Quality Protection Act (Balazs et al. 1997; Pekar 1999; Hill and Foster 2003). Reduced risk insecticides labeled for use on apples include Apollo® (clofentizine), Nexter® (pyridaben), Zeal® (etoxazole), Actara® (thiamethoxam), Esteem® (pyriproxyfen), Avaunt® (indoxacarb), Assail® (acetamiprid), Intrepid® (methoxyfenozide), SpinTor® (spinosad) and dormant oil (petroleum oil). Although these insecticides are frequently found to be more selective than older generation insecticides (e.g. Atanassov et al. 2003; Musser and Shelton 2003; Koss et al. 2005), research into indoxacarb (Haseeb et al. 2004), imidacloprid, thiamethoxam, acetamiprid (Williams et al. 2003; Nasreen et al. 2004; Beers et al. 2005; Poletti et al. 2007) and spinosad (Nowak et al. 2001; Cisneros et al. 2002; Schneider et al. 2003) has shown them to be toxic to some beneficial arthropods (also see Stark and Banks 2001; Brunner et al. 2001). In addition, many reduced risk chemicals are significantly more expensive than standard products.
Another treatment option is producing apples under an organic regimen. While there are few large organic apple producers in New York State, there is increasing interest from growers and consumers in the burgeoning organic market across the U.S. (Peck et al. 2005). Organic apples receive a premium at market but organic growers are limited in what can be applied for arthropod and disease control. Certified organic producers can utilize nonsynthetic, naturally derived compounds made from botanicals, elemental compounds (e.g. sulfur and copper), and physical barriers. However, many of the organic compounds are more expensive than, and not as effective as, their synthetic counterparts.
As the Food Quality Protection Act is implemented, the increased cost of reduced risk insecticides will weigh heavily on apple growers. This creates an immediate need for research into alternatives that may reduce the need for insecticide sprays, such as biological control. In addition, research is needed to clarify how new reduced risk insecticides affect non target species, including natural enemies, compared to conventional, more broad-spectrum insecticides or pesticides allowed in organic apple production in the Northeast.
The tarnished plant bug, Lygus lineolaris (Palisot de Beauvois) (Hemiptera: Miridae), is an important direct pest of apples and many other crops. In apples it causes damage by feeding on the developing flowers and fruit, resulting in abscission of flower buds, fruit underdevelopment, and malformations (Prokopy and Hubbell 1981). Abscission of buds is of no economic consequence; however, underdeveloped fruit and malformations can result in culling and downgrading (Weires et al. 1985;Michaud et al. 1989). Fruit malformations from L. lineolaris can be characterized as puncture wounds, scabs or "cat facing" (Boivin and Stewart 1983a).
There are several native species of parasitoids (Hymenoptera: Braconidae) in the Peristenus pallipes species complex and two introduced European species, P. digoneutis Loan and P. rubricollis (Thomson) that parasitize nymphs of L. lineolaris (Goulet and Mason 2006). According to Day et al. (1990) the native species complex was not able to provide adequate biological control of L. lineolaris and therefore P. digoneutis was introduced from France in the 1980's. After introduction into the U.S., robust populations of P. digoneutis developed in forage alfalfa, reducing the total L. lineolaris population in that crop by 75% (Day 2005). As expected, P.
digoneutis not only expanded its geographic range but also spread from alfalfa fields to high value crops such as strawberries (Tilmon and Hoffmann 2002) and apples (Crampton 2007). Peristenus rubricollis is only occasionally found parasitizing L. lineolaris, preferring other mirids in the Adelphocoris genus, especially the introduced European alfalfa plant bug, A. lineolatus (Goeze), which it was introduced to control (Day 2005).
The objective of this study was to examine the effect of pesticide regimen on rates of parasitism by Peristenus spp. on L. lineolaris in apple orchards. We specifically compared parasitism in commercial apples produced using the industry standard insecticide regime, organically approved insecticides only, and reduced risk insecticides. We also included abandoned apple orchards where no insecticides were applied.
Regimens
During this field study, conducted in the summers of 2005 and 2006, L. lineolaris nymphs were collected from apple orchards under four different arthropod management regimens: organic, standard, reduced risk and abandoned. Standard insecticide regimens are defined here as the current convention in apple production, which relies heavily on organophosphate, carbamate, and pyrethroid insecticides. The reduced risk regimen (defined above) included the following insecticides (active ingredients): clofentizine, pyridaben, etoxazole, thiamethoxam, pyriproxyfen, indoxacarb, acetamiprid, benzoic acid, spinosad and petroleum oil (see Agnello et al. 2004 and Sarvary et al. 2007 for complete lists of pesticides used in reduced risk and standard orchards). Reduced risk regimens were only available in 2005. The organic orchards selected were certified organic and followed guidelines mandated by the U.S. Department of Agriculture National Organic Program (Agricultural Marketing Service, 2000). Insecticides used in organic orchards included sulfur and kaolin clay. The abandoned orchards were typically no longer under any form of management and consequently had excessive weed populations and high infestations and damage by both insect pests and disease.
Orchards
The field sites were located within nine commercial orchards in three of the major apple producing regions of New York State: Hudson Valley, Central and Niagara (Table 1 and Figure 1). Six of the nine orchards were participants in the Reduced-Risk Management Program (2005 only). At these locations, two approximately 10 acre blocks were under either the reduced risk or standard treatment regimen in a split plot design. Organic plots in the same regions were selected to complement the split plot design. At farm N2, two organic plots were selected, labeled organic(P) and organic(G). The plot organic(G) was only available in 2005. In the Niagara and Hudson Valley regions, organic plots were very close to the corresponding reduced risk/standard plots, sharing a common property line. However, the organic plot in central New York State (farm C1) was 45 miles from the closest reduced risk/standard plots (C2) (Figure 1). Abandoned plots were also selected from the same area. Two shared a property line with reduced risk/standard plots and the third was within 5 miles from the organic/standard plots (farm N2).
Field Sampling
Lygus lineolaris nymph samples were collected weekly during June, July and August in 2005, except when prevented by inclement weather or pesticide re-entry restrictions. In 2006, samples were taken on four separate weeks during the season to correspond to parasitism levels based on sampling results from 2005. A sweep net was used to sample for L. lineolaris nymphs in ground cover in an area approximately 0.40 ha in size, located in the middle of each treatment block of apples. Sweeping of the orchard ground cover continued until 24 to 30 L. lineolaris nymphs were collected or 2.5 hours passed. Ground cover was similar between regimens within orchards, with the exception of the organic(P) treatment at farm N2. Typical ground cover consisted of Dactylis spp., Trifolium spp., Vicia spp., Hieracium spp., Plantago spp., Taraxacum spp., Leontodon spp., Galium spp., and grass. In organic(P), Galium spp. constituted about 80% of the ground vegetation. Nymphs collected by sweep net were preserved in 95% ethanol, and stored on ice until transferred to a -20°C freezer.
Determining adult population size
There is evidence of a weak density-dependent relationship between numbers of L. lineolaris nymphs and P. digoneutis (Tilmon 2001; Day 2005). A density-dependent relationship is important in this study because if one of the regimens had significantly more nymphs than the other regimens the interpretation of the parasitism rates would change. Since adult and nymphal population densities are related (Day 2005), relative adult densities were estimated using yellow sticky cards (Prokopy and Hubbell 1981; Boivin and Stewart 1982; Boivin and Stewart 1983b). A row within the 0.40 ha L. lineolaris collection area was arbitrarily chosen at the beginning of the study and each week thereafter five 13.5 cm x 13.5 cm sticky cards, spaced 50 meters apart, were hung in trees approximately 0.75 meters above the ground. Cards were replaced each time the site was visited, wrapped in cellophane, and stored at 4°C until examined for L. lineolaris adults.
Determining Parasitism Rates
The molecular identification of Peristenus spp. was achieved using a technique developed by Tilmon et al. (2000). Briefly, DNA was extracted from each L. lineolaris nymph and any parasitoid larva contained within it. Polymerase Chain Reaction (PCR) with primers specific to L. lineolaris was performed first, thereby confirming the success of the extractions. Finally, another PCR with specific primers amplified Peristenus spp. genes, which allowed parasitized L. lineolaris nymphs to be differentiated from unparasitized nymphs. Samples were stored at -20°C. DNA was extracted using DNAzol (Invitrogen, www.invitrogen.com) following the manufacturer's protocol, except that the 'Lysis step' was scaled down from 1 ml to 100 µl. Primers C1-J-2252, C1-J-2183 and TL2-N3014 were obtained from Integrated DNA Technologies (Coralville, IA). Taq polymerase was obtained from Promega (www.promega.com).
PCR conditions as outlined by Tilmon et al. (2000) were followed.
Statistical Analysis
A generalized linear model with a log link and negative binomial distribution was used to describe the adult L. lineolaris count data and test for treatment effects. Orchards were considered a repeated measure. The negative binomial distribution was selected instead of a Poisson distribution because the assumption of the variance equaling the mean was violated, indicating overdispersion. The variance in the negative binomial distribution depends on a dispersion parameter, which was estimated by maximum likelihood.
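A hypothetical Python/statsmodels analogue of this count model (the authors used SAS 9.1) is sketched below; the toy data frame is invented solely to make the snippet runnable:

```python
# Negative binomial regression with log link; dispersion estimated by ML.
import pandas as pd
import statsmodels.formula.api as smf

toy = pd.DataFrame({
    "adult_count": [3, 7, 1, 12, 5, 9, 0, 4, 6, 2, 11, 8],
    "treatment": ["standard", "organic", "standard", "abandoned",
                  "reduced_risk", "organic", "standard", "abandoned",
                  "organic", "reduced_risk", "abandoned", "organic"],
})

nb_fit = smf.negativebinomial("adult_count ~ C(treatment)",
                              data=toy).fit(disp=False)
print(nb_fit.params)  # log-scale estimates, interpreted against the intercept
```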
Another generalized linear model, with a logit link and binomial distribution, was used to analyze the effect of treatment on parasitism of L. lineolaris by Peristenus spp. Again, orchards were considered a repeated measure. Since it is well established that there are two seasonal peaks in parasitism because P. digoneutis is bivoltine (Day et al. 1992; Tilmon 2001; Goulet and Mason 2006) (also see Figure 2), time was coded as a categorical value. This approach allows the magnitude of parasitism to change over the season. For the purpose of this study, total parasitism (all Peristenus species combined) was used, thereby creating a dichotomous response variable (i.e., parasitized, not parasitized). Furthermore, for analyzing the effect of pesticide treatment on parasitism, Peristenus spp. was the unit of interest. All statistical analyses were performed using SAS 9.1 (2003).
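Likewise, the parasitism model, with orchard treated as the repeated-measure cluster and time as a categorical term, could be sketched as a binomial GEE; all column names and values below are hypothetical:

```python
# Binomial GEE with logit link; orchard is the cluster, week is categorical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

toy = pd.DataFrame({
    "parasitized": [1, 0, 1, 0,  0, 0, 1, 0,  1, 1, 1, 0,  1, 0, 0, 1],
    "treatment":   ["standard"] * 4 + ["organic"] * 4
                   + ["reduced_risk"] * 4 + ["abandoned"] * 4,
    "week":        [1, 1, 2, 2] * 4,
    "orchard":     ["A"] * 4 + ["B"] * 4 + ["C"] * 4 + ["D"] * 4,
})

gee_fit = smf.gee("parasitized ~ C(treatment) + C(week)", groups="orchard",
                  data=toy, family=sm.families.Binomial()).fit()
print(gee_fit.summary())
```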
Results
Parasitism rates of L. lineolaris nymphs by Peristenus species, averaged over the different treatments and regions, indicate two distinct peaks occurring in mid-June and mid-July (Figure 2a). The overall parasitism rates, averaged across time and orchards, ranged from 14% in organic orchards in the Niagara region to a high of 37% in an abandoned orchard in the central New York State region (Table 2). The pattern of parasitism over the season was roughly similar among pesticide regimes (peaks in June and July), although the magnitude of peak levels differed, as well as the width of the peaks (Figure 2b-e). Reduced risk orchards stood out in that peak rates were higher in July compared to June and remained high well into August.
The logistic model for parasitism indicated that insecticide treatment was a significant predictor of parasitism (χ² = 33.1, df = 3, P < 0.0001). Total parasitism in standard regimens was not significantly different from that in abandoned orchards (P = 0.1063). However, organic farms had a significantly lower likelihood of parasitism when compared with standard (P < 0.0001). The odds ratio indicates that the likelihood of encountering a parasitized L. lineolaris nymph in an organic orchard is 0.606 times that of encountering a parasitized nymph in a standard orchard (39.4% less likely to be parasitized) (see Table 3). Reduced risk orchards had a significantly higher likelihood of parasitism than standard orchards (P = 0.0413). The odds ratio indicates that a L. lineolaris nymph in a reduced risk orchard is 1.258 times or 25.8% more likely to be parasitized than a nymph in a standard orchard (Table 3).
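The percentage readings quoted above follow directly from the odds ratios; spelling out the arithmetic:

```latex
\[
\mathrm{OR}_{\text{organic}} = 0.606 \;\Rightarrow\; (1 - 0.606) \times 100\% = 39.4\%\ \text{lower odds of parasitism},
\]
\[
\mathrm{OR}_{\text{reduced risk}} = 1.258 \;\Rightarrow\; (1.258 - 1) \times 100\% = 25.8\%\ \text{higher odds}.
\]
```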
The Pearson χ² test indicated the negative binomial regression model was an adequate fit for adult abundance on sticky cards (χ² = 1055.6, df = 881; χ²/df = 1.2). The analysis indicates that pesticide treatment was a significant predictor of adult L. lineolaris count (χ² = 35.99, df = 3, P < 0.0001) (Table 4). There was no statistically significant difference between the numbers of L. lineolaris in standard and reduced risk orchards (P = 0.1518). There were significant differences between organic and standard (P < 0.0001) and between abandoned and standard (P = 0.0015).
The positive abandoned estimate indicates that abandoned orchards had more L. lineolaris adults than standard orchards. Organic orchards also had more L. lineolaris than standard orchards, which is described by the positive organic parameter estimate (see Table 4).

Table 3 note: The parameter estimates are converted from the log scale into odds ratios for ease of interpretation. The odds ratios for categorical variables give the likelihood of an event, in this case encountering a parasitized nymph, at a given level compared with the standard orchards.
Discussion
An intriguing result of this study is that there was no significant difference in parasitism between the abandoned orchards and the standard orchards. This indicates that the Peristenus spp. parasitoids were able to attack L. lineolaris nymph hosts in the standard insecticide environment as effectively as in the abandoned environment where no insecticides were applied. In fact, given the density dependent relationship between parasitism and nymph populations and the greater numbers of adult L. lineolaris in abandoned orchards compared to standard orchards, it suggests parasitism rates were effectively greater in standard orchards than abandoned orchards. Reasons for this difference are unclear. One possible explanation is that the environment (e.g., higher plant diversity) within abandoned orchards is more favorable for L. lineolaris than for Peristenus. However, additional experiments are necessary to explain this result.
The regression comparison between standard and reduced risk insecticide regimens was significant; a nymph encountered in a reduced risk orchard was 25.8% more likely to be parasitized than a nymph in a standard orchard. This result is bolstered by the relative adult density data that indicated density of L. lineolaris in reduced risk and standard orchards was not significantly different. This study indicates that, in the case of Peristenus spp, the reduced risk regimen has a less disruptive effect on biological control than the standard insecticide regimen. This result is consistent with other field studies comparing the impact of different pesticide regimes on beneficial arthropods (Balazs et al. 1997;Pekar 1999;Atanassov et al. 2003;Hill and Foster 2003;Musser and Shelton 2003;Koss et al. 2005). Some field studies, however, have found no differences in natural enemy abundance or impact between conventional and reduced-risk regimes, indicating results are likely to be crop and natural enemy dependent (Jenkins and Isaacs 2007;Sarvary et al. 2007).
An interesting result in this study is that a nymph encountered in an organic orchard was 39.4% less likely to be parasitized than a nymph encountered in a standard orchard. Organic orchards had significantly higher L. lineolaris densities relative to standard orchards and, with putative positive density-dependent parasitism (Tilmon 2001; Day 2005), higher parasitism in organic orchards would have been expected than for conventional orchards. However, the opposite occurred. It is possible that organic insecticides were less lethal to L. lineolaris, but perhaps more lethal to the Peristenus spp. This could result in higher levels of L. lineolaris and lower rates of parasitism in the organic orchards. Clearly more research would be needed to fully understand the relationships across treatments.

Table 4 note: When β is negative, the parameter it represents, in this case the number of L. lineolaris in that insecticide treatment, is less than the intercept (standard orchards in this case). When β is positive, the parameter it represents is greater than the intercept.
While this study indicates that rates of parasitism are significantly higher in standard orchards when compared to organic apple orchards, Tilmon and Hoffmann (2002) found the opposite result for parasitism rates of L. lineolaris by Peristenus in strawberries. In their study, Peristenus parasitism was 5-6.5 times more likely in an organic strawberry field than in a conventional (standard) field. A key difference between these studies is that the organic strawberry fields sampled did not have any pesticides applied, whereas all of the organic apple orchards applied USDA-certified organic pesticides. This suggests that the application of organic pesticides can have a negative impact on Peristenus spp. Furthermore, a model by Kovach et al. (1992) of apple production in New York State suggested that the environmental impact of organic apple production was greater than that of standard apple production. This evidence suggests that the lower likelihood of parasitism by Peristenus spp. in organic orchards was due to organic pesticides; however, further research is required to establish cause-and-effect relationships of the organic pesticides on beneficial arthropods. The overall conclusion from this two-year project is that pesticide regimes in apples have a significant influence on biological control of L. lineolaris by its parasitoids, although not necessarily in predicted ways. We anticipated that reduced-risk insecticides would result in increased parasitism by Peristenus spp. relative to conventional, more broad-spectrum insecticides, and this is what we found. Contrary to expectation, however, parasitism rates in organic orchards were lower than in conventionally treated orchards. Hence, it is difficult to predict a priori the consequences of insecticide programs, which points to the need to take into consideration the specific pests and beneficial organisms involved as well as the crop and the specific insecticides being applied.

Acknowledgements: … laboratory facilities, Art Agnello for assistance with identifying potential study sites and William Day for assistance in planning the research. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not reflect the view of the U.S. Department of Agriculture.
"year": 2010,
"sha1": "600b5b66698ae3e3ad1a69e6c56714f547525126",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jinsectscience/article-pdf/10/1/36/18170802/jis10-0036.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8bb0dfce55b03e87832b876abc086e75afdb1532",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
258801081 | pes2o/s2orc | v3-fos-license | Topotactically induced oxygen vacancy order in nickelate single crystals
The strong structure-property coupling in rare-earth nickelates has spurred the realization of new quantum phases in rapid succession. Recently, topotactic transformations have provided a new platform for the controlled creation of oxygen vacancies and, therewith, for the exploitation of such coupling in nickelates. Here, we report the emergence of oxygen vacancy ordering in Pr$_{0.92}$Ca$_{0.08}$NiO$_{2.75}$ single crystals obtained via a topotactic reduction of the perovskite phase Pr$_{0.92}$Ca$_{0.08}$NiO$_{3}$, using CaH$_2$ as the reducing agent. We unveil a brownmillerite-like ordering pattern of the vacancies by high-resolution scanning transmission electron microscopy, with Ni ions in alternating square-pyramidal and octahedral coordination along the pseudocubic [100] direction. Furthermore, we find that the crystal structure acquires a high level of internal strain, where wavelike modulations of polyhedral tilts and rotations accommodate the large distortions around the vacancy sites. Our results suggest that atomic-resolution electron microscopy is a powerful method to locally resolve unconventional crystal structures that result from the topotactic transformation of complex oxide materials.
I. INTRODUCTION
In transition metal oxides, strongly correlated valence electrons can couple collectively to the lattice degrees of freedom, which can lead to a variety of emergent ordering phenomena, including exotic magnetism, multiferroicity, orbital order, and superconductivity [1]. In oxides with the perovskite structure, high flexibility and tolerance to structural and compositional changes enable the controlled exploitation and manipulation of the emergent properties [2]. Oxygen vacancies, for example, can radically alter the electronic states in materials, and in turn, suppress or enhance emergent phases via charge compensation and/or structural phase transitions [3][4][5]. Understanding the formation of oxygen vacancies and their impact thus provides promising prospects for exploring new physical properties and potential future technological applications.
A prototypical example of correlated transition-metal oxides is the family of perovskite rare-earth nickelates, RNiO$_3$ (R = rare-earth ion), exhibiting a rich phase diagram including metal-to-insulator and antiferromagnetic transitions [6][7][8]. For R = Pr and Nd, these transitions occur concomitantly with a breathing distortion of the NiO$_6$ octahedra and a disproportionation of the Ni-O hybridization [9][10][11][12]. As a consequence, the material family exhibits a pronounced structure-property relationship [13][14][15][16][17][18][19] and a sensitivity to oxygen vacancy formation, which can modify the surrounding Ni-O bonds and the nominal 3d$^7$ electronic configuration of the Ni$^{3+}$ ions [20]. Notably, an extensive oxygen reduction of the perovskite phase towards Ni$^{1+}$ with a cuprate-like 3d$^9$ electronic configuration was recently realized via topochemical methods in Nd$_{0.8}$Sr$_{0.2}$NiO$_2$ thin films, yielding the emergence of superconductivity [21]. Furthermore, superconductivity was also observed in topotactically reduced films with R = La and Pr [22][23][24], as well as for various Sr-substitution levels [25,26] and substitution with Ca ions [27]. Since these reduced nickelates with the infinite-layer crystal structure are nominally isoelectronic and isostructural to cuprate superconductors, the degree of the analogy between the two material families is vividly debated [28][29][30][31][32][33]. Moreover, vigorous efforts are ongoing to realize superconductivity not only in thin films, but also in polycrystalline powders [34,35] as well as in single-crystalline samples [36,37], while an improved understanding of the topotactic reduction process between the perovskite and infinite-layer phase is also highly desirable. In particular, the reduction involves various intermediate (metastable) phases, in which the oxygen vacancy ordering patterns and the nature of the emergent phases have not yet been clarified comprehensively.
For instance, extensive experimental and theoretical studies [38][39][40][41][42][43][44] were performed on oxygen-deficient LaNiO$_{3-\delta}$ with 0 < δ ≤ 0.5, suggesting a transition from a paramagnetic metal to a ferromagnetic semiconductor and an antiferromagnetic insulator as a function of increasing vacancy concentration [45][46][47]. For δ ≈ 0.5, neutron powder diffraction [38] revealed that the parent perovskite crystal structure with uniform NiO$_6$ octahedra changed to a structure with sheets of NiO$_6$ octahedra and square-planar NiO$_4$ units arranged along the pseudocubic [100] direction [38,39], involving a 2a$_p$×2a$_p$×2a$_p$ reconstruction of the parent pseudocubic unit cell (a$_p$ is the pseudocubic lattice parameter). Yet, the detailed crystal structure for the case δ ≈ 0.25 is not known, although an electron diffraction study suggested that it involves a 2√2a$_p$×2√2a$_p$×2a$_p$ reconstructed supercell [48]. Moreover, for compounds with R = Pr and Nd, even less understanding of the oxygen-deficient phases exists. Metastable structures with ferromagnetic order were initially identified for δ ≈ 0.7, with x-ray diffraction data indicating a 3a$_p$×a$_p$×3a$_p$ supercell that possibly comprises two sheets of NiO$_4$ square-planar units connected with one sheet of NiO$_6$ octahedra [49]. In a subsequent neutron powder diffraction study it was suggested, however, that the metastable phase of the Pr compound rather corresponds to δ ≈ 0.33, with a √5a$_p$×a$_p$×√2a$_p$ reconstruction and one sheet of NiO$_4$ square-planar units connected with two sheets of NiO$_6$ octahedra [50].
Here, we use atomic-resolution scanning transmission electron microscopy (STEM) together with electron energy-loss spectroscopy (EELS) to investigate the oxygen vacancy formation occurring in a Pr$_{0.92}$Ca$_{0.08}$NiO$_{3-\delta}$ single crystal upon topotactic reduction. We resolve the chemical composition and the atomic-scale lattice of the crystal, identifying a 4a$_p$×4a$_p$×2a$_p$ reconstructed superstructure with a highly distorted Pr sublattice. We find that the oxygen vacancy ordering pattern corresponds to a brownmillerite-like structure with a two-layer-repeating stacking sequence of NiO$_6$ octahedra and NiO$_5$ square pyramids, suggesting an oxygen deficiency of δ ≈ 0.25. Meanwhile, quantification of the octahedral tilts and Ni-O bond angles reveals distinct periodic wavelike patterns of polyhedral coordination in different layers due to the oxygen vacancies. These results are markedly distinct from previous reports on reduced rare-earth nickelates and provide an atomic-scale understanding of the moderately oxygen-deficient structure with δ ≈ 0.25, which is one of the metastable phases occurring during the topotactic reduction process towards the infinite-layer phase of the superconducting nickelates with δ = 1.
II. METHODS
Single crystals of perovskite Pr$_{1-x}$Ca$_x$NiO$_3$ were synthesized under high pressure and high temperature. Specifically, a 1000-ton press equipped with a Walker module was used to realize a gradient growth under a pressure of 4 GPa, executed in spatial separation of the oxidizing KClO$_4$ and the NaCl flux, similarly to the previous synthesis of La$_{1-x}$Ca$_x$NiO$_3$ single crystals [36]. The precursor powders were weighed according to a desired composition of Pr$_{0.8}$Ca$_{0.2}$NiO$_3$, although the incorporated Ca content in the obtained Pr$_{1-x}$Ca$_x$NiO$_3$ was lower and ranged from x = 0.08 to 0.1. The Pr$_{1-x}$Ca$_x$NiO$_3$ single crystals were reduced using CaH$_2$ as the reducing agent, in spatial separation from the crystals. The duration of the reduction was eight days, using the same procedure and conditions as previously described for the reduction of La$_{1-x}$Ca$_x$NiO$_3$ crystals [36].
Single-crystal x-ray diffraction (XRD) was performed on crystals before and after the reduction. The technical details are given in the Supplemental Material [51].
Electron-transparent TEM specimens of the sample were prepared on a Thermo Fisher Scientific focused ion beam (FIB) instrument using the standard lift-out method. Samples with a size of 20 × 5 µm$^2$ were thinned to 30 nm with 2 kV Ga ions, followed by a final polish at 1 kV to reduce effects of surface damage. High-angle annular dark-field (HAADF) and annular bright-field (ABF) images as well as EELS data were recorded on a probe aberration-corrected JEOL JEM-ARM200F scanning transmission electron microscope operated at 200 kV, equipped with a cold-field-emission electron source, a probe Cs corrector (DCOR, CEOS GmbH), and a Gatan K2 direct electron detector. STEM imaging and EELS analyses were performed at probe semiconvergence angles of 20 and 28 mrad, resulting in probe sizes of 0.8 and 1.0 Å, respectively. Collection angles for STEM-HAADF and ABF images were 75 to 310 and 11 to 23 mrad, respectively. To improve the signal-to-noise ratio of the STEM-HAADF and ABF data while minimizing sample damage, high-speed time series were recorded (2 µs per pixel) and then aligned and summed. STEM-HAADF and ABF multislice image simulations of the crystal along the [100] and [101] zone axes were performed using the QSTEM software [52]. Further details of the parameters used for the simulations are given in the Supplemental Material [51].
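As a rough cross-check of the quoted probe sizes, the sketch below computes the relativistic electron wavelength at 200 kV and the diffraction-limited probe diameter for the two semiconvergence angles. This is an illustrative estimate only, and the Rayleigh formula is our assumption, not taken from the paper: the actual probe size also depends on residual aberrations and source size.

```python
import math

# Physical constants (SI)
h = 6.62607015e-34      # Planck constant, J s
m0 = 9.1093837015e-31   # electron rest mass, kg
e = 1.602176634e-19     # elementary charge, C
c = 2.99792458e8        # speed of light, m/s

def electron_wavelength(V):
    """Relativistic de Broglie wavelength (m) for acceleration voltage V (volts)."""
    return h / math.sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))

lam = electron_wavelength(200e3)          # ~2.51 pm at 200 kV
for alpha_mrad in (20, 28):
    alpha = alpha_mrad * 1e-3
    d = 0.61 * lam / alpha                # Rayleigh diffraction limit
    print(f"alpha = {alpha_mrad} mrad -> d ~ {d * 1e10:.2f} Angstrom")
# ~0.77 A at 20 mrad; the quoted probe sizes (0.8 and 1.0 A) are larger,
# consistent with aberrations growing with convergence angle.
```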
III. RESULTS
In thin films of infinite-layer nickelates, the highest superconducting transition temperatures are realized through a substitution of approximately 20% of the rare-earth ions by divalent Sr or Ca ions [25][26][27]. Accordingly, we have prepared the precursor materials for the synthesis of single crystals with a nominal stoichiometry of Pr$_{0.8}$Ca$_{0.2}$NiO$_3$. Using a high-pressure synthesis method [36], we obtain Pr$_{1-x}$Ca$_x$NiO$_3$ crystals with typical lateral dimensions of 20-100 µm. Yet, an analysis of the as-grown crystals by scanning electron microscopy (SEM) with energy-dispersive x-ray (EDX) spectroscopy indicates that the incorporated Ca content lies between x = 0.08 and 0.1 (see Fig. S1 in the Supplemental Material [51]). This discrepancy with the nominal Ca content suggests that different growth parameters, such as an increased oxygen partial pressure, might be required to achieve stoichiometric Pr$_{0.8}$Ca$_{0.2}$NiO$_3$ crystals. By contrast, the employed parameters yielded Ca substitutions as high as x = 0.16 in the case of La$_{1-x}$Ca$_x$NiO$_3$, as determined on the as-grown crystal surface by EDX [36]; this compound exhibits a less distorted perovskite structure [6].
As a next step, we use single-crystal XRD to investigate a 20 µm piece that was broken off from a larger as-grown crystal. The acquired XRD data indicate a high crystalline quality (see Fig. S2 of the Supplemental Material [51]) and can be refined in the orthorhombic space group Pbnm, which is consistent with PrNiO$_3$ single crystals and polycrystalline powders [53,54]. The refined Ca content of the crystal is 8.6%. The refined lattice parameters and atomic coordinates are presented in the Supplemental Material [51]. Furthermore, we find from the refinement that the investigated crystal piece contains three orthorhombic twin domains, with volume fractions 0.95/0.04/0.01.
Subsequently, we carry out the topotactic oxygen reduction on a batch of Pr$_{1-x}$Ca$_x$NiO$_3$ single crystals for eight days, using the same conditions as previously described for the reduction of La$_{1-x}$Ca$_x$NiO$_3$ crystals [36]. Single-crystal XRD measurements on a reduced 20 µm crystal indicate a significant transformation of the crystal structure after eight days. However, a strong broadening and the resulting overlap of the Bragg reflections in the XRD maps prohibit a structural refinement and determination of the symmetry by this method (see Fig. S2 [51]).
Hence, in order to investigate the topotactic transformation of the crystal lattice on a local scale, we turn to atomic-resolution STEM imaging. We examine a reduced Pr$_{0.92}$Ca$_{0.08}$NiO$_{3-\delta}$ crystal with lateral dimensions of ∼80 µm. A top-down view of the crystal is shown in Fig. 1(a). Identical TEM specimens were prepared from a region of the crystal without visible surface defects caused by the FIB process. Figure 1(b) shows a low-magnification STEM-HAADF image. As a first characteristic of the topotactically reduced crystal, we note that single-crystalline regions in the specimen are separated by grain-boundary (GB) like regions, with widths ranging from a few tens to a hundred nanometers and lengths ranging from a few nanometers to micrometers. The GBs exhibit a mostly amorphous structure [Fig. 1(b) inset and Fig. S3], showing dark contrast in the images that originates from diffuse scattering [55] (see Fig. S3 for more details [51]). The amorphous character of the GBs is also confirmed by EELS measurements of elemental distribution profiles across a GB, which show a reduced EELS intensity of all cations due to the deteriorating signal in structurally disordered regions (Fig. S4 [51]). The presence of GBs can be a consequence of the topotactic reduction. Alternatively, the GBs might have formed already during the high-pressure growth of the perovskite phase.
Zoomed-in STEM-HAADF images from areas on either side of a GB [orange and blue squares in Fig. 1(b)] are displayed in Figs. 1(c) and 1(d). The same lattice structure and orientation are observed for the different crystalline domains near the GB. A typical domain size in the crystal is found to be hundreds of nanometers. STEM-HAADF imaging over a larger field of view of hundreds of nanometers does not reveal any regions with defects or impurity phases inside one crystalline domain, suggesting that each domain retains a high crystalline quality and a stable structural phase after the reduction process (see Fig. S3 [51]). In the fast Fourier transform (FFT) patterns shown in Fig. 2(b), there is an approximately 8% difference between the distances of the FFT maxima in reciprocal space along the [002] and [020] axes (blue arrows on the FFT patterns). Based on the preference of vacancy formation on the apical oxygen sites [56], this indicates a contraction along the [002] axis due to the removal of apical oxygen atoms following the topotactic reduction process. Between the FFT maxima, satellite spots corresponding to the superstructure periodicity appear. The wave vectors of the superstructure reflections are q = 1/4 reciprocal lattice units along the [020] axis and q = 1/2 along the [002] axis, indicating the formation of a 4a$_p$×4a$_p$×2a$_p$ superstructure of the perovskite [a = b = 16.56(32) Å, c = 7.60(14) Å]. In STEM-HAADF images, the contrast is approximately proportional to Z$^{1.7-2}$ (where Z is the atomic number) [57,58], so Pr columns (Z = 59) appear bright and Ni columns (Z = 28) exhibit a darker contrast. The fourfold superstructure manifests itself as periodic changes in the intensity of the STEM image. As shown in the zoomed-in image [Fig. 2(c)], half of the Pr atoms along the [100] projection appear elongated, while the other half remain round and undistorted. Focusing on the first two Pr rows, the atoms exhibit alternating round and vertical oval shapes, while the third and fourth rows exhibit alternating round and horizontal oval shapes.
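As an illustration of this kind of FFT analysis, the following minimal Python sketch (using a synthetic image, not the paper's data; all numbers here are assumptions for illustration) locates a superstructure satellite at a fractional position between the fundamental spots, compares reciprocal spacings along two axes, and evaluates the expected Z-contrast ratio from the stated Z$^{1.7-2}$ scaling.

```python
import numpy as np

# Synthetic stand-in for a STEM-HAADF image: fundamental lattice fringes
# plus a weak modulation with four times the period (the 4x superstructure).
N, a = 256, 16  # image size and fundamental period in pixels
y, x = np.mgrid[0:N, 0:N]
img = (np.cos(2 * np.pi * x / a) + np.cos(2 * np.pi * y / a)
       + 0.3 * np.cos(2 * np.pi * x / (4 * a)))

# Power spectrum: fundamental spot at N/a pixels from the center,
# satellite at one quarter of that distance (q = 1/4).
F = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
c = N // 2
print("fundamental peak present:", bool(F[c, c + N // a] > F.mean()))
print("q = 1/4 satellite present:", bool(F[c, c + N // (4 * a)] > F.mean()))

# Lattice contraction from reciprocal spacings g along two axes:
# a larger g corresponds to a shorter real-space axis.
g_020, g_002 = N / a, 1.08 * N / a  # synthetic 8% difference
print(f"[002]/[020] reciprocal spacing ratio: {g_002 / g_020:.2f}")

# HAADF Z-contrast: expected Pr/Ni column intensity ratio for I ~ Z^1.8.
print(f"Pr/Ni intensity ratio ~ {(59 / 28) ** 1.8:.1f}")
```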
Considering the projection in our image, the distortions stem from displacements of the Pr columns. Figure 2(d) indeed reveals straight and zigzag line patterns of Pr atoms alternating along the [101] direction. A closer look at the atomic arrangement in Fig. 2(f) shows that the zigzag line on the fourth column appears to be a mirror of the second, forming a four-layer repeat sequence (ABAC stacking). EELS elemental maps of Pr, Ca and Ni obtained from the crystal are displayed in Figs. 2(g)-2(i), using the Pr M$_{5,4}$, Ca L$_{3,2}$, and Ni L$_{3,2}$ edges, respectively. The maps show that the Pr, Ca and Ni contents are homogeneous over the structure. Integrated concentration profiles of Pr and Ca confirm the A-site cation stoichiometry and uniform distribution (see Fig. S5 [51]). This suggests that the strong distortions of the Pr lattice are likely not rooted in an A/B-site deficiency or ordering, but rather originate from other factors such as oxygen vacancy formation.
To gain insights into the detailed atomic structure, high-resolution STEM annular bright-field (ABF) images were acquired, since ABF is suited for imaging lighter elements such as oxygen. Figures 3(a) and 3(b) show the simultaneously acquired HAADF and ABF images of the crystal along the [100] projection. The distribution of oxygen ions, including filled and empty apical oxygen sites, is clearly visible in the ABF image. The corresponding inverse intensity profiles extracted from different layers are displayed in Fig. 3(c). The absence of image contrast at every fourth oxygen site of the Pr-O layers confirms the vacancy ordering (profiles 1 and 3), while the oxygen content remains constant in the Ni-O layers (profile 2). With the overlaid yellow arrows, the ordering pattern of the oxygen vacancies can be clearly visualized. Half of the NiO$_6$ octahedra lose one apical oxygen atom, and the remaining five oxygen atoms in a pseudocubic unit cell form a NiO$_5$ pyramid.
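To illustrate this profile analysis, here is a minimal sketch using a synthetic inverted-intensity trace and a hypothetical occupancy cutoff (neither taken from the measured data); it flags oxygen sites whose contrast falls below the cutoff, mimicking the detection of every fourth vacant apical site.

```python
import numpy as np

# Synthetic inverted ABF profile along a Pr-O layer: one Gaussian dip per
# oxygen column, with every fourth site left empty (vacancy).
n_sites, pts_per_site = 12, 20
xs = np.arange(n_sites * pts_per_site)
profile = np.zeros_like(xs, dtype=float)
for i in range(n_sites):
    if i % 4 == 3:
        continue  # vacant apical site: no atom, no contrast peak
    center = (i + 0.5) * pts_per_site
    profile += np.exp(-0.5 * ((xs - center) / 3.0) ** 2)

# Sample the inverted intensity at each expected site and threshold it.
site_signal = np.array([
    profile[int((i + 0.5) * pts_per_site)] for i in range(n_sites)
])
occupied = site_signal > 0.5 * site_signal.max()  # hypothetical cutoff
print("site occupancy:", occupied.astype(int))    # 1 1 1 0 1 1 1 0 ...
```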
Notably, a square-pyramidal coordination of the Ni ion is uncommon in nickel-oxide-based materials. The few compounds with Ni$^{2+}$ ions in a similar fivefold coordination include KNi$_4$(PO$_4$)$_3$ and BaYb$_2$NiO$_5$ [59]. However, none of these compounds exhibits a perovskite-derived structure. Our investigation therefore reveals the existence of a new structural motif, in marked distinction to the NiO$_4$ square-planar coordination in previous reports on oxygen-deficient perovskite nickelates [44,60].
A magnified view of the ABF image is shown in Fig. 3(d). The apex-linked pyramids form "bow-tie" dimer units constituting a one-dimensional chain running along the [001] direction. Such a configuration is consistent with the brownmillerite-type structure A$_n$B$_n$O$_{3n-1}$ with n = 4, corresponding to the A$_4$B$_4$O$_{11}$ phase: Layers of apex-linked pyramids are stacked in the sequence ...-Oc-Py-Oc-Py-..., where Oc denotes a layer containing only octahedra (cyan) and Py a layer containing only pyramids (orange). The ...-Oc-Py-Oc-Py-... sequence runs parallel to the [010] axis, so that the Py layers lie at the 1/4 (001) and 3/4 (001) planes with a stacking vector 1/2 [001]. In the pyramidal layers, the remaining oxygen atoms are located at the center of the apical sites without causing any tilt of the square pyramids. In contrast, the apical oxygen atoms in the octahedral layers tend to shift towards the elongated Pr atoms, leading to large octahedral tilts. A corresponding STEM-ABF image simulation was performed for the predicted atomic model shown in Fig. 3(e), using the multislice method [52] (see Fig. S6 [51]). The simulated image reproduces the vacant sites and distortions observed in the STEM measurements well, confirming the main crystal structure and the alternating stacking configuration.
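As a quick consistency check of the implied stoichiometry: for the brownmillerite-like series A$_n$B$_n$O$_{3n-1}$ with n = 4, the oxygen content per formula unit is

$$\frac{3n-1}{n}\bigg|_{n=4} = \frac{11}{4} = 2.75, \qquad \delta = 3 - 2.75 = 0.25,$$

in agreement with the Pr$_{0.92}$Ca$_{0.08}$NiO$_{2.75}$ composition and the oxygen deficiency δ ≈ 0.25 quoted above.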
The distribution of oxygen vacancies can lead to modifications of the bond angles and to tilting of the octahedra. Quantitative STEM-ABF measurements were used to precisely examine the atomic structure, including the oxygen positions, along the [101] direction. As shown by the inverse ABF intensity profiles taken in Fig. 4(a), the Ni-O-Ni bond angles in the octahedral layer exhibit a zigzag-modulated pattern with two sub-cell repeats and an average of 161.1°. The overall periodicity of the bond-angle and tilt modulations is consistent with the alternating zigzag and straight pattern of the Pr columns along the [101] direction. The simulated ABF image for the predicted model along this viewing direction also agrees well with the STEM-ABF image, confirming the polyhedral distortions in the structure [Fig. 4(e)]. The different amplitudes of the tilt and bond angles between the pyramidal and octahedral layers are the result of the change in oxygen content. This is also revealed in the structure along the [100] projection [Fig. 3(e)], where the NiO$_6$ octahedra show highly distorted tilts in the octahedral layer, while less distorted NiO$_5$ pyramids are present in the pyramidal layer, accommodating the lattice to the removal of oxygen during reduction.
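For readers reproducing this kind of quantification, the sketch below computes a metal-oxygen-metal bond angle from three 2D atomic-column positions, as one would after fitting column centers in the ABF image; the coordinates are hypothetical, not the measured ones.

```python
import numpy as np

def bond_angle(m1, o, m2):
    """Angle (degrees) at the oxygen column O between two metal columns."""
    v1 = np.asarray(m1, dtype=float) - np.asarray(o, dtype=float)
    v2 = np.asarray(m2, dtype=float) - np.asarray(o, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical fitted column positions (Angstrom): the O column is displaced
# off the Ni-Ni line, giving a buckled Ni-O-Ni angle below 180 degrees.
ni_left, oxygen, ni_right = (0.0, 0.0), (1.9, 0.32), (3.8, 0.0)
print(f"Ni-O-Ni angle: {bond_angle(ni_left, oxygen, ni_right):.1f} deg")
# Prints ~161 deg for this displacement, comparable in magnitude to the
# average reported above.
```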
IV. DISCUSSION AND CONCLUSION
The obtained insights into the vacancy order in the oxygen sublattice and the distortions of the Pr sublattice are compiled in the two schematics in Fig. 4(f), depicting the brownmillerite-like lattice along the [100] and [101] directions investigated in this study. Oxygen vacancies at every fourth apical site in every second row lead to a contraction of the out-of-plane lattice parameter and an ordering pattern with alternating NiO$_6$ octahedral and NiO$_5$ pyramidal layers. We note that the STEM-ABF images indicate a nominal structure of Pr$_{0.92}$Ca$_{0.08}$NiO$_{2.75}$ based on the oxygen vacancy ordering in one crystalline domain, and a possible nonuniform variation of the oxygen content can occur due to the presence of GBs. Exploring the mechanism and origin of the GBs can be the subject of future work [61]. Compared to the perovskite structure, the vacancy order reduces the tilt angles and increases the Ni-O bond angles in the pyramidal environment. The Pr sublattice is also affected by the lack of every fourth apical oxygen ion, and the resulting complex distortion pattern leads to a 4a$_p$×4a$_p$×2a$_p$ reconstructed superstructure. The observed distortion of the Pr cation positions and the associated wavelike variation of the surrounding bond angles and polyhedral tilts is highly unusual for perovskite-related materials. Yet, highly distorted A- and B-site cation sublattices were also reported for other topotactically reduced perovskite-related materials, such as CaCoO$_2$ [62], enabling the realization of phases that might be categorically unattainable by direct synthesis methods.
Further studies on Pr$_{0.92}$Ca$_{0.08}$NiO$_{2.75}$ are highly desirable to accurately determine the reconstructed atomic positions and the crystallographic unit cell, which is likely larger than the 4a$_p$×4a$_p$×2a$_p$ supercell. For instance, high-resolution synchrotron XRD might allow resolving the overlapping structural reflections in our single-crystal XRD maps (Fig. S2 of the Supplemental Material [51]), and therewith a full structural refinement might be achievable. Moreover, future STEM studies on crystals after prolonged topotactic reduction can reveal whether our alternating pyramidal/octahedral structure eventually transforms into a square-planar/octahedral structure with a √5a$_p$×a$_p$×√2a$_p$ supercell, which was proposed in Ref. 50 for PrNiO$_{3-\delta}$ with δ ≈ 0.33.
Notably, our observed oxygen vacancy ordering with apex-linked pyramidal units is distinct from all previously identified RNiO$_{3-\delta}$ lattice structures, which contain stacks of alternating square-planar NiO$_4$ and octahedral NiO$_6$ sheets. Moreover, to the best of our knowledge, pyramidal coordination has generally not been observed in perovskite-derived Ni compounds to date, whereas it is a common lattice motif in various oxygen-deficient transition metal oxides, including Fe [63,64], Co [65], and Mn [66,67] compounds. In particular, SrFeO$_{3-\delta}$ hosts a variety of oxygen-vacancy-ordered phases with distinct spin- and charge-ordered ground states [68,69]. Since a vacancy ordering pattern closely similar to that in Fig. 4(e) emerges in SrFeO$_{3-\delta}$ for δ = 0.25 (Sr$_4$Fe$_4$O$_{11}$), an exploration of potentially emerging magnetic order in Pr$_x$Ca$_{1-x}$NiO$_{2.75}$ will be of high interest.
In summary, we examined the topotactic transformation of a Pr$_{0.92}$Ca$_{0.08}$NiO$_3$ single crystal to the oxygen-vacancy-ordered Pr$_{0.92}$Ca$_{0.08}$NiO$_{2.75}$ phase. The transformed crystal structure contains a 4a$_p$×4a$_p$×2a$_p$ supercell and periodic distortions as well as a zigzag pattern of Pr ions along the [100] and [101] directions, respectively. The ordering of the oxygen vacancies on the apical oxygen sites forms one-dimensional chains of bow-tie dimer units of NiO$_5$ square pyramids. These square-pyramidal chains run parallel to the [001] direction, connecting with flattened NiO$_6$ octahedra. Our atomic-scale observation of the systematic lattice distortions and oxygen vacancies underpins an unexpected pyramidal-type brownmillerite-like phase in the nickelates after a topotactic reduction. Our results are instructive for future efforts to gain a comprehensive understanding of the topotactic reduction of rare-earth nickelates and related materials. | 2023-05-20T15:08:03.710Z | 2023-05-18T00:00:00.000 | {
"year": 2023,
"sha1": "91a467f7092bac0fb05a21cdc5afdc2d71d349dd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1103/physrevmaterials.7.053609",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "4f52b3fc4c8fcb5505a5c6de9c6e42bf253d84a0",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
225054474 | pes2o/s2orc | v3-fos-license | Simplifying and Facilitating Comprehension: The “as if” Heuristic and Its Implications for Psychological Science
Simplicity is a fundamental tenet of cognition intended to cope with a complex and intricate world. Based on the writings of the German philosopher Hans Vaihinger, this article introduces a wide-ranging simplification scheme denoted the “as if” heuristic. Following this heuristic, much of our productive and constructive thoughts about the world, specifically in science, are based on idealized fictitious assumptions. Although descriptions of the world as portrayed by psychological models and theories may contain fictitious elements (antithetical or at least indifferent to the search for truth), they afford a simplification tool that facilitates our comprehension of a complex and obscured world. Numerous examples from the psychological literature in which the “as if” heuristic is apparent are presented. Specifically, we analyze the implications of exploiting the heuristic for the development of psychological constructs, theory building, and the foundations of psychological measurement. While highlighting the gains acquired from the use of the “as if” heuristic, we also discuss its possible pitfalls if not properly used.
Introduction
Human behavior is multifaceted and far too rich and complex to be understood without first simplifying it. How simplification is attained, at either the perceptual or the cognitive level, is a fundamental psychological question for which there is more than one answer. A common scheme for attaining simplification is searching for and generating patterns. People tend to perceive patterns as units or wholes, allowing them to simplify perception and cognition. This tendency can be traced back to Gestalt psychology and its laws of organization (e.g., Wagemans, Elder, et al., 2012; Wagemans, Feldman, et al., 2012). Broadly, the term Gestalt means form or pattern; an elementary tenet of the Gestalt school is the search for patterns, with a robust preference for the simplest ones (Garner, 1970). For instance, the “law of Prägnanz” (law of simplicity) postulates that stimuli tend to be perceived in their most simple form. Gestalt principles, such as figure-ground, closure, proximity, or continuation, all promote in one way or another simplification while maintaining internal consistency (e.g., Rock, 1975).
More recently, psychologists have dubbed this the “simplicity principle” (e.g., Chater & Vitányi, 2003; Feldman, 2003), following which the search for simplicity drives a wide range of cognitive processes. The pursuit of simplicity and parsimony is further promoted in philosophy, for example, in Ockham's razor, according to which simpler theories are preferable (supposedly, more likely to be true). Researchers employ theories, models, and metaphors that explicitly or implicitly simplify reality so as to make it amenable to empirical investigation and to formulate it in a comprehensible manner, given our limited cognitive capacity.
What are possible cognitive devices that may promote simplification? One common route is the use of heuristics, the original meaning of which (in Greek) refers to a method of discovery. In the past few decades, heuristics have been used and defined in several (yet highly similar) ways. For instance, in the context of problem solving, a heuristic has been described as a sort of interim reasoning supposed to facilitate the discovery of suitable solutions (Polya, 1945). Computer scientists and researchers of artificial intelligence (AI) consider it a shortcut method for solving problems (Newell & Simon, 1972) in a quick manner, which implicitly assumes an optimal trade-off between accuracy and speed (often resulting in a good-enough approximation rather than a perfect solution). In psychology, the term has been advanced by the research program initiated by Tversky and Kahneman (1974), who proposed that under conditions of uncertainty, people tend to employ a “limited number of simplifying heuristics rather than more formal and extensive algorithmic processing” (Gilovich et al., 2002, p. xv).
In this article, we present yet another means of simplification, denoted the “as if” heuristic. Its central tenet is that human cognition employs simplified descriptions and theories which, strictly speaking, are often fictitious (and hence false). This heuristic enables us to capture the essentials in a suitable manner, thus offering optimal means for a meaningful comprehension of the world. We implicitly pretend “as if” our portrayal of the world is precise and complete; although this is fictitious, it affords a useful and better understanding of the world around us. A common representative example of the “as if” heuristic is rationalization (e.g., Cushman, 2020). Specifically, the mind attempts to rationalize or reconstruct post hoc an event or an act “as if” it was a priori part of the person's initial goals and beliefs. As we elaborate later, the “as if” heuristic, despite being a fiction, is nevertheless useful. Indeed, regarding rationalization, Cushman notes that “this is a useful fiction: Fiction, because it imputes reason to non-rational psychological processes; Useful, because it can improve subsequent reasoning.” As another example, describing complex mental phenomena in a personified manner (e.g., “the brain decides . . .” or “emotions make us act . . .”) is false in the sense that such descriptions do not capture the actual mental processes, yet acting “as if” they do may convey useful information for the purpose of understanding particular behavior. In addition to enabling us to capture the world in an abridged and eloquent (coherent) way, the heuristic also serves as a tool for facilitating communication.
The heuristic is derived from the writings of the German philosopher Hans Vaihinger, whose work was originally published in 1911 and translated into English in 1924 (The Philosophy of “As If”: A System of the Theoretical, Practical and Religious Fictions of Mankind). Recent work by Appiah (2017) elaborates and expands on Vaihinger's original work. Although our conjecture of the “as if” heuristic is firmly based on Vaihinger's and Appiah's work, we differ from both authors in explicitly taking a psychological-science perspective.
Vaihinger's treatise is aimed at two related goals: On one hand, it offers insight into the cognitive processes underlying people's simplification attempts, in particular scientists, for a better comprehension of our intricate world. In that respect, the manner by which he examines the "as if" construct may be seen as a pure cognitive-psychological investigation. On the other hand, the proposed conjecture regarding researchers' cognitive processes is used to understand the nature and development of scientific models and empirical explorations and how these are understood and interpreted. As such, it can be viewed as a methodological, cognitive tool for further development of psychological science.
The remainder of this article is organized as follows. We briefly introduce Vaihinger's framework and examine its key constructs of fictions and the “as if” heuristic. The implications of this framework, in particular for psychological science research, are then discussed and analyzed. It is proposed that encapsulated in Vaihinger's analysis is a sort of validity assessment that is essential for understanding and elucidating research in psychological science. We elaborate on several representative examples to illustrate the significance of Vaihinger's approach. The final discussion elaborates on the overall consequences of his perspective and examines its possible role in the development and explanation of empirical results and the interpretation of theories and models.
The Essentials of Vaihinger's "as if" Philosophy
Vaihinger's starting point fits well with current views of perception and cognition, namely, that human comprehension of the world does not entail an exact portrayal of reality. A cognitive representation that maps one-to-one with the outside world is an impossible task given the cognitive system's limitations. To make the world more accessible and make sense of it, the internal representation needs to be simplified. The goal of cognition or thought is "to provide us with an instrument for finding our way more easily in this world" (p. 11). Simplification, following Vaihinger, can be achieved by an implicit idealization of the world which is realized by creating fictions. Specifically, our attempts to make sense of the world as reflected in descriptions, theories, and models of it are founded on some "as if" idealization which in principle is false (i.e., fictitious) in that it does not perfectly correspond with reality. This does not mean that fictions are a necessary characteristic of all kinds of simplification. For instance, the average or mean as a descriptor of a set of observations is in some way a simplification yet it carries useful information (and in addition it can be decomposed into its parts). However, in the case of the "as if" heuristic, essential information is omitted or assumptions are included that are not known to be true. As noted by Appiah (2017), Vaihinger's main interest and contribution lies "in the role of untruth in thinking about reality" (p. 4).
Vaihinger's major claim is that fictitious idealization (or the “as if” heuristic, as we refer to it) is a necessary tool for facilitating our comprehension of a complex and multifaceted world. To understand and portray reality in a coherent and eloquent way, we need to simplify by idealization, implying some deviation from a perfectly veridical perception of the world. Such idealization carries some untruth; yet, following both Vaihinger and Appiah, this falsehood is constructive and functional for apprehending the world in a meaningful way.
Although “fictions” play a major role in Vaihinger's thought, the concept carries a negative connotation in a scientific context, being an antithesis of truth. Two observations are important in this regard. First, Vaihinger did not claim, and neither do we, that all simplifications entail fictions. Second, Vaihinger's notion of fictions is not so much antithetical to truth as indifferent to it. This is because truths and fictions serve different roles in the scientific process. In Vaihinger's framework, fictions have an explicit constructive function, namely, generating mental structures that assist in forming a meaningful world around us. Fictions are mainly evaluated according to their usefulness in achieving particular scientific goals (e.g., understanding or predicting behavior), and much less according to their truth.

The cognitive facilitation underlying the “as if” notion is reminiscent of the “other things being equal” (ceteris paribus) supposition that is often used in theory construction and modeling. Whether this simplification is a fiction depends on what information is left out. If the information omitted (or held equal) is not essential to what is being described, then it can be seen as a justified simplification. However, if one leaves out factors that are highly relevant to the phenomena under investigation, then the model (compared with reality) contains a fiction. Assuming that all factors and circumstances remain the same except for those that are explicitly varied, and ignoring all the potential interactions between variables, is an idealization (knowing that in reality it is unlikely to be true), yet it is necessary for making a theory or model workable.
Following Vaihinger, “as if” simplifications should not be evaluated by their ability to produce scientific truths but rather by their usefulness in creating a workable theory or model. It is important to realize that although fictions and truth (the ultimate goal of the scientific enterprise) may seem incongruent, the role of the fictitious “as if” is provisional. Specifically, it can be conceived as a “mental scaffold” used for temporary support, supposed to be removed by the time that final comprehension is achieved. To illustrate, imaginary numbers (containing i, the square root of −1) are not “real” in the observed, three-dimensional (3D) space, yet they are in higher-dimensional worlds. Notwithstanding, imaginary numbers are regularly valuable in calculations involving the 3D world. The fictitious “as if” may be apprehended as a (cognitive) bridge between comprehension of reality and reality itself (i.e., between epistemology and ontology). Without undermining the heuristic's efficacy, people may occasionally fail to remove the scaffold at the end of the process, consequently arriving at unwarranted conclusions and misunderstandings, as illustrated in a later section.
Vaihinger's framework and presuppositions are highly compatible with the study of selective attention, a major role of which is to handle the “. . . complexity of the information that is presented to the senses at any one time and the consequent risk of confusion and overload” (Kahneman & Treisman, 1984, p. 29). Furthermore, as already noted, Vaihinger's approach also shares a fundamental feature with Gestalt psychology: Both relate to tools that enable the cognitive and perceptual systems to make sense of a too complex and intricate world. Like the “as if” device, which aims at simplified structures (albeit fictitious ones), the Gestalt movement assumes that wholes (Gestalten) are the simplified primary units the cognitive system is aiming at. Underlying the Gestalt laws is an attempt to explore possible simplifications in exchange for some lost precision. For instance, following the law of closure, people tend to perceive and structure objects in their entirety (as wholes) even when some (relatively negligible) parts are missing. They often perceive an object “as if” it was a whole even when it does not strictly match reality. Thus, a circle drawn using broken lines is perceived, in the present terminology, “as if” it was a complete circle.
The "as if" heuristic and the reasoning/decision-making heuristics share a common objective of facilitation by simplification. Although strongly associated with computer programs, the term heuristic has originally been dubbed by Polya (1945) as a sort of reasoning "not regarded as final and strict but as provisional and plausible only, whose purpose is to discover the solution of the present problem" (p. 115). Indeed, being provisional, a heuristic approach is often incomplete implying that it may be error prone. It is this provisional state that grants the fictitious "as if" a status of a temporary mental scaffold that eventually has to be removed to get closer to truth. The "as if" and decisionmaking heuristics also share the conviction regarding the efficacy of heuristics. For instance, when introducing the heuristics used by people to assess probabilities, Tversky and Kahneman (1974) claim that "people rely on a limited number of heuristic principles which reduce the complex task of assessing probabilities" and further propose that "these heuristics are quite useful" (p. 1124) although sometimes they may lead to systematic errors.
Notwithstanding their facilitating function, fictions are to some extent untrue and, as discussed later, there is a trade-off between maximizing accuracy (antonymous to fiction) and constructing an effective and useful model or theory. An important distinction for assessing this trade-off is between what Vaihinger referred to as “true” and “semi” fictions. A true (real) fiction is one that is both false and contradictory, an example of which is the square root of a negative number. This example highlights the fact that, under some circumstances, even idealizations that contain contradictions may, for some purposes, be so valuable that they overshadow other considerations such as faultless and perfect accuracy. In contrast to true fictions, semi-fictions, although not entirely compatible with reality, are not contradictory in themselves. Semi-fictions assume the unlikely or the “close” to real; true fictions contain the impossible (in a given world). Semi-fictions are abundant both in daily life and in scientific discourse. For instance, a general class of semi-fictions is approximations: These are frequently used when considerations of cognitive clarity and ease of comprehension overrule minor, inconsequential deviations from perfect precision. Simple linear regression is a semi-fiction because almost no behavioral phenomenon is strictly linear, yet it often captures an informative trend. Finally, many psychological constructs are not well-defined, whereas researchers treat them (fictitiously) as if they were well-defined. Examples can be found in classifications that group observations into provisional categories, such as the Big Five classification of personality (e.g., Hurtz & Donovan, 2000), Ekman's (1992) seven basic emotions, or Fiske's four relational models (e.g., Fiske, 2004). These classifications may have been useful in stimulating all kinds of psychological research, yet they are semi-fictions in the sense that none is perfectly mapped onto reality.
An insightful example of a semi-fiction brought up by Vaihinger (nominally from economics but with unquestionable bearing on psychology) concerns Adam Smith's seminal The Wealth of Nations, first published in 1776. It is one of the first treatises advocating the free market, and its most fundamental assumption is that people are selfish rational primates whose only goal is to maximize their own interests. It was obvious to Vaihinger that Smith was well aware that this (“as if”) assumption is an oversimplification (as demonstrated, for instance, by Smith's other well-known essay, The Theory of Moral Sentiments, published in 1759) and hence false. Yet, he defended Smith by proposing that subsidiary causes and partially conditional factors, such as good will or habit, were knowingly ignored to achieve the simplification that was necessary to obtain a comprehensible system. Smith's assumption is thus semi-fictitious in that it is incompatible with reality but does not lead to internal contradiction.
It is more difficult to reconcile theories or models when true fictions are involved. This is a controversial and intricate issue on which scientists differ, and here we restrict the discussion to two important points. First, as Appiah correctly points out, dealing with actual inconsistencies in our thought cannot be done by adopting a nonstandard logic. Rather, he suggests what he terms “functional isolation,” following which “we have a large set of families of beliefs, each of which we try to keep consistent and these families are not usually brought together in deliberation” (p. 16). Second, given that models or theories are not strictly true, containing fictitious “as if” elements, the value of comparing two such models (which may even be contradictory in that they make opposite predictions) has its limitations. Specifically, as Appiah remarks, just because one theory, which is not strictly true, is successful in some sense or for some purpose, one cannot infer that another (again imperfect) theory that is inconsistent with the first cannot be equally successful (given that the success of any theory is not impeccable). In other words, given that many theories are based to some extent on the “as if” construct, comparisons between such theories or models should be performed cautiously.
It may be important to note the similarities and differences between Vaihinger's approach to truth and fiction and that of pragmatism, best known in psychology through the work of William James. James (1907/1987) discarded the idea of truth as a correspondence between our mind and reality and favored a more operational definition: “(t)rue ideas are those that we can assimilate, validate, corroborate, and verify” (p. 573). Both James and Vaihinger were pragmatic in their emphasis on the importance of scientific practice, but the difference is that for Vaihinger, fictions serve as a provisional mental scaffold meant to be dropped or replaced by more accurate, true knowledge as science progresses. An important distinction in Vaihinger's framework is between the “as if” construct and a hypothesis, the latter usually being derived from a theory or a theoretical framework. Hypotheses attempt to unveil truth and as such should be subjected to stringent empirical and logical tests. Scientific knowledge, regardless of whether obtained by deductive or inductive means, is evaluated according to criteria of truth. In contrast, the construct “as if” represents a presumption (often a fallacious one) that does not require the employment of any tests regarding its truth. Thus, unlike James's pragmatism, Vaihinger's fictions imply an understanding of truth that is necessary to distinguish “as if” from hypothesis (see Suárez, 2009). Importantly, according to Vaihinger, one of the goals of fictions is to be replaced by hypotheses, which can then be tested to be true or not.
The differences between Vaihinger and James also exemplify the relevance of fictions for contemporary psychological research. For instance, in models of economics and decision making, "as if" reasoning has led to theories that are not concerned with the truth of their assumptions but only with predictions (Gigerenzer, 2020). In psychology, however, such reasoning has only been used rigorously in the school of behaviorism. Most other fields of psychological research employ (latent) constructs to describe and hypothesize about psychological processes (e.g., attitudes, personality, intelligence). As illustrated in this article, many of these constructs in psychological research are most fruitfully conceived as fictions, which is the basic tenet of the "as if" heuristic.
The essence of the “as if” philosophy boils down to the following: Although the essential goal of science is to unveil the ultimate truth, the world is so intricate and complex that it is next to impossible for humans to fully and absolutely achieve that goal. The “as if” approach embraces an inescapable compromise between two incompatible goals: the aim for comprehensive and complete accuracy, which is unattainable, and the need for simplification that would enable a more sensible and reasonable comprehension. Unlike Vaihinger and Appiah, in this article we employ a psychological rather than a philosophical terminology, yet in essence our analysis is compatible with their writings. Although Vaihinger's work has recently regained some attention in the philosophy of science (cf. Suárez, 2009), few applications to psychological research have been proposed. A notable exception is the work by Smythe (2005, 2017), who argues, as we do here, that “as if” reasoning has a function in making “explanatory theories more accessible to our understanding, to elucidate the metaphorical basis of many psychological constructs” (Smythe, 2005, p. 300). In addition, relating to the work on psychological fictions by Adler, Smythe discusses the potential role of “as if” fictions in people's narrative self-understanding. This article is devoted to scrutinizing the role of the “as if” heuristic in the broader field of experimental psychology, across its different domains.
In what follows, we explore the implications for psychology derived from the “as if” philosophy. We discuss the major question of how far we can accept the “as if” heuristic (and the corresponding distortions implied by it) without completely falsifying our internal perception and understanding of the world. For illustration, we analyze in more detail a few key examples from the psychological literature, highlighting the role of the “as if” framework. In the final discussion, we assess the pros and cons of this framework in relation to methodological and conceptual issues in psychological science.
Implications for Psychology Entailed by the “as if” Heuristic
We start by noticing that the notion of "as if" may be related to standard concepts of validity, in particular, ecological validity (Brewer & Crano, 2014), yet it is nevertheless fundamentally different. Specifically, ecological validity is aimed at assessing the extent to which the findings obtained from a specific setting or experiment(s) are generalizable to real-life settings. As such, the question of ecological validity examines the extent to which the correspondence (resemblance) between the (often artificial) experiment setting and the external world is justified. Does it offer a satisfactory imitation of reality and does it capture the major underlying variables of the phenomenon under investigation? In contrast, the construct of "as if" refers to the correspondence between the mind's internal representation and the alleged reality. For instance, one may extend the notion of "as if" to the relation between a sample and the corresponding population. We often generalize from the sample to the population as if they were the same (indeed, as the sample gets larger, the "as if" notion gradually becomes less fictitious). It is of course legitimate to make inferences from sample results as long as one keeps in mind that the inference is probabilistic rather than deterministic. Researchers often ignore the probabilistic element of their results (especially at the interpretation and discussion stage) treating them "as if" they were population outcomes.
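A minimal simulation (with an arbitrary normal population; the numbers are illustrative only) makes the point that the “as if” identification of a sample with its population becomes less fictitious as the sample grows:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 100.0, 15.0  # arbitrary "population" parameters

for n in (10, 100, 10_000):
    sample = rng.normal(mu, sigma, size=n)
    err = abs(sample.mean() - mu)
    print(f"n = {n:>6}: sample mean = {sample.mean():7.2f}, |error| = {err:.3f}")
# The discrepancy between sample and population shrinks roughly as
# sigma / sqrt(n), so treating the sample "as if" it were the population
# is a progressively milder fiction for larger n.
```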
The notion of "as if" also differs from the concept of internal validity. The latter deals with precautions concerning causal inferences drawn from an experiment like ensuring the absence of confounding variables, safeguarding potential influences of external variables, and testing the extent to which alternative explanations can be ruled out. In contrast, the "as if" construct is impartial to tests regarding the internal logic of inferences; rather, it is assessed by its usefulness as a simplification tool facilitating the cognitive processes. Yet, it may carry potential threats to theory development and the interpretation of results if one fails to keep in mind the boundaries entailed by using this heuristic.
It is often claimed that cognitive heuristics are susceptible to different fallacies resulting in cognitive illusions (e.g., Pohl, 2004). Although the notion of cognitive illusions originated in perception, the term has been used as a metaphor in the context of reasoning and decision-making errors. Cognitive illusions presumably result from underlying assumptions based on prior knowledge (regardless of its reliability) interacting with perceived reality. The term illusion is probably most suitable for the description of framing effects (e.g., Kahneman & Tversky, 1984), referring to different descriptions or frames of the same situation and the same object. For instance, the same ground beef can be described as 80% lean or 20% fat, yet the two frames are not assimilated into the same cognitive structure. People evidently have a preference for 80% lean ground beef and even judge it to be tastier than 20% fat (Levin & Gaeth, 1988). Indeed, Kahneman and Tversky (1984) suggest that “in their stubborn appeal, framing effects resemble perceptual illusion” (p. 343). In the same vein, the “as if” heuristic can also be conceived as creating illusions (fictions) of which, as with framing and other heuristics, people are not aware.
In sum, the main process underlying idealization and the corresponding “as if” heuristic concerns the need for simplification, caused jointly by the boundless complexity of the world on the one hand and humans' limited cognitive resources on the other. Notwithstanding the usefulness of the “as if” heuristic (in Vaihinger's terminology, the usefulness of fictions or “untruth”), the question remains as to how far we can accept this heuristic (and the corresponding distortions implied by it) without completely falsifying our internal perception and understanding of the world.
In what follows, we examine several examples from psychology aimed at demonstrating both the likely advantages of employing the heuristic and the potential dangers involved in overusing it. We discuss a representative sample of prevalent usages of the “as if” heuristic epitomizing a large range of domains in psychology. In particular, we examine the manner in which the “as if” construct is reflected in (a) the use of psychological concepts, (b) the employment of metaphors and the development of theories and models, and (c) research methodology, specifically measurement and data analysis.
The "as if" Heuristic as Reflected in Psychological Constructs
Psychological constructs are often elusive in that they are either imprecise or not well-defined. Utility, well-being, attitudes, processing capacity, and psychological distance (to mention just a few) are all complex, thorny constructs that researchers pretend to comprehend “as if” they were well-defined. On closer inspection, however, their exact meaning is vague, which should not be surprising given that many psychological constructs originate from folk-psychological models (Danziger, 1997). Notwithstanding the fictitious “as if” nature of many psychological constructs, employing such constructs may nevertheless be seen as useful in Vaihinger's framework. Below, we examine some representative examples of constructs, pointing out their fictitious nature and at the same time indicating the benefits obtained from their use.
Dichotomization is a par excellence example of the use of the “as if” heuristic, reflecting the tendency to simplify and minimize cognitive effort. Formally, a dichotomy consists of a partition into two parts or subsets that are jointly exhaustive and mutually exclusive. In reality, however, these two requisites are rarely satisfied conjointly. Most dichotomies are fictitious (thus false), being either not jointly exhaustive and/or not mutually exclusive. In the most recurrent case, the two ends of the dichotomy are presented “as if” they are exhaustive while in fact there are other possibilities in between, as for instance in far-close, expensive-cheap, or abstract-concrete. Furthermore, dichotomies are often constructed as if the two categories of the dichotomy were rigorously well-defined, which is often a fiction. Bedford (1997) demonstrated the problem by describing a physician from the mid-18th century who claimed to discover the organ system that removes toxins from the blood, which he labeled the “liver.” He “further claimed to discover a second organ, which circulates the blood, absorbs nutrients, expels waste products from the body, and attacks foreign invaders. For when the liver is removed, the body is still able to do all these things and more, until such time as the toxin buildup is fatal” (p. 231). The physician suggested calling this second organ “not-the-liver.” Clearly, he had not discovered a second organ but had merely shown that the liver is not the only organ present in the body. The fallacy lies in the erroneous “as if” assumption that what remains after damage is a coherent category. Because “not-the-attribute” may not be a unitary concept, such a practice makes the attribute under discussion vague and, as noted by Bedford, may lead to self-perpetuating false claims and misguided research. In a later paper, Bedford (2003) illustrates how the fallacy occurs in different domains of psychology (e.g., the explicit vs. implicit memory distinction) and in neuropsychological research.
The use of false dichotomies in psychology “as if” they were true is widespread, both in methodology and data analysis and conceptually in theory building. Regarding the former, null hypothesis significance testing (NHST) has been practiced by many researchers (deliberately or not) “as if” NHST determines truth whenever the empirical results yield p < .05 (with the conventional criterion α = .05), and falsehood otherwise. NHST is clearly a fiction because the .05 criterion is arbitrary (e.g., Wagenmakers, 2007). Does p = .049999999 entail that the null hypothesis is false but p = .0500000001 that it is not? Although NHST comprises a fiction (with undeniable deficiencies), it is a simplification that under certain conditions may be considered a useful fiction. As noted by J. Cohen (1990), NHST procedures are attractive because they offer “a deterministic scheme, mechanical and objective” (p. 1309), resulting in an unequivocal yes/no decision. It is obviously beyond the scope of this article to discuss the replication problem, except for noting that when one removes the fictitious nature of hypothesis testing (i.e., eliminating the “as if” mental scaffold), the predicament of the so-called replication crisis may become questionable. As another example, simple linear regression is often used to study whether the phenomenon under discussion is linear or not (a real dichotomy). In reality, psychological phenomena are rarely perfectly linear. However, the fiction in this case is valuable because we are interested in the trend, and linearity offers simplified (albeit false) yet valuable knowledge.
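The linear-regression point can be made concrete with a small simulation (synthetic data; illustrative only): a straight-line fit to a mildly nonlinear relationship is strictly false, yet it recovers the trend that is usually of interest.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
# The true relationship is mildly nonlinear (saturating), plus noise.
y = 3.0 * np.sqrt(x) + rng.normal(0, 0.5, size=x.size)

# The "as if linear" fiction: an ordinary least-squares straight line.
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
r = np.corrcoef(y, y_hat)[0, 1]

print(f"fitted line: y = {slope:.2f} x + {intercept:.2f}")
print(f"correlation of fit with data: r = {r:.2f}")
# The linear model is false (the curve saturates), but it captures the
# positive trend well; the fiction is useful as long as the scaffold
# (strict linearity) is not mistaken for truth.
```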
Fictitious dichotomies are also pervasive in theorizing and the formulation of conceptual constructs of psychological phenomena. Formulations such as global versus local perceptual primacy (e.g., Kimchi, 1992; Navon, 1977), automatic versus controlled processing, and low- versus high-level construals (e.g., Trope & Liberman, 2010), to mention just a few examples, are all fictitious dichotomies. Nevertheless, they often constitute useful simplifications that enable theory development and facilitate the construction of accessible and meaningful concepts.
Intelligence is another hallmark psychological construct that is measured by different indices, mainly IQ tests. Irrespective of whether intelligence is conceived as being a single general ability or comprehended as consisting of multiple intelligences (Gardner, 1983), the concept itself is an abstract term that cannot be directly observed. Rather, it is a latent construct inferred from, for example, factor analysis of scores on many different cognitive tests. The most common interpretation of intelligence assumes that it is the cause of observed scores (i.e., a reflective model). Following this interpretation, correlations among different types of cognitive tasks emerge because of the effect of differences in intelligence, implying observed differences between people in their educational attainment, job performance, or life success. Such a causal interpretation implicitly suggests that intelligence is an existent real causal factor ingrained in people beyond the realm of statistical abstraction (e.g., Borsboom, 2005).
However, besides being conceived as a general mental capability, the exact definition of intelligence remains vague and controversial. At the neurological level, many brain areas have been found to be involved during cognitive tasks, and dozens to hundreds of different genetic loci have been associated with intelligence scores (Barbey et al., 2012; Savage et al., 2018). In addition, alternative models have been proposed that explain the observed correlations equally well without the need for the existence of a factor of general intelligence (Van der Maas et al., 2006). This would imply that intelligence may as well be a statistical index, emerging from the data, that has no causal function at all (i.e., a formative construct). Furthermore, intelligence, most explicitly in its popular operationalization as an intelligence quotient, is by definition an individual differences measure. It describes differences in performance (or derived capacity) between people. It has been seriously questioned whether such inter-individual constructs can be generalized to intra-individual constructs (Molenaar, 2004; Molenaar & Campbell, 2009). Although most researchers treat intelligence "as if" it is a well-defined individual attribute, it remains extremely useful for explaining differences between individuals in a wide range of settings, including recruitment and selection. Examining the veracity of the concept by strict scientific criteria would lead to the conclusion that it is a fiction, and hence all its applications could be judged to be invalid. Such a radical conclusion could be averted, however, by applying the "as if" heuristic, thus conceiving intelligence as a useful fiction. Specifically, we could proceed with our research "as if" intelligence is a causal mechanism in human performance. At the same time, though, we might heed Vaihinger's warning that any evidence congruent with a fiction should not be seen as evidence for the fiction's truth. As such, extrapolations from findings on intelligence measures to the capacities of individuals or groups of individuals, which have stirred up a lot of controversy (e.g., Fancher, 1985; Sternberg et al., 2005), are not warranted. Yet, studies using different models of intelligence would be warranted, even if these models might be internally contradictory (e.g., interpreting interindividual data at an intra-individual level) or contradictory with one another (e.g., intelligence as a real cause or as a constructed statistical index).
As a final example, we examine the concept of randomness. Whether randomness is inherent in nature (e.g., the moment at which a radioactive element decays; genetics in modern biology) or whether it indicates the limits of our knowledge remains an open question. Regardless of the answer, the construct remains intricate and elusive (Ayton et al., 1989;Lopes, 1982;Nickerson, 2002) for two main reasons. First, it lacks a rigorous precise definition: It can be defined either in terms of the process underlying randomness or in terms of the characteristics of the outcome of the process. Second, and related, there are no unqualified tests to determine its existence (e.g., Bar-Hillel & Wagenaar, 1991;Falk & Konold, 1997).
Despite its indefinable nature, randomness plays a pervasive role in psychological research in two different domains: in the perception of randomness, and in research methodology, in which it is constructed as part of the investigation process (e.g., random selection of subjects, random choice of stimuli, or random order of presentation). In both domains, although researchers pretend "as if" they command randomness, the concept remains elusive. Adopting strict criteria, the concept is a fiction because it can never be rigorously verified; yet, as noted by Nickerson (2002), "the usefulness of the concept, despite its shaky conceptual foundations, is remarkable" (p. 335). This usefulness stems from two sources: First, there are several main characteristics that delineate the meaning of the concept, such as complete unpredictability (Neuringer, 1986) or the lack of a pattern or a principle of organization. Although these criteria cannot replace a rigorous definition, they nevertheless provide a good approximation. Second, there are evidently several qualified tests (e.g., Chaitin, 1975; Strube, 1983) that supposedly assess randomness, albeit probabilistically.
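To illustrate what such a probabilistic, qualified test looks like in practice, the sketch below implements the classical Wald-Wolfowitz runs test on a binary sequence. It is not one of the tests cited above, and the helper name and example sequence are our own; note that it yields a graded z score rather than a definitive verdict:

```python
# Wald-Wolfowitz runs test: compares the observed number of runs in a
# binary sequence with the number expected under randomness.
import math

def runs_z_score(bits):
    n1, n2 = bits.count(1), bits.count(0)
    runs = 1 + sum(a != b for a, b in zip(bits, bits[1:]))
    mu = 2 * n1 * n2 / (n1 + n2) + 1               # expected number of runs
    var = (mu - 1) * (mu - 2) / (n1 + n2 - 1)      # variance of that number
    return (runs - mu) / math.sqrt(var)

alternating = [i % 2 for i in range(50)]  # 0101...: far too regular
print(round(runs_z_score(alternating), 2))  # ~6.86, a "non-random" verdict
```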
Without undermining its usefulness, the fictitious nature of randomness should not be overlooked. For instance, several studies investigated people's biases in perceiving randomness by testing whether they are able to distinguish between sequences produced by random or by non-random processes (for a review, see Bar-Hillel & Wagenaar, 1991). Nickerson (2002) correctly noted that in studies on the perception of randomness, researchers do not provide their subjects with sufficient information and do not specify underlying assumptions (in the present terminology, "as if" subjects possess all the necessary information and are familiar with the underlying assumptions). He therefore stated "that conclusion drawn from performance data regarding the ability of people to produce or perceive random sequences must be considered tenuous" (p. 351). Given the sizable number of studies pertaining to people's deceptive and biased perception of randomness, researchers are reminded that studies employing concepts which Vaihinger referred to as fictitious may be useful but should be interpreted with care.
The "as if" Heuristic as Reflected in Metaphors and Theoretical Models
A common simplifying strategy in the behavioral sciences is adopting a particular setting or a specific metaphor serving as a model for a much broader class of human situations. For instance, the image of a hungry rat learning to negotiate a maze served for many years as the prime model of how people adapt to their circumstances.
Metaphors are pervasive in both daily life and scientific discourse, specifically psychology (Hoffman et al., 1990). Metaphors in psychological research are abundant: the mind as the software of the brain, memory store, and memory scan; attention as a spotlight, a resource, and processing capacity; emotions as energy, defense mechanisms, calibration of probabilities, and open-minded thinking; all of these comprise only a small sample of the metaphors used in psychological terminology. Metaphors are not just a characteristic of language but supposedly reflect a feature of our thought process (Lakoff & Johnson, 1980). Whereas Lakoff and Johnson (1980) claim that metaphors may constrain our thinking (of the object or phenomenon at hand), Hoffman et al. (1990) convincingly demonstrate the usefulness of metaphors in scientific inquiry. For the present context, metaphors and models are treated in the same section. The simplification and facilitation obtained through the use of metaphors and models is encapsulated in the fact that they supposedly highlight the most important characteristics of the phenomenon under investigation.
Metaphors, like models, attempt to portray or simulate reality while, by definition, not being identical with it, and hence can be considered another class of instances of the "as if" heuristic. The main difference is that for metaphor users, it is clear that the simplification of reality is false; the usefulness of acting "as if" is explained by the potential production of new insights. This is less clear in the case of models, because models that leave out irrelevant information may constitute a justified simplification and thus not necessarily count as a fiction. Put differently, metaphors refer to concepts people already possess, such as "love is like a flower," in which the fictitious nature is self-evident. Models are more abstract schemes, construed for identifying the core variables of a phenomenon, leaving minor factors aside, and demonstrating a particular mechanism or relation between constructs. The potential fictitious facet of a model is not contained in the disregarded variables; rather, it is the likelihood that by the end of the model construction process, researchers will draw unwarranted conclusions under the fictitious assumption that the model is complete, overlooking the neglected variables that are not included. Below, we examine a few examples that highlight the potential pros and cons embedded in the "as if" heuristic underlying models and metaphors.
One of the more frequently discussed metaphors in psychological science concerns the depiction of the mind "as if" it is a computer. Accordingly, the cognitive system has been (since the cognitive revolution of the 20th century) commonly referred to as an information processing system leading to the claim that neural computations can explain cognition (Piccinini & Bahar, 2013). Refraining from a detailed presentation of the controversial mind-computer issues (see Gigerenzer & Goldstein, 1996, for a review of the mind-computer metaphor), it may nevertheless be important in the present context to mention Searle's (1980) widely cited paper titled "Minds, Brains and Programs." In this paper, he distinguishes between what he referred to as "weak" and "strong" AI. Following the weak version, the computer is a powerful tool in assisting the study of the mind, yet it nevertheless contains the "as if" characteristics and resembles the mind only on some but not all dimensions. For instance, a computer or a computer program does not possess intentionality. In contrast, following strong AI, the computer is not just a research tool but "rather, the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states" (p. 418). Following strong AI, the computer program is not an "as if" fictitious mind. Rather, "programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations" (p. 418).
The controversy whether AI should be classified as weak or strong initiated a long and intense debate (e.g., Harnad, 1989). We only note that although computers did not yet exist at Vaihinger's time, his approach should most probably be classified under the weak AI approach. In Vaihinger's framework, the computer may be a useful fiction in studying the cognitive system (or the mind as he would refer to it) but it remains a fiction. Hence it should only be used provisionally as a mental scaffold and be removed at a later stage. In contrast, strong AI supporters assume that the computer and the mind are identical. Hence, following the strong version, AI should not be considered as a temporary cognitive stage but rather the end goal itself.
Another prevalent metaphor that witnessed an upsurge during the past decades suggests that the mind consists of two (supposedly) entirely different "systems" that govern reasoning (e.g., Evans & Stanovich, 2013; Kahneman, 2011; Sloman, 1996). One system is presumably fast and intuitive, usually referred to as System 1; the other is slow and deliberative, referred to as System 2. Despite their popularity, there have been growing criticisms of different aspects of two-system frameworks (e.g., Keren, 2013; Newstead, 2000). One major problem is that two-system researchers pretend "as if" a mental system is a well-defined construct, which is highly questionable. Moreover, it should be noticed that two-system theories often confuse system and process, using the two terms interchangeably even though they constitute different units of analysis (e.g., Keren & Schul, 2009; Schacter & Tulving, 1994). To use the computer analogy discussed above (even though we use this "as if" fictional metaphor, we are careful to use it in a constructive way, only for the purpose of elucidation): processes are comparable with software, whereas systems might be seen as the computer involving both software and hardware. A recent paper by Melnikoff and Bargh (2018) offers an extensive examination of the different criticisms and concludes that cognitive science would do better without the dual-system typology.
The point to be emphasized here is that readers often fail to notice the "as if" nature of two-system frameworks, interpreting the two systems as if they were two real, separate computer programs or, worse, as two different homunculi. Among the few who explicitly caution readers are Kahneman and Frederick (2002), who note that they use the term systems as a label for two (supposedly) distinguished sets of processes, and Kahneman (2011, pp. 28-29), who explicitly notes that the two systems are actually fictions (albeit useful ones that simplify the story, making it more comprehensible). Yet, even with these cautious remarks, the term system is so widespread (in an era where almost all technologies are associated in one way or another with systems) that it is difficult to accept that, as far as the mind is concerned, the system has an "as if" status. People often forget to recognize the provisional nature of the "as if" heuristic and assume that the two systems represent reality.
Metaphors may be useful in elucidating complex issues and amplifying certain aspects as long as the simile is not equated with reality. Indeed, several examples exist in which the metaphor is unlikely to be confused with reality. For instance, Eriksen and St. James (1986) proposed the metaphor of a zoom or variable-power lens as an analogy for visual selective attention. With a low zoom power, there is a wider field of view with little discrimination of detail. Increasing the lens power narrows the visual field of view while increasing the resolution for detail. Eriksen and St. James reviewed the relevant literature and reported two experiments, all of which were highly compatible with the variable-power lens metaphor. Conceiving attention as if it were a variable lens is helpful in constructing interesting research questions and offers insight into how attention may operate, yet it is extremely unlikely that people will equate the metaphor with reality.
As another example, Muraven and Baumeister (2000) proposed a theoretical framework in which self-control depends on limited resources and used a muscle's strength as a metaphor (i.e., self-control is like a muscle that gets tired). They conjectured that the resources required for self-control deplete much like a muscle's ability to work, and further that "like a muscle, repeated practice and rest can improve self-control strength in the long term" (p. 254). Because self-control is an abstract concept while a muscle is concrete, using the muscle "as if" it were like self-control may simplify things and enhance comprehension; yet, it is transparent that the similarity between the two is limited to certain dimensions and the two cannot simply be exchanged with one another. Even in such straightforward cases, one has to take care that the similarity is not taken beyond justifiable limits.
A final illustration is taken from the field of judgment and decision making. It adopted, as an analogy for a decision maker, the metaphor of a boundedly rational gambler (Simon, 1982, 1991), i.e., one who employs limited reasoning, engaged in selecting the most advantageous bet from a small currently available set. For instance, initial empirical attempts to measure utility (e.g., Coombs & Komorita, 1958; Mosteller & Nogee, 1951) used choice between monetary gambles as their main empirical tool. Eventually, choice between gambles was firmly established as the paradigmatic riddle of decision making. Pretending "as if" gambling offers an adequate analogy for decisions in general, certainly a highly questionable assumption, contains two essential virtues: First, it captures two of the main variables underlying the large majority of decisions, namely, the consequences reflected in worth or utility (leaving aside the assumption as if it is measurable) and uncertainty. Second, it lends itself easily to modeling in theories such as utility or prospect theory (Kahneman & Tversky, 1979).
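The modeling convenience the gamble affords can be stated compactly. In standard textbook notation (our rendering, not a formula appearing in this article), a gamble G yielding outcomes x_i with probabilities p_i is evaluated under expected utility theory and under prospect theory, respectively, as

```latex
\mathrm{EU}(G) = \sum_{i} p_i \, u(x_i),
\qquad
V(G) = \sum_{i} \pi(p_i) \, v(x_i),
```

where u is a utility function, v a value function defined over gains and losses relative to a reference point, and \pi a probability weighting function (Kahneman & Tversky, 1979). It is precisely this compactness that the gambler metaphor buys.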
One of the merits of Vaihinger's "as if" framework is an attempt to bolster the essential role of theory and model building in the process of the scientific inquiry. This is convincingly illustrated in the development of game theory by von Neumann and Morgenstern (1953), one of the most fruitful theories in the social sciences. These authors were aware of the conceptual and practical difficulties associated with utility which is a necessary construct in the development of their theory. Yet, for the purpose of their theory, they take what they call an opportunistic stand and use monetary terms (which can exactly be measured) "as if" they were utilities. Being aware of the fictitious assumptions they made, they added that "this preliminary stage is necessarily heuristic [italics in original], i.e. the phase of transition from unmathematical plausibility considerations to the formal procedure of mathematics" (p. 7).
The gambling paradigm and the metaphor of the decision maker as a gambler offer a useful simplification that facilitates modeling and the testing of hypotheses. However, the decision maker as a gambler remains a fiction. In particular, the gambling model neglects and suppresses some essential features of the decision process, like internal conflicts and their associated emotions, and it remains silent with regard to intangible outcomes. Except perhaps for gamblers in a casino, most daily decisions, important and less important ones alike, are essentially different, except for the uncertainty underlying almost any decision. To illustrate, a patient who has to decide whether to undertake an operation does not resemble a gambler in any way, and even her probability assessments are supposedly guided and (occasionally) biased by different considerations. The message to be taken from Vaihinger is not that the gambling paradigm is fundamentally invalid; it may still provide a useful benchmark for assessing the quality of decision making. Rather, it is to remind researchers that by the end of the day, the gambler's metaphor is fictitious, and thus one should be cautious in the interpretation of empirical results.
Quantitative Measurement and Analysis of Subjective Experiences
Another case where the "as if" heuristic is evident concerns the quantitative measurement of subjective experiences. Scales that require participants to indicate their experiences (e.g., feelings, emotions, attitudes, sensations, evaluations) on a number of verbal or numeric categories are commonplace in psychological inquiry. Subsequent analysis of the obtained data is performed by means of parametric analyses (e.g., t tests, ANOVA) which assume measurement at an interval or a ratio scale. However, the assumption that subjective ratings of this kind actually produce quantitative data is contentious and is usually not supported by tests. Put differently, the data are treated "as if" measured on an interval or ratio scale which often is simply a fiction. In developing the law of comparative judgment, Thurstone (1994) acknowledged that it was unclear whether the psychological continuum that is supposed to underlie the modeled judgments actually existed, which in the present terminology boils down to a fiction. Specifically, he noted that "the psychological scale is at best an artificial construct. If it has any physical reality we certainly have not the remotest idea what it may be like" (p. 266).
The issue of how, if at all, mental phenomena can be quantified and measured has long been debated, yet the outcomes of these debates have gone largely unnoticed in the psychological literature (Cliff, 1992). Several approaches have been proposed to tackle the issue, the most important being additive conjoint measurement (Krantz et al., 1971) and related approaches (e.g., Rasch, 1960). Irrespective of the extent to which these approaches can be considered successful, they are rarely used in contemporary studies. The quantitative measurement afforded by such scales appears to be simply assumed, without much further reflection or evidence that this assumption is warranted (Tafreshi et al., 2016). In other words, researchers act as if their scales measure subjective experiences at the interval or ratio level.
The way researchers use quantification of experiences is compatible with Vaihinger's proposal of how true fictions are used in the process of research yet are themselves omitted from the final result. Likewise, quantification is used in the process of coding and analyzing scale responses but is often omitted in the stage of interpretation. Most psychological studies do not use precise quantified hypotheses; mostly they are phrased in a directional or ordinal way (e.g., x will be higher in group A than in group B; y has an effect on z, even after controlling for q). Furthermore, the majority do not contain quantified conclusions from the data; they are also commonly phrased in nominal or ordinal fashion. For example, results are interpreted as significant or not. Effect sizes, which are quantitative, are also often interpreted at the last stage in J. Cohen's (1988) categorical terms of "small," "medium," and "large."
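That final re-categorization takes only a few lines to state. The sketch below (ours) applies J. Cohen's (1988) conventional cutoffs of 0.2, 0.5, and 0.8 for the standardized difference d; the "negligible" label for values below 0.2 is our own addition:

```python
# A quantitative effect size is mapped back onto ordinal verbal categories.

def cohen_label(d: float) -> str:
    d = abs(d)
    if d < 0.2:
        return "negligible"  # below Cohen's smallest cutoff: our addition
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

print(cohen_label(0.49), cohen_label(0.51))  # 'small' 'medium': a near-tie split apart
```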
In sum, although quantitative measurement of subjective experiences, especially by scales, may be seen as a fiction, it may nevertheless be useful in that results obtained with such measures may enhance theory building and in a practical sense improve predictions. Notwithstanding, it should be noted that creating results that are theoretically or practically useful does not constitute any proof for their measures truly being quantitative. As such, researchers should be cautious in the quantitative analysis and interpretation of their data or, ideally, provide evidence that quantification is warranted and not a fiction at all in their data.
Summary and Conclusion
A central feature encapsulated in the cognitive system is the search for simplicity, which can be attained by different means. A common simplification strategy is the use of a heuristic which serves as an aid for learning, discovery, and comprehension. This article introduced an overarching heuristic of cognition labeled "as if," pertaining to the tendency to portray the world in a simplified idealized manner. While often fictitious, it assists in enhancing comprehension and making sense of the world around us. In addition, and closely related, a prominent aspect of the usefulness of the "as if" heuristic is the facilitation it offers in terms of communication.
Like other types of simplification, it involves the omission of some information to reduce reality's complexity. Yet, unlike regular simplifications, in which omitting information that is not essential to the phenomenon can be acceptable, "as if" simplifications may involve fictions known to be false and hence cannot be justified from a rigorous perspective of a truthful representation of reality.
Fictions may be of two sorts: first, the omission of information known to be relevant to our understanding of reality (Vaihinger's "semi-fictions"), as is the case with Adam Smith's economic model on which we elaborated earlier. Kahneman (2003) explained that such assumptions, while questionable, are made because they serve the purpose of allowing economists to analyze and predict economic behavior. Thus, "whether or not psychologists find them odd and overly simple, the standard assumptions about the economic agent are in economic theory for a reason: they allow for tractable analysis" (p. 166). Indeed, as we noted, Smith's book The Theory of Moral Sentiments suggests that he was probably aware of the fictitious assumption he made when developing his economic model and made this assumption deliberately to be able to develop an amenable model.
The second sort concerns fictions known to be false, or at least ones that currently cannot be justified as true (Vaihinger's "true fictions"). For instance, Friedman's (1953) economic agent model employs a metaphor of an excellent billiard player whose shots can be perfectly predicted if one assumes that the player made his shots "as if" he perfectly knew the complicated mathematical formulas that determine the ball's trajectory. Following Friedman, the confidence in such an assumption is not based on our beliefs about the billiard player's abilities (and correspondingly, the economic agent whose capacities are assumed to be selfish, rational, and with unchanging preferences). Rather, following Friedman, neither the billiard player nor the economic agent would have been capable of reaching the outcomes unless they possessed the assumed capabilities. Besides this being a logically circular argument, it is important to note that Friedman's sole interest is in building a theory that would be capable of making the best predictions. Friedman's use of the "as if" heuristic, and correspondingly his employment of "true fictions," is primarily justified for improving predictions. In contrast, Vaihinger's "as if" methodology is aimed at explaining and understanding the world around us, thus creating cumulative knowledge.
As noted earlier, psychological concepts are frequently vague and poorly defined. Danziger (1997) provides a compelling analysis of constructs such as cognition, emotions, attitudes, intelligence, and so forth doubting their universal validity and claiming that "psychological theory operates on the basis of some pre-understanding of that which it is a theory of" (p. 6). In the present terminology, Danziger asserts that psychologists use the "as if" heuristic without actually acknowledging it. Given the fuzziness of psychological constructs and the ensuing need for defining them, one may adopt one of two potential approaches, a pragmatic and an ontological one. Specifically, the "as if" heuristic can be seen as a bridge between more pragmatic views of psychological science (i.e., finding out what works) and more ontological ones (i.e., attempting to capture reality and finding out the true nature of psychological constructs and processes). This bridge is in particular important in the field of psychology, which in its relatively short but productive history has seen several shifts in the way that mental processes have been conceived and studied (e.g., introspection, behaviorism, cognitivism, psychoanalysis).
The difference between the pragmatic and ontological perspective can be illustrated by analyzing the intelligence construct. Following a pragmatic, operationalist approach, intelligence is conceived and measured in terms of IQ tests. Indeed, such an approach-positing "as if" we know what intelligence is-may yield useful and practical knowledge for predicting performance on different cognitive tasks. Alternatively, one may prefer a more ontological approach aiming for a better understanding of the construct of intelligence by developing alternative models of what intelligence consists of and how it operates (e.g., a latent model, a network model). Under such an approach, hypotheses can be derived and tested by an "if-then" method, namely, if the hypothesis is correct then we should expect a certain pattern of results.
Note that the pragmatic approach directly relates to the "as if" heuristic-its test is not in terms of truth but rather in terms of usefulness. The ontological approach is more geared toward an if-then test which is assessed by tests of truth. The heuristic's efficacy is also apparent in the process of evaluating a model and the assessment of its value. A model is an abstraction of the world, a process which requires some loss of details and hence never strictly captures reality perfectly. However, models serve as assertions of a working hypothesis (Hogarth, 1986), and useful models should supposedly encapsulate the main variables that affect the phenomena. Yet, even successful models (in terms of their efficacy of predictions) account only for a relatively small part of the variance. They remain in some respect incomplete to the extent that researchers draw broad conclusions while overlooking the size of the unexplained component.
The idea that fictions may be useful in the process of scientific inquiry may sound anomalous. How could the explicit embracement of untruths lead to finding out truths? One has to keep in mind that the fictional stage is supposed to serve as a temporary aid and is not the final goal. In other words, the fiction is assumed to disappear as scientific understanding progresses. Three potential pitfalls underlie the use of the "as if" heuristic. First is ignorance of, or disregard for, the fictitious nature of the "as if" heuristic. A main purpose of this article is to remind researchers to keep the heuristic's use in mind. Second, the usefulness of fictions should not be taken as a proof of their veracity (it works, therefore it is true). Third, the temporary advantage of fictions should not be negated by failing to remove them (whenever possible) by the end of the process.
The implicit assumption of the heuristic, namely that our research is based on provisional assumptions, is concomitant with three options: First, in the case of Vaihinger's true fictions, it may turn out that the assumptions can be omitted from future theorizing without loss of accuracy. For instance, the construct of ether in physics was assumed for a long time to be the medium for the propagation of light.
As insight into electromagnetism advanced, the concept was simply dropped. Similarly, psychologists may conclude in the future that the "two system" notion is superfluous for understanding cognition and reasoning processes. Second, concerning Vaihinger's semi-fictions, existing assumptions may in the future be replaced by more accurate and proper assumptions. Finally, our understanding may reach a point at which specific assumptions may be converted into precise and testable hypotheses. For example, advanced tools for the measurement of feelings may enable data gathering that satisfies the axiomatic tests of quantification laid out by additive conjoint measurement, allowing for enhanced quantification of feelings.
It was proposed that the "as if" construct may be conceived as resembling the notion of ecological validity. A major difference between the two concepts is that while ecological validity evaluates the correspondence between a simulated experiment and the real world, the construct of "as if" refers to the correspondence between the mind's internal representation and reality. The notion of ecological validity can also be conceived as yet another demonstration of the "as if" heuristic. Specifically, researchers are inclined to interpret the results of artificially designed experiments as if they genuinely and indisputably represent the real world, consequently abolishing any considerations of ecological validity.
One central question is whether the "as if" heuristic operates deliberately or mainly as an automatic process. The answer to this question depends on the perspective one takes. As illustrated throughout this article, several researchers deliberately used a fiction in the development of their model or theory because they found it useful for furthering our understanding of human behavior. However, under many other circumstances, the usage of the "as if" heuristic is likely to be an unconscious process. In that respect, it resembles the heuristics that have been prevalent in both the psychology of reasoning (Evans, 2006; Newell & Simon, 1972) and the psychology of decision making (e.g., Frederick, 2002; Gigerenzer et al., 1999; Kahneman et al., 1982; Keren & Teigen, 2004). These heuristics concern mental shortcuts based on simple and efficient algorithms or rules that frequently work well yet can also lead to systematic deviations (biases), and they are presumably employed automatically.
The importance of acknowledging the "as if" heuristic is to remind researchers of the boundaries that the heuristic's use imposes, and accordingly be cautious in their inferences and conclusions. For instance, we noted that sometimes the "as if" facet is masked as in the "two system" theoretical framework where it is essential that the researcher should explicitly state the use of the heuristic to avoid misinterpretation and miscommunication. In other circumstances, such as comparing self-control operations with that of a muscle (Muraven & Baumeister, 2000), the employment of the "as if" heuristic is self-evident.
Another illustration in which the heuristic is overlooked is associated with the seminal work of Kahneman and Tversky on heuristics and biases. While their work was original and offered insights into the cognitive pitfalls of reasoning under uncertainty, it has often been interpreted as if it demonstrated failures of rationality. Such an elucidation is problematic because it is conceived "as if" the construct of rationality has only one interpretation and is presumably well-defined. It is beyond the scope of this article to analyze the rationality debate, except for noting that almost all researchers who contribute to this debate have ignored the "as if" characteristic underlying the rationality construct. L. J. Cohen (1981) correctly claimed that "the actual interpretation of experimental data is bound to be affected by the resolution of certain fundamental issues about the normative criteria for rationality" (p. 361).
The use of heuristics, "as if" not excluded, has its pros and cons. Newell and Simon (1972), who pioneered the use of heuristics for problem solving, proposed that complex problems are solved by the use of heuristics that are fairly efficient. Yet they explicitly note that the use of such heuristic does not guarantee perfect solutions. The originators of the heuristics research program in reasoning and decision making (e.g., have accentuated the undesirable consequences of heuristics, in particular, the biases and potential errors associated with their use. Others (e.g., Gigerenzer, 2008;Gigerenzer et al., 1999) have noted the constructive and adaptive facets resulting from the use of heuristics. Simplification, given human limited processing capacity and restricted memory, is often a compelling necessity. The use of heuristics is an efficient way of achieving simplification yet it carries a price. The purpose of this article was not just to introduce the "as if" heuristic, but mainly to make researchers aware of its use being attentive to both its benefits as well as its potential pitfalls.
The "as if" heuristic is indeed a broad concept encompassing simplifications of different kinds. One may wonder whether such a construct is needed for examining such a wide-ranging set of phenomena. We propose that the "as if" construct serves as a unifying function to remind researchers to explicitly state their assumptions and boundary conditions. Overlooking the "as if" heuristic may lead researchers to unwarranted conclusions that go beyond the real given evidence. The usefulness of the heuristic lies in affording a lucid and an eloquent comprehension of the world that can be easily articulated in communication with others.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 2020-07-30T02:02:14.650Z | 2020-07-29T00:00:00.000 | {
"year": 2020,
"sha1": "77a40e2fa5df7b35af2a339a4ff195e5b51943a9",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1089268020943860",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "9aa0c4846d4fdc88d096ff698881aedbc0b084e6",
"s2fieldsofstudy": [
"Psychology",
"Philosophy"
],
"extfieldsofstudy": [
"Psychology"
]
} |
126366405 | pes2o/s2orc | v3-fos-license
Norm estimations, continuity, and compactness for Khatri-Rao products of Hilbert space operators
We provide estimations for the operator norm, the trace norm, and the Hilbert-Schmidt norm for Khatri-Rao products of Hilbert space operators. It follows that the Khatri-Rao product is continuous on norm ideals of compact operators equipped with the topologies induced by such norms. Moreover, if two operators are represented by block matrices in which each block is nonzero, then their Khatri-Rao product is compact if and only if both operators are compact. The Khatri-Rao product of two operators is trace-class (Hilbert-Schmidt class) if and only if each operator is trace-class (Hilbert-Schmidt class, respectively).
INTRODUCTION
Matrices and operators are fundamental tools in mathematics and related fields from the viewpoints of theory, computation, and applications. A variety of ways to multiply matrices has been investigated in the literature, for instance, the Kronecker product and the Khatri-Rao product. Denote by $M_{m,n}$ the set of m-by-n complex matrices and abbreviate $M_{n,n}$ to $M_n$. Recall that the Kronecker product of $A = [a_{ij}] \in M_{m,n}$ and $B \in M_{p,q}$ is given by $A \otimes B = [a_{ij} B]$.
The notion of the Kronecker product was generalized to the Khatri-Rao product as follows. Consider two complex matrices $A$ and $B$ partitioned into blocks so that their $(i, j)$-th blocks are given by $A_{ij}$ and $B_{ij}$, respectively (the block partitions of $A$ and $B$ may be different). Then the Khatri-Rao product [1] of $A$ and $B$ is defined by $A \ast B = [A_{ij} \otimes B_{ij}]$. (1) When $A$ and $B$ consist of only one block, their Khatri-Rao product is just their Kronecker product. See more information in [2][3][4][5][6] and references therein.
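To make definition (1) concrete, here is a minimal NumPy sketch (ours, not from the paper); the helper name khatri_rao and the 2-by-2 block partitions are illustrative assumptions, and the partitions need only be conformal so that the blocks assemble into one matrix:

```python
# Blockwise Kronecker products assembled into one matrix: A * B = [A_ij (x) B_ij].
import numpy as np

def khatri_rao(A_blocks, B_blocks):
    """A_blocks, B_blocks: nested lists holding the (i, j)-th blocks."""
    return np.block([[np.kron(Aij, Bij) for Aij, Bij in zip(Arow, Brow)]
                     for Arow, Brow in zip(A_blocks, B_blocks)])

rng = np.random.default_rng(0)
A = [[rng.standard_normal((2, 2)) for _ in range(2)] for _ in range(2)]
B = [[rng.standard_normal((1, 1)), rng.standard_normal((1, 2))],
     [rng.standard_normal((2, 1)), rng.standard_normal((2, 2))]]
print(khatri_rao(A, B).shape)  # (6, 6): blocks of different sizes are allowed

# With single blocks the product reduces to the plain Kronecker product.
print(np.allclose(khatri_rao([[A[0][0]]], [[B[0][0]]]),
                  np.kron(A[0][0], B[0][0])))  # True
```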
The notion of the Kronecker product of matrices is extended to the tensor product of operators on a Hilbert space. Certain algebraic, order, and analytic properties of the tensor product of operators have been established; see [7][8][9]. In [10][11], the authors extended the notion of the tensor product of operators to the Tracy-Singh product of operators. Recently, in [12][13], the authors introduced the Khatri-Rao product and the Khatri-Rao sum for operators acting on the direct sum of Hilbert spaces. This construction provides a natural extension of both the Khatri-Rao product/sum for matrices and the tensor product/sum of operators (see Section 2 for details). Fundamental algebraic, order, and structure properties of the Khatri-Rao product were investigated in [12].
In this paper, we continue developing this theory by discussing analytic properties of Khatri-Rao products for bounded linear operators acting on a Hilbert space. Under the assumption that two operators are represented by block matrices in which each block is nonzero, we will show that their Khatri-Rao product is compact if and only if both factors are compact. We provide norm bounds for the operator norm and the Schatten p-norms for $p = 1, 2, \infty$. Then we show that the Khatri-Rao product is (jointly) continuous with respect to the topologies induced by such norms. The norm bounds imply that the Khatri-Rao product of two operators is trace-class (Hilbert-Schmidt class) if and only if each operator is trace-class (Hilbert-Schmidt class, respectively).
This paper is organized as follows. In Section 2, we explain the notion of the Khatri-Rao product for operators. In Section 3, we establish analytic properties involving norm bounds, continuity, convergence, and compactness of the Khatri-Rao product of operators in the operator-norm topology. The last section discusses the same properties of the Khatri-Rao product on norm ideals of compact operators.
PRELIMINARIES ON KHATRI-RAO PRODUCTS FOR OPERATORS
Throughout this paper, let $H$, $H'$, $K$ and $K'$ be complex Hilbert spaces. When $X$ and $Y$ are Hilbert spaces, let $\mathcal{B}(X, Y)$ stand for the Banach space of all bounded linear operators from $X$ into $Y$, equipped with the operator norm $\|\cdot\|$; we abbreviate $\mathcal{B}(X)$ for $\mathcal{B}(X, X)$. For $A \in \mathcal{B}(H, H')$ and $B \in \mathcal{B}(K, K')$, the tensor product $A \otimes B$ is the bounded linear operator determined by $(A \otimes B)(x \otimes y) = Ax \otimes By$ for all $x \in H$ and $y \in K$. The tensor product is bilinear and continuous with respect to the topology induced by the operator norm.
From now on, fix the following orthogonal decompositions: $H = \bigoplus_{i=1}^{m} H_i$ and $K = \bigoplus_{j=1}^{n} K_j$, with the canonical embeddings $x \mapsto (0, \dots, 0, x, 0, \dots, 0)$ ($x$ is in the $i$-th position) and $y \mapsto (0, \dots, 0, y, 0, \dots, 0)$ ($y$ is in the $j$-th position). With respect to such decompositions, every bounded linear operator is represented as an operator matrix $A = [A_{ij}]$, where $A_{ij}$ denotes its $(i, j)$-th block. We can perform the addition, the scalar multiplication, the adjoint operation, and the usual multiplication of operator matrices in a similar way to those of matrices. We define the Khatri-Rao product of $A$ and $B$ to be the bounded linear operator $A \ast B = [A_{ij} \otimes B_{ij}]$ acting between the corresponding direct sums of tensor product spaces. When the decompositions are trivial, the Khatri-Rao product $A \ast B$ is reduced to the tensor product $A \otimes B$.
Lemma 1 ([12]). The Khatri-Rao product of operators is bilinear. More precisely, for any compatible operators $A$, $B$, $C$ and for any scalar $\alpha$, $(A + B) \ast C = A \ast C + B \ast C$, $A \ast (B + C) = A \ast B + A \ast C$, and $(\alpha A) \ast B = \alpha (A \ast B) = A \ast (\alpha B)$.
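Reusing the khatri_rao helper from the sketch above, the bilinearity asserted in the lemma can be spot-checked numerically on random blocks (a sanity check, not a proof; all block sizes are illustrative):

```python
# Numerical check of (A + B) * C = A * C + B * C and (a A) * B = a (A * B),
# assuming the khatri_rao helper defined in the previous sketch.
import numpy as np

rng = np.random.default_rng(1)
blocks = lambda r, c: [[rng.standard_normal((r, c)) for _ in range(2)]
                       for _ in range(2)]
A, B = blocks(2, 2), blocks(2, 2)  # matching block sizes so that A + B makes sense
C = blocks(3, 3)
add = lambda X, Y: [[x + y for x, y in zip(xr, yr)] for xr, yr in zip(X, Y)]

print(np.allclose(khatri_rao(add(A, B), C),
                  khatri_rao(A, C) + khatri_rao(B, C)))           # True
print(np.allclose(khatri_rao([[2.0 * a for a in row] for row in A], C),
                  2.0 * khatri_rao(A, C)))                         # True
```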
NORM BOUNDS, CONTINUITY, CONVERGENCE AND COMPACTNESS OF KHATRI-RAO PRODUCTS IN THE OPERATOR-NORM TOPOLOGY
In this section, we discuss norm estimation, continuity, convergence, and compactness of the Khatri-Rao product of operators with respect to the topology induced by the operator norm.
Recall the following bounds for the operator norm of operator matrices.
The next theorem provides an upper estimate for the operator norm of the Khatri-Rao product; such a bound depends on the number of blocks in the representation (2), and combining the blockwise estimates yields the bound (5). It follows that the map $(A, B) \mapsto A \ast B$ is (sequentially) continuous with respect to the operator-norm topology. Recall that a linear operator $T : H \to H$ is said to be compact if and only if it can be written in the form $T = \sum_{n=1}^{\infty} s_n \langle \cdot, x_n \rangle y_n$, where $\{x_n\}$ and $\{y_n\}$ are orthonormal sets in $H$ and $(s_n)$ is a sequence of positive real numbers with limit zero, called the singular values of $T$. The zero operator is an example of a compact operator. Every finite-rank operator (between infinite- or finite-dimensional spaces) is compact. Every compact operator is always bounded and continuous with respect to the operator-norm topology.
KHATRI-RAO PRODUCTS ON NORM IDEALS OF COMPACT OPERATORS
In this section, we investigate the Khatri-Rao product on several norm ideals of compact operators. In order to proceed, some auxiliary results are needed.
The following result asserts that the Khatri-Rao product is continuous on the ideal of compact operators $S_\infty$.
Theorem 11. If $A$ and $B$ are compact, then we have the following bound. Proof. We may suppose that $A_{ij}$ and $B_{ij}$ are nonzero for all $i, j$.
CONCLUSION
We have provided a necessary and sufficient condition for the Khatri-Rao product of operators to be compact. Indeed, for two operators in which every block is nonzero, their Khatri-Rao product is compact if and only if both factors are compact. We have also established estimations for the operator norm, the trace norm, and the Hilbert-Schmidt norm for Khatri-Rao products of Hilbert space operators. The Khatri-Rao product is continuous with respect to the topologies induced by such norms. It follows that the Khatri-Rao product of two operators is trace-class (Hilbert-Schmidt class) if and only if each operator is trace-class (Hilbert-Schmidt class, respectively).
Theorem 3. For any operator matrices, the following operator-norm estimate holds. Since each $(i, j)$-th block of $A \ast B$ is given by $A_{ij} \otimes B_{ij}$, it follows from Lemma 2 and the Cauchy-Schwarz inequality that the required estimate holds; consequently, the Khatri-Rao product is continuous with respect to the operator-norm topology. Next, we consider the compactness of the Khatri-Rao product of two operators.
Recall that any proper ideal of the algebra $\mathcal{B}(H)$ is contained in the ideal of compact operators. For any compact operator $A$ and $p > 0$, $|A|^p$ is defined by the functional calculus. If $\|A\|_p = (\operatorname{tr} |A|^p)^{1/p}$ is a (finite) nonnegative real number, then $A$ is called a Schatten $p$-class operator. The Schatten $\infty$-norm is just the operator norm. For each $1 \leq p \leq \infty$, denote by $S_p$ the Schatten $p$-class operators. In particular, $S_1$ and $S_2$ are known as the trace class and the Hilbert-Schmidt class, respectively. Each Schatten $p$-norm induces a norm ideal of $\mathcal{B}(H)$, and this ideal is closed under the topology generated by such a norm.
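In the finite-dimensional case these norms can be computed directly from singular values, and for single-block (i.e., Kronecker) products every Schatten p-norm is multiplicative, since the singular values of $A \otimes B$ are the pairwise products of those of $A$ and $B$. A brief numerical sketch (matrix sizes are our illustrative choice):

```python
# Schatten p-norm of a matrix via its singular values.
import numpy as np

def schatten(A, p):
    """||A||_p = (sum_n s_n^p)^(1/p); for p = infinity this is the operator norm."""
    s = np.linalg.svd(A, compute_uv=False)
    return float(s.max()) if np.isinf(p) else float((s ** p).sum() ** (1.0 / p))

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((2, 2))
for p in (1, 2, np.inf):  # trace norm, Hilbert-Schmidt norm, operator norm
    print(p, np.isclose(schatten(np.kron(A, B), p),
                        schatten(A, p) * schatten(B, p)))  # all True
```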
Lemma 10. Let $1 \leq p \leq \infty$. An operator matrix $A = [A_{ij}]$ is a Schatten $p$-class operator if and only if $A_{ij}$ is a Schatten $p$-class operator for all $i, j$. Proof. This is a direct consequence of the norm estimations in Lemma 8. It then follows from Theorem 4, Theorem 7, and the fact that $S_\infty$ is a closed set. The following theorem supplies an upper bound for the Schatten 1-norm of the Khatri-Rao product of operators.
$A \ast B$ is compact by Theorem 7. By Lemma 8(ii) and the Cauchy-Schwarz inequality, we obtain the bound (9). The final result states that the Khatri-Rao product is sequentially continuous on the norm ideal of trace-class operators and on the norm ideal of Hilbert-Schmidt class operators.
Suppose that $A_{ij}$ and $B_{ij}$ are nonzero operators for all $i, j$. Then the Khatri-Rao product satisfies the stated estimate; hence, we obtain the bound (8). If there is a zero block $A_{ij}$ or $B_{ij}$, then the corresponding block $A_{ij} \otimes B_{ij} = 0$. | 2019-04-22T13:13:10.093Z | 2018-12-16T00:00:00.000 | {
"year": 2018,
"sha1": "c971ef489aa3335064465d9fea407104aef958fb",
"oa_license": "CCBYNC",
"oa_url": "https://mjfas.utm.my/index.php/mjfas/article/download/881/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c971ef489aa3335064465d9fea407104aef958fb",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
49867505 | pes2o/s2orc | v3-fos-license | Hyperthermia induces therapeutic effectiveness and potentiates adjuvant therapy with non-targeted and targeted drugs in an in vitro model of human malignant melanoma
In the present study, we have aimed to characterize the intrinsic, extrinsic and ER-mediated apoptotic induction by hyperthermia in an in vitro model of human malignant melanoma and furthermore, to evaluate its therapeutic effectiveness in an adjuvant therapeutic setting characterized by combinational treatments with non-targeted (Dacarbazine & Temozolomide) and targeted (Dabrafenib & Vemurafenib) drugs. Overall, our data showed that both low (43 °C) and high (45 °C) hyperthermic exposures were capable of inducing cell death by activating all apoptotic pathways but in a rather distinct manner. More specifically, low hyperthermia induced extrinsic and intrinsic apoptotic pathways both of which activated caspase 6 only as opposed to high hyperthermia which was mediated by the combined effects of caspases 3, 7 and 6. Furthermore, significant involvement of the ER was evident (under both hyperthermic conditions) suggesting its role in regulating apoptosis via activation of CHOP. Our data revealed that while low hyperthermia activated IRE-1 and ATF6 only, high hyperthermia induced activation of PERK as well suggesting that ultimately these ER stress sensors can lead to the induction of CHOP via different pathways of transmitted signals. Finally, combinational treatment protocols revealed an effect of hyperthermia in potentiating the therapeutic effectiveness of non-targeted as well as targeted drugs utilized in the clinical setting. Overall, our findings support evidence into hyperthermia’s therapeutic potential in treating human malignant melanoma by elucidating the underlying mechanisms of its complex apoptotic induction.
Malignant melanoma is known to be the most aggressive form of skin cancer and one of the most lethal solid tumor types, with its incidence rates increasing globally over the past few decades, rendering the disease the 5th most common type of cancer in the UK 1 . Hyperthermia is defined as the application of an exogenous heat source which acts by directly killing tumor cells or enhancing the efficacy of other therapeutic means (e.g. radiation, chemotherapy, etc.) against various cancer types 2,3 . The latest technological advances have allowed the more accurate and efficient application of hyperthermia in the tumor site as well as precise temperature monitoring, all of which have resulted in promising clinical outcomes in a wide range of cancer types 4 .
Results from numerous in vitro and in vivo studies have identified apoptosis as the key underlying pathway responsible for the induction of cell death as a response to hyperthermic treatments [5][6][7] . In general, apoptosis involves the induction of the extrinsic and intrinsic pathways, whose activation depends on distinct signals 8 . Evidence by other groups has implicated the activation of both apoptotic pathways (in response to hyperthermia), the extent of which is dependent on the cancer type, temperature and duration of exposure 9 . In addition, the activation of an ER-mediated non-conventional apoptotic pathway has been documented in a study utilizing melanoma and non-melanoma cell lines 10 . Finally, although many studies have demonstrated the involvement of apoptosis in hyperthermia-induced cell death (in various cancer types), there is limited data pertaining to the elucidation of its underlying mechanism(s) in human malignant melanoma. Thus, the aim of this study was to delineate the underlying mechanism(s) of hyperthermia's effectiveness in inducing apoptosis, and furthermore to potentiate the action of clinically relevant non-targeted and targeted drugs in an in vitro model of human malignant melanoma. Consequently, our objectives were to (i) develop an optimized experimental platform of hyperthermic exposures by utilizing a validated model of human malignant melanoma, (ii) determine the mode of apoptotic induction and the role of the ER-stress response in relation to the duration and intensity of the hyperthermic exposures and (iii) evaluate the role of hyperthermia in potentiating the therapeutic efficacy of clinically-relevant non-targeted and targeted drugs. The latter is of paramount importance given that the disease is a highly aggressive and metastatic type of skin cancer which, despite recent improvements in treatment options, remains an incurable disease with a poor prognosis and an unmet need for more efficient treatments.
Results
Development of an experimental hyperthermic platform. In this set of experiments, we determined the optimal conditions of hyperthermic exposures by utilizing the human malignant melanoma (A375) and epidermoid carcinoma (A431) cell lines. Several temperature-response and time-course experiments were performed, with cell viability levels assayed immediately after the 2 h hyperthermic exposure as well as after 24 h post-exposure at 37 °C (Fig. 1A,B). Data showed that exposing cells to temperatures lower than 43 °C did not induce a significant effect on viability levels in either cell line. However, when cells were exposed to temperatures higher than 43 °C, there was a significant reduction in viability, observed to a greater extent in A375 cells only. Furthermore, a significant decline in viability was recorded, in both cell lines, at temperatures above 45 °C, suggesting excessive cellular destruction (Fig. 1A,B). To these ends, when cells were exposed at 43 °C over shorter time courses (30-60 min) there was no significant reduction in viability levels (Fig. 1C,D), whereas exposure of both cell lines at 45 °C caused a considerable decline in the numbers of living cells (Fig. 1E,F). More specifically, our data showed that there was a 15% and 25% reduction in cell viability 24 h post-exposure to 43 °C (Fig. 1C,D), with reductions reaching 60% and 40% at 45 °C (Fig. 1E,F), in A431 and A375 cells respectively.
In another set of experiments, cells were exposed to either 43 °C or 45 °C over 2 h and cell viability was determined following 24-72 h post-incubation at 37 °C in order to determine any further and more prolonged decrease in cell viability. A non-malignant immortalized keratinocyte (HaCaT) cell line was included in an attempt to determine the safety profile of the hyperthermic exposures on the rationale that keratinocytes are the cells surrounding melanocytes and so were used as a control group. Results confirmed our previous observations in that A375 cells were more sensitive to 43 °C (as there was a 30-40% decline in cell viability levels at 24-72 h post-exposure) while A431 cells were more resistant (Fig. 1G). Moreover, exposure at 45 °C induced an even more profound decrease (70-90%) in the viability of A375 cells. In agreement with our previous observations, A431 cells remained more resistant at 24 h post-exposure but this effect was not seen at 48-72 h suggesting that at these time points the hyperthermic effect was equally cytotoxic in both cell lines (Fig. 1H). On the contrary, HaCaT cells were significantly more resistant to exposure with either 43 °C (Fig. 1G) or 45 °C (Fig. 1H), irrespectively of the experimental condition, suggesting that these cells can retain their tolerance to increased temperatures as opposed to A375 and A431 cells.
To examine further the impact of hyperthermia in triggering cytotoxicity, relative levels of dead cells were determined by utilizing the CytoTox Fluor assay and trypan blue staining protocols. According to our findings, there was a significant increase in cytotoxicity levels in A375 compared to the HaCaT cells when exposed at both 43 °C ( Fig. 2A) and 45 °C (Fig. 2B) either immediately after exposure or 6-24 h post-exposure. In addition, when utilizing a trypan-blue staining method, data revealed that A375 cells exposed to 43 °C showed reduced proliferating potential compared to 37 °C (at 24 h post-exposure) while there was no significant change in the levels of cytotoxicity (dead cells) (Fig. 2C,D). However, exposure at 45 °C was associated with a slight increase in the levels of dead cells immediately after exposure an effect which became more apparent at 24 h post-exposure (Fig. 2D).
Hyperthermia induces apoptosis in human malignant melanoma (A375) cells.
In an attempt to investigate the effect of hyperthermia in inducing changes in the expression of key apoptotic genes, we utilized a genomic approach based on a real-time PCR microarray gene expression profiling system. Our data showed that there were several differences in the induction of various apoptotic genes 24 h post-exposure to 43 °C and 45 °C. A number of intrinsic apoptotic genes (Fig. 3B) were found to be either up-regulated (e.g. APAF1, BAK1, BAX, BBC3, BCL2L11, CASP9, PMAIP1) or down-regulated (e.g. BCL2, VDAC3) (Table 1). Of these, only BAK1, BBC3, CASP9 and PMAIP1 were common between the two hyperthermic temperatures, with BAX, BCL2, VDAC and APAF1, BCL2L11 being exclusively involved at 43 °C and 45 °C respectively (Fig. 3A). On the other hand, a number of extrinsic apoptotic genes (Fig. 3B) were all shown to be up-regulated (e.g. FAS, FASLG, BIRC2, TNFRSF10, TNFSF10 and TRADD) (Table 1). However, their up-regulation was either common between the two hyperthermic temperatures (e.g. FAS, FASLG, BIRC2, TNFSF10) or restricted to either 43 °C (e.g. TRADD) or 45 °C (e.g. TNFRSF10) (Fig. 3A). Finally, a number of genes was shown to be involved in the p53-dependent apoptotic pathway. Furthermore, we profiled the response of various caspases by utilizing western blotting assays. More specifically, initiator caspases-8 and -9 showed identical patterns of expression whereby they were activated immediately after as well as up to 8 h post-exposure at both hyperthermic temperatures. At longer post-exposure incubation periods (24-72 h), they were not shown to be activated, except at 45 °C where they remained active even at 24 h (Fig. 4A). Moreover, we tested the activation of executioner caspase-6 by determining its protein levels as well as those of its target protein, lamin A/C. Data demonstrated a significant reduction in its protein expression levels at 43 °C (up to 4 h post-exposure), whereas it remained consistently active up to 72 h post-exposure at 45 °C (Fig. 4B). The same pattern was observed when the uncleaved form of lamin A/C was assayed, confirming the results obtained with caspase-6 (Fig. 4B). In the case of the executioner caspase-7, it was also found to be consistently activated immediately after exposure to 45 °C as well as after 2-72 h post-exposure, without any significant activation observed at 43 °C (Fig. 4C). Data also revealed that in the case of the executioner caspase-3, its cleaved and un-cleaved protein expression levels were not changed immediately after hyperthermic exposures nor at any time point up to 24 h post-exposure. However, from this time point onwards its cleaved form became evident, only at 45 °C, suggesting its activation at this hyperthermic condition (Fig. 4C). In agreement with these observations, poly ADP ribose polymerase (PARP) was also shown to remain unaffected up to 24 h post-exposure to 43 °C, while it remained cleaved at every other time point post-exposure to 45 °C (Fig. 4D). In an attempt to characterize, in more detail, the involvement of the death receptor apoptotic pathway in response to hyperthermia, we examined changes in protein expression levels of three different death receptor molecules. According to our results, TNFR1 and TRADD presented a similar pattern of expression whereby there was a reduction in their protein content up to 24 h post-exposure to 43 °C.
In an attempt to characterize, in more detail, the involvement of the death receptor apoptotic pathway in response to hyperthermia, we examined changes in protein expression levels of three different death receptor molecules. According to our results, TNFR1 and TRADD presented a similar pattern of expression whereby there was a reduction in their protein content up to 24 h post-exposure to 43 °C. Next, to assess the involvement of the ER stress response, we examined changes in protein content of Grp78/BiP, a chaperone protein induced by irregular protein folding and known to bind to stress response proteins like PERK, IRE-1a and ATF-6 under normal conditions; upon induction of ER stress, however, it dissociates from them and allows activation of their respective UPR pathways. According to our results, there was a significant increase in Grp78 protein expression levels up until 8 h post-exposure to 43 °C and 24-48 h post-exposure to 45 °C (Fig. 5). Furthermore, data showed a reduction in PERK protein levels up to 24 h post-exposure at 45 °C, whereas there were no alterations in its protein content at any time point post-exposure to 43 °C (Fig. 5). Moreover, our findings demonstrated that IRE-1a and ATF-6 followed a similar pattern of expression characterized by a decrease in protein content up to 8 h post-exposure to both hyperthermic temperatures, with such decline being maintained at longer post-exposure incubation periods (24-48 h) but only in the case of 45 °C (Fig. 5).
We also examined the protein expression of XBP-1s (the downstream target of IRE-1a) which was found to be induced immediately after exposure as well as 2 h and 4-8 h post exposure to 43 °C and 45 °C respectively. On the contrary, its protein levels were completely undetected at any time point after 8 h post exposure to both hyperthermic conditions (Fig. 5). Finally, our data revealed a significant alteration in the protein expression levels of CHOP (a major regulator of the ER-stress response), immediately after and 2-8 h post exposure to 43 °C whereas its induction became evident only after 24 h post exposure to 45 °C (Fig. 5).
Hyperthermia activates the heat shock response in human malignant melanoma (A375) cells.
In an attempt to monitor the effect of hyperthermia on the heat shock response, we determined alterations in the expression of various protein regulators. Overall, there was a reduction in the protein content of the transcription factor HSF1 immediately after and up to 4 h post-exposure to 43 °C, while this trend continued thereafter (2-72 h) but only at 45 °C (Fig. 6). In contrast, the expression levels of HSP 90 increased 4-48 h post-exposure to 45 °C whereas they remained at control levels at 43 °C (Fig. 6). Furthermore, HSPs 40 and 70 exhibited a similar pattern of expression in a manner where their protein contents were elevated immediately after and up to 24-48 h at both hyperthermic conditions (Fig. 6). Finally, the expression of HSP 60 was elevated 2-24 h post-exposure to both hyperthermic temperatures and 24-72 h post-exposure to 45 °C only (Fig. 6).
Hyperthermia potentiates the effectiveness of non-targeted and targeted therapeutic drugs in human malignant melanoma (A375) cells. In order to investigate if hyperthermia potentiates the therapeutic effectiveness of drugs currently used in the clinical setting, we utilized two chemotherapeutic agents (Dacarbazine and Temozolomide; non-targeted agents) and two inhibitors of B-Raf V600E (Dabrafenib and Vemurafenib; targeted agents) in combinational treatment protocols along with hyperthermia at 43 °C. Results showed that exposing cells to either Dacarbazine alone or in combination with hyperthermia had a significant additive effect on reducing cell viability at 48-72 h post-exposure, while at 24 h there appeared to be no significant changes with any of the treatment protocols (Fig. 7A-C). In the case of Dabrafenib, the therapeutic protocol alone (i.e. drug at 37 °C and 43 °C) did not induce a consistent pattern of reduced cell viability in accordance with the range of concentrations tested at each one of the indicated post-exposure time points (Fig. 7G-I). In addition, when hyperthermia was combined with Vemurafenib treatment there was also an observed potentiation in reducing cell viability levels at 24-72 h post-exposure (Fig. 7J-L). Finally, it is noteworthy that although a similar pattern of potentiation was observed between the two targeted drug agents, it occurred at substantially different concentration ranges, in a manner where those of Vemurafenib were 100-fold higher than the corresponding Dabrafenib ones. Collectively, our data indicate a potential role of hyperthermia in enhancing the therapeutic effectiveness of non-targeted and targeted therapeutic drugs used in the clinical setting in the context of disease management.
(Figure 6 caption: Hyperthermia-induced regulation of heat shock proteins in human malignant melanoma (A375) cells. The effect of hyperthermia on protein content of HSF1, HSPs 90, 70, 60 and 40. Cells were grown overnight at 37 °C followed by exposure to hyperthermia, for 2 h, and then transferred back to 37 °C for the indicated post-exposure incubation times (2-72 h). Cell lysates were prepared and subjected to western blotting. Control cells were kept at 37 °C. β-tubulin was used as loading control. Samples from short and long post-exposure incubation periods following hyperthermia were electrophoresed on separate gels. Delineation shows blots cropped from different areas of the same blot or different blots. Data shown are representative of at least two independent experiments.)
Discussion
Data from various clinical studies have shown that hyperthermia enhances the effectiveness of therapeutic strategies like radiation and chemotherapy [11][12][13][14] . In the case of malignant melanoma, there is only a limited number of reports investigating the induction of cell death as a response to hyperthermia 10,15,16 .
In optimizing our hyperthermic exposure platform, detailed kinetic analyses were performed by utilizing the epidermoid carcinoma (A431) and malignant melanoma (A375) cell lines. In addition, we included a non-tumorigenic immortalized keratinocyte (HaCaT) cell line in the context of providing a safety profile for hyperthermic exposures, given that keratinocytes are the primary epidermal cells surrounding melanocytes 17 . To our knowledge, there are no previous studies evaluating hyperthermia-induced cytotoxicity in non-malignant cell lines. Finally, the observed reduction in cell viability, at 43 °C, could also be attributed to hyperthermia's capacity to induce cell cycle growth arrest. In fact, several studies have associated hyperthermia's anti-proliferative effects with alterations in cell cycle regulation in various cell lines [18][19][20] .
Hyperthermia-induced cell death has been the subject of many studies utilizing a wide range of experimental cancer models 7,21,22 . Our results indicate the triggering of the extrinsic and intrinsic apoptotic pathways, supported by the activation of caspases-8 and -9 as well as of TNF-R1 and TRADD (at both 43 °C and 45 °C), suggesting their interaction in forming a death domain capable of recruiting caspase-8. Although our findings are in agreement with other studies demonstrating the induction of death receptors as a response to thermal stress 23-26 , they have not been documented in an experimental model of malignant melanoma before. Moreover, our data showed activation of RIP1, at 45 °C, which could be indicative either of the protein's interaction with FADD and TRADD in stimulating the extrinsic pathway or of its interaction with RIP3 for the formation of the necrosome required for necroptotic cell death 27,28 . On the other hand, induction of caspase-9 has been associated with activation of the intrinsic apoptotic pathway in Jurkat cells 29 and various other cancer cell lines 30 , while a recent study (utilizing melanoma cells) has provided no evidence for the activation of either caspase-8 or -9 under heat stress 10 . Such conflicting data can be attributed to the utilization of different experimental conditions (e.g. variations in hyperthermic experimental platforms, exposure kinetics and utilization of different types of cells 3 ), indicating the significance of utilizing an optimized experimental platform when assessing the effect of in vitro hyperthermic exposures. Finally, we observed that only caspase-6 became activated at 43 °C whereas caspases-3, -7 and -6 were all induced at 45 °C. Although our results are consistent with previous reports demonstrating the induction of caspases-3 and -7 in response to hyperthermia 10,25 , the activation of caspase-6 (at 43 °C only) has not been previously reported.
Moreover, we investigated the participation of the ER stress response pathway in triggering hyperthermia-induced cell death. Our data showed an increase in Grp78, indicative of an increased demand for chaperone proteins, together with a slight decrease in PERK, which may be caused by its increased homodimerization for phosphorylating the eIF2 factor, thus inhibiting protein synthesis in stressed cells 26 . Similarly, induction of IRE-1a and ATF-6 was also noted, suggesting that IRE-1a becomes homodimerized and binds to downstream proteins while ATF-6 is cleaved to its active form under ER-stress conditions. Consistent with these observations, XBP-1s (the downstream target of IRE-1a) was shown to be up-regulated and, together with active ATF-6, can modulate the activation of UPR pathways 31,32 . Finally, induction of CHOP was shown to be dependent on the activation of ATF-6 and XBP-1s and potentially linked to stimulation of apoptosis 33,34 . Interestingly, the induction of IRE-1a and ATF-6 has been suggested to play an anti-apoptotic role under ER stress conditions, in contrast to PERK, which was shown to have pro-apoptotic effects instead [35][36][37][38][39] . In parallel, we also examined alterations in several heat shock proteins (HSPs) as a response to stress-induced protein misfolding and aggregation, both of which can induce cell death. In particular, the up-regulation of HSPs 70 and 90 has been previously demonstrated to exert anti-apoptotic effects by preventing the formation of the apoptosome 40,41 . In addition, inhibition of HSP 70 appears to have anti-cancer effects by preventing tumor growth and enhancing cisplatin's cytotoxicity in an in vivo model of melanoma 42 . Findings from a recent study have linked the absence of JB12 (an ER-associated HSP 40 protein) with the stimulation of ER-stress-mediated apoptosis 43 , whereas HSP 60 exerts its anti-apoptotic effects by acting as a mitochondrial chaperone, while its inhibition promotes apoptosis and prevents tumor growth in an in vivo glioblastoma model 44,45 . Interestingly, the suppression of HSF1 appears to exert anti-proliferative effects in melanoma cells under hyperthermic conditions 46 . To this end, both HSPs 90 and 70 can interact with HSF1 and suppress its function 47,48 .
On a different note, we aimed to investigate the effect of hyperthermia in potentiating the effectiveness of several drugs (currently utilized in the clinical setting), in a way where lower concentrations can exert comparable cytotoxicity (with that observed at higher concentrations), thus potentially minimizing the risk for unwanted side effects 49 . According to our initial observations, we determined that 43 °C was the optimal hyperthermic temperature used in all adjuvant treatment protocols (data not shown). This finding is in agreement with other studies indicating that the combination of low hyperthermia (40-43 °C) with chemotherapy exerts increased cytotoxicity against various cancer cells 3,50 , while higher temperatures (>45 °C) are associated with the induction of necroptotic death 10,51 . Our data revealed that hyperthermia potentiated the effectiveness of DTIC, the action of which requires its obligatory bio-activation in the liver 52 . This is an experimental limitation of our in vitro model and consequently the reason for utilizing TMZ in additional experiments. This drug agent is an analogue of DTIC but without the requirement for bio-activation, as it is spontaneously metabolized to its active form 52 . Its efficacy was also demonstrated to be potentiated in the presence of hyperthermia, to a higher degree than DTIC. This observation is also in agreement with previous studies demonstrating hyperthermia-induced enhancement of the therapeutic efficacy of TMZ in in vitro and in vivo experimental models 53 . On another note, almost half of melanoma patients carry a mutation (V600E) in the BRAF oncogene which results in an amino acid substitution, at amino acid 600, from a valine (V) to a glutamic acid (E). Consequently, there has been a growing interest in developing new drugs capable of targeting this mutation and thus inhibiting the continuous activation of the MAPK/ERK signaling pathway which contributes to tumor growth 54 . Two such BRAF-targeted drugs are Vemurafenib and Dabrafenib, both of which were approved by the FDA in 2011 and 2013 respectively 55,56 . Our data revealed that exposure to mild hyperthermia (43 °C) potentiated the therapeutic effectiveness of both drugs, a finding which has not been reported before.
Moreover, hyperthermia has been shown to induce oxidative stress via generation of reactive oxygen species (ROS) 57 which, in turn, can induce an apoptotic response 58 . For instance, a previous study utilizing in vitro and in vivo models of malignant melanoma has demonstrated that exposure to 45 °C was capable of affecting the redox state but not altering the cellular proliferating potential 59 . In addition, generation of free radicals along with the presence of molecular oxygen appeared to affect the efficiency of several photosensitizers against melanoma cells 60 . Furthermore, the combination of hyperthermia with radiation therapy was found to be more effective due to the suppressed oxygen uptake caused by the increased temperature in multicell spheroids 61 . On another note, under normal conditions, melanocytes produce melanin that is capable of protecting cells by absorbing UV radiation 62 . L-tyrosine acts as a positive regulator of melanogenesis while it is also associated with increased metastatic potential of melanoma cells 63 . Numerous reports have shown the utilization of various forms of melanin-containing nanoparticles based on their ability to increase the temperature on tumor location (due to the capacity of melanin to absorb energy after irradiation) thus leading to tumor growth inhibition and even complete eradication [64][65][66][67][68][69] . On the other hand, various studies have shown that hyperthermia can influence the immune system in various ways including induction of HSPs, improvement of dendritic cell and NK-cell function, improved lymphocyte-endothelial adhesion and leukocyte trafficking, and mediation of immune surveillance 70 . To this end, several studies have shown that thermal therapy can enhance the therapeutic efficacy of immunotherapy when combined. For instance, a combinational protocol utilizing IL-2 or GM-CSF along with hyperthermia resulted in complete eradication of tumors in melanoma-bearing mice 71 . Finally, pyroptosis is another type of programmed cell death involving the activation of caspase-1 72 . This distinct pathway has protective effects against microbial infections for the host while a recent report revealed the bidirectional crosstalk between apoptosis and pyroptosis in innate immune cells 73 .
Collectively, our data suggest that at higher temperatures (45 °C) cells could not adapt effectively and consequently increased cytotoxicity and apoptotic cell death were evident, whereas at milder hyperthermic conditions (43 °C) the cells were more thermotolerant and thus able to regulate the apoptotic response in a more efficient manner. For instance, although initiator caspases-8 and -9 were activated in response to both 43 °C and 45 °C, induction of effector caspases appeared to differ between the two hyperthermic conditions, in a manner where triggering of effector caspases-3, -7 and -6 occurred at 45 °C (Fig. 8B) whereas only caspase-6 was activated at 43 °C (Fig. 8A). This suggests that mild hyperthermia triggers the apoptotic response in a more regulated manner, in contrast to more excessive hyperthermia which requires the participation of the entire executioner caspase repertoire in order to sustain apoptotic cell death. Moreover, this study provides further insights into the involvement of ATF-6, IRE-1 and PERK in regulating the apoptotic activation in response to low and high hyperthermic conditions. More specifically, it was evident that only the IRE-1a and ATF-6 pathways were induced at 43 °C (Fig. 8A) whereas all three of them were activated at 45 °C (Fig. 8B). Although both the IRE-1 and ATF6 pathways can up-regulate CHOP, PERK predominates through selective up-regulation of translation of ATF4 which, in turn, induces transcription of CHOP. Hence, it can be proposed that PERK signaling, along with the subsequent induction of CHOP, plays a major role in regulating hyperthermia-induced apoptosis. Last but not least, hyperthermia exerted a significant role in potentiating the therapeutic effectiveness of a number of non-targeted and targeted drugs (when administered as adjuvant treatment protocols), thus highlighting its promise as a therapeutic approach in melanoma patients.
Materials and Methods
Cell lines. The human epidermoid carcinoma (A431) and malignant melanoma (A375) cell lines were purchased from Sigma-Aldrich (St. Louis, MO, USA). The human immortalized keratinocyte (HaCaT) cell line was a kind gift from Dr. Sharon Broby (Dermal Toxicology and Effects Group; Centre for Radiation, Chemical and Environmental Hazards; Public Health England, UK). All cell lines were maintained in Dulbecco's Modified Eagle Medium (DMEM), high glucose, supplemented with 10% fetal bovine serum, 2 mM L-glutamine and 1% pen/strep (100 U/ml penicillin, 100 μg/ml streptomycin). Cells were cultured in a humidified atmosphere at 37 °C and 5% CO 2 . They were grown as monolayer cultures and sub-cultured when reaching 80-90% confluence. All cell lines were cultured for up to 20-25 passages before new vials were utilized. All cell culture media and reagents were purchased from Labtech International Ltd (East Sussex, UK) and cell culture plasticware was obtained from Corning (NY, USA).
Exposure to hyperthermia. Cells were exposed to a range of temperatures (37 °C-50 °C) for various time periods in a standard 5% CO 2 incubator. Briefly, the appropriate number of cells was plated and incubated at 37 °C overnight. The next day, medium was changed prior to hyperthermic exposure and all plates were transferred into a 5% CO 2 incubator set at 37-50 °C and exposed for various time periods. Then, plates were returned to a 37 °C incubator for additional incubation periods (post-exposure).
Determination of cell viability. Cells were plated and exposed to various hyperthermic conditions, at the end of which they were returned back to 37 °C. Cell viability levels were determined immediately after exposures as well as at 24-72 h post-exposure by utilizing the CellTiter-Blue assay (Promega, UK) according to the manufacturer's protocol. The assay uses the indicator dye (resazurin) which is converted to a highly fluorescent product (resorufin) by metabolically active cells. Non-viable cells lose their metabolic capacity; thus, they are not able to reduce resazurin into the fluorescent product and consequently cannot generate a fluorescent signal. Briefly, 20 μl of CellTiter-Blue reagent was added into each well of a 96-well plate and mixed by gentle shaking. The plates were incubated at 37 °C for 2 h and then the samples were transferred into the wells of a black opaque plate. Fluorescence was monitored at 400 Exc/505 Emm (nm) by using a SpectraMax M5 multimode plate reader (Molecular Devices, LLC, Sunnyvale, USA). Cell viability was expressed as percentage of control (37 °C) cells. Five replicates (n = 5) of each experimental condition were used in each experiment.
Determination of relative levels of dead cells was based on the CytoTox-Fluor cytotoxicity assay (Promega, UK) according to the manufacturer's protocol. The assay involves a fluorogenic peptide substrate (bis-alanyl-alanyl-phenylalanyl-rhodamine 110; bis-AAF-R110) which can measure the activity levels of a specific protease released from dead cells that have lost membrane integrity. This particular peptide substrate cannot produce a signal in viable cells as it cannot cross their cell membrane. Briefly, cells were plated in 96-well plates, exposed to hyperthermic conditions, and then 100 μl of the assay reagent was added into each well (at indicated time points), mixed by orbital shaking and incubated at 37 °C for 2 h. Then, samples were transferred into the wells of a black opaque plate and fluorescence was monitored at 400 Exc/505 Emm (nm) by using a SpectraMax M5 multimode plate reader. The generation of fluorescent product is proportional to the activity of the protease marker associated with cytotoxicity, so that higher fluorescence values represent increased levels of dead cells. Five replicates (n = 5) of each condition were used in each experiment.
In another approach, the trypan blue staining protocol was utilized in order to determine levels of viable and dead cells within the same sample. Briefly, cells were plated in 100 mm dishes (incubated overnight at 37 °C) and after exposure to hyperthermia they were trypsinized and collected. A sample of each cell suspension was mixed with the trypan blue stain and cells were counted under the microscope. Overall, cells were categorized as either viable (unstained) or dead (stained), while the total cell suspension number was calculated. Three replicates (n = 3) of each experimental condition were used in each experiment.
RNA extraction and apoptotic gene expression profiling by RT-PCR-based microarrays.
To examine differential apoptotic gene expression in response to hyperthermia, A375 cells were plated in 100 mm cell culture dishes, cultured overnight and exposed to 43 °C and 45 °C or 37 °C for 2 h. Cells were then returned to 37 °C for an additional 24 h incubation period, after which they were collected via trypsinization. Total RNA was extracted using the TRIzol reagent according to the manufacturer's protocol (Invitrogen). RNA quality and concentration were assessed by agarose gel electrophoresis and spectrophotometric analysis. Complementary DNA was synthesized by using the SuperScript VILO cDNA synthesis kit (Invitrogen, Waltham, MA, USA) according to the manufacturer's protocol. qPCR was carried out by utilizing the TaqMan Array Human Apoptosis 96-well plates (Applied Biosystems, Carlsbad, CA, USA). TaqMan Universal master mix (2x) was mixed with an equal amount of diluted cDNA (5-50 ng per well) in RNase-free water and 10 μl of the mixture were added into each well of the 96-well plate. RT-PCR was performed on a StepOne Plus RT-PCR system (Applied Biosystems, Carlsbad, CA, USA). Gene expression data were analyzed by the ΔΔCt method and differences observed were expressed as fold change in gene expression by using the DataAssist v3.01 software.
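For orientation, the ΔΔCt fold-change computation implemented by such software reduces to a few lines of code; this is a minimal sketch, and the Ct values below are hypothetical placeholders rather than data from this study.

    # Relative expression by the 2^(-ddCt) method (Livak)
    def fold_change(ct_target_treated, ct_ref_treated,
                    ct_target_control, ct_ref_control):
        d_ct_treated = ct_target_treated - ct_ref_treated    # normalize to reference gene
        d_ct_control = ct_target_control - ct_ref_control
        dd_ct = d_ct_treated - d_ct_control                  # treated vs. control
        return 2.0 ** (-dd_ct)

    # Hypothetical Ct values for one apoptotic gene vs. a reference gene
    print(fold_change(24.1, 18.0, 26.3, 18.1))  # > 1 indicates up-regulation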
Determination of protein expression by western blotting. Samples were stored as cell pellets at −20 °C following trypsinization and PBS washes. Cell pellets were suspended in the appropriate amount of lysis buffer (10 mM HEPES, pH 7.9; 10 mM KCl; 0.1 mM EDTA; 1.5 mM MgCl 2 ; 0.2% NP-40) supplemented with a cocktail of protease inhibitor tablets (Thermo Fisher, Waltham, MA, USA), and were left on ice while periodically being vortexed for 15 min. Then, they were sonicated at 30% amplitude for 3 cycles of 15 s each (with 30 s intervals) on ice. Cell lysates were centrifuged at 14,000 × g for 15 min at 4 °C and protein content was determined by utilizing the Pierce BCA protein assay kit according to the manufacturer's protocol. Fifty μg of protein were separated by using SDS-polyacrylamide gels of different gradients (8-20%) according to the molecular weight of the protein of interest. Separated proteins were then transferred electrophoretically onto either 0.2 and/or 0.45 μm PVDF membranes (depending on the protein's molecular weight) (Thermo Scientific, Waltham, MA, USA) by wet transfer in 1x transfer buffer at predetermined running conditions. The blots were blocked with 5% (w/v) non-fat milk powder in TBST buffer, for 1 h at RT, under gentle agitation. Then, the blots were incubated with specific primary antibodies, overnight at 4 °C, under gentle agitation. On the following day, the membranes were washed in TBST buffer for 10 min, three times, and then were incubated with an appropriate secondary antibody, for 1 h at RT, under agitation. Blots were incubated with SuperSignal West Pico Chemiluminescent Substrate (Thermo Scientific, Waltham, MA, USA) according to the manufacturer's protocol before being imaged by using a ChemiDoc XRS + system (Bio-Rad, Perth, UK). All antibodies were purchased from Cell Signaling Technology (Hertfordshire, UK), apart from β-tubulin which was from Sigma-Aldrich (St. Louis, MO, USA).
Data analysis.
Data from all sets of experiments were expressed as mean values ± SEM and comparisons were made between control and treatment groups. Calculations were performed by using the Microsoft Office Excel 2016 software. Means were compared by one-way analysis of variance (one-way ANOVA) with Tukey's test for multiple comparisons. SPSS v.22 or PRISM v5.01 software were used for statistical tests. A value of p < 0.05 was considered statistically significant.
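As a sketch of this statistical pipeline (one-way ANOVA followed by Tukey's multiple-comparison test), the following uses scipy and statsmodels rather than the SPSS/PRISM packages named above; the viability values and group labels are made up for illustration.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical % viability replicates (n = 5) per temperature group
    groups = {
        "37C": [100.0, 98.5, 101.2, 99.4, 100.8],
        "43C": [72.3, 75.1, 70.8, 74.0, 73.5],
        "45C": [41.2, 39.8, 43.5, 40.7, 42.1],
    }

    f_stat, p_val = stats.f_oneway(*groups.values())
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise group comparisons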
"year": 2018,
"sha1": "c253889ff8139125aa675f95188f631370b19424",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-29018-0.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "187f459697ae990b775485b06c4ce0862e0ae3d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44018828 | pes2o/s2orc | v3-fos-license | Effect of Biological and Biochemical Silage Additives on Final Nutritive, Hygienic and Fermentation Characteristics of Ensiled High Moisture Crimped Corn
The aim of this study was to determine the effect of different biological and biochemical additives on the final nutritive quality, fermentation process and concentration of mycotoxins of ensiled high moisture crimped corn. We created four variants for the experiment: control (UC), A1 and A2 (biological stimulators with the active principle of lactic acid bacteria) and variant B (a combined additive with the active principle of lactic acid bacteria, sodium benzoate and an active enzymatic complex of cellulases). After 6 months of storage in laboratory conditions, we determined in the experimental silages a content of dry matter ranging from 608.9 to 613 g·kg⁻¹. A significantly lower content of crude fibre was detected in silages with additives. In silages ensiled with additives we detected the highest content of nitrogen-free extract in variant B (834.3 g·kg⁻¹ of DM, P < 0.05). A similar effect was determined also in the content of starch; significant differences were detected in variants A1 and B (P < 0.05) compared to the control variant. We detected a significantly (P < 0.05) higher content of total sugars in trial silages; the highest content was in variant A2 (6.1 g·kg⁻¹ of DM). In the trial variants we determined the significantly lowest content of acetic acid in variant B (2.82 g·kg⁻¹ of DM). In the case of butyric acid, whose content in the control variant was 0.22 g·kg⁻¹ of DM, we detected the lowest content in variant A1 conserved with homo- and heterofermentative species of lactic acid bacteria. The lowest content of ammonia was determined in silages of variant B (0.074 g·kg⁻¹ of DM). We found lower concentrations of DON and FUM (P > 0.05) after the application of biological and biochemical silage additives. In the concentration of T-2 toxin we detected a significantly (P < 0.05) lower value in variant A1. In the concentration of AFL we found significant differences between variants A1 and B, as well as in the concentration of OT between the untreated control variant (UC) and the variants conserved by additives. Application of silage additives influenced the nutritive and hygienic quality of the conserved fodders.
Keywords: feeds, maize, conservation, silage additives, mycotoxins
Maize (Zea mays L.) is a major source of energy in feeding rations for ruminants in the Slovak Republic. Generally, it produces feed with different nutritional and hygienic indicators. The differences lie mainly in the energy content (Bíro 2001). Energy limits the nutritive value of feeds, except for fresh and green forages (Hoffman 1998). During the harvest period, the technology of high moisture corn is economically more efficient due to lower losses and processing costs (Volkov et al. 1999; Bíro and Juráček 2003). High moisture corn has a higher nutritive value and higher digestibility of organic matter compared to dry corn (Woodarce 2004). Starch supplied from corn and corn silage is an important source of dietary energy for lactating dairy cows and other ruminants. However, various sources of corn starch have highly variable ruminal and total tract digestibility (Ørskov 1986; Threuer 1986). Factors such as particle size (Remond et al. 2004), conservation method (Oba and Allen 2003) and type of corn endosperm (Correa et al. 2002; Šimko et al. 2008) can influence ruminal and total tract digestion of starch in lactating dairy cows (Blasel et al. 2006). The preservation system is based on quick anaerobic fermentation of plant carbohydrates, a simultaneous rapid decrease of pH value and production of fermentation products and by-products (Merry et al. 1997). New approaches to the development of ensiling additives lead to the application of combined preparations based on several groups of lactic acid bacteria in order to reach high efficiency and a large utilization range. Application of combined additives composed of homo- and heterofermentative lactic acid bacteria stimulates silage fermentation and affects aerobic stability (Driehuis et al. 2001; Owen 2002). Many silage additives include microbial inoculants that extensively preserve feeds during ensiling (Bolsen et al. 1996). Biological additives containing different forms of lactic acid bacteria were used for forage preservation in many experiments of different authors (e.g. Chamberlain et al. 1987; Kung et al. 1999; Doležal and Zeman 2005). Inoculation of forages with selected lactic acid bacteria was recognized as a method to improve the fermentation process by stimulation and to ensure the aerobic stability of silage (Bolsen et al. 1996).
Materials and Methods
In the experiment, we conserved high moisture corn obtained from the University Experimental Farm in Kolíňany. Harvested corn grain (grain hybrid Latizana) was immediately mechanically processed and crushed by a MURSKA 1000 HD grinder, with a dry matter content of 613.3 g·kg⁻¹.
Four variants were analyzed in the experiment: control (UC) and experimental variants (A1, A2, B). Corn grain in the experimental variants was ensiled with microbial (A1 and A2) and biochemical (B) additives, which we applied homogeneously to the ensiled matter before ensiling into laboratory silos with a volume of 50 dm³. The additive used in variant A1 consisted of homo- and heterofermentative species of lactic acid bacteria (Lactobacillus rhamnosus, Lactobacillus plantarum, Lactobacillus brevis, Lactobacillus buchneri and Pediococcus pentosaceus: 2.5 × 10¹¹ CFU·g⁻¹). In variant A2 we applied an additive compound of 5 lactic acid bacteria species (Enterococcus faecium, Lactobacillus plantarum, Lactobacillus casei, Lactobacillus buchneri and Pediococcus pentosaceus: 150 × 10³ CFU·g⁻¹). Inoculants used in variants A1 and A2 were in powder form. High moisture corn in variant B was ensiled with a combined biochemical additive, where the biological part consisted of lactic acid bacteria (Lactobacillus plantarum, Enterococcus faecium, Pediococcus pentosaceus and Lactococcus lactis: 166 × 10⁹ CFU·g⁻¹) and the chemical part consisted of an enzymatic complex of cellulases (Trichoderma viridae: activity 50 610 CU, 6478 IU) and the preservative sodium benzoate. All variants were ensiled in 3 repetitions. After filling the matter into silos (density 1,100 kg·m⁻³), we sealed them and stored them in the feed laboratory at a temperature of 18-20 °C. The nutritive characteristics of fresh high moisture corn are presented in Table 1. After termination of the fermentation process (6 months of storage) we opened the silos and determined the indicators of nutritive value and fermentation process in average laboratory samples. For analysing organic and inorganic nutrients, we used standard methods according to the Regulation of the Ministry of Agriculture of the Slovak Republic no. 2145/2004-100 on sampling feeds and on laboratory testing and evaluation of feeds. The content of fermentative carboxyl organic acids was determined on an EA 100 analyser (Villa Labeco, SR) using the method of ionic electrophoresis. Contents of alcohols and ammonia were detected by the Conway microdiffusion method, titration acidity by alkalimetric titration and active acidity by the electrometric method. Energy (NEL and NEG) and protein (PDI) values were calculated by a regression scheme (Petrikovič and Sommer 2002). After opening the laboratory silos, we sensorically observed the occurrence of fungi. Concentration of mycotoxins (FUM: fumonisins, AFL: aflatoxins, ZON: zearalenone, DON: deoxynivalenol, T-2 toxin and OT: ochratoxin) was detected using the immunoenzymatic method in a screening quantitative test on an ELISA Reader (NEOGEN, U.S.A.). Before spectrophotometric measuring of concentration, the samples were processed by extraction in distilled water (DON), 70% methanol (FUM, ZON, AFL) or 50% methanol (OT and T-2 toxin).
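Quantitative ELISA readouts of this kind are converted to concentrations via a standard curve; the snippet below is only an illustration of that step using log-linear interpolation with made-up calibration points, since the actual reader software may use a different curve model.

    import numpy as np

    # Hypothetical competitive-ELISA standard curve for one mycotoxin:
    # absorbance falls as the standard concentration (µg/kg) rises.
    std_conc = np.array([0.5, 1.0, 5.0, 25.0, 100.0])    # µg/kg
    std_abs  = np.array([1.80, 1.55, 1.00, 0.55, 0.20])  # absorbance units

    def conc_from_absorbance(a):
        # np.interp needs ascending x-values, hence the [::-1] reversals;
        # interpolation is done on log10(concentration)
        log_c = np.interp(a, std_abs[::-1], np.log10(std_conc)[::-1])
        return 10.0 ** log_c

    print(conc_from_absorbance(0.80))  # sample concentration in µg/kg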
Significance of the determined differences was tested by single-factor analysis of variance (ANOVA). The significance of differences between mean values was assessed by t-test.
Results
The nutritive characteristics of high moisture corn are presented in Table 1. The content of nutrients in silages from high moisture corn is given in Table 2. After 6 months of storage we detected a content of dry matter in high moisture corn silages ranging from 596.3 (variant A1) to 607.4 g·kg⁻¹ (variant A2). In the crude protein content, which is typically deficient in corn, we did not detect significant differences influenced by additives. The highest content of crude protein was in silages of variant A1 (95.4 g·kg⁻¹ of DM). A low content of crude fibre is also typical for corn grain; this is in negative correlation with digestibility of organic matter. In our experiment, we determined a positive decrease of crude fibre content in silages with applied additives; the differences found were significant (P < 0.05). The lowest content of crude fibre (23.5 g·kg⁻¹ of DM) was detected in silages from high moisture corn conserved by the combined biochemical inoculant in variant B. In the content of ash (13.5-14.1 g·kg⁻¹ of DM) we did not detect significant differences influenced by additives. Nutritionally, corn grain is also valued for its high content of easily digestible carbohydrates as nitrogen-free extract (NFE). Additives positively influenced the content of NFE in silages. The significantly highest content of NFE (P < 0.05) was in silages conserved by the combined biochemical additive (variant B). Likewise, in the content of starch as a source of energy in corn grain, we determined a positive influence of additives. Compared to variant UC, in which we ensiled high moisture corn without additives, we detected a significantly higher content of starch in silages of variants A1 and B (P < 0.05). The content of total sugars analysed according to Luff-Schoorl ranged from 1.0 to 6.1 g·kg⁻¹ of DM. Detected differences between variant UC and variants with applied additives were significant (P < 0.05). We did not detect significant differences in the energetic and protein value of conserved silages from high moisture corn.
Detected values of fermentation process indicators are presented in Table 3. In silages with additives we determined a lower content of lactic acid compared to the variant without additives, mainly in variant A1 (10.55 g·kg⁻¹ of DM). The content of acetic acid was low in all variants and did not exceed 5.0 g·kg⁻¹ of DM. In variant A1 we found a lower content of dry matter with a lower content of total sugars, and a higher value of pH compared to variant A2. These factors caused a lower production of lactic acid and a higher content of acetic acid in variant A1. The content of undesirable butyric acid was low in the silages; the non-significantly highest concentration was in silages of variant A2 (0.38 g·kg⁻¹ of DM), the variant in which we applied a biological additive for stimulation of the fermentation process. Active acidity (pH) of water extracts ranged from 3.70 (variant B) to 3.85 (variant A1). The highest pH together with the lowest titration acidity (TA) was found in silages in which we detected the highest content of acetic acid (variant A1). A positive non-significant influence of the applied additives was detected in the content of ammonia (NH₃), which, compared to the control variant (0.416 g·kg⁻¹ of DM), ranged in the trial variants from 0.074 (variant A2) to 0.101 g·kg⁻¹ of DM (variant A1). Concentration of mycotoxins in high moisture corn before ensiling is presented in Table 4. In fresh high moisture corn we did not detect deoxynivalenol at the detection level of mg·kg⁻¹. The most prevalent were Fusarium toxins: FUM, followed by ZON and T-2. The samples of high moisture corn before ensiling were the least contaminated by toxin producers of the genera Penicillium and Aspergillus. Concentration of mycotoxins in silages of high moisture corn is given in Table 5. The lowest concentration of zearalenone (29.83 µg·kg⁻¹) was determined in silages of the variant conserved by the combined biochemical additive (variant B). In the concentration of deoxynivalenol, we detected lower values in all variants conserved by the different silage additives (0.067 mg·kg⁻¹). The average concentration of deoxynivalenol in the untreated control variant was 0.133 mg·kg⁻¹. The same effect of silage additives was found in the concentration of fumonisins. The significantly lowest concentration of T-2 toxin was in variant A1, which we conserved by homo- and heterofermentative species of lactic acid bacteria. The positive effect of the biochemical additive (variant B) was detected also in the concentration of aflatoxins (2.13 µg·kg⁻¹). Significant differences in the concentration of aflatoxins were found between variants A1 and B. The concentrations of ochratoxin were lower in silages of the variants conserved by biological and biochemical additives (the lowest in variant A2, 0.533 µg·kg⁻¹). Compared to the untreated control variant, the differences were significant (P < 0.05).
Discussion
In a similar experiment, Doležal and Zeman (2005) determined an average content of dry matter of 603.4 g·kg⁻¹, which partially corresponds with our results. In the content of crude protein, we confirmed the results of Zebrowska et al. (1997), who determined an average content of crude protein in corn of 100 g·kg⁻¹ of DM. In the content of crude protein we did not detect significant effects of additives. Similar results were also reported by Wardynski et al. (1993). The content of crude fibre ranged in silages from 23.5 to […]
Table 2. Nutrient contents of high moisture corn silage. *DM: dry matter, CP: crude protein, F: fat, CF: crude fibre, A: ash, NFE: nitrogen-free extract, OM: organic matter, S: starch, TS: total sugars, NEL: net energy of lactation, NEG: net energy of gain, PDIE, PDIN: protein digestible in intestine. Values with identical superscripts within one column are significant at P < 0.05.

Table 3. Results of fermentation process of high moisture corn silages. *DM: dry matter, LA: lactic acid, AA: acetic acid, BA: butyric acid, PA: propionic acid, FA: formic acid, TA: titration acidity, pH: active acidity, NH₃: ammonia, Alc: alcohols. Values with identical superscripts within one column are significant at P < 0.05.
"year": 2009,
"sha1": "e6a76d04dd32b90d37aa4e5ce5d4e49cf2c1db4f",
"oa_license": "CCBY",
"oa_url": "https://actavet.vfu.cz/media/pdf/avb_2009078040691.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e6a76d04dd32b90d37aa4e5ce5d4e49cf2c1db4f",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
189936977 | pes2o/s2orc | v3-fos-license | On solutions of matrix-valued convolution equations, anisotropic fractional derivatives and their applications in linear and non-linear anisotropic viscoelasticity
A relation between matrix-valued complete Bernstein functions and matrix-valued Stieltjes functions is applied to prove that the solutions of matricial convolution equations with extended LICM kernels belong to special classes of functions. In particular the cases of the solutions of the viscoelastic duality relation and the solutions of the matricial Sonine equation are discussed, with applications in anisotropic linear viscoelasticity and a generalization of fractional calculus. In the first case it is in particular shown that duality of completely monotone relaxation functions and Bernstein creep functions in general requires inclusion in the relaxation function of a Newtonian viscosity term in addition to the memory effects represented by the completely monotone kernel. We define anisotropic generalized fractional derivatives (GFD) by replacing the kernel $t^{-\alpha}/\Gamma(1-\alpha)$ of the Caputo derivatives with completely monotone matrix-valued kernels which are weakly singular at 0.
Introduction
A recent idea [4] of simplifying the proof of a viscoelastic duality relation between the tensor-valued relaxation modulus and the tensor-valued creep function [3], based on a relation between matrix-valued complete Bernstein functions and matrix-valued Stieltjes functions [5] has led me to consider a more general application of this method to convolutions of symmetric matrix-valued functions.
In this paper I have decided to base the analysis of the viscoelastic duality relation on the analysis of the matrix-valued solutions of the Sonine equation [9].
In the case of the Sonine equation it is assumed here that one of the functions is locally integrable near zero and completely monotone (LICM). The Sonine equation was examined in much detail in [8], but the authors did not assume that one of the functions was LICM. They constructed the inverse operator for the convolution equation k(t) * x(t) = f(t). The inverse operator, however, involves the solution l(t) of the Sonine equation k(t) * l(t) = 1. Existence of such a function is the subject of our investigation, and we prove it for a kernel k in the extended LICM class, defined below. The last-mentioned problem is also studied in [6] for real-valued functions, but we consider more general symmetric matrix-valued functions.
I reexamine Kochubei's suggestion [6] that the solutions of the Sonine equation could be used to construct a generalization of the concepts of a derivative and integration operators along the lines of fractional calculus. I however allow for matrix-valued LICM kernels which introduce 3D modeling of anisotropic effects in the context of non-linear relaxation equations.
I also show that in order to satisfy a Sonine equation the LICM function k(t) must be singular. If k(t) is a singular LICM function, then it satisfies the Sonine equation with an associated function l(t), which is also a singular LICM function.
The results presented here are relevant for anisotropic linear and non-linear viscoelasticity. They also open new methods for dealing with problems involving memory effects.
The convolution of two square matrix-valued functions F and G of the same rank N is defined by the equation
$$(\mathbf{F} * \mathbf{G})(t) := \int_0^t \mathbf{F}(t-s)\,\mathbf{G}(s)\,\mathrm{d}s, \qquad t > 0 \tag{1}$$
The Laplace transform F̃(p) of a matrix-valued function F(t) is defined as usual by the formula
$$\tilde{\mathbf{F}}(p) := \int_0^\infty \mathrm{e}^{-pt}\,\mathbf{F}(t)\,\mathrm{d}t$$
for every p ∈ C such that the integral exists.
It is easy to check the identity
$$\widetilde{\mathbf{F} * \mathbf{G}}(p) = \tilde{\mathbf{F}}(p)\,\tilde{\mathbf{G}}(p) \tag{2}$$
for arbitrary matrix-valued functions defined on [0, ∞[ provided both Laplace transforms on the right-hand side exist. We shall equip the convolution algebra with a unit element U:
$$\mathsf{U}\,\mathbf{f} := \upsilon * \mathbf{f} \tag{3}$$
The unit operator U is a convolution with a Borel measure υ on [0, ∞[. Extending identity (2) to (3) we have υ̃(p) f̃(p) = f̃(p), whence υ̃(p) = I.
Consider the general convolutional equation
$$\mathbf{A}_1\,\mathbf{X}(t) + (\mathbf{F} * \mathbf{X})(t) = \mathbf{R}(t), \qquad t > 0 \tag{6}$$
where F(t) and R(t) are two square matrix-valued functions defined for t ∈ ]0, ∞[ in a class to be specified, A₁ is a positive semi-definite symmetric matrix (possibly zero), while X(t) is a square matrix-valued function defined by equation (6). We shall examine the properties of the function X(t). The ranks of the matrices are equal and will be denoted by N. Let S denote the space of symmetric real square matrices of a fixed rank N. If A₁ = 0 and R(t) is the unit matrix for t ≥ 0, then equation (6) is a generalization of the Sonine equation [8] for matrix-valued functions.
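As an illustration of equation (6), here is a minimal numerical sketch (assuming the reconstruction above and a positive definite A₁, i.e. case (1) of Theorem 3.1 below): the convolution is discretized with an explicit rectangle rule, and the kernel, matrices and step size are arbitrary choices for demonstration only.

    import numpy as np

    h, n_steps = 0.01, 500
    A1 = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite
    F  = lambda t: np.exp(-t) * np.eye(2)     # a smooth CM kernel
    R  = lambda t: np.eye(2)                  # R(t) = I (Sonine-type right-hand side)

    A1_inv = np.linalg.inv(A1)
    X = np.zeros((n_steps, 2, 2))
    for n in range(n_steps):
        t_n = (n + 1) * h
        # rectangle-rule approximation of (F * X)(t_n) using earlier values of X
        conv = sum(F(t_n - j * h) @ X[j - 1] for j in range(1, n + 1)) * h
        X[n] = A1_inv @ (R(t_n) - conv)

    print(X[0])    # near t = 0 the solution is close to A1^{-1}
    print(X[-1])   # entries decrease in t, consistent with an LICM solution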
Definition 2.1 A matrix-valued function F : ]0, ∞[ → S is said to be completely monotone (CM) if it is infinitely differentiable and for every vector v ∈ R^N the following inequalities are satisfied:
$$(-1)^n\,\frac{\mathrm{d}^n}{\mathrm{d}t^n}\left[\mathbf{v}^\top \mathbf{F}(t)\,\mathbf{v}\right] \ge 0, \qquad t > 0,\ n = 0, 1, 2, \ldots$$
The above definition allows for a singularity at 0.
where µ is a Borel measure on [0, ∞[ satisfying the inequality
$$\int_{[0,\infty[} (1+r)^{-1}\,\mu(\mathrm{d}r) < \infty \tag{9}$$
Note that this result also implies that every LICM function F satisfies the identity […]; hence the thesis follows from the Lebesgue Dominated Convergence Theorem.
Definition 2.4
The set E consists of matrix-valued Borel measures of the form
$$\mathbf{A}\,\upsilon(\mathrm{d}t) + \mathbf{F}(t)\,\mathrm{d}t$$
where A is a positive semi-definite symmetric matrix, F is an LICM S-valued function and dt is the Lebesgue measure on [0, ∞[.
The set E appears in a natural way in the context of solutions of convolution equations. It also allows for a Newtonian viscosity component in viscoelastic relaxation functions (Section 5). We denote by B the set of all Bernstein functions.
If B is an S-valued Bernstein function, then for every v ∈ R^N the function t → vᵀB(t)v, t > 0, is non-decreasing and continuous, hence it has a finite limit at t = 0. Hence the limit lim_{t→0} B(t) exists. We shall therefore consider Bernstein functions as defined on [0, ∞[. It follows that the derivative B′ of B on ]0, ∞[ is a completely monotone function. It is also locally integrable, because ∫₀ᵗ B′(s) ds = B(t) − B(0) is finite. A general Bernstein function is obtained by integrating the measure B₀ υ(ds) + F(s) ds over intervals [0, t], for a symmetric and positive semi-definite matrix B₀ and an LICM matrix-valued function F. An S-valued LICM function F is locally integrable and non-increasing, hence its Laplace transform F̃(p) is defined for all p > 0.
An S-valued Bernstein function B also has a Laplace transform, defined for p > 0 by
$$\tilde{\mathbf{B}}(p) = p^{-1}\,\mathbf{B}(0) + p^{-1}\,\tilde{\mathbf{F}}(p)$$
where F is the LICM derivative of B. Let S⁺ denote the set of positive semi-definite symmetric N × N matrices.
The matrix F̃(p) is symmetric and positive definite, hence it is invertible.
The proof of Proposition 2.7 is analogous to Proposition 2.6. Equation (6) is equivalent to the equation
$$\left(\mathbf{A}_1 + \tilde{\mathbf{F}}(p)\right)\tilde{\mathbf{X}}(p) = \tilde{\mathbf{R}}(p) \tag{10}$$
If either the matrix A₁ is positive definite or the matrix F̃(p) is invertible for p > 0, then the matrix A₁ + F̃(p) is invertible for p > 0. In this case the unique solution of (10) is
$$\tilde{\mathbf{X}}(p) = \left(\mathbf{A}_1 + \tilde{\mathbf{F}}(p)\right)^{-1}\tilde{\mathbf{R}}(p)$$
We shall show that in this case X is an LICM function.
For R(t) = t I, t > 0, we have R̃(p) = p⁻² I and
$$\tilde{\mathbf{X}}(p) = p^{-2}\left(\mathbf{A}_1 + \tilde{\mathbf{F}}(p)\right)^{-1}$$
We shall show that under these assumptions the solution X of equation (6) is an S⁺-valued Bernstein function. Similarly, if A is an S⁺-valued BF and R(t) = t I, t > 0, then X is an S⁺-valued LICM function.
By transposition the results obtained below also apply to equations of the form
$$\mathbf{X}(t)\,\mathbf{A}_1 + (\mathbf{X} * \mathbf{F})(t) = \mathbf{R}(t)$$
Main theorem.
Assume that A₁ is a positive semi-definite symmetric matrix of rank N.
Theorem 3.1 If either (1) A₁ is a positive definite symmetric matrix, or else (2) F is a non-zero S⁺-valued LICM function satisfying Condition (∗) and such that the limit
$$\mathbf{F}_0 := \lim_{t\to 0} \mathbf{F}(t)^{-1} \tag{13}$$
exists (Condition (∗∗)), then equation (6) with R(t) = I has a unique solution X(t), where X is an LICM function in the first case and X = F₀ υ + X₁ with X₁ LICM in the second case. In general the solution of equation (6) satisfies X ∈ E.
Remark. Concerning Condition (∗∗), we begin with the remark that the matrix F(t) is invertible for sufficiently large t. Indeed, on account of Condition (∗) each of its eigenvalues aₙ(t) (n = 1, …, N) is a CM function and is positive for some tₙ. Consequently the matrix F(t) is invertible for t > sup{tₙ | n = 1, …, N}.
Hence it makes sense to inquire whether the limit lim_{t→0} F(t)⁻¹ exists.
Remark.
The equation k * l = 1 for locally integrable real-valued functions k and l is known as the Sonine equation [9]. If for a given k ∈ L¹_loc([0, ∞[) there is an l ∈ L¹_loc([0, ∞[) satisfying the above equation, then k is called a Sonine function, while k, l are known as a Sonine pair. Sonine pairs are studied in some detail in [8]. Theorem 3.1 asserts in particular that every LICM function or matrix-valued LICM function is a Sonine function, and in this case the Sonine pair consists of two LICM functions. For real-valued functions this fact has apparently been discovered by Kochubei [6].
The following matrix-valued Sonine pairs are of particular interest: […]. Many LICM functions are known [7], but it is often more difficult to find the other member of the Sonine pair. The simplest Sonine pair of CM functions is k(t) = t^{α−1}/Γ(α) and l(t) = t^{−α}/Γ(1 − α), 0 < α < 1. Using the Laplace transforms k̃(p) = p^{−α} and l̃(p) = p^{α−1} one finds k̃(p) l̃(p) = 1/p, so that (k * l)(t) = 1. It is also interesting that for an arbitrary analytic function k(t) there is another analytic function l(t) such that k(t) t^{α−1}/Γ(α) and l(t) t^{−α}/Γ(1 − α) are a Sonine pair, and there is an algorithm for calculating the power series of l(t) given the power series of k(t) [8,11].
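The classical pair can also be verified directly: substituting s = tu in the convolution integral removes the t-dependence and leaves the Beta integral B(α, 1 − α) = Γ(α)Γ(1 − α). A short numerical confirmation of this computation (my own check, not from the paper):

    from scipy.integrate import quad
    from scipy.special import gamma

    alpha = 0.3
    # (k*l)(t) with k = t^(a-1)/Gamma(a), l = t^(-a)/Gamma(1-a): after s = t*u
    # the integral reduces to B(a, 1-a) / (Gamma(a) * Gamma(1-a))
    integral, _ = quad(lambda u: u**(alpha - 1) * (1 - u)**(-alpha), 0, 1)
    print(integral / (gamma(alpha) * gamma(1 - alpha)))  # ~1.0, for every t > 0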
Proof of Theorem 3.1.
The Laplace transform F̃(p) is a symmetric positive definite matrix for every p > 0. For p > 0 equation (6) with R(t) = I is equivalent to the equation
$$\tilde{\mathbf{X}}(p) = \left[p\left(\mathbf{A}_1 + \tilde{\mathbf{F}}(p)\right)\right]^{-1}$$
The inverse on the right-hand side exists for p > 0 if A₁ is positive definite, or else in view of Condition (∗) and Proposition 2.6. The right-hand side of the last equation is the algebraic inverse of a matrix-valued CBF, hence it is a matrix-valued Stieltjes function of the form
$$\tilde{\mathbf{X}}(p) = \mathbf{C} + \int_{[0,\infty[} (p + r)^{-1}\,\mathbf{H}(r)\,\mu(\mathrm{d}r) \tag{14}$$
where C ≥ 0, H is a measurable symmetric matrix-valued function bounded µ-almost everywhere and µ is a Borel measure satisfying inequality (9) (Theorem B.2). The second term on the right-hand side of equation (14) is the Laplace transform G̃(p) of the LICM matrix-valued function
$$\mathbf{G}(t) = \int_{[0,\infty[} \mathrm{e}^{-rt}\,\mathbf{H}(r)\,\mu(\mathrm{d}r) \tag{15}$$
Applying the inverse Laplace transformation to (14) we conclude that X = C υ + G dt ∈ E. Uniqueness of the solution X̃(p) follows from the fact that, in view of the invertibility of the matrix A₁ + F̃(p), the equation (A₁ + F̃(p)) X̃(p) = 0 for p > 0 implies that X̃(p) = 0 for p > 0.

Corollary. Equation (6) with R(t) = (tⁿ/n!) I and an LICM matrix-valued function F satisfying Condition (∗) has a unique solution X which is an n-fold indefinite integral of an element of E.
In particular, for n = 1 the solution X is a matrix-valued Bernstein function and X(0) = 0 if A₁ > 0, while X(0) = F₀ if A₁ = 0, with F₀ given by equation (13).
For the proof of the last statement note that lim_{t→0} ∫₀ᵗ F(s) ds = 0, because F is integrable in a neighborhood of 0.
Here is a complementary result for the last statement: if B is an S + -valued Bernstein function whose derivative B ′ satisfies Condition ( * ), then equation (16) has a unique solution X, and this solution is an S-valued LICM function.
Proof.
Differentiating equation (16) with respect to t one gets equation (6) with R(t) = I and F = B ′ , an LICM function satisfying Condition ( * ). The thesis then follows from Theorem 3.1.
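To make case (1) of Theorem 3.1 concrete, here is a minimal numerical sketch (an illustration of ours, not part of the paper). It assumes, consistently with the Laplace-transform formula above, that equation (6) with R(t) = I reads A 1 X + F ∗ X = I, and it uses the smooth CM kernel F(t) = e −t B so that the trapezoidal quadrature is unproblematic; all matrices are invented for the example.

```python
import numpy as np

# Discretize A1 X(t) + (F * X)(t) = I on a uniform grid (trapezoidal rule).
# F(t) = exp(-t) * B is a smooth matrix-valued CM kernel; A1 is positive definite.
N, h, steps = 2, 0.01, 500
A1 = np.array([[2.0, 0.3], [0.3, 1.0]])
B  = np.array([[1.0, 0.2], [0.2, 0.5]])
F  = lambda t: np.exp(-t) * B
Id = np.eye(N)

X = np.zeros((steps + 1, N, N))
X[0] = np.linalg.solve(A1, Id)          # at t = 0 the convolution integral vanishes

for k in range(1, steps + 1):
    tk = k * h
    S = 0.5 * F(tk) @ X[0]              # trapezoidal weights: 1/2 at both endpoints
    for j in range(1, k):
        S += F(tk - j * h) @ X[j]
    X[k] = np.linalg.solve(A1 + 0.5 * h * F(0.0), Id - h * S)

# The solution should be completely monotone; its diagonal entries decrease in t:
print(X[0].diagonal(), X[steps // 2].diagonal(), X[steps].diagonal())
```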
Anisotropic generalized fractional derivatives (GFD).
The term "derivative" has been improperly applied to fractional derivatives although they do not satisfy the Leibniz property, which is part of the definition of a derivative. Fractional derivatives have however provided useful tools for constructing equations which in some sense interpolate between differential equations of varying orders. We shall now show that similar operators can be constructed for a kind of anisotropic generalized fractional derivative (GFD) which might be useful to construct anisotropic relaxation equations and for other purposes. The first application of this result is the solution of a convolution equation where F is a given matrix-valued function. Since The LICM matrix-valued function F(t) can have a singularity at 0 such that for every v ∈ R N the limit lim t→0 v T F(t) v = ∞. In this case F 0 = 0, where F 0 is defined by equation (13). We shall say that the function F is singular if F 0 = 0.
If F is a singular LICM matrix-valued function, then by Theorem 3.1 the solution X = G of the Sonine equation F * X = I is an LICM function, and we can define the F-derivative by the formula D (F) v := F ∗ Dv for every absolutely continuous function v : [0, ∞[ → R N . If X = G (a singular LICM function) is the solution of the convolution equation (6) with R(t) = I, then the F-integral operator is defined by the formula I (F) w := G ∗ w. It follows from Theorem 3.2 that the function G is singular. We then have Theorem 4.1 Let F be a singular LICM matrix-valued function.
The following relations hold: (1) I (F) D (F) v = v − v(0) and (2) D (F) I (F) v = v. Proof. (1) By associativity of the convolution, I (F) D (F) v = (G ∗ F) ∗ Dv = 1 ∗ Dv = v − v(0), q.e.d.
(2) Let w := G * v. On account of the identity F * G = I, it remains to prove that w(0) = 0. G is an LICM function, hence it has the form (15) with µ satisfying equation (9) and |H(r)| ≤ 1. Hence |w(t)| is bounded by the product of two factors. For t ≤ 1 the second factor is bounded from above by a constant because v is assumed locally integrable. The first factor equals expression (24), ∫ [0,∞[ r −1 (1 − e −rt ) µ(dr). From the inequality e x − 1 ≤ x e x (x ≥ 0) follows the inequality 1 − e −x ≤ x. We shall apply this inequality for r ∈ [0, ∞[, noting that µ([0, 1]) < ∞ because of (9), together with the inequality 1 ≤ 2/(1 + r), valid for r ≤ 1. For r > 1 we note that 1/r ≤ 2/(1 + r). Hence the integrand of expression (24) tends pointwise to 0 as t → 0 and is dominated by 2 (1 + r) −1 , which is µ-integrable on account of (9); by the Lebesgue Dominated Convergence Theorem expression (24) tends to 0 as t → 0. Thus w(0) = 0 and the theorem has been proved.
The new derivative concept provides a new approach to modeling stress relaxation in anisotropic and non-linear viscoelastic media. A possible relaxation equation could have the form (25), in which K is a rank-2 tensor-valued function of two rank-2 tensor-valued arguments.
In view of equations (20) 2 and (18), equation (25) is equivalent to a corresponding convolution equation. In the scalar case Theorem 4.1 applies only to (weakly) singular CM kernels F such as t −α /Γ(1 − α).
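To make the scalar case concrete: if the F-derivative is taken in the Caputo-type form F ∗ Dv used above (our reading of the definition, since the displayed formula did not survive extraction), then for the kernel F(t) = t −α /Γ(1 − α) it becomes
\[
D^{(F)}v(t)=\int_0^t \frac{(t-s)^{-\alpha}}{\Gamma(1-\alpha)}\,v'(s)\,ds,
\qquad 0<\alpha<1,
\]
the classical Caputo fractional derivative of order α, while the F-integral G ∗ w with G(t) = t α−1 /Γ(α) is the Riemann–Liouville fractional integral of order α.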
Application to anisotropic linear viscoelasticity.
The relaxation modulus F jklm (t) and the creep function C jklm (t) in 3-dimensional linear viscoelasticity are defined by the two constitutive laws, assumed equivalent to each other, Σ jk = N jklm DE lm + F jklm * DE lm and E jk = C jklm * DΣ lm , j, k = 1, . . . , 3 (summation over repeated indices is assumed), where the symmetric tensors Σ kl and E kl denote the stress and strain tensors, respectively. We assume the usual symmetries (29). Equivalence of the two constitutive equations is ensured by the relation N jklm C lmrs + F jklm * C lmrs = t (δ jr δ ks + δ js δ kr ), j, k, r, s = 1, . . . , 3, t > 0 (30). The first term on the left-hand side represents a Newton viscosity component. It is assumed to satisfy the inequalities N jklm e jk e lm ≥ 0 for every symmetric tensor e kl .
Defining the index I, 1 ≤ I ≤ 6, by the usual identification of symmetric index pairs (j, k), the matrix-valued function F IJ is LICM if for every v ∈ R 6 the function t → v I v J F IJ (t) is LICM. The above statement is equivalent to e ij e kl F ijkl (t) being LICM for every symmetric 3 × 3 matrix e ij . We shall use the notation N, R, C for the 6 × 6 matrices N IJ , R IJ and C IJ . The inequalities N ≥ 0, N > 0 are short-hand for the inequalities N ijkl e ij e kl ≥ 0 for an arbitrary rank-2 tensor e kl and N ijkl e ij e kl > 0 for an arbitrary non-zero rank-2 tensor e kl , respectively.
In this notation equation (30) assumes the form (32). Theorem 5.1 1. If the matrix N is symmetric positive semi-definite and F is an LICM matrix-valued function satisfying the following conditions: ( * * 1 ) for every non-zero vector v ∈ R N the function v T F(t) v does not vanish identically, ( * * 2 ) F 0 := lim t→0 F(t) −1 exists, and the symmetry relations (29) 1 are satisfied, then equation (30) has a unique solution C ijkl (t). The function C ijkl (t) is a rank-4 tensor-valued Bernstein function satisfying (29).
2. If C ijkl (t) is a rank-4 tensor-valued Bernstein function such that ( * * 3 ) for every non-zero symmetric rank-2 tensor e kl the function C ijkl (t) e ij e kl is not constant, and C ijkl satisfies the symmetry relations (29) 2 , then equation (30) has a unique solution (N, F). The function F is a rank-4 tensor-valued LICM function satisfying (29), while N is a rank-4 tensor satisfying (29) and N ijkl e ij e kl ≥ 0 for every rank-2 tensor e kl .
The first part of the theorem follows from Corollary 3.1. The second part follows from Corollary 3.2. We now examine the reverse direction from creep tests to some parameters of the relaxation function.
Proof.
The derivative C ′ (t) is an LICM function and C ′ (t) = B + Q(t), where B = lim t→∞ C ′ (t), Q is LICM and lim t→∞ Q(t) = 0. Hence C(t) = A + B t + ∫ 0 t Q(s) ds, where the matrices A and B are symmetric and positive semi-definite. We note that A = C(0) and, by Proposition 2.3, B = lim t→∞ C ′ (t).
In view of Assumption ( * * * ) the matrix C̃(p) has an inverse for every p > 0. Since p C̃(p) is a Stieltjes function, its algebraic inverse p R̃(p) := [p C̃(p)] −1 is a CBF. Hence p R̃(p) admits the CBF integral representation (39), where N is symmetric positive semi-definite, µ is a Borel measure satisfying (9) and H(r) is a bounded symmetric function for µ-almost all r ∈ [0, ∞[. By Proposition 2.3, N = lim p→∞ R̃(p). On the other hand, we note that B + Q(0) = C ′ (0). Finally, we note that A + ∫ 0 ∞ Q(t) dt = lim t→∞ C(t) if the limit on the right-hand side exists.
Theorem 5.1 implies that setting the Newtonian viscosity coefficient N = 0 has profound consequences for the creep C(t): the creep curve either starts with a jump or rises vertically from the zero value. On the other hand, the value of N can be estimated from creep tests as the inverse of the initial creep rate.
In comparison with [3] the results of Section 5 have demonstrated the inseparability of the Newtonian component of viscoelasticity from viscoelastic memory effects.
Conclusions.
We have demonstrated the particular role of LICM kernels in two classes of convolution equations and the utility of the concepts of CBFs and Stieltjes functions in the study of existence problems for these equations. A particular class of convolution equations studied here is fundamental in linear viscoelasticity.
Another convolution equation has allowed us to define a class of anisotropic generalized fractional derivatives associated with matrix-valued LICM kernels. We have also shown that the kernel appearing in the definition of anisotropic GFD and the associated anisotropic fractional integrals should be singular at 0.
A A remark on the convolution algebra.
For our purposes it is important that the convolution algebra has a unity. The unity is not given by convolution with a function; hence the convolution algebra must include Borel measures. The convolution ρ * ν of two measures ρ and ν defined on [0, ∞[ is defined as the Borel measure λ satisfying the identity ∫ f dλ = ∫∫ f(s + t) ρ(ds) ν(dt) for every continuous function f with compact support. This definition is easily extended to matrix-valued measures. For our purposes the convolution algebra has to involve only Borel measures of the form u C + F(t) dt, where u is the unity, a measure defined in Section 2.
B Matrix-valued Stieltjes functions and Complete Bernstein functions.
We shall now use some results from Appendix B of [5]. A matrix-valued Stieltjes function Y(p) has the following integral representation: Y(p) = B + ∫ [0,∞[ (p + r) −1 H(r) µ(dr), (37) where B ∈ S + , µ is a Borel measure on [0, ∞[ satisfying (9) and H(r) is an S + -valued function defined and bounded µ-almost everywhere on [0, ∞[. Conversely, any matrix-valued function with the integral representation (37) is an S + -valued Stieltjes function.
Theorem B.1 Every matrix-valued Stieltjes function is the Laplace transform of an element of E.
Proof.
The Laplace transform of the S + -valued LICM function A(t) (equation (8)) is given by the equation Ã(p) = ∫ [0,∞[ (p + r) −1 H(r) µ(dr), where µ, H satisfy the same conditions as in (37). The second term on the right-hand side of equation (37) can be written as ∫ 0 ∞ e −pt [ ∫ [0,∞[ e −rt H(r) µ(dr) ] dt, where the Borel measure µ satisfies inequality (9). The inner integral, V(t) := ∫ [0,∞[ e −rt H(r) µ(dr), represents a general matrix-valued LICM function. Thus the second term of (37) is the Laplace transform of a general matrix-valued LICM function, and Y(p) is the Laplace transform of a general element of E.
An S + -valued CBF Z(p) has the following integral representation: Z(p) = B p + ∫ [0,∞[ p (p + r) −1 H(r) ν(dr), (39) where B ∈ S + , ν is a Borel measure on [0, ∞[ satisfying (9) and H(r) is an S + -valued function defined ν-almost everywhere on [0, ∞[. Conversely, any S + -valued function with the integral representation (39) is an S + -valued CBF.
It follows immediately that the function p −1 Z(p), where Z is an S + -valued CBF, is an S + -valued Stieltjes function.
We quote Lemma 1 in Appendix B of op. cit. in the form of the following theorem. Theorem B.2 If Z(p) is an S + -valued CBF and does not vanish identically, then Z(p) −1 is an S + -valued Stieltjes function.
Conversely, if Y(p) is an S + -valued Stieltjes function that does not vanish identically, then Y(p) −1 is a CBF.
"year": 2021,
"sha1": "94e1691476b5b1cb877921cc79ff37cc56091be9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2106.07946",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d880a1c3100c6b7dfac5b5c29cccf72247cc99fa",
"s2fieldsofstudy": [
"Mathematics",
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Properties and Biodegradability of Films Based on Cellulose and Cellulose Nanocrystals from Corn Cob in Mixture with Chitosan
The increase in consumer demand for more sustainable packaging materials represents an opportunity for biopolymer utilization as an alternative to reduce the environmental impact of plastics. Cellulose (C) and chitosan (CH) are attractive biopolymers for film production due to their high abundance, biodegradability and low toxicity. The objective of this work was to incorporate cellulose nanocrystals (NC) and C extracted from corn cobs in films combined with chitosan and to evaluate their properties and biodegradability. The physicochemical (water vapor barrier, moisture content, water solubility and color) and mechanical properties of the films were evaluated. Component interactions using Fourier-transform infrared (FTIR) spectroscopy, surface topography by means of atomic force microscopy (AFM), biodegradability utilizing a fungal mixture and compostability by burying film discs in compost were also determined. The C-NC-CH films, compared to the C-CH films, presented a lower moisture content (17.19 ± 1.11% and 20.07 ± 1.01% w/w, respectively) and water vapor permeability (× 10−12 g m−1 s−1 Pa−1: 1.05 ± 0.15 and 1.57 ± 0.10, respectively), associated with the NC addition. Significantly higher roughness (Rq = 4.90 ± 0.98 nm) was observed in films with added NC, suggesting decreased homogeneity. The biodegradability test showed larger fungal growth on C-CH films than on CH films (>60% and <10%, respectively) due to the antifungal properties of CH. C extracted from corn cobs proved a good option as an alternative packaging material, while the use of NC improved the luminosity and water barrier properties of C-CH films, promoting strong interactions due to hydrogen bonds.
Introduction
World plastics production was estimated at 359 million tons in 2018, with packaging materials as one of its main applications. The high accumulation of plastics in the environment and their elevated resistance to biodegradation have raised contamination concerns, encouraging the increase in consumer demand for more sustainable materials. Therefore, alternatives have been sought to produce biodegradable, renewable and environmentally friendly materials at a competitive cost [1,2]. The development and application of biopolymer-based films from agricultural byproducts or food waste has increased lately due to concerns about the overexploitation of limited natural resources such as fossil fuels and the high environmental impact of packaging made from nonbiodegradable materials.
Biopolymeric films meet the general characteristics of quality and appearance for food products but also for public health, which increases consumer interest [1].
The MC values reported here are similar to those reported for polysaccharide matrices (≈20% w/w) [19]. The addition of C and NC to the CH resulted in films with reduced MC, which may be due to interactions occurring through hydrogen bonds between the C and CH that extensively block the hydroxyl groups otherwise available to interact with the surrounding water molecules. Moreover, it has been reported that the use of NC in film-forming solutions restricts water access into the formed film matrix, due to its crystalline structure [20].
C-CH and C-NC-CH films' water solubility (Ws) were not significantly different (p < 0.05). The reports on the Ws value of CH-NC films are in agreement with the values shown here (21% by weight) [21]. The Ws values of the C-CH and C-NC-CH films were significantly lower than that of the CH films probably due to chemical crosslinking within the hydrophilic polymeric matrix. From Table 1, it can be seen that the addition of either C or NC to CH restricts the diffusion of water molecules within the polymeric structure, leading to reduced Ws of the films. It is known that NC increases the cohesion of the film components, which, in turn, reduces the hydroxyl groups available to interact with water, and based on this effect, it has been proposed that Ws is inversely proportional to the NC concentration [22].
The C-NC-CH film exhibited the lowest water vapor permeability (WVP) (1.05 ± 0.15 × 10 −12 g m −1 s −1 Pa −1 ), while the CH film showed the highest value (7.8 ± 0.20 × 10 −12 g m −1 s −1 Pa −1 ) ( Table 2). Films made from polysaccharides such as CH are generally quite hydrophilic, so they have poor water barrier properties [23]. The C-CH and C-NC-CH films presented WVP values similar to those reported by Cazón et al. [24] (6.6 × 10 −13 -1.6 × 10 −11 g m −1 s −1 Pa −1 ), but a significantly higher value was reported for CH (3.65 × 10 −10 g m −1 s −1 Pa −1 ) [25]. The films containing NC showed better water barrier properties than those based on C-CH. This may be associated with interactions between the polymeric matrix and the NC, which reduced the availability of the hydrophilic groups able to interact with the water molecules, resulting in a reduced water vapor transfer rate [24].
Mechanical Properties
The CH film showed the lowest thickness (474.90 ± 46.27 µm), which increased significantly upon the addition of C (Table 2). However, the C-CH and C-NC-CH films did not exhibit significantly different thicknesses (p < 0.05).
The tensile strength (TS) of the films did not show significant differences (p < 0.05) (Table 2). The films made from CH alone exhibited the highest TS value (1.093 ± 0.250 MPa), followed by the C-NC-CH film. The addition of C to CH films decreased TS to a greater extent than the addition of NC, suggesting that NC acts by promoting strong interactions and hydrogen bond formation with the polymeric matrix. In addition, glycerol helps to increase the dispersion and the interactions among the material components [26]. The C-NC-CH film's TS was relatively low, which was attributed to agglomerate formation when NC was added, because of induced stress points in the polymeric matrix [27].
Color Parameters
The parameters evaluated corresponded to the CIELAB color space: a* (coordinates red to green), b* (coordinates yellow to blue) and L* (luminosity). After NC incorporation, the a* and b* values of the films increased (Table 3), indicating that the C-NC-CH film is more reddish and yellowish, whereas the CH films showed the lowest b* value because of a greater blue component, in agreement with other reports [19]. A high lightness (L*) value indicates clearer and more transparent films. The C-CH films exhibited significantly lower L* values, associated with bond formation between the CH polymer matrix and other film compounds (Table 4). These results indicate that the CH film tends to be more transparent (Figure 1) [28]. Film colors may be described using other parameters, such as ∆E, which indicates the extent of the total color difference relative to the white reference plate. The ∆E of the films with C-CH and C-NC-CH was significantly higher than that of CH, corroborating that the CH film was the most transparent [25].
Films Topography
The CH film (Figure 2C) presented the smoothest surface (Ra = 0.95 ± 0.09 nm; Rq = 1.2 ± 0.07 nm), although not significantly different from the C-CH film (p < 0.05) (Figure 2A), whereas the NC-containing film (Figure 2B) was the roughest (Ra = 3.97 ± 0.85 nm; Rq = 4.90 ± 0.98 nm). Upon the addition of NC, aggregates were formed, which explains the rough surfaces of C-NC-CH films and their decreased homogeneity (Figure 2) [29]. Figure 3 shows micrographs of NC and CH-NC films. Figure 3A shows the particle size of NC obtained by AFM, where 15-20 particles were selected from different areas of the film, ranging between 74.63 and 128.85 nm. According to Boukouvala et al. [30], cellulose may be considered a nanocrystal if the crystalline particles range from 1 to 1000 nm. The observed particle size is the result of the type of extraction applied, and some authors consider that the size is proportional to the degree of polymerization. Hydrolysis carried out with sulfuric acid (64% v/v) allowed crystal formation [31]. Figure 3B shows the micrograph of a NC-CH film for an area of 5 × 5 µm, in which the formation of agglomerates can be observed. According to Börjesson et al. [31], in films made with NC and dried by evaporation in contact with air, NC agglomerates are induced during film formation.
Films Components Interactions Evaluated by FTIR
The films' FTIR spectrograms are shown in Figure 4. The spectrograms of the films produced from corn cob cellulose (C), cellulose-chitosan (C-CH), and cellulose-cellulose nanocrystals-chitosan (C-NC-CH) are shown in Figure 4A-C, respectively. The corn cob cellulose spectrogram shows the characteristic O-H stretching vibration peak at 3313 cm −1 ; a band due to C-H stretching vibration is located at 2933 cm −1 and a band due to the O-H bending of adsorbed water at 1639 cm −1 , while the band corresponding to -CH 2 scissoring is at 1422 cm −1 . A -CH 2 wagging band is observed at 1319 cm −1 , while the C-O-C stretching vibration band of the pyranose ring is located at 1031 cm −1 [32].
CH-C (Figure 4B) and C-NC-CH (Figure 4C) films show characteristic CH peaks at 1648, 1544 and 1411 cm −1 that correspond to the stretching of C=O (amide I), to N-H bending (amide II) and to HN-CO stretching (amide III), respectively. The absorption peak at 1030 cm −1 is due to C-O stretching; the peaks between 2920 and 2850 cm −1 are related to the amino group, and the peaks in the range of 3600-3200 cm −1 correspond to the O-H and N-H stretch bands. It is well-known that characteristic NC absorption bands appear at 3000-2800 cm −1 (C-H stretching of the CH 2 and CH 3 groups) and at 3455-3230 cm −1 (O-H stretching) [25,33]. All spectra showed peaks in the 850-640 cm −1 region that are assumed to be characteristic of the corn cob composition. When chitosan was added, the characteristic peaks of the amino group appeared at 2920 cm −1 and those of the amides in the region of 1650-1410 cm −1 , whereas the cellulose bands at 2933 and 1319 cm −1 disappeared (Figure 4A,B) [25].
The broad peak in the region between 3500 and 3300 cm −1 is attributed to the O-H hydrogen bond stretching vibration, indicating strong interactions between NC and CH through hydrogen bonds. The bands in the region from 1650 to 1410 cm −1 exhibited a higher intensity in the C-NC-CH film. In addition, the 1030 cm −1 peak also presented a higher intensity with the NC addition ( Figure 4B,C). These results indicate that the link between chitosan and NC occurs through hydrogen bonds [21,29].
Films Biodegradability
Biopolymer biodegradation, such as that of cellulose and chitosan, involves the hydrolytic or enzymatic breakage of their backbone structure, including the removal of hydrogen bonds, with the formation of CO 2 , CH 4 , water, biomass and other natural substances as the products [34]. Figure 5 shows that, following the 28 days of the biodegradability test, there was a large fungal growth in the positive control (Figure 5D) and in the C-CH film (Figure 5A) that was classified as 4, indicating complete coverage. The film with NC (Figure 5B) was classified as 3, because it displayed about 50% of its surface covered by fungi, whereas the CH film exhibited the least fungal growth and was classified as 1 (Figure 5C). The numbers below each image correspond to the growth scale described in Table 5. Based on these results, it was shown that the C-CH and C-NC-CH films and, to a lesser extent, the CH film, acted as a substrate for the fungi mixture used, which indicated that the three formulations may be classified as biodegradable materials. The delay in fungal growth on the surfaces of the films was attributed to the presence of CH, which possesses antifungal properties. CH acts by promoting the permeabilization of the fungal cell wall through electrostatic interactions between the positive charges of the protonated amino groups and the negative charge of the fungal cell wall. Permeabilization triggers the loss of intracellular material, leading to cell death and, thus, the inhibition of fungal growth [35,36].
From Figure 5, it can be seen that the C-NC-CH film ( Figure 5B) showed biodegradability to a lower extent than the C-CH film ( Figure 5A), attributed to the more hydrolytic resistance of the crystalline regions of NC [27].
The progress of biodegradability can be observed in Figure 6 at 5 d, 10 d, 15 d, 20 d, 25 d and 28 d after the inoculation. According to the standard, testing can be completed in less than 28 d for samples showing a growth index of two or more (10-30% of the area covered), and thus, the study could be stopped after 10 d for the cellulose film and the control (filter paper). However, no growth was observed on the surface of the chitosan film after 28 d of analysis, while the NC film presented >10% of its surface covered by fungi at 20 d.
Films Compostability
When the useful life of bioplastics ends, one of the most widely used forms of disposal is through composting. It can be observed from Table 4 that the films display a lower percentage of compostability than the filter paper (100%). Natural polymers such as cellulose and chitosan are often assumed to be biodegradable and environment friendly. However, biodegradable materials are not necessarily compostable, since the latter requires specific settings to break down, disintegrating into small fragments, and their biodegradation products do not represent damage to the environment in terms of ecotoxicity [37,38].
The films produced in this work did not show significant differences in their compostability (37.35-43.75% by weight). The difference in the compostability extent of the films relative to the filter paper (FP) used as the control is attributed to the cellulose content. The composts comprise a high microbial population that produces enzymes that degrade complex molecules such as cellulose [39]. It has been reported that, under composting conditions, CH takes approximately 70 d for 100% degradation, which is longer than the time taken by cellulose; the compostability values of the films were lower than that shown by filter paper [40].
Other studies have reported that films with NC (30% p/p) exhibit less than 5% compostability after 15 d, which is significantly lower than the value obtained in this work, associated with the low NC concentration incorporated into the C-NC-CH films [41]. In the case of CH films, there is a compostability report of only 15% after 12 d, and this difference may be attributed to the compostability conditions, which vary from one study to another [42].
Materials
Corn cobs (Zea mays, spp. mays) were supplied by the community of Texcatepec in the municipality of Chilcuautla (Hidalgo, México). Chitosan of medium molecular weight showing ≥90% deacetylation was acquired from Chemsavers (Bluefield, VA, USA). All other chemicals were of analytical grade and commercially supplied.
Cellulose Extraction
Cellulose extraction was carried out in corn cobs according to Melikoglu et al. [43], with some modifications. The corn cobs were dried in an oven (Binder, WTB DB 115, Tuttlingen, Germany) at 50 • C for 24 h; then, two grinding processes were used, the first involving a hammer mill using a 3-mm mesh size (Model Qvn, México), and the second was a coffee grinder (Krups, Mod. GX4100, Solingen, Germany). The final particle size was about 2.83 mm, obtained by sieving the powder using a No. 7 mesh (Tyler Standard, OH, USA).
Ground corn cob (3.3 g) was placed in 100 mL of 10% (w/v) NaOH and heated at 55 • C for 3 h with continuous stirring using a magnetic stirrer (Barnstead Thermolyne, Dubuque, IA, USA). Subsequently, the insoluble residue was filtered and washed with distilled water until neutral pH was achieved and dried in the oven (Binder) at 60 • C for 24 h; after completing this process, most lignin was removed. Subsequently, the dry sample was placed in 1% (v/v) NaClO and heated at 95 • C for 1 h with constant stirring, repeating this process twice. The sample was filtered and washed with distilled water until a neutral pH and dried at 60 • C (Binder oven) for 24 h.
Cellulose Nanocrystals Production
The NC were obtained by acid hydrolysis following Zhang et al. [44], with some modifications. Three grams of the extracted cellulose were placed in 60 mL of 64% (v/v) H 2 SO 4 at 45 • C for 1.5 h with constant stirring. The hydrolysis was stopped by adding ice-cold distilled water in a 1:10 ratio (suspension:H 2 O, v/v), and the mixture was stirred without heating for 10 min. Then, it was centrifuged (Eppendorf, 5810 R, Hamburg, Germany) at 4 • C and 4000× g for 5 min, repeating this process four times to remove the acid. The resulting insoluble residue was diluted with distilled water and dialyzed at room temperature for 72 h using a 12-kDa cut-off dialysis bag (Sigma-Aldrich, St. Louis, MO, USA). The nanocrystal suspension was sonicated (Branson, Mod. 5510, Danbury, CT, USA) at 25 • C for 10 min. The resulting solution was homogenized using an Ultra-Turrax (IKA, Mod. T25 Basic S1, Staufen, Germany) at 9500 rpm for 2 min. Finally, 1% (v/v) NaClO solution was added and kept under constant stirring for 1 h, centrifuged (4 • C, 4000× g, 5 min), repeated 4 times to obtain a neutral pH and, finally, the pellet was dried 60 • C (Binder oven) for 24 h.
Cellulose (C) and Cellulose Nanocrystals (NC) Yield
The cellulose yield was obtained following the methodology of Gupta et al. [45], with some modifications. An acid hydrolysis was performed by placing the extracted cellulose in 5% (v/v) sulfuric acid at 20 g L −1 and heating at 100 °C for 3 h, followed by quantification of the glucose concentration using a glucose kit (GAGO20, Merck, Darmstadt, Germany). The cellulose yield was obtained according to Equation (1): Y C = (C G × V / S) × 100 (1)
where Y C is the cellulose yield (%), C G is the glucose concentration (g L −1 ), V is the volume (L) and S is the added substrate (g). The NC were dried at 105 °C until a constant weight was reached. Subsequently, the NC yield was obtained from Equation (2): Y NC = (P NC / P C ) × 100 (2)
where Y NC is the NC yield in relation to the extracted cellulose, P C is the weight of the initial cellulose and P NC is the mass of nanocrystals obtained [46].
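As a quick numerical illustration of Equations (1) and (2) (the figures below are invented for the example, not measured values from this study):

```python
# Illustrative yield calculations for Equations (1) and (2); all inputs are
# hypothetical example values, not data from this study.
C_G = 1.8        # glucose concentration after hydrolysis, g/L
V   = 0.10       # hydrolysate volume, L
S   = 0.50       # substrate (extracted cellulose) placed in the hydrolysis, g
Y_C = (C_G * V / S) * 100.0
print(f"Cellulose yield Y_C = {Y_C:.1f} %")

P_C  = 3.0       # initial cellulose subjected to acid hydrolysis, g
P_NC = 0.75      # dry mass of nanocrystals recovered, g
Y_NC = (P_NC / P_C) * 100.0
print(f"Nanocrystal yield Y_NC = {Y_NC:.1f} %")
```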
Nanocellulose Crystals Size
Crystal sizes were determined using atomic force microscopy (AFM: Park NX10, Seoul, Korea), applying the non-contact method and using an aluminum-coated silicone tip PPP-FMR (Nanosensors, PointProbe, Neuchatel, Switzerland) with a resonance frequency of 286-362 kHz and a spring constant of 20-80 N m −1 . Samples of 0.5 × 0.5 cm were analyzed, and three 5 × 5 µm areas were scanned at a speed of 1 Hz with a resolution of 256 × 256 pixels [47]. Particle sizes were determined for 20 particles in each 5 µm × 5 µm area, and the measurements were made on 5 different films and at 3 different areas of each one.
Edible Films Production
Three different solutions were prepared: the first was a 1% (w/v) solution of chitosan (CH) in 0.5% (v/v) acetic acid, followed by heating at 90 • C for 1 h under constant stirring [48]. Subsequently, the cellulose and NC solutions were prepared according to Ghosh et al. [49], with some modifications. The second solution was prepared by dissolving 1.5% (v/v) of cellulose (C) in 0.5% (v/v) acetic acid with constant stirring for 1 h. Finally, 0.3% (w/v) NC and 1.2% (w/v) C were dissolved in 0.5% (v/v) acetic acid under constant stirring for 1 h. From these solutions, three different films were made by the casting method. The first film was made from the CH solution, and another film contained the mixture of C and CH in a 1:1 ratio (v/v), while the third film was produced from the mixture of C-NC:CH (1:1 ratio, v/v). To each filmogenic solution, 1% glycerol was added as a plasticizer, stirred for 90 min and then homogenized using the Ultra-Turrax, at 9500 rpm for 2 min, followed by drying at 25 • C for 48 h.
Films Characterization Moisture Content
The moisture content (MC) of the different films was determined according to Gutiérrez [37]. Squares of 2 × 2 cm were cut from each film and initially weighed using an analytical balance (Sartorius, BA 110 S, Bohemia, NY, USA) to obtain the wet weight (W W ); the dry weight (W D ) was then obtained by heating at 105 °C for 24 h and weighing each piece. The MC was calculated using Equation (3): MC (%) = ((W W − W D ) / W W ) × 100 (3)
Water Solubility
Square pieces of the films were cut (20 mm × 20 mm) and dried at 105 °C to a constant weight (W 0 ). The dried films were immersed in 50 mL of deionized water for 15 h. After this time, the solution was filtered using previously dried and weighed filter paper, and the undissolved films were dried at 105 °C for 24 h until constant weight (W 1 ). The water solubility (W S ) was calculated using Equation (4) [50]: W S (%) = ((W 0 − W 1 ) / W 0 ) × 100 (4)
Film Thickness
The films' thicknesses were measured using a micrometer (Mitutoyo, 293-344-30, Aurora, IL, USA) at ten different random positions along the surfaces of the films. Values are reported as the mean ± standard deviation of five replicates.
Water Vapor Permeability
Water vapor permeability (WVP) was evaluated according to Escamilla-García et al. [51]. Permeability cells with known cross-sectional areas (A) were used, in which the different films with known thicknesses (L) were fitted; the cells were then placed inside a desiccator at constant temperature. To generate a water vapor pressure difference (∆P), different saturated solutions were poured inside the cells (NaCl; RH = 75%) and inside the desiccator (KNO 3 ; RH = 95.6%). Weight variations (∆W) were recorded every 15 min until the cell reached a constant weight, and the time (t) was recorded. WVP was obtained using Equation (5): WVP = (∆W × L) / (A × t × ∆P) (5)
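A minimal script implementing Equation (5), with invented example readings; the saturation pressure of water at 25 °C (≈3169 Pa) is used here to convert the RH difference into ∆P:

```python
# Minimal WVP calculation after Equation (5); all numbers are illustrative.
P_SAT_25C = 3169.0                       # saturation vapor pressure of water at 25 C, Pa

delta_w = 0.005                          # total weight change of the cell, g
L = 500e-6                               # film thickness, m (~500 um)
A = 3.14e-4                              # film cross-sectional area, m^2 (2 cm diameter)
t = 24 * 3600.0                          # elapsed time, s
delta_p = (0.956 - 0.75) * P_SAT_25C     # vapor pressure difference, RH 95.6% vs 75%

wvp = (delta_w * L) / (A * t * delta_p)  # g m^-1 s^-1 Pa^-1
print(f"WVP = {wvp:.2e} g m^-1 s^-1 Pa^-1")
```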
Mechanical Properties
Tensile strength at the breaking point (TS) was determined using a texturometer (Brookfield, CT3, Middleborough, MA, USA). The films were cut into 80 × 25 mm rectangles, which were placed between two clamps, with an initial grip gap of 97.9 mm. During the measurements, an activation load of 4 N was established, and the films were stretched at a speed of 0.3 mm/s. The results were processed using TexturePro CT 1.6 software (Brookfield, Middleborough, MA, USA). The TS was calculated using Equation (6): TS = L / A (6)
where L is maximum load (N), and A is the cross-sectional area of the film (m 2 ).
Color
Films' colors were evaluated following Escamilla-García et al. [47] using a colorimeter (Konica Minolta, CR-400, Ramsey, NJ, USA) with a D65 light source at a viewing angle of 10°, standardized using a white reference plate (L* = 90.9, a* = 0.021 and b* = 0.0376). Films placed on this plate were measured at five different positions along their surfaces (center and outer parts), avoiding the edges. Color differences (∆E), measured as the magnitude of the vector resulting from the three components, luminosity difference (∆L), red-green chromaticity difference (∆a) and yellow-blue chromaticity difference (∆b), were calculated using Equation (7): ∆E = ((∆L) 2 + (∆a) 2 + (∆b) 2 ) 1/2 (7)
where ∆a = a i − a, ∆b = b i − b and ∆L = L i − L. The subscript i is the reference value of each parameter.
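Equation (7) is straightforward to implement; a small helper with the reference plate values from the text and a hypothetical film reading:

```python
import math

def delta_e(lab, lab_ref):
    """CIELAB color difference (Equation (7)) between a sample and the reference."""
    dL, da, db = (s - r for s, r in zip(lab, lab_ref))
    return math.sqrt(dL**2 + da**2 + db**2)

reference = (90.9, 0.021, 0.0376)        # white plate (L*, a*, b*) reported above
sample = (82.4, 1.3, 10.2)               # hypothetical film reading, for illustration
print(f"dE = {delta_e(sample, reference):.2f}")
```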
Films' Topography
The films' topographical characteristics were determined using an atomic force microscope (Park NX10, Korea), applying the no contact method and using an aluminumcoated silicone tip PPP-FMR (Nanosensors, Switzerland) with a resonance frequency of 286-362 kHz and a spring constant of 20-80 N m −1 . Samples of 0.5 × 0.5 cm were analyzed, and three 1 × 1 µm areas were scanned at a speed of 1 Hz with a resolution of 256 × 256 pixels [47]. The image analysis and the roughness parameters Ra and Rq were obtained using the Smartscan program (Czech Metrology Institute, CZE).
Fourier Transform Infrared Spectroscopy (FTIR)
FTIR spectra were obtained using an IR2 Module spectrophotometer (Horiba Jobin Ybon, Kyoto, Japan) equipped with a diamond ATR objective at a resolution of 4 cm −1 , a range of 400-4000 cm −1 and taking 32 scans per reading. The spectra were analyzed using the Spectragryph 1.1 program (Spectroscopy Ninja, USA).
Films' Biodegradability
Film squares (2.5 cm × 2.5 cm) were placed in Petri dishes containing salt-enriched agar, and 1 mL of the spore mixture was inoculated; the plates were then incubated at 30 °C and 85% RH for 28 d. During this time period, a visual inspection of fungal growth was carried out, and on day 28, a rating was given according to Table 5. Filter paper squares were used as a positive control. Additionally, a control plate without the spore mixture was included.
Films' Compostability
The films' compostability was evaluated using the method described by Gutiérrez [37], with some modifications. Disks of 0.6 cm in diameter were cut from each film, and the initial dry matter content of each disk (W i ) was determined by oven drying at 105 °C until a constant weight. The compost was prepared according to Sintim et al. [53], with a carbon-nitrogen ratio of 25-30:1 and a moisture content between 55 and 65% (w/w). The base compost used was commercially acquired (Compost-on, CDMX, Mexico). The mixture contained (w/w) broiler litter (28%), yard wastes (28%), manure (28%), animal bedding (14%) and fish carcasses (2%). The compost (500 g) was placed in plastic containers of 11.5 cm in diameter and 7.6 cm in height. At an approximate depth of 4.5 cm, the film disks and filter paper were placed; four film disks were buried in each container (C-CH, C-NC-CH and CH), placing a marker on the surface of the compost to indicate the position of the film disks. On days 4, 8 and 12 after the initial time (day 0), the final dry matter content of each disk was obtained (W f ). The tests were carried out at room temperature (25 ± 2 °C) and 60-70% relative humidity. The compostability percentage (CP) was calculated using Equation (8): CP (%) = ((W i − W f ) / W i ) × 100 (8)
Statistical Analysis
To obtain representative results, five different sections of three different films were tested. The results are the averages of these measurements ± standard deviation. Data were subjected to one-factor analysis of variance and analyzed by the comparison of means using Tukey's test (p < 0.05) using the SigmaPlot 14.0 program (Systat, Chicago, IL, USA).
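The statistical workflow described above (one-way ANOVA followed by Tukey's test at α = 0.05) can be reproduced along the following lines; the replicate values below are invented for the sketch, and it uses SciPy for the ANOVA and statsmodels for the pairwise comparisons:

```python
# Hedged sketch of the statistical analysis; all replicate values are made up.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ws = {                                   # hypothetical water-solubility replicates, %
    "CH":      [27.1, 26.5, 28.0, 27.4, 26.9],
    "C-CH":    [21.3, 20.8, 21.9, 21.1, 21.6],
    "C-NC-CH": [20.9, 21.5, 20.4, 21.2, 20.7],
}

f_stat, p_value = stats.f_oneway(*ws.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(ws.values()))
groups = np.repeat(list(ws.keys()), [len(v) for v in ws.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```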
Conclusions
The extraction of C and NC from corn cobs using alkaline treatment, bleaching and acid hydrolysis produced low yields. Atomic force microscopy showed that the films containing NC were the least homogeneous, which was associated with aggregate formation. Compared with the CH films, the C-CH and C-NC-CH films showed lower moisture contents, water solubility and luminosity, larger thicknesses and roughness values, and higher biodegradability. The addition of NC conferred better water barrier properties and increased the luminosity of the C-CH films, while the tensile strength and compostability were not significantly different from those of the other films. The biodegradability of the CH films was much lower than that shown by the C-CH and C-NC-CH films. FTIR spectroscopy showed that adding CH to C films decreased the stretching vibrations of the O-H groups, while strong interactions due to hydrogen bonds were revealed in the NC-CH films.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki and approved by the Bioethics Committee of the Faculty of Chemistry of the Autonomous University of Querétaro. Ethical review and approval were not applicable, because this study did not involve humans or animals.
Data Availability Statement: Not applicable.
"year": 2022,
"sha1": "facd3ec8231030e32070ceb1367b79fbdd907646",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/18/10560/pdf?version=1662976070",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b6c41af267dab76ca7f47b504a52bc05f6d532a0",
"s2fieldsofstudy": [
"Materials Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Measuring Competitive Intelligence Network and its Role on Business Performance
Changes and uncertainties have compelled a dramatic change in organizational fundamentals over the last two decades. Owing to internal and external pressures, businesses have been forced to closely track their environments in order to build awareness of opportunities and obstacles and to stay competitive. The aim of this study is to investigate the role of competitive intelligence in the business performance of small and medium businesses in Iraq's Kurdistan region. The researchers measured the direct impact on business performance at small and medium businesses using five competitive intelligence dimensions (extensiveness of network, third-party strategy, homophily, issue awareness, and promotion effort). Furthermore, the researchers used competitive intelligence as a mediator to quantify its impact on business performance, allowing the analysis to explore the indirect role of competitive intelligence. To investigate this role, the researchers used hierarchical multiple regression and the Sobel test; the five dimensions were thus used to assess both the direct and the indirect effect of competitive intelligence on business performance at small and medium businesses. Keywords— extensiveness network, Business Performance, Small and Medium enterprises.
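For readers unfamiliar with the mediation test named in the abstract, a minimal sketch of the Sobel test follows; the coefficients and standard errors below are placeholders, not the study's estimates:

```python
# Hedged sketch of the Sobel test for an indirect (mediated) effect.
import math
from scipy.stats import norm

def sobel_test(a, se_a, b, se_b):
    """Sobel z for an indirect effect a*b, with standard errors se_a and se_b."""
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = 2 * (1 - norm.cdf(abs(z)))       # two-tailed p-value
    return z, p

# a: predictor -> mediator (competitive intelligence); b: mediator -> performance
z, p = sobel_test(a=0.42, se_a=0.08, b=0.37, se_b=0.09)
print(f"Sobel z = {z:.2f}, p = {p:.4f}")
```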
INTRODUCTION
Knowledge is available for the preparation and execution of business operations. "Intelligence required for the creation of national and theater-level strategy, policy, and military plans and operations" is one definition of strategic intelligence. In other words, strategic intelligence is the data used to create and implement a policy, typically a grand plan or a national strategy, as defined by the government. The justification that determines a strategy, not the plan itself, is referred to as a policy (López-Robles et al. 2018). A strategy helps progress toward goals by suggesting ways to meet and/or orchestrate a large number of variables—variables that are frequently too numerous for the planner to anticipate and comprehend alone. Dealing with foreign countries necessitates in-depth expertise, which strategic intelligence provides. Without the insights of deep expertise—dependent on detailed knowledge of threats and rewards, enemies and allies in a foreign field—a strategy is nothing more than an abstract idea, or even a flight of fancy (Granados & Velez-Langs, 2018). The more strategic intelligence there is, the better, which is why the term "strategic intelligence" should not be so ambiguous (Kumar et al. 2020). Since different business organizations are governed by competitive advantage and the struggle for survival as a result of globalization, privatization, and the expansion of information technology and the digital economy, the obligations entailed in making decisions are great, so decision-making in business organizations should be based on scientific methodology, a large number of tools, and a large number of factors. In order to grow Erbil and its organizations (Koriyow & Karugu, 2018), strategic intelligence and organizational ingenuity are needed to develop strategies for cases of repetition, impasse, and misconceptions in our organizations, as well as to identify a structure to follow up on these and open competitive, accessible new horizons for business organizations. Because of modernity, researchers and academics began to understand the importance of strategic intelligence, as well as a number of definitions offered by writers and researchers who rushed to this form of intelligence, and diverse views of writers, scholars, and experts on the essence of strategic intelligence (Anwar, 2016). According to a comparative analysis of its dimensions and the various aspects it was based on, strategic intelligence is a function that deals with the business environment and demand, corporate identity and its sources, environmental variables, and social and technological forecasting, in order to achieve lasting and effective results, gain expertise, and mental wisdom (Xuefei et al. 2018). It also deftly characterizes leaders with a promising future (prospective, reflection, organization, collaboration, and motivational ability); because of the structure and process they employ, such leaders arrive at sound business resolutions (Abdullah et al. 2017). Strategic intelligence has also been defined as the informational process by which an organization listens to its situation in order to determine the steps and activities required to achieve its goals, and it intelligently describes the characteristics of leaders (prospective, systematic thinking, vision, partnership, ability to motivate employees) (Anwar & Balcioglu, 2016).
The versatility of this type of intelligence drew a lot of attention to this theme (it has been adopted by a variety of countries and government institutions, as well as public and private organizations, corporations, and individuals). It is now employed by organizations that are addressing new challenges and risks (new mechanisms and strategic techniques to predict and plan for emergencies before they occur), a development that followed the last decade of the twentieth century and the growth in intelligence requirements and potential (Gatibu & Kilika, 2017). Strategic intelligence was first used in military operations in the fourth century BC, when it was employed by one of the world's most prominent military strategists, "so that a wise Commander military dominance could do things beyond the skill of the Ordinary leaders are former information, beyond knowledge outputs of wits with highlighting their value" (Demir, et al. 2020). According to Hameed and Anwar (2018), this form of intelligence is an area with a long history, but it lacks a consistent meaning and agreement. This is not to disparage the work of many of its practitioners; rather, it points out that, considering the duration and scope of historical experience, there is much more work to be done in terms of exploring the limits and possibilities of this form of intelligence. The Central Intelligence Agency (CIA) was the first to use this style of intelligence in the implementation of arms control agreements and in supplying political decision makers with strategic intelligence for policy formulation, describing the Agency's intelligence cycle in the process of information acquisition, transfer, evaluation and strategy (Anwar & Abd Zebari, 2015; Anwar & Ghafoor, 2017). As organizations began to realize the importance of this mode of intelligence, and many metrics on the evolution of this intelligence appeared, several institutions in Europe and North America began to create strategic intelligence units within organizations to provide insight to policymakers and academic training programmes on this style of intelligence (Kumar et al. 2020). Many businesses are also developing strategic intelligence, which is generated by a group of experts who provide basic advice that serves as the basis for senior management decisions on topics such as integration with other organizations and new product development (Anwar & Surarchith, 2015). The decision-making process is difficult in circumstances where markets are experiencing substantial development in various ways (Andavar & Ali, 2020). There are new products and other withdrawn and emerging products, as well as an increase in the number of sellers or suppliers, and other factors that affect the decision in terms of products. This is why marketing decisions are complex, more so than any other decision taken by the Administration, and this complexity is largely dependent on the presence of many variables.
II. THEORETICAL BACKGROUND
The difficulty in estimating relationships between various variables that are susceptible to shifting and switching over different time periods is usually compounded by the dispersion of different sources of data and expertise, which often involve a high degree of risk for decision making (Anwar & Qadir, 2017). External variables can be forecasted even if the uncertainty levels are high, since most categories in a single project cannot be anticipated. The Finance Department cannot complete the budget and production plan without expected revenue numbers and a production table, or even make decisions, unless the marketing department's sales figures are given (Anwar & Climis, 2017). There is a fact that must be understood: all administrative activities of an initiative that are responsible for the marketing process are responsible for the reduction and access setup bilaterally. Perhaps the problem with marketing is that decisions are seldom taken without the involvement of others (Saddhono et al. 2019). Information about the customer, in particular, guides the marketing decision maker on the one hand, and marketing decision makers recognize this on the other. Another issue is that obtaining the necessary data and information is not always feasible or convenient, particularly if it comes at a high cost in terms of both time and money. The collection of appropriate data and information is important so that the outcome benefits the majority of parties, rather than producing hasty decisions about the consumer and how to satisfy his wants and wishes, though the decision should also take antitrust regulators, stock and bond holders, and other considerations into account (Espinet & Alsina, 2017). Many multifaceted concepts emerge from the various approaches to discussing the strategic decision, with researchers agreeing with many authors, like (Fanbing, 2017), who treat strategic decisions as decisions characterized by their scale, complexity, and multidimensional nature. A selection of strategic alternatives that represents the best way to achieve the organization's objectives is arrived at through fateful decisions involving areas related to growth and organization creation (Anwar & Louis, 2017). Strategic decisions are those that take into account internal and external problems and opportunities in order to promote long-term growth, which means strategic decisions have a broad impact on the organization (Ali & Anwar, 2021). The strategic decision affects all aspects of the organization, not just one, and it has a long-term rather than a short-term effect. It also reflects the President's commitment to achieving the organization's main goals (Zhiyin & Jiakun, 2017). Such decisions determine the direction of the organization's course in terms of the expected and unexpected factors that emerge in the world; in the end they form the Organization's actual objectives and help draw the map from which the Organization functions (López-Robles et al. 2020). As shown in the following chronological list of strategic marketing concepts, many of the meanings offered by writers and scholars on the subject of business performance were personal views shaped by each author's approach to the topic.
Strategic decisions made in the present carry great weight for future periods; they rest on the organization's intent, an understanding of how decision-making processes work, and the creative ability to respond to internal and external environmental changes (Shaitura et al. 2018). Moreover, because decisions made in the face of danger, inadequate information, and future uncertainty are often erroneous, such decisions require distinct capabilities and must be taken in light of a larger picture of the company's prospects (Zhiyin & Jiakun, 2017).
Strategic decisions concern the entity's management, future, and environment; they involve relatively stable, long-term, significant commitments of funds and are made at the top of the corporate pyramid (Anwar, 2016). Such resolutions often include long-term commitments and acquisitions, and any error could expose the organization to several threats (Hameed & Anwar, 2018). They are decisions made now, with a high degree of urgency in terms of their effect on the organization in future stages, and they focus on achieving the organization's goals by streamlining the decision-making process and briefing leadership on internal environment variables (Ali, 2020). They require mental flexibility and creative skill to identify the greatest share of the variables influencing operations and to anticipate unexpected threats and potentially influential environmental opportunities, and their results determine the organization's long-term success (Anwar & Balcioglu, 2016). Non-programmed decisions of this kind call for long-term goals and plans and for dealing with emerging problems that demand hard, strategic thinking from senior management (Caseiro & Coelho, 2019). They determine the organization's true goals and general direction in light of expected and uncertain external factors; they aid in drawing the map, distributing resources, and assessing the organization's viability (Abdullah et al. 2017); and they set the organization's long-term objectives through a long-range strategy and a medium-range plan. Before the onset of crises, non-traditional decisions involving several dimensions and planning issues of great complexity and depth, which cannot be addressed with an instant decision, determine how to respond to these problems (Anwar & Balcioglu, 2016). They consider opportunities, external threats, and internal capital in order to enhance the organization's long-term success, so their impact is broad and long lasting (Hameed & Anwar, 2018). Their value rests on foreseeing and anticipating the organization's future and projecting its needs in data, administrative, scientific, and technical capacities, and they require efficient professional and managerial leadership that is well aware of future prospects and decides with all surrounding factors in view, assisting the company in adjusting to the external environment through analysis and notification in order to achieve broad development and the desired outcomes (Anwar & Ghafoor, 2017). Senior management allocates resources to adapt to the organization's environment and competing agendas in order to ensure its long-term survival (Demir, et al. 2020). Strategic decisions represent the organization's fundamental direction, basing decisions on methodology and on simulation of predicted external and internal dynamics; the organization's policy option, which determines the long-term pattern because it deals with non-traditional formulas and potential employment, is an example of higher-level strategic decision-making.
III. METHODOLOGY
The research was conducted in small and medium enterprises (SMEs) in Erbil, specifically in private hospitals, and examined the perspective of competitive intelligence in these SMEs. To quantify the SMEs' business performance, the researchers used five competitive intelligence metrics: extensiveness network, third-party technique, homophily, change assessment, and promotion effort. Furthermore, the researchers used competitive intelligence as a mediator for all five independent variables to assess business performance in the SMEs, adopting a quantitative analysis approach. A total of 130 administrative staff members from private hospitals were given the questionnaire at random, and 112 people from various private hospitals in Iraq's Kurdistan province participated. The questionnaire contained 59 items, all measured on a five-point Likert scale ranging from 1 = Strongly Disagree to 5 = Strongly Agree.

IV. RESULTS

The results of the regression analysis are presented in Table 4. For model (1), the direct relationship between marketing intelligence and business performance, the value of B = .602, the value of Beta = .606, and the P-value = .000, showing a strong and positive relationship between marketing intelligence and business performance. Model (2), which used multiple regression analysis with marketing intelligence as the independent factor and competitive intelligence as a mediator of business performance, yielded B = .611 and Beta = .617 with P-value = .001 for the indirect relationship between marketing intelligence and business performance. The results showed that marketing intelligence and business performance have a positive and significant direct and indirect relationship, and that competitive intelligence plays a positive and significant mediating role between them.

For model (1), the direct relationship between third-party technique and business performance, B = .671 and Beta = .679 with P-value = .000, indicating a significant and positive relationship between third-party technique and business performance. For model (2), which applied multiple regression analysis with third-party technique as the independent factor and competitive intelligence as a mediator of business performance, B = .677 and Beta = .681 with P-value = .001 for the indirect relationship, while B = .639 and Beta = .643 with P-value = .000 for the mediation of competitive intelligence. The findings showed a positive and significant direct and indirect relationship between third-party technique and business performance, with competitive intelligence playing a positive and significant mediating role.

For model (1), the direct relationship between change assessment and business performance, B = .611 and Beta = .617 with P-value = .000, indicating a significant and positive relationship between change assessment and business performance.
For model (2), which applied multiple regression analysis with change assessment as the independent factor and competitive intelligence as a mediator of business performance, B = .622 and Beta = .629 with P-value = .001 for the indirect relationship between change assessment and business performance, while B = .633 and Beta = .639 with P-value = .000 for the mediation of competitive intelligence. The findings showed a positive and significant direct and indirect relationship between change assessment and business performance, with competitive intelligence playing a positive and significant mediating role.

For the direct relationship between issue knowledge and business performance in model (1), the P-value = .000 indicated a significant and positive relationship. For model (2), which applied multiple regression analysis with issue knowledge as the independent factor and competitive intelligence as a mediator of business performance, B = .591 and Beta = .595 with P-value = .001 for the indirect relationship, while B = .644 and Beta = .649 with P-value = .000 for the mediation of competitive intelligence. The findings showed a positive and significant direct and indirect relationship between issue knowledge and business performance, with competitive intelligence playing a positive and significant mediating role.

For the direct relationship between promotion effort and business performance in model (1), the P-value = .000 indicated a significant and positive relationship. For model (2), which applied multiple regression analysis with promotion effort as the independent factor and competitive intelligence as a mediator of business performance, B = .691 and Beta = .695 with P-value = .001 for the indirect relationship, while B = .642 and Beta = .647 with P-value = .000 for the mediation of competitive intelligence. The findings showed a positive and significant direct and indirect relationship between promotion effort and business performance, with competitive intelligence playing a positive and significant mediating role.

Table 13 presents the findings of the Sobel test for the mediation analysis. The results demonstrate a significant and positive direct relationship between promotion effort and business performance (P-value = .000) and a significant indirect relationship (P-value = .002). These results confirm a positive and significant direct and indirect relationship between promotion effort and business performance, with competitive intelligence again playing a positive and significant mediating role.
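The mediation analysis reported above combines path regressions with a Sobel test. As a hedged illustration of how that test is computed (the study does not publish its analysis code, so the variable names and the use of statsmodels here are assumptions), a minimal sketch in Python might look like this:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def sobel_test(x, m, y):
    """Sobel z-test for the indirect effect x -> m -> y.

    x: independent variable (e.g., promotion effort scores),
    m: mediator (competitive intelligence), y: outcome (business performance).
    """
    # Path a: mediator regressed on the independent variable.
    fit_a = sm.OLS(m, sm.add_constant(x)).fit()
    a, se_a = fit_a.params[1], fit_a.bse[1]
    # Path b: outcome regressed on the mediator, controlling for x.
    fit_b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()
    b, se_b = fit_b.params[2], fit_b.bse[2]
    # Sobel statistic and two-tailed p-value for the indirect effect a*b.
    z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return z, 2 * norm.sf(abs(z))
```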
V. CONCLUSIONS
The diagnosis of competitive intelligence and the clarification of its outcomes, according to the high-level view of respondents, affect marketing strategy decisions in Erbil's small and medium enterprises. The results showed a high degree of foresight: looking ahead provides potential marketing strategy systems and diagnoses additional opportunities before other small and medium businesses catch up. These findings support a high level of capability in effective small and medium company approaches to opportunity diagnosis. The descriptive analysis showed a strong willingness to persuade employees of small and medium-sized businesses to believe in a future vision and in the ability to predict the future, which aids sound business decisions; this supports a high level of foresight, insight, and strategy in small and medium businesses. According to the study, a higher-level thought methodology was achieved, along with systems that reflect small and medium business management's eagerness to devote time to gathering information from various sources. The investigation found that small and medium-sized business owners set aside time to collect data from various sources, and the findings revealed a high degree of potential foresight. Small and medium businesses pursue marketing knowledge for strategic decisions and implement concrete methods, making marketing strategy decisions easier to absorb and use. The skilled manager can use these findings to examine the patterns and factors behind high-level performance in small and medium businesses. The findings also showed a high level of motivation, indicating a willingness to pay workers to carry out small and medium firm visions and perceptions, as well as a higher level of motivation in other fields. As a business intelligence component, the findings revealed a high degree of innovation, indicating a willingness to provide creative solutions to small and medium-sized businesses' marketing issues and a continuing search for new ways to offer their services. The marketing campaign choices available to small and medium-sized company management, along with forecasting the future and providing service to small and medium-sized businesses, also yielded high-level findings.
To investigate the role of competitive intelligence in determining market performance at small and medium businesses in Iraq's Kurdistan region, the researchers used hierarchical multiple regression and the Sobel test. The researchers measured the direct impact on market performance at small and medium businesses using five competitive intelligence dimensions (extensiveness network, third-party technique, homophily, issue knowledge, and promotion effort). Furthermore, the researchers used competitive intelligence as a mediator to assess its impact on business success, allowing the study to explore the indirect role of competitive intelligence.
As for the first research hypothesis, the results show a direct relationship between extensiveness network and business performance, with a P-value of .000 indicating a substantial and positive direct relationship between extensiveness network and business performance. The indirect relationship between extensiveness network and business performance also has a P-value of .000. The findings thus revealed a positive and significant direct and indirect relationship between extensiveness network and business performance, as well as a positive and significant mediating role for competitive intelligence between them. In terms of the second research hypothesis, the results show a direct link between third-party technique and business performance, with a P-value of .000 indicating an important and positive link between third-party technique and business performance. The indirect relationship between third-party technique and business performance has a P-value of .001. The findings revealed that third-party technique and business performance have a positive and meaningful direct and indirect relationship, and competitive intelligence plays a constructive and significant role in mediating the relationship between third-party technique and business success. In terms of the third research hypothesis, the results show a direct link between change assessment and business performance, with a P-value of .000 indicating an important and positive link between change assessment and business performance. The indirect relationship between change assessment and business performance has a P-value of .000. The findings revealed that change assessment and business performance have a positive and meaningful direct and indirect relationship, and competitive intelligence plays a constructive and important mediating role between homophily and business success. In terms of the fourth research hypothesis, the results show a direct relationship between issue knowledge and business performance, with a P-value of .002 indicating a strong and positive direct relationship between issue knowledge and business performance. The indirect relationship between issue knowledge and business performance has a P-value of .003. The findings revealed a positive and significant direct and indirect relationship between issue knowledge and business performance, as well as a positive and significant mediating role for competitive intelligence between them. The outcome of the fifth research hypothesis shows a direct relationship between promotion effort and business performance, with a P-value of .000 indicating an important and positive direct relationship between promotion effort and business performance. The indirect relationship between promotion effort and business performance has a P-value of .002. The findings revealed a positive and significant direct and indirect relationship between promotion effort and business performance, as well as a positive and significant mediating role for competitive intelligence between promotion effort and business performance. | 2021-06-28T19:28:31.313Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "f455b847a253378b376954ea984d0d81c1a3a267",
"oa_license": "CCBY",
"oa_url": "https://ijels.com/upload_document/issue_files/50IJELS-104202131-Measuring.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f455b847a253378b376954ea984d0d81c1a3a267",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
233425669 | pes2o/s2orc | v3-fos-license | Risk for Significant Kidney Function Decline After Acute Kidney Injury in Adults With Hematologic Malignancy
Introduction Acute kidney injury (AKI) affects 30% of adults hospitalized with hematologic malignancy. Little is known about the long-term impact on kidney outcomes in this population despite the close relationship between kidney function and malignancy treatment eligibility. The purpose of this population-based cohort study was to determine the effect of AKI on kidney function in the year following a new diagnosis of acute leukemia or lymphoma. Methods Participants were adults hospitalized within 3 weeks of malignancy diagnosis. Baseline kidney function was determined and AKI diagnosed using standardized criteria. Cox proportional hazard modeling examined the relationship between AKI and a ≥30% decline in estimated glomerular filtration rate (eGFR) from baseline in the 1 year following hospitalization as the primary endpoint. Results AKI occurred in 33% of 1064 participants, with 70% of episodes occurring within 48 hours of hospitalization, and significantly increased risk for a ≥ 30% decline in eGFR (hazard ratio [HR] 2.7, 95% confidence interval [CI] 2.2–3.5) and incident chronic kidney disease (HR 2.2, 95% CI 1.7–2.8). AKI remained a significant predictor of eGFR decline in subgroup and multivariable analyses (adjusted HR 1.9, 95% CI 1.4–2.7). A ≥ 30% decline in eGFR increased the risk for death within 1 year in participants with AKI (HR 2.1, 95% CI 1.3–3.3). Conclusion Results aid in identifying individuals at highest risk for poor outcomes and highlight the need for research involving interventions that preserve kidney function from the time of initial hospitalization with a hematologic malignancy into the postdischarge period.
AKI is a major barrier to positive health outcomes and is known to affect at least 30% of adults hospitalized with newly diagnosed hematologic malignancies, often as a result of hypoperfusion, tumor lysis syndrome, and chemotherapy-induced nephrotoxicity. [1][2][3][4] Short-term consequences of AKI in this setting include increased hospital mortality and a tripling of lengths of stay and cost. 2,5,6 Any resultant permanent loss of kidney function may jeopardize eligibility for optimal cancer treatments and, ultimately, survival.
Less is known about the long-term impact of AKI in patients with hematologic malignancy, despite a growing interest in post-hospital consequences for AKI survivors. Individuals with hematologic malignancy comprise a small portion of study samples in large population studies and the degree to which these findings are generalizable in this unique sub-population is unknown. Conversely, small sample sizes and limited durations of follow-up from published studies of patients with AKI and hematologic malignancy make it difficult to appreciate the long-term effects on kidney function and health outcomes. A malignancy diagnosis (of any type) was associated with a 1.9-fold risk of rehospitalization with recurrent AKI in a cohort of adults surviving an initial AKI episode and not discharged on dialysis. 7 Results from a study of patients admitted to the intensive care unit with select, high-grade hematologic malignancies showed that those with AKI were 0.58 times as likely to be alive in complete remission at 6 months, and chemotherapy modifications were required in 15% of patients as a result of the AKI episode. 3 Critically ill patients included in this study were those with only the most aggressive hematologic malignancies and previously normal kidney function. Although existing findings are notable, the current literature contains key knowledge gaps. It is essential to further explore the relationship between AKI and long-term kidney outcomes in a more representative sample of hematologic malignancy diagnoses, in patients with pre-existing comorbidities, and in those outside of the intensive care unit. This will allow for identification of individuals at high risk for poor outcomes and inform interventions to protect and preserve kidney function, a key determinant of eligibility for preferred chemotherapy, hematopoietic cell transplantation, novel immunotherapy, and clinical trials. The purpose of this investigation was to determine the effect of AKI on kidney function in the year following a diagnosis of acute leukemia or lymphoma. Recognizing the unique differences between malignancies and treatment approaches, participants with acute leukemia and lymphoma were evaluated in aggregate as well as individually.
Study Design and Sample
A population-based cohort study was performed from 2005 to 2017 using the Rochester Epidemiology Project, a medical records linkage system that unifies records from multiple medical care providers in Olmsted County, Minnesota and the surrounding 27 counties. The Rochester Epidemiology Project includes demographic data and comprehensive information related to diagnoses, hospital admissions and outpatient follow-up care. 8 Residents have similar age- and sex-specific mortality rates to those found in Minnesota and the United States. 9 We identified adults (≥18 years of age) with newly diagnosed or relapsed acute leukemia or lymphoma who were hospitalized within 3 weeks of the diagnosis. Diagnoses were identified using International Classification of Diseases, Ninth and Tenth Revision diagnosis codes recorded within 60 days of pathology or hematopathology studies (Supplementary Table S1). The requirement of corresponding pathology studies was implemented in order to enhance the validity of the diagnosis codes by decreasing the likelihood of misclassification bias. Participants were included only once, at the time of the index hospitalization within this time frame. Those diagnosed in 2005 were evaluated for a malignancy diagnosis assigned in the preceding year to ensure no prevalent cases were included. Relapsed disease was identified if a malignancy diagnosis code that matched the code assigned at cohort entry was recorded more than 1 year before study inclusion. Those with pre-existing end-stage kidney disease requiring dialysis were excluded.
Identification of AKI

Data were electronically abstracted from the Rochester Epidemiology Project Database. Pertinent demographic data included age at diagnosis, anthropometrics, hematologic disease subtype, and a history of cardiovascular or chronic kidney disease, diabetes, or hypertension. Comorbidities were identified using International Classification of Diseases, Ninth and Tenth Revision, diagnosis codes. Diagnosis with one of these comorbidities required 2 codes separated by at least 30 days within 3 years of hospitalization in order to reduce the potential for misclassification bias. A baseline serum creatinine (SCr) was established for each member of the cohort using the median outpatient SCr values from the period of 6 months up to 7 days prior to hospitalization or, if unavailable, estimated using the MDRD formula. 10 This SCr was also used to determine baseline eGFR prior to study entry, calculated using the Chronic Kidney Disease Epidemiology Collaborative equation. Routine laboratory data collected from the hospital encounter included SCr measurements, which were used to identify and stage AKI per the KDIGO criteria. 11 AKI was classified as community- or hospital-acquired based on time of onset (within 48 hours of hospitalization for community-acquired AKI, >48 hours for hospital-acquired AKI). 12 Referent participants were free of any stage AKI throughout the duration of their hospitalization.
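To make the case-identification step concrete, the following is a minimal sketch of KDIGO creatinine-based AKI staging on a series of inpatient SCr values. It is an illustration only: the urine-output and renal-replacement-therapy criteria of KDIGO, and the study's exact data handling, are omitted, and the function name is hypothetical.

```python
import pandas as pd

def stage_aki(scr_series: pd.Series, baseline: float) -> int:
    """Return the maximum KDIGO AKI stage (0 = no AKI) for one hospitalization.

    scr_series: serum creatinine values (mg/dL) indexed by timestamp.
    baseline: baseline SCr (mg/dL), e.g., the median outpatient value.
    Simplified sketch: urine-output and RRT criteria are not applied.
    """
    stage = 0
    for t, scr in scr_series.items():
        ratio = scr / baseline
        # Absolute rise >= 0.3 mg/dL within any 48-hour window (stage 1 trigger).
        window = scr_series[(scr_series.index > t - pd.Timedelta("48h"))
                            & (scr_series.index <= t)]
        abs_rise = scr - window.min() if len(window) else 0.0
        if ratio >= 3.0 or scr >= 4.0:
            stage = max(stage, 3)
        elif ratio >= 2.0:
            stage = max(stage, 2)
        elif ratio >= 1.5 or abs_rise >= 0.3:
            stage = max(stage, 1)
    return stage
```

Community- versus hospital-acquired AKI could then be assigned by checking whether the first qualifying measurement falls within 48 hours of admission.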
Outcome Assessment
The primary outcome of interest was a ≥30% decline in eGFR from baseline in the 1 year following hospitalization. The threshold of ≥30% decline was selected because it has been strongly associated with increased risk for end-stage kidney disease and all-cause mortality in the general population. 13 As CKD requires persistent kidney dysfunction for a minimum of 90 days following an episode of AKI, 11 the outcome was assessed between 90 and 365 days after the last day of hospitalization using serially collected SCr data to calculate eGFR. A minimum of 1 SCr value, obtained in the outpatient setting, was required for inclusion in the analysis. Participants with a baseline eGFR ≥60 ml/min per 1.73 m² were considered to have normal baseline kidney function and were evaluated for new CKD stage 3 or higher, defined as eGFR <60 ml/min per 1.73 m², during the follow-up period. 11 Review of medical records for follow-up was conducted for up to 12 months or until death or loss of follow-up, whichever occurred first.
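As a concrete illustration of this outcome definition, the sketch below computes eGFR from SCr and flags the primary endpoint. The 2009 CKD-EPI creatinine equation is assumed here; the paper states only that the Chronic Kidney Disease Epidemiology Collaborative equation was used.

```python
def ckd_epi_2009(scr: float, age: float, female: bool, black: bool = False) -> float:
    """CKD-EPI 2009 eGFR (ml/min per 1.73 m^2) from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr / kappa, 1.0) ** alpha
            * max(scr / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

def declined_30pct(baseline_egfr: float, followup_egfrs: list[float]) -> bool:
    """Primary endpoint: any follow-up eGFR (days 90-365) >= 30% below baseline."""
    return any(e <= 0.7 * baseline_egfr for e in followup_egfrs)
```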
Statistical Analysis
Descriptive statistics were used to report demographics for the entire cohort and for the groups with and without AKI. Univariate Cox proportional hazard modeling estimated the HR for a ≥30% decline in eGFR from baseline in the aggregate cohort and in subgroups based on malignancy type (acute leukemia or lymphoma). We also compared the HR for a ≥30% decline in eGFR from baseline according to maximum AKI stage. Multivariable Cox proportional hazard modeling included the following clinically relevant variables: age, sex, year of diagnosis, eGFR at baseline and hospital dismissal, malignancy subtype (including acute myeloid leukemia, acute lymphoblastic leukemia, other leukemia, non-Hodgkin lymphoma, and other lymphoma), relapsed disease, comorbidities (including cardiovascular or chronic kidney disease, diabetes, and hypertension), and hospital readmission, hematopoietic cell transplant, or subsequent AKI episode during the follow-up period. The model was checked for nonproportionality to ensure no violations were found. Cumulative incidence curves, adjusted for competing risk of death, described the time to a ≥30% decline in eGFR from baseline in those with and without AKI and by AKI stage. To explore the association between experiencing an episode of AKI and/or a ≥30% decline in eGFR from baseline and survival, Cox proportional hazard modeling was performed. This modeling used a 4-level categorical variable for the combinations of having an AKI and/or ≥30% decline in eGFR as a time-dependent covariate for the prediction of survival. All HRs are reported with their corresponding 95% CIs using a 2-tailed alpha level of <0.05 to indicate statistical significance.
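For readers unfamiliar with this class of model, a minimal sketch of a univariate Cox proportional hazards fit on simulated data (not the study cohort) is shown below, using the Python lifelines package; the covariates and effect size are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "aki": rng.integers(0, 2, n),            # any-stage AKI at the index stay
    "age": rng.normal(60, 12, n),
    "baseline_egfr": rng.normal(80, 15, n),
})
# Toy event times; in the study these come from serial outpatient SCr values.
hazard = np.exp(0.7 * df["aki"])
df["duration"] = rng.exponential(365 / hazard)        # days to >=30% eGFR decline
df["event"] = (df["duration"] <= 365).astype(int)     # censor at one year
df["duration"] = df["duration"].clip(upper=365)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()          # hazard ratios with 95% confidence intervals
cph.check_assumptions(df)    # test the proportional-hazards assumption
```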
Participants
There were 1143 individuals with newly diagnosed or relapsed acute leukemia or lymphoma during the study time frame, of which 1069 were included (Figure 1) and are described in Table 1. Relapsed disease was present in only 33 (3%) cohort participants. Demographics and baseline characteristics, including baseline kidney function, were similar between those with acute leukemia and those with lymphoma (Supplementary Tables S2 and S3).
Outcomes
Of the 1069 participants, 755 (71%) had adequate follow-up for assessment of the primary outcome of a ≥30% decline in eGFR. Outcome assessment was possible in 64% of participants with AKI and 74% of referent participants. The primary identifiable reason for insufficient follow-up was death (n = 239). Loss of follow-up occurred in 35% and 25% of participants with acute leukemia and lymphoma, respectively, with death as the reason in 84% of the individuals with acute leukemia and 68% of the individuals with lymphoma. The median (IQR) number of follow-up SCr measurements between 90 and 365 days following hospitalization was 22 in participants with AKI and 11 in referent participants without AKI during index hospitalization. Cumulative incidence of a ≥30% decline in eGFR from baseline at 1 year was 68% (95% CI 57-76) for those with any stage AKI and 39% (95% CI 32-45) for referent participants without AKI during the index hospitalization, with a corresponding HR of 2.7 (95% CI 2.2-3.5; P < 0.001) in univariate analysis. Subgroup analyses by malignancy subtype and AKI stage revealed similar results (Table 3). Figure 2 depicts the time to a ≥30% decline in eGFR, adjusted for the competing risk of death. Table 4 describes the relationship between a ≥30% decline in eGFR and risk for death within 1 year.
The relationship between AKI during hospitalization and a ≥30% decline in eGFR from baseline remained significant in multivariable analysis, with an HR of 1.9 (95% CI 1.4-2.7; P = 0.001) (Figure 3). Other factors that were associated with a ≥30% decline in eGFR included age, eGFR at baseline and at hospital dismissal, an AKI episode or hematopoietic cell transplant during the follow-up period after index hospitalization, and select malignancy diagnoses.
DISCUSSION
AKI affects approximately 1 of every 3 individuals hospitalized with newly diagnosed or relapsed acute leukemia or lymphoma. Though the bulk of participants experienced stage 1 AKI, 35% of AKI episodes were stage 2 or 3 severity. These rates of AKI are higher than those reported in the general population. 14 Frequency of AKI appeared higher in the subgroup with acute leukemia (45%-50% of those with acute myeloid or lymphoblastic leukemia) compared to the patients with lymphoma. This is possibly due to longer hospitalization and duration of neutropenia, higher infection rates, or increased exposure to nephrotoxins such as antibiotics in patients with leukemia.
The general population of AKI survivors is at increased risk for hospital readmission, cardiovascular events, CKD, and death, 7,15-18 but it is unclear how well these data extrapolate to patients with a diagnosis of a hematologic malignancy. 16,18 Our data showed that in a group of patients with new acute leukemia or lymphoma, an episode of AKI was associated with a 1.9-fold higher risk for significant eGFR loss and increased the risk for incident CKD among those with normal kidney function at baseline. In the overall cohort, risk for eGFR loss remained constant regardless of the stage of AKI. Although stage 2 and 3 AKI have been linked to higher risk for end-stage kidney disease and death relative to stage 1 AKI, other studies have demonstrated that the risk for outcomes such as recurrent AKI or CKD was not significantly affected by AKI stage. 7,19 These findings underscore the potential significance of even small changes in kidney function. In the present study, incident CKD among patients with a new hematologic malignancy occurred at higher rates than in all-comers by 1 year (50% vs. 30%). 12,20 This is consistent with population data indicating CKD is more prevalent in those with malignancy relative to the general population, 21 possibly because of higher frequency of acute illness and hospitalization or exposure to nephrotoxic chemotherapy. An increase in comorbidity burden such as with new CKD is especially consequential for patients with cancer and has implications for costs, care decisions, and quality of life. [22][23][24] Transplant programs and clinical trials for novel immunotherapy or chemotherapeutics often deny entry to patients with an SCr level > 1.5 mg/dL or an eGFR < 60 mL/min per 1.73 m². Preferential regimens for leukemia and lymphoma often include renally eliminated and nephrotoxic agents such as methotrexate, cyclophosphamide, and cytarabine. Development of long-term kidney dysfunction may result in these drugs being omitted from treatment programs, doses being reduced, or second- or third-line regimens being selected, all of which could adversely affect the efficacy of cancer treatment. 3 Our analysis showed that the risk for death was significantly increased in those participants with a decline in eGFR ≥30% from baseline within 1 year of diagnosis. The association was strongest among those who did not experience AKI at the outset of diagnosis (the primary variable of interest in our study), but subsequently experienced a ≥30% decline in eGFR from baseline. This may have been the result of AKI that occurred later during the malignancy disease course, which is not uncommon. 25 Our multivariable analyses also revealed a significant association between AKI episodes occurring after the index hospitalization and a ≥30% decline in eGFR from baseline. Taken together, these findings may indicate that AKI that occurs later during a patient's treatment course has a greater effect on outcomes, for the reasons stated above. These data, as well as our primary findings of a doubling of risk for significant eGFR loss and CKD in a short, 1-year time frame, highlight the overall impact of AKI and the need for interventions that preserve kidney function from the outset of hematologic malignancy diagnosis.
Despite observing a higher proportion of patients with leukemia affected by AKI relative to those with lymphoma, the relationship between AKI and a ≥30% decline in eGFR from baseline was stronger in the subgroup with lymphoma. This may be due to increased exposure to nephrotoxic chemotherapy (e.g., methotrexate) in the lymphoma subgroup as part of the recommended standard-of-care treatment regimens during the study time period. [26][27][28] Experiencing AKI at the outset of a lymphoma diagnosis may reduce renal reserve, making individuals more susceptible to permanent eGFR loss with successive exposure to nephrotoxins. 29,30 It is also possible the lymphoma subgroup experienced subsequent episodes of AKI (after the index hospitalization) more frequently than the group with acute leukemia. Despite these differences between the subgroups, the relationship between AKI and eGFR loss and incident CKD remained significant in all analyses. The majority of AKI episodes were evident within the first 48 hours of hospitalization. This important finding is consistent with our prior research but previously unrecognized in published literature. 1 In such cases of community-acquired AKI, kidney damage likely began early in the course of illness or perhaps preceded hospitalization. 12 This may jeopardize individuals' candidacy for first-line therapies and aggressive malignancy treatment from the outset of diagnosis. 3,22 Higher baseline eGFR was also independently associated with significant eGFR loss. It may be that individuals with preserved eGFR more commonly undergo aggressive treatment and subsequently endure multiple kidney function insults. The pattern of AKI presentation and patients affected underscore the importance of efforts aimed at facilitating recovery from AKI, regardless of baseline kidney function, in addition to usual preventative measures. More research is needed to understand the patterns of recovery from AKI in this population. In our multivariable model, lower eGFR at hospital dismissal was independently associated with sustained loss of eGFR. This finding has also been reported elsewhere 16,18,31,32 and suggests that patients with AKI in whom kidney function does not recover by hospital dismissal represent an extremely high-risk population. Interventions such as careful medication reconciliation, purposeful kidney function monitoring during transitions of care, and patient education may be key to preventing comorbidity and optimizing malignancy outcomes.

Table 3. Risk of a ≥30% decline in eGFR from baseline within 1 year in univariate analysis.
This study has several limitations to consider. The potential for prevalence-incidence bias was addressed by including participants at the time the first malignancy diagnosis code was documented during the study time frame. We also evaluated those included during 2005 for a malignancy diagnosis assigned in the preceding year. It is possible that some individuals may have migrated into the population cohort with preexisting malignancies and thus their first malignancy diagnosis codes represent prevalent cases. Additionally, sample selection was limited by the sensitivity of diagnosis codes for acute leukemia and lymphoma. We addressed this through combination of relevant codes with hematopathology data to enhance validity and decrease misclassification. Results may not be generalizable to those not requiring hospitalization or experiencing AKI within 3 weeks of diagnosis, as such individuals may differ in overall health and complication rates from the present cohort. Prevalence of CKD at baseline appeared low in the cohort. This may be due to overestimation of eGFR in those without available baseline SCr data, poor sensitivity of CKD identifiers, or an indication of low comorbidity in individuals with a new malignancy diagnosis. This study was likely underpowered to assess how pre-existing CKD may affect the relationship between AKI and subsequent eGFR decline. Measurement bias may have resulted from different rates of follow-up assessments between individuals with and without AKI. Though the number of participants lost to follow-up was similar between groups, those with AKI were monitored more frequently. Additional effect modifiers or confounders, such as malignancy-specific characteristics or medication exposures, may exist that were not captured in this investigation, despite a robust multivariable analysis. This large population-based cohort study is among the first to explore the relationship between AKI and kidney function outcomes in patients with newly diagnosed acute leukemia or lymphoma. Kidney function is an important determinant of eligibility for preferred chemotherapy, hematopoietic cell transplantation, and clinical trials and thus its preservation plays a vital role in optimizing malignancy outcomes. These data demonstrate that AKI at the time of new diagnosis or relapse of acute leukemia or lymphoma was associated with an increased risk for significant decline in kidney function and incident CKD in the following year. A ≥30% decline in eGFR from baseline increased the risk for death in those with and without AKI. Results help to identify individuals at high risk for poor outcomes who stand to benefit most from action to protect and preserve kidney function. Further research is needed to determine interventions that may mitigate deleterious consequences of AKI in this unique subset of patients.

Figure 3. Risk of a ≥30% decline in eGFR from baseline within 1 year given acute kidney injury during hospitalization in multivariable analysis. AKI, acute kidney injury; ALL, acute lymphoid leukemia; AML, acute myeloid leukemia; BMT, bone marrow transplant; CI, confidence interval; eGFR, estimated glomerular filtration rate; HCT, hematopoietic cell transplant; NHL, non-Hodgkin lymphoma; *td, time-dependent variable.
DISCLOSURE
EB is a consultant for FAST Biomedical (unrelated). This project was supported in part by CTSA Grant Number UL1 TR002377 from the National Center for Advancing Translational Science (NCATS), the National Institute of Allergy and Infectious Diseases of the National Institutes of Health under award number K23AI143882 (principal investigator: EFB) and the resources of the Rochester Epidemiology Project, which is supported by the National Institute on Aging of the National Institutes of Health under award number R01AG034676. This project was also supported, in part, by a small grant from the Mayo Midwest Pharmacy Research Committee. All the other authors declared no competing interests.
ACKNOWLEDGMENTS
The contents of this work are solely the responsibility of the authors and do not necessarily represent the official views of the National Institute of Health. Select parts of this data have been presented in poster form at the American Society of Hematology national meeting in 2019 and a corresponding abstract published in Blood.
SUPPLEMENTAL MATERIAL
Supplementary File (PDF) Table S1. Diagnosis codes utilized to assess for hematologic malignancy. Table S2. Characteristics of participants with acute leukemia. Table S3. Characteristics of participants with lymphoma. STROBE checklist. | 2021-04-29T05:21:58.048Z | 2021-01-28T00:00:00.000 | {
"year": 2021,
"sha1": "b562b0c4a908848a9bb3448b67d4f9d61a73657e",
"oa_license": "CCBYNCND",
"oa_url": "http://www.kireports.org/article/S2468024921000188/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b562b0c4a908848a9bb3448b67d4f9d61a73657e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263830551 | pes2o/s2orc | v3-fos-license | URLOST: Unsupervised Representation Learning without Stationarity or Topology
Unsupervised representation learning has seen tremendous progress but is constrained by its reliance on data modality-specific stationarity and topology, a limitation not found in biological intelligence systems. For instance, human vision processes visual signals derived from irregular and non-stationary sampling lattices yet accurately perceives the geometry of the world. We introduce a novel framework that learns from high-dimensional data lacking stationarity and topology. Our model combines a learnable self-organizing layer, density adjusted spectral clustering, and masked autoencoders. We evaluate its effectiveness on simulated biological vision data, neural recordings from the primary visual cortex, and gene expression datasets. Compared to state-of-the-art unsupervised learning methods like SimCLR and MAE, our model excels at learning meaningful representations across diverse modalities without depending on stationarity or topology. It also outperforms other methods not dependent on these factors, setting a new benchmark in the field. This work represents a step toward unsupervised learning methods that can generalize across diverse high-dimensional data modalities.
Introduction
Unsupervised representation learning, also known as self-supervised representation learning (SSL), aims to develop models that autonomously detect patterns in data and make these patterns readily apparent through a specific representation. There has been tremendous progress over the past few years in the unsupervised representation learning community. Popular methods like contrastive learning and masked autoencoders [68; 6; 24; 70] work relatively well on typical modalities such as images, videos, audio, time series, and point clouds. However, these methods make implicit assumptions about the data domain's topology and stationarity. Given an image, topology refers to the neighboring pixels of each pixel, or more generally, the grid structure in images, the temporal structure in time series and sequences, or the 3D structure in molecules and point clouds. Stationarity refers to the property that the low-level statistics of the signal remain consistent across its domain. For instance, pixels and patches in images exhibit similar low-level statistics (mean, variance, covariance) regardless of their locations within the domain. The success of state-of-the-art self-supervised representation learning relies on knowing the prior topology and stationarity of the modalities. For example, joint-embedding SSL employs random-resized cropping augmentation [6], and masked auto-encoding [25] utilizes masked-image-patch augmentation. What if we possess high-dimensional signals without knowledge of their domain topology or stationarity? Can we still craft a high-quality representation? This is not only the situation that biological vision systems have to deal with but also a practical setting for many scientific data analysis problems. In this work, we introduce unsupervised representation learning without stationarity or topology (URLOST) and take a step in this direction.
As we mentioned earlier, typical modalities possess topology and stationarity prior information that can be utilized by unsupervised representation learning. Taking images as an example, digital cameras employ a consistent sensor grid that spans the entire visual field. However, biological visual systems have to deal with signals with less domain regularity. For instance, unlike camera sensors which have a uniform grid, the cones and rods in the retina distribute unevenly and non-uniformly. This results in a non-stationary raw signal. Retinal ganglion cells connect to more photoreceptors in the fovea than in the periphery. The correlation of the visual signal between two different locations in the retina depends not only on the displacement between these locations but also on their absolute positions. Yet, biological visual systems can establish precise retinotopy from the retina to neurons based on spontaneous retinal activities and external stimuli [67; 35; 18] and leverage retinotopic input to build unsupervised representation. This implies that we can potentially build unsupervised representation without relying on prior stationarity of the raw signal or topology of the input domain.

Figure 1: From left to right: unsupervised representation learning through joint embedding and masked auto-encoding; the biological vision system, which perceives via unstructured sensors and understands signals without stationarity or topology [48]; and diverse high-dimensional signals in natural science that our method supports while most existing unsupervised methods do not. Data figures are borrowed from [48; 44; 69].
In this work, we aim to build unsupervised representations for general high-dimensional vectors. Taking images as an example again, let's assume we receive a set of images whose pixels are shuffled in the same order. How can we build representations in an unsupervised fashion without knowledge of the shuffling order? If possible, can we use such a method to build unsupervised representations for general high-dimensional data? Inspired by [53], we use low-level statistics and spectral clustering to form clusters of the pixels, which recovers a coarse topology of the input domain. These clusters are analogous to image patches except that they are slightly irregularly shaped and different in size. We mask a proportion of these "patches" and utilize a Vision Transformer [15] to predict the masked "patches" based on the remaining unmasked ones. This "learning to predict masked tokens" approach is proposed in masked autoencoders (MAE) [25] and has demonstrated effectiveness on typical modalities. Initially, we test the proposed method on a synthesized biological visual dataset, derived from CIFAR-10 [32] using a foveated retinal sampling mechanism [8].
Then we generalize this method to two high-dimensional vector datasets: a primary visual cortex neural response decoding dataset [57] and the TCGA miRNA-based cancer classification dataset [62; 66]. Across all these benchmarks, our proposed method outperforms existing SSL techniques, establishing its effectiveness in building unsupervised representations for signals lacking explicit stationarity or topology. Given the emergence of new modalities in deep learning from natural sciences [59; 23; 49; 34], such as chemistry, biology, and neuroscience, our method offers a promising approach in the effort to build unsupervised representations for high-dimensional data.
Figure 2: The overview framework of URLOST. The high-dimensional input signal undergoes clustering and self-organization before unsupervised learning using a masked autoencoder for signal reconstruction.
Motivation and Overall framework
Our objective is to build robust, unsupervised representations for high-dimensional signals that lack explicit topology and stationarity. These learned representations are intended to enhance performance in downstream tasks, such as classification. To achieve this, we begin by using low-level statistics and clustering to approximate the signal's topology. The clusters derived from the signal serve as input to a masked autoencoder. As depicted in Figure 1, the masked autoencoder randomly masks out patches in an image and trains a Transformer-based autoencoder unsupervisedly to reconstruct the original image. After the unsupervised training, the autoencoder's latent state yields high-quality representations. In our approach, signal clusters are input to the masked autoencoder.
Notably, the clusters differ from image patches in several key aspects due to the differences in the input signal: they are unaligned, exhibit varied sizes and shapes, and their clustering nodes are not confined to fixed 2D locations like pixels in image patches. To cope with these differences, we introduce a self-organizing layer responsible for aligning these clusters through learnable transformations. The parameters of this layer are jointly optimized with those of the masked autoencoder.
Our method is termed URLOST, an acronym for Unsupervised Representation Learning withOut Stationarity or Topology. Figure 2 provides an overview of the framework. URLOST consists of three core components: density adjusted spectral clustering, a self-organizing layer, and a masked autoencoder. The functionalities of these components are detailed in the following subsections.
Density Adjusted Spectral Clustering
Representation learning for high-dimensional signals without explicit topology is challenging. We propose to define a metric to measure inter-dimensional relationships. This metric effectively approximates a topology for the signal. Similar to [53], where absolute correlation values serve as the metric for pixels, we employ discrete mutual information (refer to Appendix A.1) as the metric. Let the affinity matrix A_ij denote the mutual information between dimensions i and j, which approximates the manifold M that the signal lives on. We can define the discretized Laplacian operator based on A and use the eigenvectors of the Laplacian operator to perform spectral clustering, which segments the manifold. The detailed definition and the algorithm are left to Appendix A.1.
Finding the eigenvectors of the Laplacian operator is a discretized approximation of the following optimization problem in function space:

    min_f ∫_M ‖∇f(x)‖² p(x) dx,  subject to ∫_M f(x)² p(x) dx = 1,    (1)

where f(x) : M → [0, 1] is the normalized signal defined on M and p(x) is the density function.
The integral is taken over the standard measure on M. Since spectral clustering heavily relies on the solution of equation 1, the definition of the density function p(x) affects the quality of the resulting clusters. Standard approaches often assume that nodes are uniformly distributed on the manifold, thereby treating p(x) as a constant and excluding it from the optimization process. However, this assumption does not hold in our case involving non-stationary signals. To cope with non-stationary signals, our work introduces a variable density function p(x) for each signal, making it a pivotal component in building good representations for the signal. This component is referred to as Density Adjusted Spectral Clustering. Empirical evidence supporting this design is provided through visualization and ablation studies in the experimental section.
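To make the clustering step concrete, here is a minimal sketch of the pipeline on raw signals: pairwise discrete mutual information as the affinity, a normalized Laplacian, and k-means on its leading eigenvectors. The outer-product form of the density re-weighting is an assumption for illustration; the exact adjustment used by URLOST is defined in Appendix A.1.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.metrics import mutual_info_score

def density_adjusted_spectral_clusters(X, n_clusters, n_bins=16, density=None):
    """Cluster the D dimensions of signals X (shape N x D) without a given topology."""
    N, D = X.shape
    # Discretize each dimension so discrete mutual information is well defined.
    binned = np.stack([np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], n_bins))
                       for j in range(D)], axis=1)
    A = np.zeros((D, D))
    for i in range(D):
        for j in range(i, D):
            A[i, j] = A[j, i] = mutual_info_score(binned[:, i], binned[:, j])
    if density is not None:                 # illustrative density adjustment
        A = A * np.outer(density, density)
    deg = A.sum(axis=1) + 1e-12
    L = np.eye(D) - A / np.sqrt(np.outer(deg, deg))   # normalized Laplacian
    _, vecs = eigh(L)                        # eigenvalues in ascending order
    emb = vecs[:, :n_clusters]               # smallest eigenvectors as embedding
    return KMeans(n_clusters, n_init=10).fit_predict(emb)
```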
Self-organizing layer
Transforming a high-dimensional signal into a sequence of clusters using the above method is not enough because it does not capture the internal structure within individual clusters. To effectively perform unsupervised learning on these clusters, it is essential to align them in some manner. Directly solving the exact alignment problem with low-level statistics of the signal is challenging. Thus, we propose a self-organizing layer with learnable parameters. Specifically, let the vector x^(i) denote the i-th cluster. Each cluster x^(i) is passed through a differentiable function g(·, w^(i)) with parameter w^(i), resulting in a sequence z_0:

    z_0 = (g(x^(1), w^(1)), g(x^(2), w^(2)), ..., g(x^(M), w^(M))),    (2)

where z_0 comprises projected and aligned representations of all clusters. The weights of the proposed self-organizing layer, {w^(1), ..., w^(M)}, are jointly optimized with the subsequent neural network introduced in the next subsection.
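As an illustration, a per-cluster linear projection is one minimal realization of g(·, w^(i)); the text above only requires g to be differentiable, so the choice of nn.Linear here is an assumption:

```python
import torch
import torch.nn as nn

class SelfOrganizingLayer(nn.Module):
    """One learnable map g(., w_i) per cluster, trained jointly with the MAE.

    cluster_sizes: number of signal dimensions in each of the M clusters
    (clusters are irregular, so sizes differ). embed_dim: common token width.
    """
    def __init__(self, cluster_sizes, embed_dim):
        super().__init__()
        self.projs = nn.ModuleList(nn.Linear(s, embed_dim) for s in cluster_sizes)

    def forward(self, clusters):
        # clusters: list of M tensors, each of shape (batch, cluster_size_i)
        tokens = [g(x) for g, x in zip(self.projs, clusters)]
        return torch.stack(tokens, dim=1)   # (batch, M, embed_dim): the sequence z_0
```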
Masked autoencoder
After the self-organizing layer, z_0 is passed to a Transformer-based masked autoencoder (MAE) with an unsupervised learning objective. The masked autoencoder consists of an encoder and a decoder, both of which are stacks of Transformer blocks introduced in [64]. The objective function is introduced in [25]: masking random image patches in an image and training an autoencoder to reconstruct them, as illustrated in Figure 1. In our case, randomly selected clusters in z_0 are masked out, and the autoencoder is trained to reconstruct these masked clusters. After training, the encoder's output is treated as the learned representation of the input signal for downstream tasks. The masked prediction loss is computed as the mean square error (MSE) between the values of the masked clusters and their corresponding predictions.
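The sketch below illustrates this masking objective on cluster tokens. For brevity it replaces masked tokens in place and encodes the full sequence (BERT-style), rather than reproducing MAE's asymmetric encoder that sees only visible tokens; positional embeddings are also omitted.

```python
import torch
import torch.nn as nn

class MaskedClusterAutoencoder(nn.Module):
    """Minimal sketch of the masked-autoencoding objective on cluster tokens.

    encoder/decoder are assumed to be stacks of Transformer blocks mapping
    (batch, tokens, dim) -> (batch, tokens, dim).
    """
    def __init__(self, encoder, decoder, dim, mask_ratio=0.75):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        self.mask_token = nn.Parameter(torch.zeros(dim))  # learnable mask token
        self.mask_ratio = mask_ratio

    def forward(self, z0):               # z0: (batch, M, dim) from the self-organizing layer
        B, M, D = z0.shape
        mask = torch.rand(B, M, device=z0.device) < self.mask_ratio   # True = masked
        z_in = torch.where(mask[..., None], self.mask_token.expand(B, M, D), z0)
        pred = self.decoder(self.encoder(z_in))
        per_token = ((pred - z0) ** 2).mean(dim=-1)       # MSE per cluster token
        return per_token[mask].mean()                     # loss on masked clusters only
```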
Results
Since our method is inspired by the biological vision system, we first validate its ability on a synthetic biological vision dataset created from CIFAR-10. Then we evaluate the generalizability of URLOST on two high-dimensional natural datasets collected from diverse domains. Detailed information about each dataset and the corresponding experiments is presented in the following subsections. Across all tasks, URLOST consistently outperforms other strong unsupervised representation learning methods.
Synthetic biological vision dataset
As discussed in the introduction, the biological visual signal serves as an ideal dataset to validate the capability of URLOST. In contrast to digital images captured by a fixed array of sensors, the biological visual signal is acquired through irregularly positioned ganglion cells, inherently lacking explicit topology and stationarity. However, it is hard to collect real-world biological vision signals with high precision. Therefore, we employ a retinal sampling technique to modify the classic CIFAR-10 dataset and simulate imaging from the biological vision signal. The synthetic dataset is referred to as Foveated CIFAR-10. To make a comprehensive comparison, we also conduct experiments on the original CIFAR-10 and a Permuted CIFAR-10 dataset obtained by randomly permuting the image pixels.
Permuted CIFAR-10. To remove the grid topology inherent in digital imaging, we simply permute all the pixels within the image, which effectively discards any information related to the grid structure of the original digital image. We applied such a permutation to each image in the CIFAR-10 dataset to generate the Permuted CIFAR-10 dataset. Nevertheless, permuting pixels only removes an image's topology, leaving its stationarity intact. To obtain a synthetic biological vision dataset that has neither topology nor stationarity, we introduce the Foveated CIFAR-10.

Foveated CIFAR-10. Much like photosensors installed in a camera, retinal ganglion cells within the primate biological visual system sample from visual stimuli and project images. However, unlike photosensors that have uniform receptive fields and adhere to a consistent sampling pattern, retinal ganglion cells at different locations of the retina vary in their receptive field size: smaller in the center (fovea) but larger in the periphery of the retina. This distinctive retinal sampling pattern results in foveated imaging [63]. It gives primates the ability to have both high-resolution vision and a broad overall receptive field, while consequently making visual signals sampled by the retina lack stationarity. The evidence is that responses of two ganglion cells separated by the same displacement are highly correlated in the fovea but less correlated in the periphery. To mimic foveated imaging with CIFAR-10, we adopt the retinal sampling mechanism from [8]. Specifically, each retinal ganglion cell is simplified and modeled using a Gaussian kernel. The response of each cell is determined by the dot product between pixel values and the Gaussian kernel. Figure 3 illustrates the sampling kernel locations. Applying this sampling grid and permuting the resulting pixels produces the Foveated CIFAR-10. In the natural retina, retinal ganglion cell density decreases linearly with eccentricity, which makes the fovea much denser than the periphery compared to the simulated lattice in Figure 3. However, considering the low resolution of the CIFAR-10 dataset, we reduce the simulated fovea's density to prevent redundant sampling.
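A minimal sketch of this sampling step is given below: each simulated ganglion cell response is the dot product of the image with a Gaussian kernel whose width grows away from the fovea. The kernel normalization and the lattice of centers are illustrative choices here, not the precise mechanism of [8].

```python
import numpy as np

def foveated_sample(img, centers, sigmas):
    """Simulate retinal ganglion cell responses on one image.

    img: (H, W) grayscale array; centers: (K, 2) kernel locations (row, col);
    sigmas: (K,) receptive-field widths, small near the fovea and large in the
    periphery. Each response is the dot product of the image with a
    normalized Gaussian kernel.
    """
    H, W = img.shape
    rr, cc = np.mgrid[0:H, 0:W]
    out = np.empty(len(centers))
    for k, ((r0, c0), s) in enumerate(zip(centers, sigmas)):
        kern = np.exp(-((rr - r0) ** 2 + (cc - c0) ** 2) / (2 * s ** 2))
        out[k] = (img * kern).sum() / kern.sum()
    return out  # applying a fixed random permutation to `out` yields Foveated CIFAR-10
```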
Experiments. We compare URLOST on both synthetic vision datasets as well as the original CIFAR-10 with the popular unsupervised representation learning methods SimCLR [6] and MAE [25]. All models are trained with unsupervised learning followed by linear probing for classification accuracy. The evaluations are reported in Table 1. SimCLR excels on CIFAR-10 but struggles badly with both synthetic datasets due to its inability to handle data without stationarity and topology. MAE gets close to SimCLR on CIFAR-10 with a 4 × 4 patch size. However, the patch size no longer makes sense when data has no topology, so we additionally tested MAE masking pixels instead of image patches. It maintains the same performance on Permuted CIFAR-10 as on CIFAR-10, though a poor one, invariant to the removal of topology as it should be. However, it still drops greatly to 48.5% on Foveated CIFAR-10 when stationarity is also removed. In contrast, only URLOST maintains consistently strong performance when there is no topology or stationarity, achieving 86.4% on Permuted CIFAR-10 and 85.4% on Foveated CIFAR-10, where the baselines completely fail.
V1 neural response to natural image stimulus
After assessing URLOST's performance on synthetic biological vision data, we take a step further to challenge its generalizability with high-dimensional natural datasets. The first task is decoding neural responses recorded in the primary visual area (V1) of mice.
V1 neural response dataset. The dataset, published by [44], contains responses from over 10,000 V1 neurons captured via two-photon calcium imaging. These neurons responded to 2,800 unique images from ImageNet [12], with each image presented twice to assess the consistency of the neural response. In the decoding task, a prediction is considered accurate if the neural response to a given stimulus in the first presentation closely matches the response to the same stimulus in the second presentation within the representation space. This task presents greater challenges than the synthetic biological vision described in the prior section. For one, the data comes from real-world neural recordings rather than a curated dataset like CIFAR-10. For another, the geometric structure of the V1 area is substantially more intricate than that of the retina. To date, no precise mathematical model of the V1 neural response has been established, and the inherent topology and stationarity of the data remain difficult to grasp [43; 42]. Nevertheless, evidence of retinotopy [18; 19] and findings from prior research [41; 7; 58] suggest that the population code of V1 neurons tiles a low-dimensional manifold. This insight led us to treat the population neuron response as high-dimensional data and explore whether URLOST can effectively learn its representation.
Experiments. Following the approach in [44], we apply standardization and normalization to the neural firing rates. The processed signals are high-dimensional vectors and can be directly used for the decoding task, which serves as the "raw" signal baseline in Table 2. For representation learning methods, URLOST is evaluated along with MAE and β-VAE [26]. Note that the baseline methods need to handle high-dimensional vector data without stationarity or topology, so SimCLR is no longer applicable; we use β-VAE instead. We first train the neural network on an unsupervised learning task, then use the latent state of the network as the representation of the neural responses in the decoding task. The results are presented in Table 2. Our method surpasses the raw neural responses and the other methods, achieving the best performance.
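The decoding criterion can be made concrete with a small sketch; matching repeats by row-wise correlation is our assumption of a reasonable similarity in representation space, not necessarily the exact metric used in the paper:

```python
import numpy as np

def decoding_accuracy(rep1, rep2):
    """Fraction of stimuli whose first-presentation representation is
    closest, by row-wise correlation, to the same stimulus in the second
    presentation. rep1, rep2: (num_stimuli, dim)."""
    a = (rep1 - rep1.mean(1, keepdims=True)) / rep1.std(1, keepdims=True)
    b = (rep2 - rep2.mean(1, keepdims=True)) / rep2.std(1, keepdims=True)
    sim = a @ b.T / a.shape[1]                       # pairwise correlations
    return (sim.argmax(axis=1) == np.arange(len(a))).mean()

rng = np.random.default_rng(0)
latent = rng.normal(size=(2800, 256))                # 2800 stimuli, as in [44]
rep1 = latent + 0.5 * rng.normal(size=latent.shape)  # two noisy repeats
rep2 = latent + 0.5 * rng.normal(size=latent.shape)
print(decoding_accuracy(rep1, rep2))
```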
Gene expression data
In this subsection, we further evaluate URLOST on high-dimensional natural science data from a completely different domain: gene expression data.
Gene expression dataset. The dataset comes from The Cancer Genome Atlas (TCGA) [62; 66], a project that catalogs the genetic mutations responsible for cancer using genome sequencing and bioinformatics. The project molecularly characterized over 20,000 primary cancers and matched normal samples spanning 33 cancer types. We focus on the pan-cancer classification task: diagnose and classify the type of cancer for a given patient based on their gene expression profile.
The TCGA project collects the data of 11,000 patients and uses Micro-RNA (miRNA) as their gene expression profiles. Like the V1 response, no explicit topology or stationarity is known, and each data point is a high-dimensional vector. Specifically, 1773 miRNA identifiers are used, so each data point is a 1773-dimensional vector. The type of cancer each patient is diagnosed with serves as the classification label.

Experiments. Similar to Section 3.2, URLOST is compared with the original signals, MAE, and β-VAE, which is the state-of-the-art unsupervised learning method on TCGA cancer classification [71; 72]. We also randomly partition the dataset into five folds for cross-validation and report the average performance in Table 2. Again, our method learns meaningful representations from the original signal. The learned representation benefits the classification task and achieves the best performance, demonstrating URLOST's ability to learn meaningful representations of data from diverse domains.

Ablation study on the self-organizing layer

Conventional SSL models take a sequential input x = [x^(1), ..., x^(M)] and embed it into latent vectors with a linear transformation z^(i) = E x^(i), which is further processed by a neural network. The sequential inputs can be a list of language tokens [14; 50], pixel values [5], image patches [15], or overlapped image patches [6; 24; 70]. E can be considered a projection layer that is shared among all elements in the input sequence. The self-organizing layer g(·, w^(i)) introduced in Section 2.3 can be considered a non-shared projection layer. We conducted an ablation study comparing the two designs to demonstrate the effectiveness of the self-organizing layers both quantitatively and qualitatively. To facilitate the ablation, we further synthesized another dataset.
Locally-Permuted CIFAR-10. To directly evaluate the performance of the non-shared projection approach, we designed an experiment involving intentionally misaligned clusters. In this experiment, we divide each image into patches and locally permute all the patches. The i-th image patch is denoted by x^(i), and its permuted version, permuted by the permutation matrix E^(i), is expressed as E^(i) x^(i). We refer to this manipulated dataset as the Locally-Permuted CIFAR-10. Our hypothesis posits that models using shared projections, as defined in Equation 3, will struggle to adapt to random permutations, whereas self-organizing layers equipped with non-shared projections can autonomously adapt to each patch's permutation, resulting in robust performance. This hypothesis is evaluated quantitatively and through the visualization of the learned weights w^(i), as sketched below.
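The contrast between the two designs can be illustrated with a small sketch; during training, each non-shared W^(i) is free to absorb the (unknown) permutation at its position, which a single shared E cannot express. All names and shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, k = 16, 48, 32                                  # patches, patch dim, embed dim
perms = [rng.permutation(d) for _ in range(M)]        # a fixed permutation per position

def embed_shared(patches, E):
    """Shared projection: one matrix E for every position (Equation 3 style)."""
    return np.stack([E @ x for x in patches])

def embed_self_organizing(patches, Ws):
    """Non-shared projection: a separate, trainable W^(i) per position i.
    Training can drive W^(i) toward E @ P_i.T, undoing position i's
    permutation P_i -- something a single shared E cannot do."""
    return np.stack([W @ x for W, x in zip(Ws, patches)])

patches = [rng.normal(size=d) for _ in range(M)]
locally_permuted = [x[p] for x, p in zip(patches, perms)]
E = rng.normal(size=(k, d))
Ws = [rng.normal(size=(k, d)) for _ in range(M)]
print(embed_shared(locally_permuted, E).shape)            # (16, 32)
print(embed_self_organizing(locally_permuted, Ws).shape)  # (16, 32)
```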
For the globally Permuted CIFAR-10, unlike the locally permuted version, a visualization check is not viable since the permutation is applied globally. However, we can still quantitatively measure performance on the task.
Quantitative results. Table 3 confirms our hypothesis, demonstrating a significant performance decline in models employing shared projections when exposed to permuted data. In contrast, the non-shared projection model maintains stable performance.
Visual evidence. Using linear layers to parameterize the self-organizing layers, i.e., letting g(x, W^(i)) = W^(i) x, we expect that if the projection layer effectively aligns the input sequence, E^(i)T W^(i) should exhibit visual similarities. That is, after applying the inverse permutation E^(i)T, the learned projection matrix W^(i) at each location should appear consistent or similar. The proof of this statement is provided in Appendix A.4. The model trained on Locally-Permuted CIFAR-10 provides visual evidence supporting this claim: in Figure 4, the weights show similar patterns after reversing the permutations.
Density-adjusted clustering vs uniform-density clustering
As explained in Section 2.2, the shape and size of each cluster depend on how the density function is defined. Let q(i) represent the eccentricity, i.e., the distance from the i-th kernel to the center of the sampling lattice, and let n(i) = Σ_j A_ji, where A is the affinity matrix. The density is then defined as p(i) = n(i)^α / q(i)^β. By setting α and β nonzero, the density function is eccentricity-dependent; setting both α and β to zero makes p(i) constant, which recovers uniform-density spectral clustering. We vary the parameters α and β to generate different sets of clusters for the Foveated CIFAR-10 dataset and run URLOST using each of these sets of clusters. Results in Table 4 validate that the model performs better with density-adjusted clustering. The intuitive explanation is that by adjusting the values of α and β, we can make each cluster carry similar amounts of information (refer to Appendix A.3). A balanced distribution of information across clusters enhances the model's ability to learn meaningful representations. Without this balance, masking a low-information cluster makes the prediction task trivial, while masking a high-information cluster makes it too difficult. In either scenario, the model's ability to learn effective representations is compromised.
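A sketch of this density, assuming the power-law form reconstructed above; n is computed from the (symmetric) affinity matrix and q from the kernel eccentricities:

```python
import numpy as np

def density(A, ecc, alpha, beta):
    """p(i) = n(i)**alpha / q(i)**beta, with n(i) = sum_j A_ji (column sums
    of the symmetric affinity matrix) and q(i) the eccentricity of kernel i.
    alpha = beta = 0 gives p(i) = 1, i.e. uniform-density clustering."""
    n = A.sum(axis=0)
    q = np.maximum(ecc, 1e-6)  # guard against q = 0 at the lattice center
    return n ** alpha / q ** beta

rng = np.random.default_rng(0)
B = rng.random((1038, 1038))
A = (B + B.T) / 2                         # stand-in affinity between kernels
ecc = rng.uniform(0, 40, size=1038)       # stand-in eccentricities
p = density(A, ecc, alpha=0.5, beta=2.0)  # (0.5, 2) matches Figure 6(A)
print(p.shape)
```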
Additional related works
Several interconnected pursuits are linked to this work, and we briefly address them here. Topology in biological visual signals. The 2-D topology of natural images is a strong prior that requires many bits to encode [13; 2]. This 2-D topology is encoded in natural image statistics [55; 28] and can be recovered from them [53]. The optics and neural circuits of the retina result in a more irregular 2-D topology than that of natural images, which can still be simulated [52; 46; 47; 45; 61; 30]. This information is further processed by the primary visual cortex. Evidence of retinotopy suggests the low-dimensional geometry of the visual input from the retina is encoded by neurons in the primary visual cortex [40; 20; 27; 19; 65; 48]. This evidence suggests we can recover the topology using signals from retinal ganglion cells and V1 neurons.
Evidence of self-organizing mechanisms in the brain. In computational neuroscience, many works use self-organizing maps (SOM) as a computational model for V1 functional organization [16; 60; 1; 17; 39; 31]. In other words, this idea of self-organization is likely a principle governing how the brain performs computations. Even though V1 functional organization is present at birth, numerous studies also indicate that the brain's self-organizing mechanisms continue after full development [22; 54; 29].
Learning with signals on non-Euclidean geometry. In recent years, researchers from the machine learning community have made efforts to consider geometries and special structures beyond classic images, text, and feature vectors. [33] treats an image as a set of points but depends on the 2-D coordinates. The geometric deep learning community tries to generalize convolutional neural networks beyond the Euclidean domain [3; 37; 11; 21]. Recent research also explores adapting the Transformer to domains beyond Euclidean spaces [10; 9]. However, none of these works has tried to tackle the setting where the data has no explicit topology or stationarity, which is the focus of URLOST.
Self-supervised learning. Self-supervised learning (SSL) has made substantial progress in recent years. A different SSL method is designed for each modality, for example: predicting the masked/next token in NLP [14; 50; 4]; solving pre-text tasks, predicting masked patches, or building contrastive image pairs in computer vision [36; 25; 68; 6; 24; 70]. These SSL methods have demonstrated decent scalability with vast amounts of unlabeled data and have shown their power by achieving performance on par with or even surpassing supervised methods. They have also exhibited huge potential in cross-modal learning, such as CLIP [51]. However, we argue that these SSL methods are all built upon specific modalities with explicit topology and stationarity, which URLOST goes beyond.
Discussion
The success of most current state-of-the-art self-supervised representation learning methods relies on the assumption that the data has known stationarity and domain topology, such as grid-like RGB images and time sequences. However, biological vision systems have evolved to deal with signals with less regularity. In this work, we explore unsupervised representation learning under a more general assumption, where the stationarity and topology of the data are unknown to the machine learning model and its designers. We argue that this is a general and realistic assumption for high-dimensional data in the modalities of natural science. We propose a novel unsupervised representation learning method that works under this assumption and demonstrate our method's effectiveness and generality on a synthetic biological vision dataset and two datasets from natural science with diverse modalities. We also perform a step-by-step ablation study to show the effectiveness of the novel components of our model.
During our experiments, we found that density-adjusted spectral clustering is crucial for the quality of representation learning. How to adjust the density and obtain a balanced clustering for any given data, or even learn the clusters end-to-end with the representation via back-propagation, is worth future investigation. Moreover, our current self-organizing layer is still simple, though it shows effective performance; extending it to a more sophisticated design and potentially incorporating it into various neural network architectures is also worth future exploration.
In summary, our method offers a handy and general unsupervised learning tool for high-dimensional data of arbitrary modality with unknown stationarity and topology, which is particularly common in the natural sciences, where many strong existing unsupervised learning baselines cannot be directly applied. We hope it can provide inspiration for work in related fields.
A Appendix
A.1 Spectral clustering algorithm

Given a high-dimensional dataset S ∈ R^{n×m}, let S_i be the i-th column of S, which represents the i-th dimension of the signal. We create probability mass functions P(S_i) and P(S_j) and the joint distribution P(S_i, S_j) for S_i and S_j using histograms with K bins. We then measure the mutual information between S_i and S_j as

I(S_i; S_j) = Σ_{a,b} P(S_i = a, S_j = b) · log [ P(S_i = a, S_j = b) / (P(S_i = a) P(S_j = b)) ].

Let A_ij = I(S_i; S_j) be the affinity matrix, and let p(i) be the density function defined in Equation 4. We follow the steps from [38] to perform spectral clustering, with a modification to adjust the density:

1. Define D to be the diagonal matrix whose (i, i)-element is the sum of A's i-th row, and P to be the diagonal matrix with P_ii = p(i). Construct the matrix L = P^{1/2} D^{−1/2} A D^{−1/2} P^{1/2}.
2. Find x_1, x_2, ..., x_k, the k largest eigenvectors of L, and form the matrix X = [x_1, x_2, ..., x_k] ∈ R^{n×k} by stacking the eigenvectors in columns.
3. Form the matrix Y from X by renormalizing each of X's rows to have unit norm (i.e., Y_ij = X_ij / (Σ_j X_ij²)^{1/2}).
4. Treating each row of Y as a point in R^k, cluster the rows into k clusters via K-means or another algorithm.

Other interpretations of the spectral embedding allow one to design a specific clustering algorithm in step 4. For example, [56] interprets the eigenvector problem above as a relaxed continuous version of the K-way normalized cuts problem, in which X is only allowed to be binary, i.e., X ∈ {0, 1}^{N×K}; this is an NP-hard problem. Allowing X to take real values relaxes the problem but creates degenerate solutions: given a solution X* and Z = D^{−1/2} X*, RZ is another solution of the optimization problem for any orthonormal matrix R. Thus, [56] designed an algorithm to find the optimal orthonormal matrix R that converts X* to discrete values in {0, 1}^{N×K}. In our experiments, [56] is more consistent than K-means and other clustering algorithms, so we stick to using it for our model.
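A compact sketch of steps 1−4 above; the symmetric placement of P in step 1 follows the reconstruction given here, and plain K-means stands in for the discretization of [56]:

```python
import numpy as np
from sklearn.cluster import KMeans

def density_adjusted_spectral_clustering(A, p, k):
    """Steps 1-4 above. Assumes A is symmetric with strictly positive row
    sums (true for a mutual-information affinity)."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))                  # D^{-1/2}
    Psqrt = np.diag(np.sqrt(p))                       # P^{1/2}
    L = Psqrt @ Dinv @ A @ Dinv @ Psqrt               # step 1 (density-adjusted)
    _, vecs = np.linalg.eigh(L)                       # eigenvalues ascending
    X = vecs[:, -k:]                                  # step 2: k largest eigenvectors
    Y = X / np.linalg.norm(X, axis=1, keepdims=True)  # step 3: row-normalize
    return KMeans(n_clusters=k, n_init=10).fit_predict(Y)  # step 4

rng = np.random.default_rng(0)
B = rng.random((100, 100))
A = (B + B.T) / 2                                     # stand-in symmetric affinity
labels = density_adjusted_spectral_clustering(A, p=np.ones(100), k=8)
print(np.bincount(labels))
```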
A.2 Data synthesis process
We followed the retina sampling approach described in [8] to achieve foveated imaging. Specifically, each retinal ganglion cell is represented using a Gaussian kernel. The kernel is parameterized by its center ⃗x_i and its scalar variance σ′²_i, i.e., N(⃗x_i, σ′²_i I), as illustrated in Figure 5A. The response of each cell, denoted G[i], is computed as the dot product between the pixel values and the corresponding discrete Gaussian kernel. This can be formulated as G[i] = Σ_{n=1}^{N} Σ_{w=1}^{W} N(⃗x_i, σ′²_i I)[n, w] · I[n, w], where N and W are the dimensions of the image and I represents the image pixels. For Foveated CIFAR-10, since the images are very low resolution, we first upsample them 3× from 32 × 32 to 96 × 96, then use a total of 1038 Gaussian kernels to sample from the upsampled image. The location of each kernel is illustrated in Figure 5B. The radius of each kernel scales proportionally to its eccentricity; here, we use the distance from the kernel to the center to represent eccentricity. The relationship between kernel radius and eccentricity is shown in Figure 5C. As mentioned in the main paper, in the natural retina, retinal ganglion cell density decreases linearly with eccentricity, which makes the fovea much denser than the periphery, unlike the simulated lattice we created. The size of the kernel should also scale linearly with eccentricity. However, for the low-resolution CIFAR-10 dataset, we reduce the simulated fovea's density to prevent redundant sampling. In this case, we pick an exponential scale for the relationship between kernel size and eccentricity so that the kernels visually cover the whole visual field. We also implemented a convolutional version of the Gaussian sampling kernel to speed up data loading.
A.3 Density-adjusted spectral clustering on the Foveated CIFAR-10 dataset
We provide further intuition and visualization on why density-adjusted spectral clustering allows the model to learn a better representation on the Foveated CIFAR-10 dataset.
As shown in Figure 5, the kernels at the center are much smaller than the kernels in the periphery. This makes the central kernels more accurate but smaller, meaning each summarizes less information. Spectral clustering with constant density makes each cluster have a similar number of elements; since the central kernels are smaller, the central clusters will be visually smaller than the peripheral ones. The effect is shown in Figure 6. Moreover, since we are upsampling an already low-resolution image (a CIFAR-10 image), even though the kernels at the center are more accurate, we do not gain more information. Therefore, to make sure each cluster carries similar information, the clusters at the center need to have more elements than the clusters in the periphery. To achieve this, we need to weight the center more with the density function. Since the sampling kernels at the center have small eccentricity and are more correlated with their neighbors, increasing α and β gives the central sampling kernels higher density, which makes the central clusters larger. This is why URLOST with density-adjusted spectral clustering performs better than URLOST with constant-density spectral clustering, as shown in Table 4. Meanwhile, setting α and β too large also hurts the model's performance because it creates clusters that are too unbalanced.
A.4 Self-organizing layer learns inverse permutation
For Locally-Permuted CIFAR-10, we divide each image into patches and locally permute all the patches. The i-th image patch is denoted by x^(i), and its permuted version, permuted by the permutation matrix E^(i), is expressed as E^(i) x^(i). We use linear layers to parameterize the self-organizing layers; let g(x, W^(i)) = W^(i) x denote the i-th element of the self-organizing layer. We now provide the proof for the statement related to the visual evidence shown in Section 4.1. Statement: if the self-organizing layer effectively aligns the input sequence, then E^(i)T W^(i) should exhibit visual similarities.
Proof: we first need to formally define what it means for the self-organizing layer to effectively align the input sequence. Let e_k denote the k-th natural basis vector (one-hot vector at position k), which represents the pixel basis at location k. The permutation matrix E^(i) sends the k-th pixel to some location accordingly. Mathematically, if the projection layer effectively aligns the input sequence, it means g(E^(j) e_k, W^(j)) = g(E^(i) e_k, W^(i)) for all i, j, k. Expanding this property with the linear parameterization gives g(E^(i) e_k, W^(i)) = W^(i) E^(i) e_k and g(E^(j) e_k, W^(j)) = W^(j) E^(j) e_k for all i, j, k. Since the equality holds for all e_k, by linearity and the properties of permutation matrices, we have E^(i)T W^(i) = E^(j)T W^(j). This implies E^(i)T W^(i) should exhibit visual similarities for all i.
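A toy numeric check of the statement, under the convention that the columns of W^(i) are the visualized filters (an assumption about the weight layout, not stated explicitly above):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 8
E = [np.eye(d)[rng.permutation(d)] for _ in range(3)]  # permutation matrices E^(i)
W0 = rng.normal(size=(d, k))    # hypothetical shared filter bank (columns = filters)

# Perfect alignment means the response to E^(i) e_k is the same at every
# position i; W^(i) = E^(i) @ W0 realizes this.
W = [Ei @ W0 for Ei in E]
for Ei, Wi in zip(E, W):
    assert np.allclose(Ei.T @ Wi, W0)  # E^(i)T W^(i) is identical across i
print("inverse-permuted filters agree across positions")
```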
A.5 Visualizing the weights of the self-organizing layer
As explained in the previous section (Appendix A.4) and visualized in Figure 7, we can visualize the weights of the learned self-organizing layer when trained on the Locally-Permuted CIFAR-10 dataset. If we apply the corresponding inverse permutation E^(i)T to the learned filter W^(i) at position i, the pattern should show similarity across all positions i, because the model is trying to align all the input clusters. We have shown this is the case when the model converges to a good representation. On the other hand, what happens if we visualize the weights E^(i)T W^(i) as training progresses? If the model learns to align the clusters while trained on the mask prediction task, E^(i)T W^(i) should become more and more consistent over training. We show this visualization in Figure 7, which confirms our hypothesis: as training goes on, the patterns E^(i)T W^(i) become more and more visually similar, which implies the model gradually learns to align the input clusters.
Figure 1:
Figure 1: From left to right: unsupervised representation learning through joint embedding and masked auto-encoding; the biological vision system, which perceives via unstructured sensors and understands signals without stationarity or topology [48]; and the many diverse high-dimensional signals in natural science that our method supports while most existing unsupervised methods do not. Data figures are borrowed from [48; 44; 69].
Figure 3:
Figure 3: Retina sampling. (A) An image from the CIFAR-10 dataset. (B) Retina sampling lattice. Each blue dot represents the center of a Gaussian kernel, which mimics a retinal ganglion cell. (C) Visualization of the car image's signal sampled using the retina lattice. Each kernel's sampled RGB value is displayed at its respective lattice location for visualization purposes. (D) Density-adjusted spectral clustering results. Each unique color represents a cluster, with each kernel colored according to its assigned cluster.
Figure 4:
Figure 4: Learnt weights of a self-organizing layer. (A) The image is cropped into patches, where each patch x^(i) first undergoes a different permutation E^(i), then the inverse permutation E^(i)T. (B) The learned weights of the linear self-organizing layer. The 12th column of W^(i) at all positions i is reshaped into patches and visualized. When W^(i) undergoes the inverse permutation E^(i)T, the columns show similar patterns. (C) Visualization of the 37th column of W^(i), similar to (B).
Figure 5:
Figure 5: Foveated retinal sampling. (A) Illustration of a Gaussian kernel, as shown in [8]: diagram of a single kernel filter parameterized by a mean μ′ and variance σ′. (B) The location of each Gaussian kernel is summarized as a point with 2-D coordinate μ′; in total, the locations of 1038 Gaussian kernels are plotted. (C) The relationship between eccentricity (distance of the kernel to the center) and the radius of the kernel.
Figure 6:
Figure 6: Effect of density-adjusted clustering on the eccentricity-based sampling lattice. The center of the sampling lattice has more pixels, which means higher resolution compared to the periphery. (A) Result of density-adjusted spectral clustering (α = 0.5, β = 2). Clusters in the center have more elements than clusters in the periphery, but the clusters look more similar in visual size than in (B). (B) Result of uniform-density spectral clustering (α = 0, β = 0). Each cluster has a similar number of elements, but the clusters in the center are much smaller than the clusters in the periphery.
Figure 7:
Figure 7: Visualization of the weights of the self-organizing layer after applying the inverse permutation. A snapshot of E^(i)T W^(i) is shown at different training epochs; the number of epochs is shown in the top row. Each figure shows one column of the weights of the self-organizing layer at different positions, i.e., W^(i)_{:,k}, where k is the column number and i is the position index. In total, 9 columns are shown.
Table 1:
Evaluation on computer vision and synthetic biological vision datasets. ViT (Patch) stands for the Vision Transformer backbone with image patches as inputs. ViT (Pixel) means pixels are treated as input units. ViT (Clusters) means clusters are treated as inputs instead of patches. The number of clusters is set to 64 for both the Permuted CIFAR-10 and Foveated CIFAR-10 datasets.
Table 3:
Ablation study on the self-organizing layer. Linear probing accuracy with varying parameters, keeping the others constant. For Locally-Permuted CIFAR-10, we use a 4 × 4 patch size. For Permuted CIFAR-10 and Foveated CIFAR-10, we set the number of clusters to 64 for the spectral clustering algorithm. We kept the hyperparameters of the backbone model the same as in Table 1.
Table 4 :
Evaluation on Foveated CIFAR-10 with varying hyperparameters for the density function. For each set of values of α and β, we perform density-adjusted spectral clustering and run URLOST with the corresponding clusters. The evaluation of each trial is provided in the table. | 2023-10-11T18:43:47.944Z | 2023-10-06T00:00:00.000 | {
"year": 2023,
"sha1": "782d400ba7aac6ccb2d4b6d3cadbf4b7c2600d50",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "782d400ba7aac6ccb2d4b6d3cadbf4b7c2600d50",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
12297179 | pes2o/s2orc | v3-fos-license | Signalling Pathways Involved in Adult Heart Formation Revealed by Gene Expression Profiling in Drosophila
Drosophila provides a powerful system for defining the complex genetic programs that drive organogenesis. Under control of the steroid hormone ecdysone, the adult heart in Drosophila forms during metamorphosis by a remodelling of the larval cardiac organ. Here, we evaluated the extent to which transcriptional signatures revealed by genomic approaches can provide new insights into the molecular pathways that underlie heart organogenesis. Whole-genome expression profiling at eight successive time-points covering adult heart formation revealed a highly dynamic temporal map of gene expression through 13 transcript clusters with distinct expression kinetics. A functional atlas of the transcriptome profile strikingly points to the genomic transcriptional response of the ecdysone cascade, and a sharp regulation of key components belonging to a few evolutionarily conserved signalling pathways. A reverse genetic analysis provided evidence that these specific signalling pathways are involved in discrete steps of adult heart formation. In particular, the Wnt signalling pathway is shown to participate in inflow tract and cardiomyocyte differentiation, while activation of the PDGF-VEGF pathway is required for cardiac valve formation. Thus, a detailed temporal map of gene expression can reveal signalling pathways responsible for specific developmental programs and provides here substantial grasp into heart formation.
may also contain genes that have been activated a few hours before, when the ecdysone titre starts rising (around 18h APF). Indeed, genes encoding ion channels (annotated inorganic ion transport) and genes known to be involved in muscle function are over-represented in cluster 2. In addition, a large proportion of genes involved in metabolism, which may support larval heart function, is recovered in clusters 1 to 3.
Expression clusters 3 and 4, which comprise genes up-regulated during the first half of the kinetics, are mainly characterized by over-representation of biological functions linked to programmed cell death. This is in agreement with our knowledge of the cell death timetable in the cardiac tube, with the destruction of abdominal segments A6 and A7 occurring between 30 and 36h APF. Associated biological functions, such as autophagic cell death, endocytosis (cluster 3) and histolysis, are also specifically enriched in these clusters. In addition, enrichment in proteolysis (Figure S2) and small GTPases is a prominent feature of cluster 1 and was also found over-represented among genes activated during salivary gland autophagic cell death. Alongside cell death mechanisms, these expression classes are also significantly enriched in genes involved in cytoskeleton organisation and biogenesis. This might indicate that, while morphogenetic events become detectable only from 30h APF onward, some of them may start earlier and could be induced at a lower ecdysone titre. Alternatively, since transcription of genes involved in cellular remodelling has been observed during steroid-induced salivary gland cell death, this could indicate that a similar response occurs in the dying cardiac myocytes.
Finally, functions related to DNA metabolism and regulation of cell cycle progression were significantly over-represented in cluster 5. However, cardiac myoblasts do not proliferate during adult heart formation. These over-represented functions could be related to the proliferation of the precursors that will contribute to the formation of the imaginal ventral somatic muscles that cover the ventral part of the adult heart. In support of this, functions related to myoblast fusion are activated slightly later, in clusters 6 and 7, which could indicate the differentiation of the ventral muscle.
Transiently up-regulated genes: clusters 6 and 7
The transiently expressed genes were, a priori, the most likely to be linked to the profound morphological changes that we observed in the cardiac tube during metamorphosis, including larval aorta differentiation into the adult heart and A5 segment trans-differentiation. The most salient feature of this gene set is the huge and highly significant enrichment in signal transduction related genes. This might indicate that diverse signalling pathways are activated and may be required for adult heart formation. Indeed, our data clearly indicate that various signalling pathways are implicated in the process (see Results). Besides signal transduction, these clusters are also characterised by an over-representation of genes implicated in myogenesis and myoblast differentiation, which appears significant with regard to cardiogenesis.
Progressively up-regulated genes: clusters 8 to 12
These clusters contain the genes that are progressively activated during adult heart formation (starting from 30h APF) and maintained thereafter. A large population of genes in these clusters encodes proteins involved in energetic metabolism, ionic transport and muscle contraction. The clusters comprising the earliest activated genes (clusters 8 and 9, activated at around 33 and 36h APF, respectively) are highly enriched in protein biosynthesis linked genes, and the activation of the protein translation machinery most likely supports the important cardiac growth already described during the process. Genes encoding muscle contraction annotated proteins were activated slightly later (clusters 9 and 10) and might be related to the huge increase in myofibrils observed in the larval aorta myocytes while they are remodelled to form the adult heart. In addition, genes annotated cell matrix adhesion were over-represented in cluster 10, which may also indicate an important remodelling of the extracellular matrix during adult heart formation (Figure S2). Finally, lipid metabolism appears to be induced from 42h onward (clusters 11 and 12), which might support the reinitiation of cardiac activity. A graded activation of genes involved in energy generation mediated by electron transport and oxidative phosphorylation was observed, these biological functions being highly over-represented among genes of clusters 8 to 11. Activation of these genes may sustain cardiac tube growth and heart beat recovery.
Transiently repressed genes: cluster 13
Among the genes that are down-regulated during cardiac tube remodelling but actively transcribed during heart-beating periods (up to 24h and from 42h APF onward; cluster 13), the most salient feature is the over-representation of genes involved in carbohydrate metabolism, particularly polysaccharide metabolism. This most probably reflects the dependence of myocyte contraction on energy derived from sugar metabolism. | 2014-10-01T00:00:00.000Z | 2007-08-28T00:00:00.000 | {
"year": 2007,
"sha1": "1323f7f2efa4f1c11ea7c6c1143f6e18f570371d",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.0030174&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "032133af17422378792ba7f0e714dc23ca367b9c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
204881578 | pes2o/s2orc | v3-fos-license | Ligand Structure Effects on Molecular Assembly and Magnetic Properties of Copper(II) Complexes with 3-Pyridyl-Substituted Nitronyl Nitroxide Derivatives
Reaction of Cu(hfac)2 with methyl- and bromo-3-pyridyl-substituted nitronyl nitroxides (LR) leads to the assembly of a diverse set of coordination complexes: mononuclear [Cu(hfac)2L2-Me], binuclear [{Cu(hfac)2}2(H2O)L2-Me], trinuclear [{Cu(hfac)2}3(L6-Br)2], pentanuclear [{Cu(hfac)2}5(L2-Me)2] and [{Cu(hfac)2}5(L2-Me)4], cocrystals [Cu(hfac)2(L2-Br)2]·[Cu(hfac)2(H2O)2] and [Cu(hfac)2(L2-Br)2]·2[Cu(hfac)2H2O], one-dimensional polymers [Cu(hfac)2L2-Br]n and [Cu(hfac)2L6-Br]n, and cyclic dimers [Cu(hfac)2L5-Me]2, [Cu(hfac)2L5-Br]2, and [Cu(hfac)2L6-Me]2. The molecular structures of the obtained complexes are strongly affected by the substituent type and its location in the pyridine heterocycle. Occupation of the second position of the pyridine ring increases the steric hindrance of both the imine and nitroxide coordination sites of L2-R, which favors the formation of various conformers and the precipitation of complexes with different molecular structures. The pentanuclear [{Cu(hfac)2}5(L2-Me)2] and [{Cu(hfac)2}5(L2-Me)4] complexes have no prior analogues and are valuable model objects for investigating the mechanism of formation of various coordination polymers. The arrangement of long Cu−ONO bonds in {CuO6} square bipyramids, due to the weakened nitroxide donor site in the complexes based on the L2-Me, L2-Br, and L6-Br ligands, results in ferromagnetic exchange interactions between the spins of the Cu2+ ions and the nitroxides. Complexes with substituents that do not considerably affect the coordination ability of the ligands (L5-Me, L5-Br, and L6-Me) exhibit strong antiferromagnetic exchange interactions between the spins of the Cu2+ ions and the nitroxides.
■ INTRODUCTION
The study of the effect of ligand structure modification on the molecular and crystal structures of the resulting complexes is an important aspect of the rational design of materials with desirable structures and functional properties. Copper(II) complexes with nitroxide radicals are unique objects for studying the peculiarities of magnetic exchange pathways between different types of paramagnetic centers.1,2 The magnetic behavior of Cu(II)−nitroxide heterospin systems strongly correlates with the geometry of the coordination sphere of the Jahn−Teller Cu2+ ions, which is substantially affected by the molecular structure of the nitroxide ligand. Equatorial coordination of the nitroxide group (NO group) results in strong antiferromagnetic Cu2+−radical exchange coupling (J ≪ 0), whereas axial binding leads to ferromagnetic exchange interactions (J > 0).1−3 There are several well-known factors that may favor one of the geometries.2 Equatorial coordination is often preferred because of the additional stabilization in the case of spin pairing. The presence of other strong coordination sites in the side chain of the ligand forces axial binding of the NO group. Steric factors and intermolecular interactions between uncoordinated NO groups can favor both axial and equatorial coordination types, depending on the molecular structure and crystal packing of the resulting compounds. In some cases, the energy gap between equatorial and axial coordination is very small, which allows the magnetic exchange interactions to be switched by changing the temperature or pressure, or by light irradiation.1−12 However, only a few Cu(II) complexes with nitroxides exhibit structural-magnetic anomalies under external stimuli.1−24 Even a minor modification of a ligand leads to significant changes in the molecular and crystal structures of the resulting Cu(II) complexes. This strongly affects the parameters of magnetic transitions and can result in the assembly of phases that cannot exhibit any magnetostructural anomalies.5,7−13,23,24 Cu(hfac)2 complexes with various derivatives of 3-pyridyl-substituted nitronyl nitroxide (L) are of particular interest due to the diversity of molecular species and their magnetic properties (Scheme 1).4,7,12,16−18,25,26 Introduction of a Me group at the 4th position of the pyridine ring of the L4-Me nitronyl nitroxide led to the formation of Cu(II) complexes with structures and magnetic properties completely different from those of the [{Cu(hfac)2}4L2] compound based on the unsubstituted ligand.4,12 Recently described copper(II) and mixed-metal copper(II)−lanthanide complexes with L5-Br and L6-OMe nitronyl nitroxides have demonstrated various topological structures and different magnetic behaviors in the absence of structural and magnetic transitions.16−18,25,26 However, the effect of substituents at different positions of the pyridine ring on the coordination ability of the 3-pyridyl nitronyl nitroxide ligand and on the structure and magnetic behavior of the resulting Cu(II) complexes has not been studied systematically. Therefore, the Me group and the Br atom were chosen as substituents for the 2nd, 5th, or 6th positions of the pyridine heterocycle of the LR nitroxide, because both have the same spatial size but different electronic properties (Scheme 1).27
This approach allowed us to trace the general relationship between the donor ability of the nitroxide coordination sites and the molecular structure of the resulting complexes, as well as the effect of substitution on the magnetostructural correlations inherent in their nature.
■ RESULTS AND DISCUSSION
Nitronyl nitroxides LR (R = Me, Br) were synthesized by condensation of the corresponding aldehydes with 2,3-bis(hydroxyamino)-2,3-dimethylbutane and subsequent oxidation of the resulting adducts with PbO2, according to a well-established approach derived from Ullman's work.28−30 The spin-labeled LR (R = 2-Me, 2-Br, 5-Me, 5-Br, 6-Br) were successfully isolated as single crystals suitable for X-ray diffraction (XRD) analysis (Figures 1S−3S and Table 1S). It was shown that the N−•O bond lengths are typical for nitronyl nitroxides (1.27−1.28 Å).30 The synthesized Me- and Br-substituted nitronyl nitroxides were reacted with Cu(hfac)2. Interestingly, only one main product was obtained in the reactions of the 5th- and 6th-substituted nitronyl nitroxides, whereas the L2-Me and L2-Br radicals generated several different complexes depending on the synthetic conditions and reagent ratio. Some of the resulting complexes are kinetic products and crystallized as admixtures.
and [{Cu(hfac)2}3(L6-Br)2] complexes were obtained in small amounts and could not be manually separated because of the small size of the crystals and their similar appearance to the main product. This allowed us to study only their crystal structures.
The main reaction product of Cu(hfac)2 and L2-Me in a 1:1 ratio is the mononuclear [Cu(hfac)2L2-Me] complex. The crystal structure comprises two crystallographically independent Cu(hfac)2L2-Me molecules (Figure 1a) with slightly different bond lengths and angles. The coordination environment of the Cu atoms in both molecules is square pyramidal, where one of the Ohfac atoms is at the apex and the base is formed by the other three Ohfac atoms and the NPy atom (Cu−NPy = 2.003(9) and 2.021(8) Å; Figure 1a and Table 2S). The nearest ONO···ONO distance between the uncoordinated NO groups of neighboring [Cu(hfac)2L2-Me] molecules is 3.659(11) Å.
The χMT value for [Cu(hfac)2L2-Me] in the temperature range 15−300 K is 0.81 cm3·K·mol−1 (Figure 2a). At 300 K, the χMT for [{Cu(hfac)2}5(L2-Me)2] is 1.22 cm3·K·mol−1 and does not change significantly when the temperature is lowered to 10 K (Figure 2b). The experimental dependence χMT(T) was analyzed using the PHI program31 with the contribution of {Cu2+A-JCuR-RA-J′CuR-Cu2+C-J′CuR-RB-JCuR-Cu2+B} five-spin fragments (H = −2JCuR(SCuASRA + SCuBSRB) − 2J′CuR(SRASCuC + SCuCSRB)). The weakly interacting spins of the Cu2+ ions that coordinate the donor NPy atoms were introduced into the analysis using the Curie law. The optimal values of the exchange parameters, gR = 2.00 (fixed), gCu2+ = 2.06, JCuR = −842 cm−1, J′CuR = +57 cm−1, and zJ′ = +0.01 cm−1, indicate strong antiferromagnetic exchange interactions between the spins of the terminal Cu2+ ions and the nitroxides in the {CuO5} coordination units, due to direct overlap of their magnetic orbitals. The almost constant χMT value in the temperature range 10−300 K is in good agreement with the theoretical magnitude of 1.30 cm3·K·mol−1 for the uncompensated spins of three Cu2+ ions.
The main product of the {Cu(hfac)2 + L2-Br} synthetic system is the one-dimensional [Cu(hfac)2L2-Br]n complex (Figure 3a). The nitroxide molecules perform a bidentate bridging function via the ONO donor atoms of both NO groups in the coordination spheres of the Cu atoms. The resulting polymer chains consist of two types of alternating [Cu(hfac)2L2-Br] monomers, which differ slightly in bond lengths and angles (Table 3S).
The experimental χMT(T) dependence was analyzed using the high-temperature series expansion for the uniform chain model [32], on the assumption that the exchange interactions JA and JB are weakly ferromagnetic in nature and of approximately the same magnitude, i.e., JCuR ≈ JA ≈ JB. The estimated optimum parameters of the magnetic exchange interactions, gR = 2.00 (fixed), gCu2+ = 2.09, JCuR = +6.8 cm−1, are in good agreement with the proposed scheme of the intrachain exchange interactions.
The temperature dependence of the molar magnetic susceptibility shows similar magnetic behavior for both [Cu(hfac)2L5-R]2 (R = Me, Br) complexes (Figures 6a and 4Sc). At 300 K, the χMT values are 0.04 and 0.06 cm3·K·mol−1 for [Cu(hfac)2L5-Me]2 and [Cu(hfac)2L5-Br]2, respectively, much smaller than the theoretical spin-only value (1.62 cm3·K·mol−1) for four independent paramagnetic centers with spin S = 1/2.1,2,14,15,21,24 The χMT(T) dependences are well described by the dimer model with the Hamiltonian H = −2JCuR(SCuSR) using the PHI program.31 The optimum values of the exchange parameters are as follows: gR = 2.00 (fixed). The residual χMT value is 0.01 cm3·K·mol−1 at 2 K for both dimers, attributed to free spin defects of less than 0.6%.
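For intuition, an isolated S = 1/2 pair with H = −2J S1·S2 has the closed-form Bleaney−Bowers susceptibility; the minimal sketch below (not the PHI fit used in this work) shows why χMT is nearly quenched at 300 K when J is several hundred wavenumbers:

```python
import numpy as np

KB_CM = 0.695  # Boltzmann constant in cm^-1 K^-1

def chi_m_t_dimer(T, J, g=2.0):
    """chi_M*T (cm^3 K mol^-1) for an isolated S = 1/2 pair with
    H = -2J S1.S2 (Bleaney-Bowers); J in cm^-1, T in K. The prefactor
    0.75*g**2/T equals 2*N_A*g^2*mu_B^2/(k_B*T) in molar cgs units."""
    T = np.asarray(T, dtype=float)
    chi = (0.75 * g ** 2 / T) / (3.0 + np.exp(-2.0 * J / (KB_CM * T)))
    return chi * T

# Strongly antiferromagnetic pair: chi_M*T stays close to zero even at 300 K,
# qualitatively matching the behavior observed for the [Cu(hfac)2 L5-R]2 dimers.
print(chi_m_t_dimer([2.0, 100.0, 300.0], J=-800.0))
```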
Dark-red crystals of the [Cu(hfac) 2 L 6-Me ] 2 dimer complex were isolated in the synthesis containing equimolar amounts of Cu(hfac) 2
can be initiated by increasing the temperature,12 we performed differential thermal and thermogravimetric analyses of the obtained dimer complexes. However, the TG-DTA curves for [Cu(hfac)2L5-Me]2, [Cu(hfac)2L5-Br]2, and [Cu(hfac)2L6-Me]2 (Figure 5S) did not show any specific heat effects associated with phase transitions up to the samples' decomposition temperatures. Thus, the strongly coupled state is the most stable one for the [Cu(hfac)2
The χ M T for [Cu(hfac) 2 L 6-Br ] n is 0.80 cm 3 ·K·mol −1 at 300 K, in good agreement with the theoretical value (0.81 cm 3 ·K· mol −1 ) for two independent spins S = 1/2 of Cu 2+ ion and nitroxide radical per formula fragment [Cu(hfac) 2 L 6-Br ] (Figure 6b). χ M T gradually increases with decreasing temperature and reaches 1.68 cm 3 ·K·mol −1 at 2 K, indicating the domination of ferromagnetic exchange interactions, which are typical for axial coordination of NO groups to central Cu 2+ ions. 1,2,10,14,24 The experimental dependence of χ M T(T) is well described by the model including the contribution of the threespin {RA-J CuR -Cu 2+ A-J CuR -RB} exchange cluster (H = −2J CuR (S CuA S RA + S RB S CuA )) and isolated Cu 2+ ion spins from coordination units {CuO 4 N 2 } according to the Curie law, with the best fit parameters as follows: g R = 2.00 (fixed), g Cu 2+ = 2.10, J CuR = +16 cm −1 , and zJ′ = +0.07 cm −1 .
General Comparisons. Comparison of the data for the Cu(hfac)2 complexes with substituted derivatives of 3-pyridyl nitronyl nitroxides obtained within the scope of the current study and described previously elsewhere4,12,26 showed that the substituent type and its position in the pyridine heterocycle considerably affect the structure and magnetic behavior of the resulting compounds. Initially, we suspected that a substituent at the 2nd position of the pyridine ring should sterically hinder the coordination of both the imine and nitroxide donor sites. Occupation of the 4th or 6th position should mainly weaken the donor properties of only the nitroxide or the imine, respectively. The location of the substituent at the 5th position of the pyridine heterocycle should have a much smaller effect on the donor abilities of the ligand. Additionally, we assumed that a Br substituent should considerably weaken the nearest donor site due to its strong negative inductive effect. However, the ability of the ligand donor sites to coordinate is only one of the factors affecting the molecular structures of the complexes. The spatial arrangements of hfac and nitroxide ligands in different types of coordination units and the number of coordinated donor sites of the ligands are strongly influenced by the reagent ratio, their concentration, the solvent used, ambient temperature, and humidity.1,2,7,11,23,24 The low solubility of the resulting solids and the strong antiferromagnetic exchange interactions between paramagnetic centers can result in particular molecular and high-dimensional species.1,2,23,24
A similar correlation between the CN 2 −Py interplanar angle and the arrangement of substituents in L R (R = Me, Br) is found for the molecular structures of the complexes (Table 6S). However, this relationship is much more complicated and required consideration of the steric properties of bulky Cu(hfac) 2 fragments in the different types of coordination units and the variation of a number of coordinated donor sites of ligands. Notably, the most sterically hindered L 2-R and L 4-Me 12 radicals produce the largest number of complexes, whereas the use of ligands bearing substituents at the 5th and 6th positions of the pyridine heterocycle produces only one main coordination complex for each paramagnetic ligand. Apparently, a high steric hindrance for complexes with L 2-R and L 4-Me derivatives induces the formation of many conformers with various CN 2 −Py angles in a solution, leading to the precipitation of compounds with different molecular structures.
The comparison of distances between Cu atoms and imine and nitroxide donor sites allowed us to clarify the key correlations between the type and position of the substituent in the pyridine heterocycle and the coordinating ability of nitroxide ligands (Table 6S)
at the 5th position of the pyridine ring. Similar cyclic dimers were obtained for the Cu(hfac)2 complexes with the L4-Me and L6-Me ligands. Shifting the Me group to the 6th position of the pyridine ring does not considerably change the donor ability of the imine site or the Cu−NPy distance (∼2.04 Å). In contrast, the nitroxide donor site of L4-Me is more sterically hindered, which results in a lengthening of the Cu−ONO bond (∼2.43 Å).12 The complexes with the L6-Br ligand are the one-dimensional polymer [Cu(hfac)2L6-Br]n and the trinuclear molecule [{Cu(hfac)2}3(L6-Br)2].
2-(2-Bromopyridin-3-yl)-4,4,5,5-tetramethyl-4,5-dihydro-1H-imidazole-3-oxide-1-oxyl (L2-Br). 2-Bromo-3-formylpyridine (249.8 mg, 1.34 mmol) was added to a solution of 2,3-bis(hydroxyamino)-2,3-dimethylbutane (207.4 mg, 1.40 mmol) in MeOH (7.0 mL) at room temperature. The reaction mixture was stirred for 30 h; the precipitate was then filtered off, washed with MeOH, and dried in air. This gave 3-(1,3-dihydroxy-4,4,5,5-tetramethylimidazolidin-2-yl)-2-bromopyridine (394.9 mg) as a white powder, which was used without additional purification. PbO2 (1.9 g) was added to a suspension of the adduct in MeOH (10.0 mL), and the mixture was stirred for 2 h. The resulting violet solution was filtered, the filtrate was evaporated, and the residue was purified by column chromatography (Al2O3, EtOAc) followed by recrystallization from a CH2Cl2/n-hexane mixture.

[{Cu(hfac)2}5(L2-Me)2]. n-Hexane (3 mL) was added to a solution of Cu(hfac)2 (59.2 mg, 0.125 mmol) and L2-Me (12.5 mg, 0.050 mmol) in a mixture of Et2O (0.5 mL) and CH2Cl2 (1.0 mL). The volume of the dark-red solution was decreased to ∼2 mL with a slow flow of nitrogen. The reaction mixture was kept in a sealed flask at −5 °C for 12 h. The dark-red crystals suitable for XRD analysis were filtered off, washed with hexane, and dried in air. Yield: 63.7 mg (89%). We could not collect enough [{Cu(hfac)2}5(L2-Me)4] manually with SC XRD identification for an extensive SQUID analysis and studied only the crystal structure.

[Cu(hfac)2L2-Me]. Cu(hfac)2 (59.9 mg, 0.125 mmol) and L2-Me (25.7 mg, 0.100 mmol) were dissolved in a mixture of Et2O (0.5 mL) and CH2Cl2 (1.0 mL). Then, n-hexane (3 mL) was added and the volume of the dark-red solution was reduced to ∼2 mL with a slow flow of nitrogen. The reaction mixture was kept in a sealed flask at −5 °C for 16 h. A solution of Cu(hfac)2 (48.0 mg, 0.100 mmol) in n-heptane (20.0 mL) was placed in a 50 mL flask and was refluxed until the volume of the reaction mixture decreased to ∼8 mL. Then, a solution of L2-Me (12.5 mg, 0.050 mmol) in CH2Cl2 (2.0 mL) was added and the volume of the reaction mixture was reduced to ∼4 mL with a slow flow of nitrogen. The reaction mixture was kept in a sealed flask at −5 °C. After 12 h, a mixture of agglomerates of green crystals and violet powder was filtered off, washed with cold hexane, and dried in air. Several green crystals suitable for XRD were separated manually.
[Cu(hfac)2L2-Br]n. n-Hexane (3.0 mL) was added to a solution of Cu(hfac)2 (42.1 mg, 0.088 mmol) and L2-Br (15.6 mg, 0.050 mmol) in a mixture of Et2O (0.5 mL) and CH2Cl2 (1.0 mL). The volume of the resulting dark-red solution was reduced to ∼4 mL with a slow flow of nitrogen. The reaction mixture was kept at −5 °C in a sealed flask for 12 h.

[Cu(hfac)2L6-Br]n. A solution of Cu(hfac)2 (71.9 mg, 0.150 mmol) in n-heptane (20.0 mL) was placed in a 50 mL flask and was refluxed until the volume of the reaction mixture was reduced to ∼8.0 mL. Then, a solution of L6-Br (31.5 mg, 0.100 mmol) in CH2Cl2 (1.0 mL) was added and the volume of the reaction mixture was reduced to ∼3 mL with a slow flow of nitrogen. The reaction mixture was kept at −5 °C in a sealed flask for 10−16 h. The resulting dark-blue crystals were filtered off, washed with cold hexane, and dried in air. The crystals of the [{Cu(hfac)2}3(L6-Br)2] complex were obtained as an admixture by quickly decreasing the volume of the reaction mixture with a flow of nitrogen. Because of the small size of the [{Cu(hfac)2}3(L6-Br)2] crystals, we collected only several of them manually for the SC XRD study.
X-ray Crystallography (XRD). X-ray diffraction intensities were collected from the selected single crystals, individually mounted on glass fibers, using Bruker SMART APEX II (CCD detector) and D8 QUEST (CMOS area detector) diffractometers (Mo Kα radiation). Data reduction was performed using SAINT, and intensities were corrected for absorption using SADABS.36,37 The structures were solved by direct methods and refined by the full-matrix least-squares procedure, anisotropically for non-hydrogen atoms. The H atoms were calculated geometrically and included in the refinement as riding groups. All calculations were performed with the SHELXT 2014/5 and SHELXT-2018/3 program packages.38 The crystallographic data and details of the experiments are presented in Tables 1S−4S in the Supporting Information.
Magnetic Measurements. The magnetic susceptibility measurements were performed using an MPMS-5 SQUID magnetometer (Quantum Design) in the temperature range 2−350 K in a 5 kOe magnetic field. The molar magnetic susceptibility was calculated using diamagnetic corrections for the complexes according to the Pascal scheme. 39 Analysis and fitting of magnetochemistry data for heterospin complexes was performed with the PHI program. 31
Author Contributions
The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript.
Notes
The authors declare no competing financial interest. | 2019-10-10T09:27:50.075Z | 2019-10-07T00:00:00.000 | {
"year": 2019,
"sha1": "5ded81097ee0ec9673aae24543ed5d619e6fa8e5",
"oa_license": "acs-specific: authorchoice/editors choice usage agreement",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.9b01575",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "058c7df61a7cea95b880bb0adaf7176abc4dc381",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
222304110 | pes2o/s2orc | v3-fos-license | The Secret Is in the Spectra: Predicting Cross-lingual Task Performance with Spectral Similarity Measures
Performance in cross-lingual NLP tasks is impacted by the (dis)similarity of languages at hand: e.g., previous work has suggested there is a connection between the expected success of bilingual lexicon induction (BLI) and the assumption of (approximate) isomorphism between monolingual embedding spaces. In this work we present a large-scale study focused on the correlations between monolingual embedding space similarity and task performance, covering thousands of language pairs and four different tasks: BLI, parsing, POS tagging and MT. We hypothesize that statistics of the spectrum of each monolingual embedding space indicate how well they can be aligned. We then introduce several isomorphism measures between two embedding spaces, based on the relevant statistics of their individual spectra. We empirically show that 1) language similarity scores derived from such spectral isomorphism measures are strongly associated with performance observed in different cross-lingual tasks, and 2) our spectral-based measures consistently outperform previous standard isomorphism measures, while being computationally more tractable and easier to interpret. Finally, our measures capture complementary information to typologically driven language distance measures, and the combination of measures from the two families yields even higher task performance correlations.
Introduction
The effectiveness of joint multilingual modeling and cross-lingual transfer in cross-lingual NLP is critically impacted by the actual languages in consideration (Bender, 2011;Ponti et al., 2019). Characterizing, measuring, and understanding this cross-language variation is often the first step towards the development of more robust multilingually applicable NLP technology (O'Horan et al., 2016;Bjerva et al., 2019;Ponti et al., 2019). For instance, selecting suitable source languages is a prerequisite for successful cross-lingual transfer of dependency parsers or POS taggers (Naseem et al., 2012;de Lhoneux et al., 2018). In another example, with all other factors kept similar (e.g., training data size, domain similarity), the quality of machine translation also depends heavily on the properties and language proximity of the actual language pair (Kudugunta et al., 2019).
In this work, we contribute to this research endeavor by proposing a suite of spectral-based measures that capture the degree of isomorphism between the monolingual embedding spaces of two languages. Our main hypothesis is that the potential to align two embedding spaces and learn transfer functions can be estimated through the differences between the monolingual embeddings' spectra. We therefore discuss representative statistics of the spectrum of an embedding space (i.e., the set of the singular values of the embedding matrix), such as its condition number or its sorted list of singular values. We then derive measures for the isomorphism between two embedding spaces based on these statistics.
To validate our hypothesis, we perform an extensive empirical evaluation with a range of cross-lingual NLP tasks. This analysis reveals that our proposed spectrum-based isomorphism measures better correlate and explain greater variance than previous isomorphism measures (Patra et al., 2019). In addition, our measures also outperform standard approaches based on linguistic information (Littell et al., 2017). The first part of our empirical analysis targets bilingual lexicon induction (BLI), a cross-lingual task that received plenty of attention, in particular as a case study to investigate the impact of cross-language variation on task performance (Artetxe et al., 2018). Its popularity stems from its simple task formulation and reduced resource requirements, which makes it widely applicable across a large number of language pairs (Ruder et al., 2019b).
Prior work has empirically verified that for some language pairs BLI performs remarkably well, and for others rather poorly. These studies attempted to explain this variance in performance by grounding it in the differences between the monolingual embedding spaces themselves. They introduced the notion of approximate isomorphism, and argued that it is easier to learn a mapping function (Mikolov et al., 2013; Ruder et al., 2019b) between language pairs whose embeddings are approximately isomorphic than between language pairs without this property (Barone, 2016). Subsequently, novel methods to quantify the degree of isomorphism were proposed, and were shown to significantly correlate with BLI scores (Zhang et al., 2017; Patra et al., 2019).
In this work, we report much higher correlations with BLI scores than existing isomorphism measures, across a variety of state-of-the-art BLI approaches. While previous work was limited only to coarse-grained analysis with a small number of language pairs (i.e., < 10), our study is the first large-scale analysis that is focused on the relationship between quantifiable isomorphism and BLI performance. Our analysis covers hundreds of diverse language pairs, focusing on typologically, geographically and phylogenetically distant pairs as well as on similar languages.
We further show that our findings generalize beyond BLI, to cross-lingual transfer in dependency parsing and POS tagging, and we also demonstrate strong correlations with machine translation (MT) performance. Finally, our spectral-based measures can be combined with typologically driven language distance measures to achieve further correlation improvements. This indicates the complementary nature of the implicit knowledge coded in continuous semantic spaces (and captured by our spectral measures) and the discrete linguistic information from typological databases (captured by the typologically driven measures).
Quantifying Isomorphism with Spectral Statistics
Following the distributional hypothesis (Harris, 1954; Firth, 1957), word embedding models learn the meaning of words according to their co-occurrence patterns. Hence, the word embedding space of a language whose words are used in diverse contexts is intuitively expected to encode richer information and greater variance than the word embedding space of a language with more restricting word usage patterns. The difference between two monolingual embedding spaces may also result from other reasons, such as the difference between the training corpora on which the embedding induction algorithm is trained, and the degree to which this algorithm accounts for the linguistic properties of each of the languages. While the exact combination of factors that govern the difference between the embedding spaces of different languages is hard to pin down, this difference is likely to be indicative of the quality of cross-lingual transfer. This is particularly true when the embedding spaces are used by cross-lingual transfer algorithms. Our core hypothesis is that the difference between two monolingual spaces can be quantified by spectral statistics of the two spaces.
Spectrum Statistics
Given a d-dimensional embedding matrix X, we perform Singular Value Decomposition (SVD) and obtain a diagonal matrix Σ whose main diagonal comprises d singular values, σ_1, σ_2, . . . , σ_d, sorted in descending order. 1 Our aim is to quantify the difference between two embedding spaces by comparing statistics of their singular values. We next describe such statistics and in §2.2 use them to measure the isomorphism between the spaces.
Condition Number. In numerical analysis, a function's condition number measures the extent of change of the function's output value conditioned on a small change in the input (Blum, 2014). Consider the case of ϕ : X → Y, where X and Y are two embedding spaces mapped via ϕ. The condition number, κ(X), represents the degree to which small perturbations in the input X are amplified in the output ϕ(X). Following Higham et al. (2015), we compute the condition number of an input matrix X with d singular values as the ratio between its first (largest) and last (smallest) singular values:

κ(X) = σ_1 / σ_d.   (1)

Why is it a relevant statistic? A smaller condition number denotes a more "stable" matrix that is less sensitive to perturbations. Consequently, learning a transfer function ϕ from one embedding space to another is more robust to noise when dealing with spaces with smaller κ(X). We thus expect that embedding matrices with high condition numbers might impede the learning of good transfer functions in cross-lingual NLP: a function learnt on an embedding space that is sensitive to small perturbations may not generalize well.
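For concreteness, the statistic can be computed in a few lines; this is a minimal NumPy sketch of Eq. (1), with a function name of our own choosing rather than anything from the paper's released code:

```python
import numpy as np

def condition_number(X: np.ndarray) -> float:
    """Ratio of the largest to the smallest singular value of X (Eq. (1))."""
    # numpy returns singular values sorted in descending order.
    sigma = np.linalg.svd(X, compute_uv=False)
    return float(sigma[0] / sigma[-1])
```

Note that a near-zero smallest singular value drives κ(X) towards infinity, which is exactly the instability the next paragraph addresses.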
Are small singular values reliable? Small singular values are associated with noise, or with the least important information, and many noise reduction techniques remove them (Ford, 2015). If the smallest singular value indeed captures noise, this might affect the condition number (Eq. (1)). It is thus crucial to distinguish between "small but significant" and "small and insignificant" singular values. This is what we do below.
Effective Rank. Given sorted singular values, how can we determine the last effective singular value? For a matrix with d singular values σ_1 ≥ σ_2 ≥ · · · ≥ σ_d ≥ 0, the ε-numerical rank can be defined as r_ε = |{i : σ_i ≥ ε}|, which means that singular values below a certain threshold ε are removed. However, this formulation introduces a dependency on the hyper-parameter ε. To avoid this, Roy and Vetterli (2007) proposed an alternative method that considers the full spectrum of singular values before computing the so-called effective rank of the input matrix X:

eff-rank(X) = exp(H(σ̄)),   (2)

where H(σ̄) = −Σ_{i=1}^{d} σ̄_i ln σ̄_i is the entropy of the matrix X's normalized singular value distribution σ̄_i = σ_i / Σ_{j=1}^{d} σ_j. The value of eff-rank(X), rounded down to the nearest integer, yields the index of the last singular value that is considered significant, and is interpreted as the effective dimensionality, or rank, of the matrix X. If d is the dimensionality of the embedding space X, and we assume that the number of word vectors in X is typically much larger than d, it then holds that (see Roy and Vetterli (2007)):

1 ≤ eff-rank(X) ≤ rank(X) ≤ d.

The dimensionality of an embedding space is intuitively assumed to be equal to the dimensionality of its constituent vectors: the matrix rank. Effective rank undermines this assumption: with effective rank, matrices of the same 'initial dimensionality' can have very different 'true dimensionalities' (Yin and Shen, 2018). Effective rank is used for various problems outside NLP, such as source localization for acoustic (Tourbabin and Rafaely, 2015) and seismic (Leeuwenburgh and Arts, 2014) waves, video compression (Bhaskaranand and Gibson, 2010), and for the evaluation of implicit regularization in neural matrix factorization (Arora et al., 2019). We propose to use it to inform and improve the estimation of the condition number.
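A short sketch of Eq. (2), again our own illustrative code rather than the authors' implementation; it follows Roy and Vetterli (2007) in using natural-log (Shannon) entropy:

```python
import numpy as np

def effective_rank(X: np.ndarray) -> float:
    """Effective rank of X (Roy & Vetterli, 2007): exp of the entropy of the
    normalized singular value distribution (Eq. (2))."""
    sigma = np.linalg.svd(X, compute_uv=False)
    p = sigma / sigma.sum()          # normalized singular values
    p = p[p > 0]                     # guard against log(0)
    return float(np.exp(-np.sum(p * np.log(p))))
```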
Effective Condition Number. We replace σ_d in Eq. (1) with the singular value at the position of X's effective rank (see Eq. (2)), and compute the effective condition number κ_ecn as follows:

κ_ecn(X) = σ_1 / σ_r, where r = ⌊eff-rank(X)⌋.   (3)

In §5 we empirically validate the quality of the effective condition number in comparison to the standard condition number.
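Combining the two statistics gives the following illustrative sketch of Eq. (3); the mapping from 1-based singular-value positions to 0-based NumPy indices is our assumption:

```python
import numpy as np

def effective_condition_number(X: np.ndarray) -> float:
    """Effective condition number (Eq. (3)): sigma_1 divided by the singular
    value at the (floored) effective-rank position."""
    sigma = np.linalg.svd(X, compute_uv=False)
    p = sigma / sigma.sum()
    p = p[p > 0]
    erank = np.exp(-np.sum(p * np.log(p)))   # effective rank, Eq. (2)
    r = int(np.floor(erank))                 # index of the last significant value
    return float(sigma[0] / sigma[r - 1])    # 1-based position -> 0-based index
```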
Having defined spectral statistics of an embedding space, we move to define means of comparing two spaces using these statistics.
Spectral-Based Isomorphism Measures
The statistics described in §2.1 capture properties of a single embedding space, but it is not straightforward how to employ them in order to quantify the similarity between two distinct embedding spaces.
In what follows, we introduce isomorphism measures based on the spectral statistics.
Let us assume two embedding matrices X_1 and X_2, with their condition numbers κ(X_1) and κ(X_2). We combine the two numbers using the harmonic mean function (HM) to derive an isomorphism measure between two embedding spaces:

COND-HM(X_1, X_2) = 2 κ(X_1) κ(X_2) / (κ(X_1) + κ(X_2)).

We similarly define the ECOND-HM measure over κ_ecn(X_1) and κ_ecn(X_2). Why harmonic mean? The higher the (effective) condition number of an embedding space, the higher its sensitivity to perturbations (i.e., the performance of transfer functions will be low). We view the condition number as a constraining factor on transferability, but what is the right way to evaluate the 'transferability potential' of two spaces via their condition numbers? There are multiple ways to combine two condition numbers, but we have empirically validated (§5) that HM is a robust choice that outperforms some other possibilities (e.g., the arithmetic mean). We hypothesize this is because HM treats large discrepancies between two numbers in a manner that leans towards the smaller one (unlike e.g. arithmetic mean). Two noisy and two stable embedding spaces would have high and low HMs, respectively, but a noisy embedding space and a stable one would have an HM that leans towards the stable one. 2 Our results suggest that embedding spaces with small condition numbers can often tolerate noisy mappings from embedding spaces with high condition numbers, which might result from the improved stability of the former spaces.
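A tiny sketch of the harmonic-mean combination; the numbers in the usage comment are invented purely to illustrate the lean-towards-the-smaller behaviour discussed above:

```python
def cond_hm(kappa1: float, kappa2: float) -> float:
    """Harmonic mean of two (effective) condition numbers: COND-HM / ECOND-HM."""
    return 2.0 * kappa1 * kappa2 / (kappa1 + kappa2)

# Example: one stable space (kappa = 10) and one noisy space (kappa = 1000).
# cond_hm(10, 1000) ~= 19.8, close to the stable space's 10, whereas the
# arithmetic mean would be 505.
```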
Singular Value Gap. In addition to COND-HM and ECOND-HM, we introduce another measure that empirically quantifies the divergence between the full spectral information of two embedding spaces. This measure quantifies the gap between the singular values obtained from the matrices X_1 and X_2, sorted in descending order. We define the measure of Singular Value Gap (SVG) between two d-dimensional spaces X_1 and X_2 as the squared Euclidean distance between the corresponding sorted singular values after a log transform:

SVG(X_1, X_2) = Σ_{i=1}^{d} (ln σ_i^1 − ln σ_i^2)²,

where σ_i^1 and σ_i^2, i = 1, . . . , d, are the sorted singular values characterizing the two embedding matrices X_1 and X_2. The intuition here is that two embedding spaces with similar singular values at the same index will be more isomorphic, and therefore easier to align into a shared space, enabling more effective cross-lingual transfer.
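A minimal sketch of the SVG computation; the optional truncation parameter k mirrors the top-40 setting mentioned later in §4 and is our addition:

```python
import numpy as np

def svg(X1: np.ndarray, X2: np.ndarray, k=None) -> float:
    """Singular Value Gap: squared Euclidean distance between the sorted,
    log-transformed singular values of two embedding matrices."""
    s1 = np.linalg.svd(X1, compute_uv=False)
    s2 = np.linalg.svd(X2, compute_uv=False)
    if k is not None:                 # e.g. k=40, as used for BLI in Section 4
        s1, s2 = s1[:k], s2[:k]
    return float(np.sum((np.log(s1) - np.log(s2)) ** 2))
```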
In summary, this section has presented methods that estimate the degree of isomorphism between any given pair of embedding spaces, which may differ in their language, training corpus, embedding induction algorithm or in other factors. While the focus of our empirical analysis (§4, §5) is cross-language learning and transfer, we note that the scope of our methods may be wider, and that they have not been developed only with cross-lingual learning in mind.
Related Work and Baselines
We now provide an overview of prior research that focused on two relevant themes: 1) measuring approximate isomorphism between two embedding spaces, and 2) more generally, quantifying the (dis)similarity between languages, going beyond isomorphism measures. The discussed approaches will also be used as the main baselines later in §5.
Measuring Approximate Isomorphism. We focus on two standard isomorphism measures from prior work which are most similar to our work, and use them as our main baselines. The first measure, termed Isospectrality (IS), is based on spectral analysis as well, but of the Laplacian eigenvalues of the nearest neighborhood graphs that originate from the initial embedding spaces X_1 and X_2 (for further technical details see Appendix A). Its proponents argue that these eigenvalues are compact representations of the graph Laplacian, and that their comparison reveals the degree of (approximate) isomorphism. Although similar in spirit to our approach, constructing nearest neighborhood graphs (and then analyzing their eigenvalues) removes useful information on the interaction between all vectors from the initial space, which our spectral method retains.
The second measure is the Gromov-Hausdorff distance (GH) introduced by Patra et al. (2019). It measures the maximum distance of a set of points to the nearest point in another set, or in other words the worst case distance between two metric spaces X and Y (for further technical details see again Appendix A). Patra et al. (2019) propose this distance to test how well two language embedding spaces can be aligned under an isometric transformation.
While both IS and GH were reported to have strong correlations with BLI performance in prior work, they have not been evaluated in large-scale experiments before. In fact, the correlations were computed on a very small number of language pairs (IS: 8 pairs, GH: 10 pairs). Further, both measures do not scale well computationally. Therefore, for computational tractability, the scores are computed only on the sub-matrices spanning the sub-spaces of the most frequent subsets from the full embedding spaces (IS: 10k words, GH: 5k words). In this work, we provide full-fledged empirical analyses of the two measures on a much larger number of pairs from diverse languages, and compare them against the spectral-based measures introduced in §2. The fact that the proposed spectral-based methods are grounded in linear algebra theory (cf. §2.1) also arguably provides a more intuitive understanding of their theoretical underpinning than what is currently offered in the relevant prior work.
Measuring Language Similarity. At the same time, distances between language pairs can also be captured through (dis)similarities in their discrete linguistic properties, such as overlap in syntactic features, or proximity along the phylogenetic language tree. The properties are typically handcrafted, and are extracted from available typological databases such as the World Atlas of Languages (WALS) (Dryer and Haspelmath, 2013) or URIEL (Littell et al., 2017), among others (O'Horan et al., 2016;Ponti et al., 2019). Such distances were found useful in guiding and informing cross-lingual transfer tasks (Cotterell and Heigold, 2017;Agić, 2017;Lin et al., 2019;Ponti et al., 2019).
In particular, we compare against three precomputed measures of language distance based on the URIEL typological database (Littell et al., 2017). Phylogenetic distance (PHY) is derived from the hypothesized phylogenetic tree of language descent. Typological distance (TYP) is computed based on the overlap in syntactic features of languages from the WALS database (Dryer and Haspelmath, 2013). Geographic distance (GEO) is obtained from the locations where languages are spoken; see the work of Littell et al. (2017) for more details.
We use these isomorphism measures and linguistic measures as language distance measures. We simply compute the language distance between two languages L_1 and L_2 as LDist(L_1, L_2) = D(X_1, X_2), where D ∈ {SVG, COND-HM, ECOND-HM, GH, IS, PHY, TYP, GEO}. Later in §5 we show that "proxy" language distances originating from the proposed spectral-based isomorphism measures (see §2.2) correlate better with cross-lingual transfer scores across several tasks than language distances based on discrete linguistic properties. We verify that implicit knowledge coded in continuous embedding spaces and linguistic knowledge explicitly coded in external databases often complement each other.
Experimental Setup
The conducted empirical analyses can be divided into two major parts. First, we run large-scale BLI analyses across several hundred language pairs from dozens of languages, comparing the correlation of spectral-based isomorphism measures ( §2.2) and all baselines ( §3) with performance of a wide spectrum of state-of-the-art BLI methods. Second, we run further correlation analyses with performances in cross-lingual downstream tasks: dependency parsing, POS tagging, and MT. We first provide the details of the experiments that are shared between the two parts, and then provide further specifics of each experimental part.
Monolingual Word Embeddings. For all isomorphism measures (SVG, COND-HM, ECOND-HM, GH and IS) and languages in our analyses we use publicly available 300-dim monolingual fastText word embeddings pretrained on Wikipedia with exactly the same default settings (see Bojanowski et al. (2017)), length-normalized and trimmed to the 200k most frequent words. 3

Isomorphism Measures: Technical Details. For our spectral-based measures, we compute a full SVD decomposition (i.e., no dimensionality reduction) of the embedding space. We compute SVG scores for BLI based on the first 40 singular values, which we empirically found to produce slightly better results; 4 for the other tasks we use all singular values. For IS and GH, we replicate the experimental setup from prior work: we compute the IS score over the top 10k most frequent words in each monolingual space, while the GH score is computed over the top 5k words from each monolingual space. 5
Bilingual Lexicon Induction
We conduct correlation analyses of the results from previous studies that report BLI scores for a large number of language pairs. On top of that, we complement the existing results from previous research with new results obtained with state-of-the-art BLI methods, applied to additional language pairs.

BLI Setups and Scores. Prior work ran BLI experiments on 210 language pairs, spanning 15 diverse languages. The training and test dictionaries (5k and 2k translation pairs) are derived from PanLex (Baldwin et al., 2010; Kamholz et al., 2014). We complement the original 210 pairs with an additional 210 language pairs of 15 closely related (European) languages, using dictionaries extracted from PanLex following the same procedure. With the additional language set, the aim is to probe if isomorphism measures can also capture more subtle and smaller language differences. 6 We also analyze the BLI results of 108 language pairs from MUSE (Conneau et al., 2018). This dataset systematically covers English, with 88 translation pairs that involve English as either the source or target language. Finally, we analyze available BLI results (referred to as GTrans) that are based on dictionaries obtained from Google Translate and include 28 language pairs spanning 8 different languages. For the full list of language pairs involved in previous BLI studies, we refer the reader to prior work (Conneau et al., 2018).

BLI Methods in Comparison. The scores in each BLI setup were computed by several state-of-the-art BLI methods based on cross-lingual word embeddings, briefly described here. 1) SUP is the standard supervised method (Artetxe et al., 2016; Smith et al., 2017) that learns a mapping between two embedding spaces based on a training dictionary by solving the orthogonal Procrustes problem (Schönemann, 1966). 2) SUP+ is another standard supervised method that additionally applies a variety of pre-processing and post-processing steps (e.g., whitening, dewhitening, symmetric reweighting) before and after learning the mapping matrix; see (Artetxe et al., 2018). 3) UNSUP is a fully unsupervised method based on the "similarity of monolingual similarities" heuristic to extract the seed dictionary from monolingual data. It then uses an iterative self-learning procedure to improve on the initial noisy dictionary (Artetxe et al., 2018). For more technical details on the fully unsupervised model, we refer the reader to prior work (Ruder et al., 2019a; Vulić et al., 2019). 7

In sum, our analyses are conducted in three BLI setups (PanLex, MUSE, GTrans) and examine three types of state-of-the-art mapping-based methods, both supervised and unsupervised (SUP, SUP+, UNSUP). Altogether, these span 556 language pairs, and cover both related and distant languages. 8 Following prior work, our BLI evaluation measure is Mean Reciprocal Rank (MRR). We note that identical findings emerge from running the correlation analyses based on Precision@1 scores in lieu of MRR.

6 The initial set comprises Bulgarian, Catalan, Esperanto, Estonian, Basque, Finnish, Hebrew, Hungarian, Indonesian, Georgian, Korean, Lithuanian, Norwegian, Thai, Turkish. The additional 210 language pairs are only composed of Germanic, Romance and Slavic languages. For a full list of the languages see Table 4 in the appendix. 7 The SUP+ and UNSUP methods are based on the VecMap framework (github.com/artetxem/vecmap), which showed very competitive and robust BLI performance across a wide range of language pairs in recent comparative analyses (Doval et al., 2019).
Downstream Tasks
Following the large-scale nature of our BLI analyses, we run similar correlation analyses on several downstream tasks that comprise a large number of (both similar and distant) language pairs. 9 We rely on results from a recent study of Lin et al. (2019) that focused on cross-lingual transfer performance in MT, dependency parsing, and POS tagging. 10

Machine Translation. Lin et al. (2019) report BLEU scores when translating 54 source (L_1) languages into English as the target language. We report correlations between the different language distance measures and these 54 BLEU scores.
Dependency Parsing. We base our analysis on the cross-lingual zero-shot parser transfer results of Lin et al. (2019): the standard biaffine dependency parser is trained on the training portions of Universal Dependencies (UD) treebanks from 31 languages (Nivre et al., 2018), and is then used to parse the test treebank of each language, now used as the target language. We report correlations between the language distance measures and the Labeled Attachment Scores (LAS) for all combinations of the 31 languages, resulting in 930 pairs.

POS Tagging. We use POS tagging accuracy scores reported by Lin et al. (2019). These scores span 26 low-resource target languages and 60 source languages, and measure the utility of each source language to each of the 26 target languages in POS tagging. We use a sample of 840 language pairs for the correlation analysis, as 16 low-resource target languages and 49 source languages have readily available pretrained fastText vectors.

8 We report all results for each BLI method, dictionary and language pair in the supplementary material (and also here: https://tinyurl.com/skn5cf7). We also report scores with another method, RCSLS (Joulin et al., 2018), benchmarked in the GTrans BLI setup (see Table 1). 9 For the full list of languages that were analyzed throughout all our experiments, see Table 4 in the appendix. 10 For full details regarding the models used to compute the scores for each downstream task, we refer the interested reader to the work of Lin et al. (2019) and the accompanying repository: https://github.com/neulab/langrank. We note that scores for each language pair in each task have been produced with the same task architectures.
Correlation Analyses and Statistical Tests
All scores from isomorphism measures and BLI scores were log-transformed 11 prior to any correlation computation. We report Pearson's correlation coefficients in all tasks. This allows us to investigate which of the different individual measures is most important to predict task performance.
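As a small illustration of this protocol, the sketch below computes Pearson's r over log-transformed scores; the arrays are invented toy values, not data from the paper:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-language-pair scores, for demonstration only.
svg_scores = np.array([1.2, 3.4, 0.8, 5.1, 2.2])       # spectral distance per pair
bli_scores = np.array([0.45, 0.21, 0.52, 0.10, 0.33])  # BLI MRR per pair

# Both sides are log-transformed before computing the correlation.
r, p = pearsonr(np.log(svg_scores), np.log(bli_scores))
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```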
Regression Analyses. The individual (i.e., single-variable) analyses are not sufficient to account for the complex interdependencies between the distance measures themselves, and how they interact with task performance when combined. Therefore, we also use a standard linear stepwise regression model (Hocking, 1976; Draper and Smith, 1998):

Y = β_0 + β_1 x_1 + · · · + β_n x_n + ε.

Here, task performance Y is predicted using a set of regressors, x_1, . . . , x_n (i.e., SVG, COND-HM, ECOND-HM, GH, IS, PHY, TYP, GEO), that are added to the model incrementally only if their marginal addition to predicting Y is statistically significant (p < .01). This method is useful for finding variables (i.e., in our case distance measures) with maximal and unique contribution to the explanation of Y, when the variables themselves are strongly cross-correlated, as in our case. This model is able to: (a) discern which variables overlap in their information; (b) detect variables that complement each other; and (c) evaluate their joint contribution in predicting task performance. We compute the regression model's R² score for all statistically significant variables, and report its square root, r̄. Importantly, r̄ is not a one-number description of a language, but rather an illustrative quantification of the joint contribution of several different distance measures to the explanation of Y. Its goal is to investigate potential gains achieved through the combination of several distance measures, as opposed to using a single-best measure. The distance measures that are found statistically significant in the regression analyses are marked by superscripts over r̄ (see later in Tables 1 and 2).
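The paper does not name the software used for the stepwise procedure, so the following is only one plausible implementation of forward stepwise selection under the stated p < .01 criterion, sketched with statsmodels:

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(y, candidates, alpha=0.01):
    """Greedy forward stepwise OLS: repeatedly add the candidate regressor with
    the lowest p-value, as long as that p-value is below alpha.
    `candidates` maps measure names (e.g. 'SVG', 'TYP') to 1-d score arrays."""
    selected, remaining = [], dict(candidates)
    while remaining:
        pvals = {}
        for name, x in remaining.items():
            cols = [candidates[s] for s in selected] + [x]
            X = sm.add_constant(np.column_stack(cols))
            pvals[name] = sm.OLS(y, X).fit().pvalues[-1]  # p-value of the newcomer
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break                                         # no significant addition left
        selected.append(best)
        del remaining[best]
    if not selected:
        return [], 0.0
    X = sm.add_constant(np.column_stack([candidates[s] for s in selected]))
    return selected, float(np.sqrt(sm.OLS(y, X).fit().rsquared))  # r-bar
```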
Analyses and Results
The results are summarized in Tables 1 and 2. The first main finding is that our proposed spectral-based isomorphism measures are strongly correlated with performance across all tasks and settings. 12 In fact, they show the strongest individual correlations with task performance among all isomorphism measures and linguistic distances alike. The only exception is the MT task, where our measures fall short of TYP (see Table 2), although we note that they still hold a strong advantage over the baseline GH and IS isomorphism measures, which do not seem to capture any useful language similarity properties needed for the MT task.
ECOND-HM systematically outperforms COND-HM on 2 of 3 BLI datasets and 2 of 3 downstream tasks, validating our assumption that discarding the smallest singular values reduces noise. Additionally, SVG shows greater stability across tasks and datasets than ECOND-HM. A general finding across all tasks is that our spectral measures are the most robust isomorphism measures: they substantially outperform the widely used baselines GH and IS.
As stepwise regression discerns between overlapping and complementing variables (see §4.3), another finding indicates that our spectral measures are complemented by linguistically driven language distances. Indeed, their combination achieves very high correlation scores. The results demonstrate this across all tasks and settings (see bottom rows of the tables). For instance, when combining spectral measures with the linguistic distances, the regression model reaches outstanding correlation scores up to r = .91 on PanLex BLI (Table 1); with 420 language pairs, PanLex is the most comprehensive BLI dataset in our study. In addition, GH and IS are not chosen as significant regressors in the stepwise regression model, which indicates that they capture less information than our spectral methods. 13 Overall, the regression results support the notion that conceptually different distances (i.e., isomorphism-based versus measures based on linguistic properties) capture different properties of similarities between languages, which has a synergistic effect when they are combined.
Concerning individual tasks, we note that our spectral-based measures outperform the baselines. Additional results and analyses are provided in Appendix B. They further demonstrate that our measures also indicate transfer quality of different target languages for a given source language, and transfer quality of source languages for a given target language, for the tasks discussed in this paper.
Further Discussion and Conclusion
This work introduces two spectral-based measures, SVG and ECOND-HM, that excel in predicting performance on a variety of cross-lingual tasks. Both measures leverage information from singular values in different ways: ECOND-HM uses the ratio between two singular values, and is grounded in linear algebra and numerical analysis (Blum, 2014;Roy and Vetterli, 2007), while SVG directly utilizes the full range of singular values. We suspect that the use of the full range of singular values is what makes SVG more robust across different tasks and datasets, compared to ECOND-HM that shows greater variance, as observed in our results above.
While the spectral methods are computed solely on word vectors from Wikipedia, the results in the downstream tasks are computed with different sets of embeddings (e.g., multilingual embeddings for dependency parsing), or the embeddings are learnt during training (for POS tagging and MT). We believe that this discrepancy does not constitute a shortcoming of our analyses, but rather the opposite: spectral-based methods maintain their high correlations in the downstream tasks as well, and this supports the notion that these measures might indeed capture deeper linguistic information than mere similarities between embedding spaces.
Our use of effective rank in improving the condition number (via effective condition number) is also inspired by recent work that aimed to automatically detect true dimensionality of embedding spaces. However, previous work has taken an empirical approach by simply tuning embedding dimensionality to evaluation tasks at hand (Wang, 2019;Raunak et al., 2019;Carrington et al., 2019). Our intention, on the other hand, is to extract the true embedding dimensionality directly from the embedding space. Another recent study (Yin and Shen, 2018) employed perturbation analysis to study the robustness of embedding spaces to noise in monolingual settings, and established that it is also related to effective dimensionality of the embedding space. All these inspired us to replace the standard matrix rank with effective rank when computing the condition number, and to introduce the statistic of effective condition number in §2.1.
Our study is also the first to compare language distance measures that are based on discrete linguistic information (Littell et al., 2017) with measures of isomorphism (i.e., our spectral-based measures, IS, GH), which can also be used as proxy language distance measures. Our findings, suggesting that it is possible to effectively combine these two types of language distance measures, call for further research that will advance our understanding of: 1) what knowledge is captured in monolingual and cross-lingual embedding spaces (Gerz et al., 2018; Pires et al., 2019; Artetxe et al., 2020); 2) how that knowledge complements or overlaps with linguistic knowledge compiled into lexical-semantic and typological databases (Dryer and Haspelmath, 2013; Wichmann et al., 2018; Ponti et al., 2019); and 3) how to use the combined knowledge for more effective transfer in cross-lingual NLP applications (Eisenschlos et al., 2019).
The differences in embedding spaces of different languages do not only depend on linguistic properties of the languages in consideration, but also on other factors such as the chosen training algorithm, underlying training domain, or training data size and quality (Arora et al., 2019; Vulić et al., 2020). In future research we also plan an in-depth study of these factors and their relation to our spectral analysis.
We believe that the main insights from this study will inform and guide different cross-lingual transfer learning methods and scenarios in future work. These might range from choosing source languages for transfer in low-data regimes, over monolingual word vector induction guided by the spectral statistics, even to more effective hyperparameter search.
A IS and GH

Isospectrality (IS). After length-normalizing the vectors, the method computes the nearest neighbor graphs using a subset of the top N most frequent words in each space, and then calculates the Laplacian matrices LP_1 and LP_2 of each graph. For LP_1, the smallest k_1 is then sought such that the sum of its k_1 largest eigenvalues Σ_{i=1}^{k_1} λ_{1i} is at least 90% of the sum of all its eigenvalues. The same procedure is used to find k_2. They then define k = min(k_1, k_2). The final IS measure Δ is then the sum of the squared differences of the k largest Laplacian eigenvalues:

Δ = Σ_{i=1}^{k} (λ_{1i} − λ_{2i})².

The lower Δ, the more similar are the graphs and, consequently, the more isomorphic are the two embedding spaces.
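A rough sketch of the IS computation; where the description above leaves choices open (number of neighbors, symmetrization of the kNN graph, unnormalized Laplacian), the settings below are our assumptions:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenvalues(X, n_neighbors=10):
    """Eigenvalues (descending) of the unnormalized Laplacian of X's kNN graph."""
    A = kneighbors_graph(X, n_neighbors=n_neighbors, mode='connectivity')
    A = ((A + A.T) > 0).astype(float).toarray()       # symmetrize the graph
    L = np.diag(A.sum(axis=1)) - A                    # L = D - A
    return np.sort(np.linalg.eigvalsh(L))[::-1]

def isospectrality(X1, X2, n_neighbors=10, mass=0.9):
    lam1 = laplacian_eigenvalues(X1, n_neighbors)
    lam2 = laplacian_eigenvalues(X2, n_neighbors)
    def k_for(lam):  # smallest k whose top eigenvalues cover 90% of the total
        return int(np.searchsorted(np.cumsum(lam) / lam.sum(), mass) + 1)
    k = min(k_for(lam1), k_for(lam2))
    return float(np.sum((lam1[:k] - lam2[:k]) ** 2))
```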
Gromov-Hausdorff Distance (GH). It measures the worst-case distance between two metric spaces X and Y with a distance function m as:

H(X, Y) = max{ sup_{x∈X} inf_{y∈Y} m(x, y), sup_{y∈Y} inf_{x∈X} m(x, y) }.

That is, it computes the distance between the nearest neighbors that are farthest apart. The Gromov-Hausdorff distance then minimizes this distance over all isometric transforms of X and Y:

GH(X, Y) = inf_{f,g} H(f(X), g(Y)),

where f and g range over isometric transforms. Computing GH directly is computationally intractable in practice, but it can be tractably approximated by computing the Bottleneck distance between the metric spaces (Chazal et al., 2009).
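For intuition, a minimal sketch of the inner (symmetric) Hausdorff distance under the Euclidean metric; the outer minimization over isometric transforms, and the Bottleneck-distance approximation of Chazal et al. (2009), are deliberately not implemented here:

```python
import numpy as np

def hausdorff(X: np.ndarray, Y: np.ndarray) -> float:
    """Symmetric Hausdorff distance H(X, Y) between two point sets."""
    # Pairwise Euclidean distances: D[i, j] = ||X[i] - Y[j]||.
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))
```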
B Source and Target Selection Analysis
In addition to the correlation analyses in the main text that aggregate all language pairs, some tasks and datasets also support analyses where one language is fixed as a target language (i.e., source-language selection analysis), or as a source language (i.e., target-language selection analysis). Such analyses could inform us on how to choose the right transfer language. That is, given a target language one would like to transfer to, which is the best source language to transfer from, and vice versa. These analyses are conducted for the following tasks with sufficient language pairs: BLI with PanLex, parsing, and POS tagging. For these analyses we report average correlation: across target languages in the source-language selection analysis, or across the source languages in the target-language selection analysis. We provide the percentage of times each compared measure scored the highest for a particular task and setting.
Stepwise regression analysis is not suitable for the selection analysis due to the limited number of language pairs in each language selection setup (e.g., PanLex offers 14 language pairs for each source- or target-language selection analysis). These conditions impede the statistical power of the significance tests which stepwise regression requires. We therefore opt for a standard multiple linear regression model instead; the regressors include the isomorphism measure with the highest individual correlation combined with the linguistic measures. Similarly to the stepwise analysis, we report the unified correlation coefficient, r̄.
We observe that the same findings reported for the aggregated correlation analyses (Tables 1 and 2 in the main text) also hold for the language selection analyses (Table 3 below). This indicates that our spectral measures have an applicative value as they can facilitate better cross-language transfer.
We also observe interesting patterns in the selection analyses for the POS tagging task in Table 3: while the results in the target-language selection analysis largely follow the main-text results, the same does not hold for source-language selection (Table 3, POS Target and Source columns). We speculate that this is in fact an artefact of the experimental design of Lin et al. (2019). Their set of target languages deliberately comprises only truly low-resource languages, and such languages are expected to have lower-quality embedding spaces. Transferring to such languages is bound to fail with most source languages regardless of the actual source-target language similarity. The difficulty of this setting is reflected in the actual scores: the average accuracy score for the best source-target combination is 0.55 in the source-language selection analysis, and 0.92 for target-language selection.
C Single and Combined Analysis
We show (Figure 1) a single experimental condition (SUP method in the PanLex BLI dataset, leftmost column in Table 1 in the main text) to illustrate the data distribution and the correlation for spectralbased measures (e.g., ECOND-HM), and their improvement once this measure is combined with linguistic measures through regression analysis. Table 3: Correlation scores in source-language (Source) and target-language (Target) selection analyses. The best distance measure per column is provided in bold. The percentage of cases a measure topped the others is shown in superscript (see details in Appendix B).r refers to the unified correlation coefficient from the multiple regression model (see details in Appendix B). Table 1 of the main paper). The left panel presents results for the best single isomorphism measure, ECOND-HM. The right panel presents results for the combined unified model based on the regression analysisr that includes linguistic measures.r's sign was flipped (right panel) to make the graphs directly comparable. | 2020-10-30T05:07:26.375Z | 2020-01-30T00:00:00.000 | {
"year": 2020,
"sha1": "2f67502d4749446871e75dc69361e90ed5a78c9a",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/2020.emnlp-main.186.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "b89cdb3a89007663cd47ac94fa8c0cc1bfa2b795",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
221679081 | pes2o/s2orc | v3-fos-license | Innovation and borrower discouragement in SMEs
In this paper, we investigate whether innovative small- and medium-sized enterprises (SMEs) are more likely to be discouraged from applying for external finance than non-innovators. These so-called discouraged borrowers are creditworthy SMEs who choose not to apply for external finance despite the fact that this is needed. We find that SMEs undertaking pure product and joint product and process innovation have a significantly higher incidence of borrower discouragement than non-innovative counterparts. Moreover, radical and incremental product innovators are more likely to be discouraged relative to non-innovative counterparts. Innovative activity can increase borrower discouragement for a myriad of reasons including fear of rejection, reluctance to take on additional risk, negative perceptions of the funding application process and perceived negative economic conditions. Overall, our results suggest a need for targeted policy interventions in order to alleviate borrower discouragement within innovative SMEs, as well as a closer alignment between innovation and SME finance policy. Innovative SMEs play a crucial role in driving technological change and productivity growth. Therefore, understanding the factors shaping access to finance for innovative SMEs is of crucial importance to the economy. We investigate the potential impact of innovation activity on the incidence of borrower discouragement, i.e., creditworthy firms who choose not to apply for external finance despite the fact that it is required. The results of our empirical investigation suggest that SMEs undertaking pure product and joint product and process innovation have a significantly higher incidence of borrower discouragement than non-innovative counterparts. The principal implication of this study is that innovation is a factor which self-limits access to finance for innovative SMEs. We offer recommendations to mitigate borrower discouragement in this context.
Introduction
In this paper, we investigate the incidence of borrower discouragement in innovative small-and medium-sized enterprises (SMEs). Innovative start-ups and SMEs are crucial for job creation, innovation and productivity growth (Hall & Lerner, 2010). A crucial issue facing many innovative SMEs is their ability to access external finance, an issue made even more salient in the wake of the global financial crisis (Lee et al., 2015;Lee & Brown, 2017). Consequently, how SMEs are financed is a central policy issue given that access to appropriate levels of funding has significant implications for firm growth, performance and long-term survival.
Despite growth in alternative forms of finance such as venture capital, business angel investment and equity crowdfunding in recent years, bank debt continues to represent the primary source of external financing for the vast majority of SMEs (Lee & Brown, 2017;Robb & Robinson, 2014). Extant research suggests there are significant structural impediments, in the form of informational asymmetries, asset intangibility and skewed returns, which face innovative SMEs seeking bank funding (Berger & Udell, 1998;Cassar, 2004). Limited collateral and unstable cash flows further exacerbate access to external bank finance for innovative SMEs (Hall & Lerner, 2010;Lee et al., 2015). Indeed, negative expectations regarding the likelihood of obtaining external finance can be so acute that some SMEs become discouraged from applying altogether (Cowling et al., 2016). 1 The prevalent academic definition of borrower discouragement follows Kon and Storey (2003, p.37), where a '…good borrower may not apply for a loan to a bank because they feel they will be rejected.' Due to the nature of many official SME surveys, prior studies adopt (often by necessity) rather restrictive definitions of borrower discouragement-focusing purely on the fear that a loan application is rejected (Neville et al., 2018;Nguyen et al., 2020). For example, survey instruments such as the European Central Bank's Survey on the Access to Finance of Enterprises (SAFE) survey use this somewhat abbreviated definition of discouragement (Calabrese et al., 2020;Mol-Gómez-Vázquez et al., 2021). This is likely to significantly underestimate the true extent of borrower discouragement across SMEs. Fortunately, the depth of information compiled in the UK Department for Business, Energy and Industrial Strategy (BEIS) Longitudinal Small Business Survey (LSBS) used in this study allows a multi-faceted measure of borrower discouragement to be constructed, which encompasses whether an SME had a requirement for finance in the last 12 months, but did not apply for any of the following reasons: fear of rejection; cost of credit; additional risk-taking; poor credit history; prevailing economic conditions; knowledge of financial sources; and the time and hassle associated with applying. 2 The mutually exclusive options selected by SME respondents as the reason for not applying for external finance are useful in discerning potential market imperfections (discouraged borrowers) and other reasons that SMEs do not seek credit when required. In this paper, we present novel evidence suggesting that innovative SMEs are significantly more likely to be discouraged from applying for external finance than their non-innovative counterparts.
There are compelling theoretical and policy reasons for investigating the drivers of borrower discouragement, given that these may ultimately lead creditworthy SMEs to forego credit, leading to negative knock-on effects for the real economy via declines in future innovative activity and employment creation. Despite this, discouraged borrowers have until recently been a relatively under-researched and neglected cohort of SMEs (Cowling et al., 2016). This is somewhat surprising given that these firms significantly outnumber firms that apply for external finance but are subsequently denied credit (Freel et al., 2012; Levenson & Willard, 2000; Wernli & Dietrich, 2021). Recent evidence reveals that over half of discouraged borrowers (55%) would secure external finance if they had applied for a loan (Cole & Sokolyk, 2016; Cowling et al., 2016). Prior research suggests that borrower discouragement is driven by a variety of entrepreneurial and firm-level characteristics. However, to date evidence regarding the impact of innovation on borrower discouragement is scarce, despite a priori expectations that borrower discouragement is likely to be higher amongst innovative SMEs given their informational opacity and the inherent risk and uncertainty of outcomes associated with innovation activity (Hutton & Nightingale, 2011).
The issue of borrower discouragement is critical given that undercapitalisation during initial phases of a SME's development can lead to subsequent underperformance (Marlow & Patton, 2005). Therefore, borrower discouragement is clearly important from a public policy perspective (BEIS, 2017;Hutton & Nightingale, 2011). For example, in the UK, the state-owned British Business Bank and the Scottish National Investment Bank have recently acknowledged the need to tackle SME borrower discouragement (British Business Bank, 2020;Scottish Government, 2019). The European Central Bank has similarly identified borrower discouragement as a problem facing many SMEs (Ferrando & Mulier, 2015). The US Federal Reserve and government agencies located in various developing economies are also giving this issue increasing attention (Nguyen et al., 2020). 3 Therefore, the results of an investigation of borrower discouragement are of direct relevance to policymakers tasked with alleviating funding gaps confronting innovative SMEs.
In this paper, we investigate the incidence of borrower discouragement for innovative SMEs using the aforementioned LSBS, commissioned by the BEIS. The LSBS is a large-scale representative annual survey of UK SME owners and managers with annual sample sizes ranging from 6,600 to 15,500 UK SMEs. We conduct an econometric analysis using Heckman probit models to investigate the association between innovation and borrower discouragement. 4 Firm-level characteristics are also incorporated into our estimable models in order to control for other factors that are likely to affect borrower discouragement.
By way of preview, the results of our empirical analysis suggest that innovative SMEs have a significantly higher incidence of borrower discouragement than non-innovative counterparts. The type of innovation carried out affects the incidence of borrower discouragement. Specifically, product innovators and SMEs engaged in a combination of product and process innovation are more likely to be discouraged borrowers than non-innovative counterparts. Our results also suggest that the novelty of innovation is an important driver of borrower discouragement. In particular, we find that both radical and incremental product innovators are also more likely to be discouraged borrowers relative to process innovator counterparts. In addition, our results provide evidence that fear of rejection is not the only key driver of borrower discouragement for innovative firms: reluctance to take on additional risk, negative perceptions of the funding application process and perceived economic conditions are also likely to be important. The results presented in this study provide new insights for the literature and respond to calls for more research to verify 'the existence, extent and characteristics' of this phenomenon (Fraser, 2014, p. 85).
The remainder of this paper is structured as follows. In Section 2, we review relevant literature. Section 3 outlines the data, definitions and methods. In Section 4, we present and discuss the main results. Section 5 provides further discussion of the results and resultant policy implications emerging from the study.
Relevant literature
To examine the interconnections between finance, innovation and borrower discouragement, we discuss salient literature regarding the structural issues typically confronting innovative SMEs seeking external finance. We then review the literature on borrower discouragement and its links with firm-level innovative activity.
2.1 The supply of finance for innovative SMEs

Schumpeter (1934) was the first to draw connections between innovation and finance. An integral part of the Schumpeterian story is that financial institutions play an essential role as facilitators of the innovative efforts undertaken by entrepreneurs (King & Levine, 1993; Revest & Sapio, 2012). One of the ways financial markets are believed to play this role is by allocating capital to firms with the greatest potential to implement new processes and to commercialise new technologies (Kerr & Nanda, 2015). Lee et al. (2015) claim that there are three fundamental theoretical reasons why access to finance can be problematic for innovative SMEs. First, the returns to innovation are highly skewed, with only a small number of innovations generating significant revenues, while the remainder yield little or no return. Not only does one not know the probabilities associated with outcomes, but even the forms of the potential outcomes are not clear. While increased innovative activity may increase the probability of superior performance, it cannot guarantee it (Coad & Rao, 2008). Therefore, from a financier's perspective, this makes it significantly harder to evaluate potential innovative projects requiring funding, particularly since often the only way to accurately assess the potential of a particular innovation is to invest in it (Kerr & Nanda, 2015).
Second, given that SMEs have more information regarding the likely success of any innovation, banks cannot accurately estimate the likely returns to innovative investments due to informational asymmetries (Berger & Udell, 1998;Hall & Lerner, 2010). These information asymmetries are a significant factor determining borrower discouragement (Kon & Storey, 2003). On the whole, asymmetric information issues tend to be most acute for SMEs with higher levels of intangible assets (Mina et al., 2013). Consequently, many innovative firms seek finance from specialised financial intermediaries such as business angels and venture capitalists that address asymmetric information issues via ex-ante soft information collection and ex-post performance monitoring (Liberti & Petersen, 2019;Robb & Robinson, 2014).
Finally, in such a situation, collateral is an important tool for banks to mitigate these informational asymmetries and thus resolve the credit-rationing problem. However, intangible assets produced by the innovation process may be difficult to value or transfer beyond an individual firm. Typically, large banks rely on objective lending technologies such as small business credit scoring, asset-based lending and fixed-asset lending techniques (Berger & Udell, 2006). Each of these techniques relies on either the personal or business collateral that the firm can provide to secure the repayment of the loan. Consequently, innovative SMEs without significant tangible or re-deployable assets have insufficient collateral to obtain external finance (Cosh et al, 2009;Hall & Lerner, 2010).
Due to the aforementioned issues, prior literature supports the view that innovative SMEs have difficulties obtaining bank finance (Hall & Lerner, 2010; Lee et al., 2015). Freel (2007) finds that innovative UK SMEs were less likely to receive bank finance. Schneider and Veugelers (2010) show that innovative German SMEs view external financing constraints as an important factor in hampering innovation. Whereas large firms can fund innovation via internal cash flows, smaller innovative firms often have insufficient or unpredictable cash flows to service bank loans adequately (Hall & Lerner, 2010). Recent evidence also suggests that innovative SMEs can be penalised in other ways, for example, being charged higher interest rates for loans than less innovative counterparts (Rostamkalaei & Freel, 2016). In continental Europe and the USA, there is also evidence suggesting that firms engaged in innovative activities often face substantial external financing constraints (Hall et al., 2016; Kerr & Nanda, 2015). Innovative SMEs also appear to be more affected by exogenous liquidity shocks. For example, evidence for Brazil (Paunov, 2012) and the UK (Lee et al., 2015) suggests that innovative SMEs were more likely to be turned down for finance in the aftermath of the global financial crisis.
Borrower discouragement
In a seminal contribution, Kon and Storey (2003) outline how actual or perceived barriers to accessing external finance may deter SMEs from applying for credit altogether, so-called discouraged borrowers. Prior evidence suggests that there are significant variations in borrower discouragement across countries (Mac an Bhaird et al., 2016; Qi and Nguyen, 2021). Rostamkalaei et al. (2018) report that the incidence of SME borrower discouragement varies between 1 and 45%. In most developed economies, borrower discouragement affects between 10 and 20% of SMEs (Christensen & Hain, 2014; Cowling et al., 2016; Freel et al., 2012; Mac an Bhaird et al., 2016; Rostamkalaei et al., 2018). For example, Calabrese et al. (2020) find that 6.5% of SMEs from a number of EU countries are discouraged borrowers. The incidence of borrower discouragement is significantly higher in developing countries (Chakravarty and Xiang, 2013). Intra-country variations in borrower discouragement are also prevalent. For example, in the UK, Fraser (2004) and Freel et al. (2012) find that approximately 8% of SMEs are discouraged borrowers, while Cowling et al. (2016) and Rostamkalaei (2017) find a prevalence of borrower discouragement of 2.7% and 2.1%, respectively.
The results of previous research suggest that borrower discouragement is associated with various entrepreneurial and firm-level traits. Gender differences, age and ethnicity are also important factors which relate directly to the ongoing debate regarding the democratisation of entrepreneurial finance across under-represented groups of entrepreneurs (Cumming et al., 2021). The entrepreneurial and personal characteristics associated with a higher incidence of borrower discouragement include, inter alia, ethnic minority (Cavalluzzo et al., 2002; Fraser, 2009), female (Cowling et al., 2016; Freel et al., 2012; Moro et al., 2017), older (Cole & Sokolyk, 2016), less well-educated (Cole & Sokolyk, 2016; Nguyen et al., 2020) and less wealthy entrepreneurs (Han et al., 2009). Evidence emanating from the USA suggests that socio-historical experiences and shared knowledge of inequalities within certain racial minorities (e.g. African Americans, Hispanic Americans and Asian Americans) may influence borrower discouragement (Neville et al., 2018). Nguyen et al. (2020) show that firms with wider business networks suffer fewer information asymmetries and consequently are less likely to be discouraged. Serial entrepreneurs are also much more likely to be discouraged borrowers (Freel et al., 2012). Han et al. (2009) find that riskier individuals have a higher incidence of borrower discouragement. Cowling et al. (2016) find that since the global financial crisis, experienced entrepreneurs are more likely to be discouraged borrowers.
In terms of firm-level characteristics, smaller and younger SMEs are significantly more likely to be discouraged borrowers (Han et al., 2009; Freel et al., 2012; Chakravarty and Xiang, 2013; Cowling et al., 2016; Mac an Bhaird et al., 2016; Rostamkalaei, 2017). Clearly, newly born firms will have less experience in the credit market and may self-ration as a result of this inexperience (Calabrese et al., 2020). Therefore, in line with a priori theoretical expectations, the smallest, most informationally opaque SMEs encounter the greatest levels of borrower discouragement (Berger & Udell, 1998; Cowling et al., 2016). Such SMEs are less likely to have established relationships with lenders (Rostamkalaei et al., 2018). SMEs that have established banking relationships are less likely to be discouraged borrowers (Freel et al., 2012). This suggests that established firm-bank relationships facilitate information exchange between borrowers and lenders (Cowling et al., 2016).
The considerable variation in aggregate levels of borrower discouragement reported in prior literature is likely to stem from differences in the definitions used. This suggests that considerable caution should be exercised when drawing direct comparisons across studies regarding the incidence of borrower discouragement. Most previous studies typically view borrower discouragement as a binary choice between borrowers who fear rejection and those who do not, and consequently have failed to assess the strength or depth of borrower discouragement. This is rather surprising given the multi-dimensional nature of this cognitive phenomenon. Table 1 highlights the various definitions used in prior studies of borrower discouragement. It also suggests that the definitions of borrower discouragement and how it is examined need to be clearly articulated and delineated when exploring the concept empirically. Overall, there appears to be a complex mix of inter-related factors shaping borrower discouragement in SMEs (Freel et al., 2012). However, owing to this pervasive borrower heterogeneity, perhaps unsurprisingly, there are definitional ambiguities within the literature in relation to the precise nature of the underlying causes of discouragement. In most surveys, questions relate to whether SMEs enact self-imposed credit constraints for fear of rejection. However, in some surveys, the terms and conditions (collateral and covenants) are also included as reasons for borrower discouragement (Chakravarty and Xiang, 2013; Cowling et al., 2016). The range and scope of definitions of borrower discouragement are broader and more inclusive in some survey questionnaires and studies than others. Borrowing costs (interest rates, overdraft charges) are also likely to play a role in mediating the demand for finance. A formal loan application can also be costly in time and human resources (Rostamkalaei et al., 2018), inhibiting SMEs from applying for finance given the opportunity cost and hassle of applying (Chakravarty and Xiang, 2013; Rostamkalaei et al., 2018). 5 While the issue of cost of finance is broadly consistent with the original concept of borrower discouragement proposed by Kon and Storey (2003), this issue of credit restrictions based on the price of finance is clearly pushing the boundaries of the original concept. Given this, it can be difficult to distinguish between SMEs who do and do not need finance (Xiang et al., 2015). Moreover, in some studies, factors behind borrower discouragement hinge upon issues such as collateral requirements and corruption (Chakravarty and Xiang, 2013). Clearly, the concept of discouragement appears to be used inconsistently throughout the literature. This may account for discrepancies in estimates of borrower discouragement presented in the literature. This suggests that considerable caution should be exercised when comparing the empirical findings across studies of borrower discouragement.

Table 1 Definitions of borrower discouragement used in prior studies (rows recovered from the source; the study and survey for the first definition are truncated):
[study and survey truncated] | 'is a firm that did not apply for a loan during the previous 3 years because the firm feared rejection, even though it needed credit' (p. 47)
Cowling et al. (2016) | UK SME Business Barometer Surveys | 'demand for but not applying for any finance either because the firm feared rejection or the owner thought the finance was too expensive' (p. 1054)
Mac an Bhaird et al. (2016) | ECB Survey on the Access to Finance of SMEs (SAFE) | 'With respect to banks' loans (either new or renewal): did you apply for them over the past 6 months, or not? 1. Applied. 2: No, because of possible rejection' (p. 49)
Christensen and Hain (2014) | Bespoke panel sample of SMEs in North Jutland, Denmark | 'Did expectations of rejection make you abstain from applying for external finance for either development activities or working capital during the past year' (p. 14)
Chakravarty and Xiang (2013) | World Bank Enterprise Surveys | 'as firms with a need for a loan who nevertheless choose to not apply for a bank loan because (1) the loan procedure was too complicated; (2) interest rates were too high; (3) collateral requirements were too high; and (4) there was corruption in allocation' (p. 67)
Freel et al. (2012) | UK biennial survey by the Federation of Small Businesses | 'in the past two years has the fear of rejection stopped you from seeking a bank loan for your business' (p. 407)
Han et al. (2009) | US Survey of Small Business Finances | 'all businesses (both high and low risk) with capital demands, but which did not apply because of fear of rejection' (p. 416)
Borrower discouragement and SME innovative activity
To date, there is a paucity of evidence regarding the potential effects of innovation on the likelihood of credit self-rationing in SMEs. Yet product and process innovations are important strategies used by SMEs seeking to improve efficiency in production and/or increase revenues by stimulating demand for innovative products and services. Often, access to external finance is a fundamental pre-requisite driving firm-level innovation (Kerr & Nanda, 2015; Lipczynski et al., 2017). Given the risk and uncertainty of outcome associated with entrepreneurial endeavours and innovative activity (Mazzucato, 2013), innovators are more likely to encounter significant barriers in accessing finance. This manifests itself in rejected funding applications (Freel, 2007; Lee et al., 2015), higher interest rates on bank loans (Rostamkalaei & Freel, 2016), and onerous collateral requirements and covenants, which can all ultimately foster borrower discouragement.
The link between innovation and discouragement has been considered in some studies. For example, Rostamkalaei et al. (2018) utilise the SME Finance Monitor survey (2011-2015) to examine the underlying characteristics of two different types of non-applicants who needed financing but did not apply: informally turned down borrowers (where lenders verbally inform an SME owner that a loan application is likely to be denied) and discouraged borrowers (where the SME thought that it would be turned down or that it was not the right time to apply for borrowing). While innovation was not the focus of the study, the authors find that informal turndowns are more prevalent amongst innovators and that (product and process) innovative activity is not associated with borrower discouragement. Freel et al. (2012) also find no evidence to suggest that SMEs which perceive innovation as a business strength are more likely to be discouraged borrowers.6 Innovation is traditionally modelled as a binary choice capturing whether a firm innovates or not. However, not all innovation can be classified the same (Beck et al., 2016). Indeed, innovation can take many forms and includes activities such as product, process, radical and incremental innovations (Lipczynski et al., 2017). In the present study, we posit that the type, nature and scope of innovation is likely to have a material impact on borrower discouragement, given variations in the risk and uncertainty associated with different forms of innovation (Teece et al., 2016). There is no certainty that process innovation will lead to lowering the average cost function of a firm, nor that a radical innovation (which implies advancements in knowledge due to the development of new products and processes that are new to the market) will gain traction in the market. However, radical innovation is likely to be characterised by higher levels of unknown unknowns, due to both 'high technical and market uncertainty', than incremental innovation, which merely entails modifications to existing products and processes (O'Connor & Rice, 2013, p. 3).
Overall, prior evidence suggests that a multitude of factors are likely to determine the incidence of borrower discouragement. However, the role of innovation for the most part has been overlooked. This is surprising given the growing literature investigating the financing constraints facing innovative firms (Hall & Lerner, 2010;Lee et al., 2015). The paucity of evidence regarding whether borrower discouragement is affected by innovation is clearly a compelling issue for further investigation for academics and policymakers alike and hence the focus of the present study.
Data
For the purposes of our empirical analysis, we utilise the UK Longitudinal Small Business Survey (LSBS), a detailed nationally representative survey of UK SMEs. The LSBS is a telephone-based survey of UK small business owners and managers administered by BEIS. The survey is constructed using a stratified sample of owner-managers of SMEs with fewer than 250 employees across the four constituent parts of the UK (England, Northern Ireland, Scotland and Wales). The data available for use in the present study comprise SMEs that were interviewed during the period 2015-2018. The survey collects detailed information relating to the financial and non-financial activities of SMEs, including: the nature of any innovative activities, attitudes toward accessing external finance and reasons for borrower discouragement.
Identifying innovative SMEs
Firm-level innovation can be defined in various ways. Product innovation involves the introduction of a new product, while process innovation normally involves the introduction of cost-saving technologies. The distinction between product and process innovation is not always clear cut, however. New products often require new methods of production, while new production processes often alter the characteristics of the final product (Lipczynski et al., 2017). Innovation is also differentiated by the degree of novelty (Beck et al., 2016). Radical innovations represent significant advances or new forms of knowledge occurring primarily through the creation of new products and processes. Incremental innovations, on the other hand, involve the continuous improvement of existing products, processes or services that are new to the firm (Beck et al., 2016). While incremental innovation can lead to competitive advantages for SMEs by increasing efficiency, radical innovation can lead to substantial improvements in growth and returns (Love et al., 2016; Saridakis et al., 2019). Given the inherent lack of resources required to undertake radical innovation, incremental innovation is often the most common form of innovation for SMEs.
In the present study, we measure innovation as the introduction of new products (goods and services) and processes. Table 2 presents the definitions and the specific LSBS survey questions used in the present study. Product innovation is proxied by firms that introduced any new or significantly improved goods (excluding the resale of goods purchased from other businesses and changes of a solely aesthetic nature) and/or services innovations in the last 3 years. These firms represent 28% of our sample, as shown in Table 3. Process innovation is proxied by the following survey question: 'Has your [business] introduced any new or significantly improved processes for producing or supplying goods or services in the last three years?' We observe in Table 3 that 15.9% of SMEs in our sample have introduced process innovation in the last 3 years.
Using the classification above, we classify innovative SMEs into three mutually exclusive groups comprising pure product innovators (17.6% of SMEs), pure process innovators (5.5% of SMEs) and SMEs that produce product and process innovations simultaneously (10.3%). The detailed nature of the data enables us to distinguish between radical and incremental innovators (Beck et al., 2016; Saridakis et al., 2019). The responses to the survey question, 'Were any of these new or significantly improved goods and services innovations new to the market, or were they all just new to your [business]?', allow us to create a categorical variable that classifies product innovation as radical (new to the market) or incremental (new to the business). A similar approach is used to classify the novelty of process innovation (at least some new to the industry versus all just new to the business).

Table 2 Definitions of the innovation variables and corresponding LSBS survey questions
Process innovation: business introduced any new or significantly improved processes for producing or supplying goods or services in the last 3 years (J3)
Innovation types:
- No innovation (base category): firm has not been an innovator in the last 3 years
- Pure product innovation: business introduced any new or significantly improved goods and/or services in the last 3 years (own elaboration based on J1SUM, J1 and J3)
- Pure process innovation: business introduced any new or significantly improved processes for producing or supplying goods or services in the last 3 years (own elaboration based on J1SUM, J1 and J3)
- Product and process innovation: introduction of both product and process innovation (own elaboration based on J1SUM, J1 and J3)
Novelty of product innovation:
- No product innovation (base category): business has not introduced any product innovation in the last 3 years
- At least some new to the market: if the business introduced any new or significantly improved goods or services innovations in the last 3 years, they were at least some new to the market (J2)
- All just new to the business: if the business introduced any new or significantly improved goods or services innovations in the last 3 years, they were all just new to the business (J2)
Novelty of process innovation:
- No process innovation (base category): business has not introduced any process innovation in the last 3 years
- At least some new to the industry: if the business introduced any improved processes for producing or supplying goods or services in the last 3 years, they were at least some new to the industry

Table 3 suggests that around 8.8% of SMEs in our sample have introduced a radical product innovation in the last 3 years. We also find that 18.9% of SMEs introduced incremental product innovations. The percentage of SMEs undertaking radical and incremental process innovations is generally lower than for product innovation, accounting for 3.8% and 11.9%, respectively.
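To make the construction of these variables concrete, the sketch below derives the four mutually exclusive innovation types from two indicator variables. The column names are hypothetical stand-ins for indicators built from the LSBS items (J1SUM/J1 for product, J3 for process), not the official LSBS variable names.

```python
import pandas as pd

# Toy data; 'product_innov' and 'process_innov' are hypothetical stand-ins
# for indicators derived from the LSBS items J1SUM/J1 and J3.
df = pd.DataFrame({"product_innov": [1, 0, 1, 0, 1],
                   "process_innov": [0, 0, 1, 1, 1]})

def innovation_type(row):
    # Four mutually exclusive categories; non-innovators form the base category.
    if row["product_innov"] and row["process_innov"]:
        return "product and process innovation"
    if row["product_innov"]:
        return "pure product innovation"
    if row["process_innov"]:
        return "pure process innovation"
    return "no innovation"

df["innov_type"] = df.apply(innovation_type, axis=1)
print(df["innov_type"].value_counts(normalize=True))
```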
Identifying discouraged SMEs
The holistic definition of borrower discouragement used in the present study is whether SMEs had a need for finance in the last 12 months but did not apply because of one of the following main reasons: fear of rejection, cost of credit, additional risk-taking, poor credit history, prevailing economic conditions, knowledge of financial sources and the time and hassle associated with applying (see Figure A1 in the Appendix). Figure A2 in the Appendix provides further descriptive statistics on the sample composition of discouraged borrowers (conditional on needing finance). Our results suggest that 64.2% of discouraged SMEs in our sample made profits in the previous financial year. We also observe that turnover increased or remained roughly similar to the previous 12 months for 70.7% of the discouraged SMEs in our sample (see Table A1). Figures relating to expectations regarding future turnover (see Figure A2) suggest that 89.1% of discouraged SMEs expect turnover to increase or stay the same in the next 12 months. Overall, these descriptive statistics provide evidence that the sample of discouraged borrowers is characterised by a relatively good economic profile. Consequently, borrower discouragement can be seen as a potentially inefficient self-rationing mechanism.7 Figures A3 and A4 in the Appendix present the distribution of the sample of discouraged borrowers and innovative SMEs across industry sectors. Discouraged borrowers are present across all industry sectors, with a relatively higher proportion in the construction sector (F). Innovative firms are similarly present across all industry sectors, with a higher proportion in the professional/scientific sector (M).8
Control variables
We control for several variables that are likely to affect borrower discouragement. These include growth intentions, size, firm age, change in turnover, profitability, location, gender, ethnicity, family ownership, legal structure, region and industry sector. Growth-oriented SMEs are more likely to require external funding and thus more likely to be discouraged borrowers. These SMEs represent 51.8% of our sample. SME size is measured by total employment according to one of four size categories: 0 employees (75.8% of SMEs), 1-9 employees (19.9% of SMEs), 10-49 employees (3.7% of SMEs) and 50-249 employees (0.6% of SMEs). Our sample of SMEs is predominantly mature (20+ years old, 39.1%) and located in urban areas (70.5%). Profitability is measured using an indicator variable that captures whether a SME made a profit in the last financial year (80.8% of SMEs). Turnover (sales revenue) remained constant or increased in around 80% of the sample.
We also control for cases where the owner is female (21.1%) or from an ethnic minority group (5.2%), or where the SME is family owned (86.7%). We also differentiate between proprietorships, partnerships and companies in order to control for differences in legal form. Companies and sole proprietorships constitute around 43% and 47% of our sample, respectively. The majority of SMEs are located in England (87.9%) and are distributed across all of the main industry sectors.
Methodology
One important feature of the definition of borrower discouragement is that these firms require external finance but do not apply for it. This is crucial as innovative firms are more likely to seek external finance (Lee et al., 2015). Since borrower discouragement is only observed for those SMEs which have a need for finance, estimating both events independently could lead to sample selection problems. The problem is that the unobserved variables that determine the need for finance may be correlated with the likelihood of being a discouraged borrower and so lead to biased coefficient estimates. Given the binary nature of the dependent variable, we address this bias by using a probit model with sample selection (Van de Ven & Van Praag, 1981).
This model assumes that there is an underlying relationship (latent equation) y*_j = X_j β + μ_1j such that we observe only the binary outcome (outcome equation: discouraged borrower) y_probit_j = 1(y*_j > 0). The dependent variable, however, is not always observed. Rather, the dependent variable for observation j is observed if (selection equation: the SME needs finance) y_select_j = 1(Z_j γ + μ_2j > 0), where μ_1 ~ N(0, 1), μ_2 ~ N(0, 1) and corr(μ_1, μ_2) = ρ (rho). When ρ = 0, there is no evidence of selection bias; the outcome and selection equations are independent, making estimation of the selection model unnecessary.
However, since the model is estimated by maximum likelihood (ML), ρ is not directly estimated. Instead, the Heckprobit routine directly estimates a nonlinear transformation of ρ (athrho) defined as athrho = (1/2) ln((1 + ρ)/(1 − ρ)). A significant athrho implies that ρ ≠ 0 and indicates the presence of selection bias in the model. A Wald test is reported in all tables; it compares the log likelihood of the full model with sample selection with the sum of the log likelihoods of running simple probits for each equation. If the test is significant, there is a statistical difference between the two models, suggesting that selection bias is present and providing further support that ρ ≠ 0.
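To make the mechanics of this estimator concrete, the following sketch codes the probit-with-sample-selection log-likelihood directly, parameterising ρ = tanh(athrho) exactly as described above. It assumes the standard Van de Ven and Van Praag (1981) likelihood and is an illustration, not the Stata routine used in the paper.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def heckprobit_negloglik(theta, y, X, sel, Z):
    """Negative log-likelihood of the probit model with sample selection.
    theta stacks (beta, gamma, athrho); rho = tanh(athrho), so athrho is
    unconstrained while rho stays inside (-1, 1)."""
    kx, kz = X.shape[1], Z.shape[1]
    beta, gamma = theta[:kx], theta[kx:kx + kz]
    rho = np.tanh(theta[-1])
    xb, zg = X @ beta, Z @ gamma
    ll = np.zeros(len(sel))
    # SMEs with no need for finance: only the selection probit contributes.
    ll[sel == 0] = norm.logcdf(-zg[sel == 0])
    # SMEs needing finance: bivariate normal probability of (outcome, selection).
    for sign, mask in ((1.0, (sel == 1) & (y == 1)),
                       (-1.0, (sel == 1) & (y == 0))):
        cov = [[1.0, sign * rho], [sign * rho, 1.0]]
        probs = [multivariate_normal.cdf([sign * a, b], mean=[0.0, 0.0], cov=cov)
                 for a, b in zip(xb[mask], zg[mask])]
        ll[mask] = np.log(probs)
    return -ll.sum()
```

Minimising this function with scipy.optimize.minimize recovers (beta, gamma, athrho); as noted above, a significant athrho signals that selection bias is present.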
All our estimated models include lagged independent variables and dummy variables to account for the legal status, region and sector of the SME and the year of the survey. All results are reported in terms of average marginal effects of the explanatory variables on the probability of needing finance (selection equation) and on the probability of being discouraged conditional on selection (i.e. needing finance). The average marginal effects indicate the change in probability when the independent variable switches from the reference category to the category in question. All models are estimated via maximum likelihood, and standard errors are clustered at the firm level to account for potential correlation of errors within clusters.

Footnote 7: The lack of additional proxies regarding the availability of profitable investment projects for SMEs or other metrics to identify creditworthy SMEs is a limitation of the LSBS Survey and our empirical analysis.
Footnote 8: We are thankful to an anonymous referee for drawing attention to this issue.

Results

Table 4 presents results for the impact of product innovation on borrower discouragement. Models 1 and 2 successively but separately introduce additional control variables in addition to the fixed effects (legal, region, sector and survey year), which are present in all models. The exclusion restriction used in the selection equation in model 1 is a legal status dummy, model 2 includes both business plan and legal status dummies, while only a business plan dummy is used in model 3 along with the full set of control variables.
In line with previous studies, we find that innovation (either product or process) is positively related to the need for finance in the selection equation. We do not comment further on these results given their consistency across all estimated models. For the outcome equation (i.e. the probability of being a discouraged borrower conditional on requiring finance), we find that product innovation has a positive influence on borrower discouragement. More precisely, the results of estimating model 1 suggest that being a product innovator increases the likelihood of borrower discouragement by 12.6 percentage points compared to a non-product-innovative counterpart. However, the magnitude of the marginal effect decreases to 6.5-5.5 percentage points, while remaining positive and statistically significant, after including additional control variables in models 2 and 3, respectively. Results for SMEs introducing process innovations are presented in Table 5. According to the results, process innovation is also positively associated with borrower discouragement, but the magnitude of this effect is approximately half of that obtained for product innovation and only marginally significant in model 3. This is in line with prior research demonstrating that cutting-edge R&D investments are more likely to face difficulties in obtaining credit, whereas those of a more prosaic and routine nature, such as process innovations, do not encounter such credit obstacles due to lower informational asymmetries (Czarnitzki & Hottenrott, 2011).
A potential issue regarding the definition of innovation in Tables 4 and 5 is that the reference category for product (process) innovators includes both non-innovators and process (product) innovators bundled into one group. To disentangle the effect of innovation, we classify SMEs into three mutually exclusive groups comprising: pure product innovation, pure process innovation and product and process innovation. The reference category is exclusively non-innovator SMEs. This approach allows us to make a clear comparison with respect to the group of non-innovators. The results reported in Table 6 suggest that SMEs introducing pure product innovations and both product and process innovations simultaneously are more likely to be discouraged borrowers compared to their non-innovative counterparts (by 6.2 and 8.5 percentage points, respectively), based on model 3. In this model, the pure process innovation coefficient loses statistical significance. The marginal effect of engaging in both product and process innovation on borrower discouragement is around two percentage points higher with respect to pure product innovators. Overall, the results suggest that relative to other forms of innovation, engaging in a combination of product and process innovation has a stronger impact on borrower discouragement.
In Table 7, we examine the association between the novelty of product innovation and the incidence of borrower discouragement. The results suggest that introducing either radical or incremental product innovation increases the SME's propensity to be discouraged by 6.6 and 5 percentage points, respectively (model 3), compared to SMEs that did not introduce product innovations. However, our results for the novelty of process innovations in Table 8 suggest, in line with our previous results, that neither radical nor incremental process innovation influences borrower discouragement.9

Footnote 9: Our baseline results remain largely unchanged after controlling for SMEs seeking external advice or information (as a proxy for the education level of entrepreneurs), switching banks, using perceived market competition as an obstacle to doing business as an additional exclusion restriction and employing a standard probit model using the full sample without sample selection. A summary of the estimation results is reported in Table A2 in the Appendix. We are thankful to an anonymous referee for suggesting these additional robustness tests.

Turning to the control variables in Tables 4, 5, 6, 7 and 8, we are able to observe which business-related characteristics are more likely to increase borrower discouragement. The results suggest that larger (>10 employees), profitable and growing SMEs (i.e. exhibiting an increased turnover) are less likely to be discouraged borrowers. Increased turnover and profitability tend to improve the cash position of SMEs and hence increase confidence that any application for external finance is likely to be accepted. We also find that SMEs led by entrepreneurs belonging to an ethnic minority are more likely to be discouraged borrowers. The marginal effect is economically and statistically significant across all models, in line with previous literature (Neville et al., 2018). These control variables affecting borrower discouragement remain significant across all estimations.
Propensity score matching exercise
If ex-ante innovators are more likely to be discouraged borrowers than non-innovators with comparable characteristics, the results of the empirical analysis could be affected. In order to explore this possibility and provide further evidence supporting our previous results, we follow Rosenbaum and Rubin (1983) and use propensity score matching (PSM) as a means of addressing such concerns. Matching restricts inference to the sample of innovators (the treatment group) and non-innovators (the control group). The treatment group is matched with the control group on the basis of a propensity score, which is a function of firm-level observable characteristics. Propensity scores are estimated via a logit model utilising a variety of SME characteristics (aims to grow, size, business age, turnover change, profits, urban location, women-led, minority-ethnic-led, family owned, business plan, legal status, region, sector and year dummies) as independent variables. We match innovative SMEs with one, four and eight corresponding (nearest neighbour) non-innovative SMEs.
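As a rough illustration of the matching step (a simplified sketch, not a replication of the exact routines used to produce Table 9), nearest-neighbour matching on an estimated propensity score can be coded as follows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def atet_psm(X, treated, outcome, k=1):
    """ATET by k-nearest-neighbour propensity score matching.
    X: covariate matrix; treated: 1 for innovators; outcome: 1 if discouraged."""
    # Propensity score: logit of treatment status on observables.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t, c = treated == 1, treated == 0
    # Match each innovator to its k closest non-innovators on the score.
    nn = NearestNeighbors(n_neighbors=k).fit(ps[c].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[t].reshape(-1, 1))
    # ATET: treated mean outcome minus mean outcome of matched controls.
    return outcome[t].mean() - outcome[c][idx].mean(axis=1).mean()
```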
One of the assumptions required to use treatment-effects estimators is the overlap assumption, which states that each individual has a positive probability of receiving each treatment. Figure A5 in the Appendix displays the estimated density of the predicted probabilities that an untreated SME is assigned to treatment and the estimated density of the predicted probabilities that a treated SME is assigned to treatment. Consistent with the overlap assumption, the estimated density plots have considerable mass in the regions where they overlap, little mass around 0 and little mass around 1. Thus, there is no evidence that the overlap assumption is violated.
If the models are well specified, they should also balance all covariates. For example, reviewing the covariate balance summary in Table A3, corresponding to the results reported in Table 9, panel A (1 match per observation) for innovators (product and/or process), we see that the weighted standardised differences are largely close to zero and the variances close to one, which suggests that our model balances all covariates.10 Table 9 presents the average treatment effect on the treated (ATET) for the three groups of SMEs: innovator (product and/or process), product innovator and process innovator. The control groups include non-innovators (product and/or process), non-product innovators and non-process innovators. We observe that the ATETs are positive and statistically significant for all types of innovators. Considering the results in the first column, for a SME, on average, the effect of being innovative (product and/or process) increases the likelihood of being a discouraged borrower by around 5.2 to 6.6 percentage points compared with what would have occurred if none of these firms had been innovative. Results for product innovators are consistent with those obtained previously.

Notes to Tables 4-8 (StataCorp, 2019): The selection equation relates to the probability of needing finance. The outcome equation relates to the probability of being a discouraged borrower conditional on needing finance. All regressions include a constant term. The exclusion restrictions used in the selection equation are business plan and legal status dummy variables in models 1-2 and only a business plan dummy in model 3. The excluded categories for the control variables are zero employees (size), 0-5 years (business age), 18-30 years old (owner's age) and decreased (turnover change). Z-statistics adjusted for clustering at the firm level are reported in parentheses. ***, ** and * denote significance at the 1%, 5% and 10% levels, respectively.

Footnote 10: The raw columns illustrate that differences are large prior to weighting. Covariate balance summaries for the remaining models reported in Table 9 offer similar results and are available upon request. To further verify the quality of matching, Figure A6 in the Appendix shows the distribution of the propensity score for both groups before and after matching and suggests that the matches are appropriate.
Main reasons for discouragement and innovative activity
In this final section, we provide new evidence regarding the relationship between innovation and borrower discouragement, based upon the specific reasons for discouragement reported by SMEs in the LSBS survey.
The small sample size of firms reporting discouragement and the complexity of implementing a double selection Heckman multinomial model to investigate this issue make propensity score matching a good tool to shed some light on this matter for the first time. Table 10 summarises the ATET results for three groups of SMEs: innovator (product and/or process), product innovator and process innovator. The control groups for each treated group include non-innovators (product and/or process), non-product innovators and non-process innovators, respectively. The results suggest that innovators (product/process) and product innovators are more likely to be discouraged due to a fear of rejection relative to their non-innovative and non-product-innovative counterparts. Interestingly, reluctance to take on additional risk, the timing of the decision to apply and perceived transaction costs (i.e. the decision would have taken too long/too much hassle) appear to be three additional important factors contributing to borrower discouragement across innovative SMEs. The role played by 'informal turndowns' by banks in the context of innovative SMEs certainly warrants further research (see Rostamkalaei et al., 2020).
Discussion and conclusions
This study provides important insights into the incidence and nature of borrower discouragement in SMEs. It adds [text lost in extraction] (Schumpeter, 1934, p.93). If borrower discouragement prevents entrepreneurs from undertaking growth-oriented activities, there are strong grounds to suggest that much greater academic and policy attention should be directed toward understanding both the causes and consequences of this complex phenomenon.
We adopt a more expansive definition of borrower discouragement, augmenting prior studies, which typically adopt a narrow definition of borrower discouragement (i.e. due to a fear of rejection) thus underestimating the true extent of credit self-rationing. As such, we find that the overall incidence of borrower discouragement across SMEs is much higher than the levels reported in prior UK studies (Cowling et al., 2016;Rostamkalaei, 2017). 11 The rich nature of the LSBS dataset used in this study suggests that borrower discouragement is a multi-faceted phenomenon with multiple underlying determinants. However, the lack of specific metrics to identify the creditworthiness of discouraged SMEs is a limitation of the LSBS survey and certainly an issue meriting further academic enquiry.
The results of our empirical analysis also suggest that SMEs engaging in innovation are much more likely to be discouraged borrowers. This is especially true for SMEs undertaking the riskiest form of innovation (product innovation). In other words, the more innovative a SME is, the more likely it is to self-ration credit. As such, we augment and complement evidence highlighting the structural problems impacting the supply of and demand for finance for innovative SMEs (Lee et al., 2015) with new evidence suggesting that radically innovative firms are also those most likely to self-impose credit constraints by refraining from external finance applications. From a theoretical perspective, our findings regarding which types of firm are most likely to be discouraged borrowers are also largely consistent with the informational theories of firm-level borrowing discussed earlier. These theories suggest that SMEs are likely to encounter credit restrictions and that these will be amplified for the most informationally opaque and risky firms (with limited collateral, volatile cash flows and higher proportions of intangible assets). It may be the case that innovative SMEs are acutely aware of their own risky status and consequently self-ration debt finance. Indeed, this is often an explanation provided by policymakers for this phenomenon (BEIS, 2017).12 Moreover, and in line with the theoretical expectations discussed earlier, one plausible explanation for the higher incidence of borrower discouragement across radical innovators owes to the greater levels of absolute uncertainty associated with these types of innovations (O'Connor & Rice, 2013). Incremental innovations, on the other hand, are associated with lower levels of uncertainty and are consequently easier to assess ex-ante.

Table 9 Propensity score matching: average treatment effect on the treated (ATET) of being an innovative SME on the likelihood of being a discouraged borrower. Columns: innovator (product/process), product innovator, process innovator. Notes: This table shows the computation of the ATET, that is, for a SME, on average, the effect of being innovative on the likelihood of being a discouraged borrower. We match innovative firms with one (panel A), four (panel B) and eight (panel C) corresponding non-innovative firms. Robust z-statistics are reported in parentheses. ***, ** and * denote statistical significance at the 1%, 5% and 10% levels, respectively.

Theoretically, we can also speculate that one outcome of discouragement may be increased levels of 'bricolage' in these resource-constrained innovative SMEs. A stream of work on bricolage and innovation has investigated the patterns of behaviour and organisational processes (i.e. making do with what is at hand, creating something from nothing and experimenting by combining resources for new purposes) enabling resource-constrained entrepreneurs to exploit their potential by making creative use of their limited resources to innovate (Baker & Nelson, 2005). While this improvisational process may allow some firms to innovate despite attendant resource constraints (Senyard et al., 2014), excessive use of bricolage may eventually hinder product development effectiveness in the longer term (Moorman & Miner, 1998). Consequently, if firms rely excessively on bricolage, the proliferation of sub-standard solutions and limited resources directed at improving 'stop gaps' and repetitive 'making do' may 'reduce or even fully offset the positive effects of bricolage on innovativeness' (Senyard et al., 2014, p.216).
The findings presented in this paper are relevant for practitioners and policymakers alike. While problems accessing finance are often used to justify government intervention toward small innovative firms, frequently these policy efforts focus upon the provision of supply-side measures such as credit guarantee schemes, grants and public equity finance instruments (Bloom et al., 2019; Schneider & Veugelers, 2010). By comparison, the issue of insufficient demand and credit self-rationing is rarely discussed in policy documents or addressed explicitly via targeted policy initiatives. This appears to be a crucial oversight. Given the potential sub-optimal economic outcomes resulting from borrower discouragement (Cowling et al., 2016; Hutton & Nightingale, 2011), more concerted policy efforts to alleviate borrower discouragement appear appropriate.

Notes to Table 10: This table shows the computation of the ATET, that is, for a SME, on average, the effect of being innovative on the likelihood of being a discouraged borrower. We match innovative firms with one corresponding non-innovative firm. Robust z-statistics are reported in parentheses. ***, ** and * denote statistical significance at the 1%, 5% and 10% levels, respectively.
Fittingly, there appears to be indicative evidence that the traditional supply-side approach to policy may be gradually shifting. Indeed, in light of the recent declining levels of demand for bank finance across UK SMEs, the British Business Bank recently launched a Demand Development Unit to help SMEs better understand and identify suitable sources of finance (British Business Bank, 2020). This seems a desirable step, given that a lack of awareness of different funding options together with an over-reliance on their main bank may explain why SMEs become discouraged borrowers in the first place (Wernli & Dietrich, 2021). Another approach would be to offer de novo start-ups free financial advice on different funding sources and financial products which are often difficult to comprehend by time-constrained entrepreneurs. 13 Access to information regarding external sources of finance for start-ups and SMEs can be helpful for enabling entrepreneurs to access the right type of financing for their ventures (Wilson, 2015). An additional benefit of such informational support is its inexpensive nature and ease of operation.
Overall, the results of this study suggest a need for a greater policy emphasis on alleviating borrower discouragement within innovative SMEs and a closer alignment between innovation and SME finance policy initiatives. Going forward, policymakers (such as the UK's British Business Bank) could pro-actively target these informational initiatives toward the types of innovative SMEs examined herein.14 Using evidence such as the survey data utilised in this paper, state-owned banks could potentially monitor borrower discouragement on an ongoing basis in order to assess how these types of policies are performing over time.
In addition, targeted measures can be used to improve access to external finance for innovative SMEs, including the use of loan guarantee schemes and public guarantees for innovative firms. Such measures could contribute to increasing the loan supply for SMEs by decreasing the perceived risks of innovative SMEs and improving access to both debt and equity finance. These policy measures may have equal applicability elsewhere, given the high incidence of borrower discouragement prevalent in the USA and other European countries.
"year": 2022,
"sha1": "cc821c7fd3564d5c9f5889d230f0fab764e28b43",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11187-021-00587-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "7ac127b96e89e24f2db6b7e2409064419c082e86",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": []
} |
92994052 | pes2o/s2orc | v3-fos-license | Stability of a bidimensional relative velocity lattice Boltzmann scheme
In this contribution, we study the theoretical and numerical stability of a bidimensional relative velocity lattice Boltzmann scheme. These relative velocity schemes introduce a velocity field parameter called"relative velocity"function of space and time. They generalize the d'Humi\`eres multiple relaxation times scheme and the cascaded automaton. This contribution studies the stability of a four velocities scheme applied to a single linear advection equation according to the value of this relative velocity. We especially compare when it is equal to 0 (multiple relaxation times scheme) or to the advection velocity ("cascaded like"scheme). The comparison is made in terms of L1 and L2 stability. The L1 stability area is fully described in terms of relaxation parameters and advection velocity for the two choices of relative velocity. These results establish that no hierarchy of these two choices exists for the L1 notion. Instead, choosing the parameter equal to the advection velocity improves the numerical L2 stability of the scheme. This choice cancels some dispersive terms and improve the numerical stability on a representative test case. We theoretically strengthen these results with a weighted L2 notion of stability.
Introduction
Lattice Boltzmann schemes are applicable to many different fields such as hydrodynamics, acoustics, magnetohydrodynamics and multiphase fluids [8,22,24,7,19], because the associated algorithm is simple, fast and flexible. This algorithm, associated with a cartesian lattice and a finite set of velocities, consists in computing some particle distribution functions at discrete values of time: they are updated through two successive phases of transport (exact) and relaxation (also called collision).
In the d'Humières multiple relaxation times (MRT) framework [8], the relaxation is made thanks to a set of moments, linear combinations of the particle distributions. Each of these moments relaxes towards an equilibrium with a proper time scale. The MRT schemes have been widely studied in terms of consistency [10,11,20] but slightly in terms of stability [22,17]. They can be particularly used to solve the Navier-Stokes equations [9,5,8,22,16]. They however undergo some instabilities in the small viscosity limit [22]. the algorithm of this automaton and its improvements in terms of stability. First, it has been written in the MRT framework with a generalized equilibrium depending on the non conserved quantities [1]. It has then been included in a new class of relative velocity schemes [12,14] that are the center of interest of the authors. Both MRT and cascaded are particular cases of the relative velocity schemes.
These relative velocity schemes use a set of moments depending on a velocity field parameter called relative velocity for the relaxation. This parameter is a function of space and time. Their consistency has been studied for one and two conservation laws [12,14]. Their stability has also been investigated numerically in the case of the D 2 Q 9 scheme for the Navier-Stokes equations [13]. This study, using the L 2 von Neumann stability [22,26], shows the importance of the choice of the polynomials defining the moments, that justifies the stability gain obtained by the cascaded automaton.
This contribution studies the theoretical and the numerical stability of a bidimensional scheme with four velocities for the simulation of a single linear advection equation. We expect to improve the stability for a relative velocity equal to the advection velocity ("cascaded like" scheme) instead of 0 (MRT scheme). This study is based on two stability notions. The first is a L ∞ notion that has been used for one dimensional and vectorial schemes [6,18]. The second is a weighted L 2 notion introduced in [3]. It has been applied to MRT schemes linearized around the zero velocity or in the BGK (Bhatnagar, Gross, Krook) case [4] for advection equations [3,21,25].
In the first part, we briefly recall the framework of the relative velocity schemes. Then we present the four velocities scheme of interest. In the second part, the effect of the relative velocity on the stability is illustrated thanks to the method of the equivalent equations [27,10]. We particularly focus on the structure of the dispersion terms for two choices of relative velocity (0 or the advection velocity). These results are then illustrated by a numerical L 2 von Neumann stability study. The two last parts give theoretical stability results. In the third part, the L ∞ stability area is fully described in terms of relaxation parameters and advection velocity for the two choices of relative velocity. In the fourth one, a weighted L 2 notion [21] is used to validate the numerical results obtained in the second part according to the relative velocity.
Framework
This section presents first the relative velocity schemes in a general framework. It is then particularized to a bidimensional scheme with four velocities for the simulation of a linear advection equation.
Presentation of the relative velocity schemes
For d ∈ N, we consider a cartesian lattice L of R^d associated with a space step Δx. The time step is given by the acoustic scaling Δt = Δx/λ, for λ ∈ R a velocity scale. A set of q ∈ N velocities, denoted by v = {v_0, ..., v_{q−1}} and depending on the velocity scale, is chosen such that for each node x ∈ L, x + v_j Δt is still a node of L. The lattice Boltzmann method computes the discrete values of several particle distributions denoted by f_j, 0 ≤ j ≤ q−1, the distribution f_j being associated with the velocity v_j. We denote by f = (f_0, ..., f_{q−1}) the vector of the particle distributions. An iteration of the algorithm consists in the succession of a phase of relaxation, nonlinear and local in space, and a phase of transport distributing the particles over the neighbouring nodes.
The relaxation operator of the relative velocity schemes originates from the framework of the MRT schemes [8]. Some moments, linear combinations of the particle distributions, relax with a proper relaxation rate. We define a matrix of moments depending on a velocity field u, function of space and time. This matrix, supposed to be invertible, is characterized by a set of polynomials P_k, 0 ≤ k ≤ q−1:

M_kj(u) = P_k(v_j − u), 0 ≤ k, j ≤ q−1.  (1)

Let us note that the MRT scheme corresponds to u = 0. The vector of moments is then obtained from the particle distributions through the linear transformation

m(u) = M(u) f.  (2)

The relaxation then carries on the components m_k(u), 0 ≤ k ≤ q−1, of m(u) = (m_0(u), ..., m_{q−1}(u)):

m_k*(u) = m_k(u) + s_k (m_k^eq(u) − m_k(u)),

where the m_k^eq(u) are the moments at equilibrium and the s_k the relaxation parameters. Some relaxation parameters are chosen null to enforce some conservation laws. The equilibrium is supposed to derive from an equilibrium distribution f^eq ∈ R^q independent of u:

m^eq(u) = M(u) f^eq.  (3)

The equilibrium f^eq is an a priori non linear function of the conserved moments. The post-collision distributions are obtained by the linear transformation

f*(x, t) = M(u)^{-1} m*(u).

Then, the particles are advected from node to node:

f_j(x + v_j Δt, t + Δt) = f_j*(x, t), 0 ≤ j ≤ q−1.  (4)

In the following, the relative velocity u is a real constant. When u is specified, the relative velocity scheme is called the scheme relative to u.
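A minimal numpy sketch of one iteration of this algorithm may clarify the structure; the moment matrix, relaxation parameters and equilibrium are supplied by the caller, and a periodic lattice is assumed (this is our own illustration, not the authors' code):

```python
import numpy as np

def relative_velocity_step(f, M_u, M_u_inv, s, m_eq_func, shifts):
    """One lattice Boltzmann iteration: moments, relaxation, transport.
    f: (q, nx, ny) particle distributions; M_u: the moment matrix M(u);
    s: relaxation parameters (s_k = 0 for conserved moments);
    m_eq_func: map from moments to equilibrium moments m_eq(u);
    shifts: integer lattice displacements v_j * dt / dx."""
    m = np.tensordot(M_u, f, axes=1)                     # m(u) = M(u) f
    m_star = m + s[:, None, None] * (m_eq_func(m) - m)   # relaxation
    f_star = np.tensordot(M_u_inv, m_star, axes=1)       # back to distributions
    # Exact transport: each f_j moves by its own velocity (periodic domain).
    out = np.empty_like(f)
    for j, (cx, cy) in enumerate(shifts):
        out[j] = np.roll(f_star[j], (cx, cy), axis=(0, 1))
    return out
```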
1.2. The twisted relative velocity D2Q4 scheme for a linear advection equation

In this section, we present a relative velocity D2Q4 scheme for a linear advection equation. We expect the scheme relative to the advection velocity (u = V), also called the "cascaded like" scheme in the following, to be more stable than the MRT scheme (u = 0) when this velocity V increases.
We want to approximate the following bidimensional advection equation:

∂_t ρ + V_x ∂_x ρ + V_y ∂_y ρ = 0,  (5)

where V = (V_x, V_y) ∈ R² is the advection velocity. With this aim, we consider the four velocities scheme called twisted D2Q4, defined by the set of velocities

v = {(λ, λ), (−λ, λ), (−λ, −λ), (λ, −λ)},

for λ = Δx/Δt the velocity scale (figure 1). The moments are associated with the polynomials 1, X, Y, XY. Another choice with four velocities could be the D2Q4 defined by the set

v = {(λ, 0), (0, λ), (−λ, 0), (0, −λ)},

and by the moments associated with 1, X, Y, X² − Y². However, we have chosen to work with the twisted scheme because this scheme is exactly a (D1Q2)² in terms of velocities and moments. This feature simplifies the proofs of consistency and stability. Moreover, the stability results of the D2Q4 scheme can be deduced from the results for the twisted D2Q4 scheme: the stability areas correspond through the composition of a rotation and a homothety (Appendix A).
The framework containing only one equation, we choose one conservation law, on the density:

ρ = f_0 + f_1 + f_2 + f_3 = m_0(u).

The other moments are relaxed with a relaxation parameter s_q ∈ R for the first order moments and s_xy ∈ R for the second order one. To approach the equation (5), the equilibrium must read

m_1^eq(u) = (V_x − u_x) ρ,  m_2^eq(u) = (V_y − u_y) ρ,  (6)

where m_3^eq(u) is the second order moment equilibrium determining the diffusion terms. Two different equilibria are chosen: the non intrinsic equilibrium

m_3^eq(u) = (V_x − u_x)(V_y − u_y) ρ,  (7)

and the intrinsic equilibrium

m_3^eq(u) = ((V_x − u_x)(V_y − u_y) − V_x V_y) ρ.  (8)

The first equilibrium results from the will to see the twisted D2Q4 as the product of the D1Q2 by itself: it can be seen as the product of two one dimensional equilibria. The second equilibrium is called intrinsic because it leads to a diffusion term that is independent of the basis of writing (proposition 2.2) but is not isotropic. It is important to note that these equilibria are chosen such that the relation (3) is satisfied. Then the second order asymptotics do not depend on the relative velocity u [12,14].
Exploitation of the equivalent equations for the stability
The purpose of this section is to predict the stability behaviour of the twisted scheme defined in section 1 according to the choice of u. To do so, we present the third order asymptotics of the scheme: we discuss the positive definiteness of the diffusion tensor and the dispersive third order terms. The discussion is then illustrated by a numerical study of L2 stability. We show that the cascaded like scheme, contrary to the MRT scheme, cancels some dispersive terms and stabilizes the scheme at the same time.
Discussion for the scheme with a non intrinsic diffusion
We study here the third order equivalent equations of the twisted D2Q4 with a non intrinsic diffusion. The following proposition results from the particularization of a general derivation of the equivalent equations for a DdQq scheme with one conservation law, d, q ∈ N [14]. We are interested in the structure of the higher order terms according to the velocity field u and in their influence on the stability.

Proposition 2.1 (Diffusion and dispersion operators). Given V ∈ R², (s_q, s_xy) ∈ R², the twisted D2Q4 scheme relative to a constant velocity u ∈ R², associated with the equilibrium (6,7) and with the relaxation parameters (0, s_q, s_q, s_xy), approaches the third order equation

∂_t ρ + V·∇ρ = Δt (D_2 : ∇_2) ρ + Δt² (D_3 : ∇_3) ρ + O(Δt³),  (9)

where the diffusion matrix D_2 is given by

D_2 = σ_q diag(λ² − V_x², λ² − V_y²),  (10)

and the dispersion matrix D_3 by (11) [display not recoverable from the extraction; the entries used in the discussion are quoted below]. The Hénon parameters read σ_q = 1/s_q − 1/2 and σ_xy = 1/s_xy − 1/2, the second and third order derivative operators are given by

∇_2 = (∂²/∂x_α ∂x_β)_{α,β∈{x,y}},  ∇_3 = (∂³/∂x_α² ∂x_β)_{α,β∈{x,y}},  (12)

and : is the scalar product for the matrices viewed as vectors of R⁴.
These equations are obtained thanks to a formal calculus software that implements an algorithm computing the equivalent equations in the linear case [2]. The following lemma exhibits the area of well-posedness of the second order truncation of (9).

Lemma. Let σ_q > 0. The second order truncation of (9) is well posed if and only if |V_x| < λ and |V_y| < λ.

Proof. The spectrum of the matrix D_2 is {σ_q(λ² − V_x²), σ_q(λ² − V_y²)}; it is positive exactly for |V_x| < λ and |V_y| < λ, that closes the proof.

Because of the relation (3), the velocity parameter u appears only at the third order of the equations. Let us discuss the structure of these third order dispersive terms according to the choice of u.
First we choose the velocity field u equal to the advection velocity V ("cascaded like" scheme). The dispersion terms are then small compared to the diffusion terms. Indeed, this choice cancels the two off-diagonal components of D_3 depending on the parameter σ_xy. The remaining dispersion terms are V_x(λ² − V_x²)(1 − 12σ_q²)/6 and its symmetric counterpart in y. When σ_q tends to 0, these terms are equivalent to V_x(λ² − V_x²) multiplied by a constant. Thus when the diffusion (λ² − V_x²) decreases, the dispersion decreases at least at the same speed.

On the contrary, the dispersion terms of the MRT scheme (u = 0) create instabilities when the diffusion (represented by σ_q) is weak. Indeed, the off-diagonal terms are conserved for this choice of velocity parameter: they are given by −2σ_q V_y(λ² − V_x²)(σ_q − σ_xy) and its symmetric counterpart in y. This term is equivalent to 2σ_xy σ_q V_y(λ² − V_x²) when σ_q gets close to 0. The dispersion terms thus depend on the size of the parameter σ_xy. If σ_xy is large and σ_q tends to 0, the numerical stability should be deteriorated by dispersion phenomena for V_x or V_y close to λ: indeed, the dispersion, behaving as 2σ_xy σ_q λ(λ² − V_x²), becomes greater than the diffusion, given by σ_q(λ² − V_x²). Instead, taking σ_xy close to 0 should limit the dispersion effects.

Remark 2.1 (Exact third order scheme). We note that the choices u = V and σ_q = 1/√12 cancel the dispersion matrix D_3 given by (11). Thus the scheme is consistent at the third order with the second order advection diffusion truncation of (9). Moreover, there is still a degree of freedom with the parameter σ_xy. This is impossible for the MRT scheme (u = 0), except when there is only one relaxation parameter (BGK scheme: σ_q = σ_xy).
Discussion for the scheme with an intrinsic diffusion
We discuss the structure of the third order equivalent equations for the twisted D2Q4 intrinsic scheme, obtained thanks to the linear algorithm derived in [2]. The study is completely analogous to the one of section 2.1.
Proposition 2.2 (Diffusion and dispersion operators). Given V ∈ R², (s_q, s_xy) ∈ R², the twisted D2Q4 scheme relative to a constant velocity u, associated with the equilibrium (6,8) and with the relaxation parameters (0, s_q, s_q, s_xy), approaches the following third order equation

∂_t ρ + V·∇ρ = Δt (D_2 : ∇_2) ρ + Δt² (D_3 : ∇_3) ρ + O(Δt³),  (13)

where the diffusion matrix D_2 is given by

D_2 = σ_q (λ² Id − V ⊗ V),

and the dispersion matrix D_3 by a matrix whose entries involve the function φ_2(u_x, u_y, V_x, V_y) discussed below [display not recoverable from the extraction]. The Hénon parameters σ_q and σ_xy are given by 1/s_q − 1/2 and 1/s_xy − 1/2, the operators ∇_2 and ∇_3 are given by (12) and : is the scalar product for the matrices viewed as vectors of R⁴.

Remark 2.2. As expected, the diffusion is intrinsic: it is independent of the basis of writing. Indeed it is equal to σ_q (λ² Id − V ⊗ V).

Lemma. Let σ_q > 0. The second order truncation of (13) is well posed if and only if |V|_2 < λ.

Proof. The eigenvalues of the matrix D_2 are σ_q λ² and σ_q (λ² − |V|_2²), that closes the proof.

Let us compare the well-posedness areas of the schemes with a non intrinsic diffusion (section 2.1) and with an intrinsic diffusion. These areas are represented in figure 2. We note that the scheme with a non intrinsic diffusion has a greater area in V. Consequently, it should authorize larger stable velocities than the scheme with an intrinsic diffusion.
We now get back to the intrinsic case and discuss the dispersion terms. As in the non intrinsic case, the scheme should be stable for larger velocities with the choice u = V than with u = 0. Indeed, when σ_q tends to 0, the term φ_2(u_x, u_y, V_x, V_y) is of third order in V if u = V. Instead, if u = 0, it is of first order in V. More dispersion is created for u = 0 when V_x or V_y is close to λ and σ_xy is large.
Illustration with numerical stability experiments
In this section, we study the numerical L2 stability of the twisted D2Q4 scheme according to the choice of the velocity parameter u. The link with the previous consistency study is made: the third order dispersion terms have an influence on the numerical stability. The choice u = V provides larger stability areas in V.

We present a first numerical experiment with the advection of a circular spot: the initial conditions are given by

ρ(x, y, 0) = 1_C(x, y),

where C is the disc centered at (1/2, 1/2) of radius 0.1 and 1_C is the function equal to 1 on this disc, null elsewhere. This spot moves with an advection velocity V ∈ R² in the domain [0, 1]², constituted of 128² points, with periodic boundary conditions.

Table 1. Maximal |V| stable in λ scale for θ = 0, σ_xy = 1/√3 and 128² points.
Table 2. Maximal |V| stable in λ scale for θ = π/4, σ_xy = 1/√3 and 128² points.

The scheme is considered as stable if it has not broken after 2000 iterations. We present the biggest advection velocities keeping the scheme stable for different choices of relaxation parameters. Two different directions of the advection velocity are considered: θ = 0 and θ = π/4. We choose different values for the parameter σ_q, the parameter σ_xy being set to 1/√3. The results are presented in tables 1 and 2.
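For illustration, the experiment can be reproduced along the following lines, reusing relative_velocity_step from the sketch of section 1.1 and assuming the (reconstructed) non-intrinsic equilibrium (6,7); this is a sketch, not the code used to produce the tables:

```python
import numpy as np

lam, N, n_iter = 1.0, 128, 2000
V = np.array([0.9 * lam, 0.0])                 # advection velocity, theta = 0
u = V                                          # "cascaded like" choice u = V
sigma_q, sigma_xy = 1.0 / 20.0, 1.0 / np.sqrt(3.0)
s = np.array([0.0, 1.0 / (sigma_q + 0.5), 1.0 / (sigma_q + 0.5),
              1.0 / (sigma_xy + 0.5)])         # s_k from the Henon parameters

c = np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]])    # twisted D2Q4 over lam
wx, wy = lam * c[:, 0] - u[0], lam * c[:, 1] - u[1]
M_u = np.stack([np.ones(4), wx, wy, wx * wy])         # rows P_k(v_j - u)
M_u_inv = np.linalg.inv(M_u)

def m_eq_func(m):
    # Non-intrinsic equilibrium (6,7), written in the u-frame moments.
    rho = m[0]
    return np.stack([rho, (V[0] - u[0]) * rho, (V[1] - u[1]) * rho,
                     (V[0] - u[0]) * (V[1] - u[1]) * rho])

g = (np.arange(N) + 0.5) / N
Xg, Yg = np.meshgrid(g, g, indexing="ij")
rho0 = ((Xg - 0.5) ** 2 + (Yg - 0.5) ** 2 < 0.1 ** 2).astype(float)  # the spot
f = np.tensordot(M_u_inv, m_eq_func(np.broadcast_to(rho0, (4, N, N))), axes=1)

for it in range(n_iter):
    f = relative_velocity_step(f, M_u, M_u_inv, s, m_eq_func, c)
    if not np.isfinite(f).all():
        print("scheme broke at iteration", it)
        break
```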
The scheme relative to the advection velocity u = V authorizes larger stable velocities than the MRT scheme (u = 0). Most of the time, its stability areas do not depend on the relaxation parameters as long as they remain between 0 and 2. Instead, the stability areas of the MRT scheme decrease when σ_q tends to 0. This phenomenon also occurs for the scheme relative to u = V with an intrinsic diffusion in the direction θ = π/4; however, the instabilities appear for larger V than for the MRT scheme.

The dispersion terms exhibited in the equivalent equations (section 2) are at the origin of these phenomena. Their presence causes instabilities when σ_q is too small compared to σ_xy and |V|. We have seen that these terms cancel for u = V but are present when u = 0. This explains the constance of the stability area for u = V and its deterioration for u = 0 when σ_q decreases.

These dispersive phenomena are illustrated in figures 3 and 4 for the scheme with an intrinsic diffusion. We choose σ_xy = 1/√3, σ_q = 1/20 and V = (0.9λ, 0) for these plots. According to the data of table 1, this configuration is a stable case for the scheme relative to u = V and an unstable case for the MRT scheme. The spot is represented at the times t = 0 and t = 0.4 for 256² points. The dispersion of the MRT scheme is clearly visible in figure 3: some oscillations appear behind the advected spot. The scheme is going to break since the density ρ is increasing. These phenomena are absent for the scheme relative to u = V (figure 4), which remains numerically stable.
Our second numerical experiment uses the L 2 linear stability of von Neumann. We discuss the spectrum of the amplification matrix of the scheme. This matrix, characterizing an iteration of the scheme, has to be determined first. The equilibrium being linear (6,7,8), there exists a matrix E = E(V , s) for s = (0, s q , s q , s xy ) and V ∈ R 2 so that f eq = Ef .
The relaxation phase of the relative velocity D 2 Q 4 scheme reads f * = (I + M ( u) −1 DM ( u)(E − I))f , where D = diag(s) is the diagonal matrix of the relaxation parameters. This expression holds for each node x of the lattice, the relaxation being local in space. The distribution after the transport phase is given by f j (x, t + ∆t) = f * j (x − v j ∆t, t). (14) Taking the Fourier transform of (14), the transport operator becomes local in space and is represented by the diagonal matrix A = A(k) for k ∈ R 2 whose diagonal components are given by e i∆t k·v j , 0 ≤ j ≤ 3. The amplification matrix then reads L( u) = L( u, V , k, s) = A(I + M ( u) −1 DM ( u)(E − I)). It characterizes a time iteration of the scheme in the Fourier space, acting on the Fourier transform of f . We want to exhibit the advection velocities V for which the scheme verifies the necessary condition of L 2 stability max k∈R 2 r(L( u)) ≤ 1, where r is the spectral radius. We are thus interested in the set of velocities V satisfying this condition (15). The figures 5 to 10 present this set in the plane (V x , V y ) for the twisted D 2 Q 4 scheme with a non intrinsic diffusion and some relaxation parameters s q and s xy . The left draw is about the MRT scheme ( u = 0) and the right one illustrates the case u = V . We expect the right areas to be larger than the left ones.
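The amplification matrix L( u) is straightforward to assemble numerically; the sketch below (our code, with the same intrinsic-equilibrium assumption and naming as the previous sketch) samples the spectral radius over a grid of wavenumbers to test the necessary condition (15).

```python
import numpy as np

def amplification(k, V, u, s, lam=1.0, dt=1.0):
    """L(u) = A(k) (I + M(u)^{-1} D M(u) (E - I)) for the D2Q4 scheme."""
    v = lam * np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])
    M = np.array([np.ones(4), v[:, 0] - u[0], v[:, 1] - u[1],
                  (v[:, 0] - u[0])**2 - (v[:, 1] - u[1])**2])
    Minv = np.linalg.inv(M)
    c = np.array([1.0, V[0] - u[0], V[1] - u[1], 0.0])   # m_eq = c * rho
    E = Minv @ np.outer(c, np.ones(4))                   # f_eq = E f, rho = sum_j f_j
    A = np.diag(np.exp(1j * dt * (v @ k)))               # transport in Fourier space
    return A @ (np.eye(4) + Minv @ np.diag(s) @ M @ (E - np.eye(4)))

def worst_spectral_radius(V, u, s, nk=64):
    """max over a wavenumber grid of the spectral radius r(L(u))."""
    ks = 2 * np.pi * np.arange(nk) / nk
    return max(np.abs(np.linalg.eigvals(
               amplification(np.array([kx, ky]), V, u, s))).max()
               for kx in ks for ky in ks)

# Condition (15) then reads worst_spectral_radius(V, u, (0, s_q, s_q, s_xy)) <= 1.
```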
Figure 7. Velocities V stable in λ scale for the scheme relative to u = 0 (MRT on the left), u = V (on the right), for a non intrinsic diffusion with s q = 1 and s xy = 1.9.

Firstly, the draws corresponding to a BGK scheme (s q = s xy = 1) are independent of u (figure 5). This result was expected because the velocity field u does not appear in the scheme for one single relaxation parameter since (3) is verified. Secondly, the scheme relative to u = V verifies the necessary condition of stability on the biggest area |V | ∞ ≤ λ (CFL) for all the s chosen (figures 5 to 10). The constancy and the optimality of these areas confirm the phenomena observed for the advected spot
(tables 1 and 2). Finally, the stability areas of the MRT scheme decrease as the two relaxation parameters move away from each other. The stability area is reduced to V = 0 for s q = 2 or s xy = 2. Whatever the choice of s, the areas of the MRT scheme are included in those of the scheme relative to u = V . These results are also consistent with the test case of the advected spot: the deterioration of the areas is due to the third order dispersive terms of the equivalent equations (proposition 2.1).
For the D 2 Q 4 scheme with an intrinsic diffusion (figures 11 to 16), the results are different but the general trend is the same. The scheme relative to u = V verifies (15) for larger sets of V than the MRT scheme, except when s q = 1, s xy = 1.5 (figure 12). However, the areas associated to u = V are not optimal any more: they decrease when s q and s xy move away from each other. Both velocity fields ( u = 0 and u = V ) undergo this phenomenon, but the effect of the dispersion is bigger for the MRT scheme. These results confirm the advected spot ones: the same deterioration appears when σ q tends to 0 (tables 1 and 2), to a lesser extent for u = V than for u = 0.

Figure 9. Velocities V stable in λ scale for the scheme relative to u = 0 (MRT on the left), u = V (on the right), for a non intrinsic diffusion with s q = 1.9 and s xy = 1.
The main conclusions of this section are the following. Choosing u = V cancels some dispersive terms that become important as V grows. This cancellation does not occur for u = 0. We must view these results in parallel with the fact that the scheme is more stable (numerical L 2 notion) for u = V than for u = 0.

Figure 11. Velocities V stable in λ scale for the scheme relative to u = 0 (MRT on the left), u = V (on the right), for an intrinsic diffusion with s q = 1 and s xy = 1.
Theoretical L ∞ stability
We study in this section the stability of the relative velocity D 2 Q 4 scheme with respect to a notion of L ∞ stability presented in [6,18]. We use it to demonstrate some maximum principles for u equal to 0 (MRT scheme) and to V , the advection velocity ("cascaded like" scheme). The L ∞ stability area is fully described in terms of relaxation parameters and advection velocity for these two choices of relative velocity. We show that this L ∞ notion does not differentiate u = 0 from u = V in terms of stable behaviour. In each case, the parameters (s q , s xy ) corresponding to a non empty area of L ∞ stability in V are included in the square [0, 2] 2 .

Figure 13. Velocities V stable in λ scale for the scheme relative to u = 0 (MRT on the left), u = V (on the right), for an intrinsic diffusion with s q = 1 and s xy = 1.9.
Definition 3.1 (L ∞ stability). Let us consider a D d Q q scheme on the cartesian lattice L ⊂ R d . The mass at time t is given by ρ tot (t) = Σ x∈L Σ j f j (x, t). Let C ∈ R + , suppose that the initial distributions of particles are nonnegative and that the initial mass is bounded by C: the scheme is said to be L ∞ stable if for all time t n ∈ R, n ∈ N, f j (x, t n ) ≥ 0, 0 ≤ j ≤ q − 1, x ∈ L, and ρ tot (t n ) ≤ C.

Figure 15. Velocities V stable in λ scale for the scheme relative to u = 0 (MRT on the left), u = V (on the right), for an intrinsic diffusion with s q = 1.9 and s xy = 1.
The proofs for this a priori nonlinear L ∞ stability notion depend only on the relaxation phase. Indeed, the transport exchanges the distributions between each other: it does not act on their positivity nor on their mass. Thus it is sufficient to show that the postcollision distributions remain nonnegative and the mass bounded. Note also that if ρ tot (0) is bounded, ρ tot (t n ) remains bounded for all time t n because this mass is conserved by the scheme. So it only remains to study the positivity of the postcollision distribution functions.
To do so, we express the postcollision distributions of particles f * as a function of the precollision ones f for a general D d Q q scheme with one conservation law. These postcollision distributions are first expressed as functions of the postcollision moments thanks to (4). After applying the relaxation identity (2), the precollision moments are written in the frame of the particle distributions with (1). These calculations lead to the expression (16) of the relaxation.
The framework of study is linear because we are approaching a linear advection equation. As a consequence, the equilibrium term of (16) is a linear function of the conserved variable ρ and thus of the distributions f j . Consequently, there is a matricial relation between f * and f : sufficient conditions of L ∞ stability are obtained by checking when this matrix is nonnegative.
A matrix is said to be nonnegative if all its coefficients are nonnegative.
Note that this notion is different from that of a positive matrix, which has all its eigenvalues nonnegative [23]. In the following, we prove the results in the twisted case: the analogous ones for the D 2 Q 4 scheme are available in the appendix B and use the proposition A.1 of the appendix A. We first consider the relative velocity D 2 Q 4 scheme with a non intrinsic diffusion (equilibrium (7)). The proofs are quite technical and involve geometric inequalities on V and s.
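This coefficient-wise check is mechanical and can stand in for the formal calculus software invoked in the proofs below; the following sketch (our code, with the same conventions and intrinsic equilibrium as the earlier sketches) builds the matrix sending f on f * and tests its nonnegativity.

```python
import numpy as np

def relaxation_matrix(V, u, s, lam=1.0):
    """Matrix sending the precollision f on the postcollision f*."""
    v = lam * np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])
    M = np.array([np.ones(4), v[:, 0] - u[0], v[:, 1] - u[1],
                  (v[:, 0] - u[0])**2 - (v[:, 1] - u[1])**2])
    c = np.array([1.0, V[0] - u[0], V[1] - u[1], 0.0])    # m_eq = c * rho
    E = np.linalg.inv(M) @ np.outer(c, np.ones(4))        # f_eq = E f
    return np.eye(4) + np.linalg.inv(M) @ np.diag(s) @ M @ (E - np.eye(4))

def linf_sufficient(V, u, s):
    """Sufficient L-infinity stability condition: all coefficients nonnegative."""
    return (relaxation_matrix(V, u, s) >= -1e-12).all()
```

Sampling linf_sufficient over a grid of velocities gives a numerical counterpart of the areas established in the propositions below.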
The non intrinsic diffusion case
Proposition 3.1 (L ∞ stability areas for the MRT scheme). Let V ∈ R 2 , (s q , s xy ) ∈ R 2 , consider the twisted D 2 Q 4 MRT scheme ( u = 0) with the relaxation parameters (0, s q , s q , s xy ), associated with the equilibrium (6,7). Then:
• if s q ≤ s xy ≤ 2s q and s q ≤ 1 (area ABC on the figure 17), the scheme is L ∞ stable for all V in an explicit velocity area.
• if s xy = 0 and 0 < s q ≤ 2 (ray ]BD] on the figure 17), the scheme is L ∞ stable for V = 0.
• if s xy = s q = 0 (point B on the figure 17), the scheme is unconditionally L ∞ stable.
• For all other s, there is no V corresponding to a L ∞ stable scheme.
In particular, the parameters (s q , s xy ) corresponding to a non empty area of L ∞ stability in V are included in the square [0, 2] 2 . The stability areas in the plane (s q , s xy ) are represented on the figure 17. The limit cases for the parameters s q and s xy lead to consistent inequalities for the velocity conditions.
Proof. The first step consists in expressing f * as a function of f thanks to the relation (16). If the matrix sending f on f * is nonnegative, the scheme is L ∞ stable. A formal calculus software guarantees that this corresponds to ensuring a list of explicit relations on V and s. If s xy = 0, these relations imply V = 0 and 0 ≤ s q ≤ 2. Otherwise, we factorise by s xy /4λ 2 and express these identities as functions of the coefficient γ = s q /s xy . This leads to the inequalities (19) if s xy > 0 and (20) if s xy < 0. These inequalities are expressed thanks to quadratic forms in V . To characterize the associated geometry, we write them in an adapted basis, using the variable change V x ′ = V x + V y , V y ′ = V x − V y , and then centering in the adapted frame: we obtain (21) if s xy > 0 and (22) if s xy < 0, plus in each case the inequalities obtained exchanging the roles of V x and V y .

Figure 18. Value of the maximum of (21): γ 2 − 1, (γ − 1) 2 or (γ + 1) 2 − 4/s xy depending on the area.

We begin by showing that there is no stable velocity in the case s xy < 0. The minimum of (22) takes three possible values. In the first and third cases, (22) would have a solution if and only if the left pole of the hyperbole centered in 2λ|γ| is negative. Considering that s xy < 0, the first case is equivalent to s xy ≤ 2s q , which has no intersection with s q ≤ s xy < 0, where (γ − 1) 2 is the minimum. The third one is equivalent to s xy ≤ 2(2 − s q ), whose intersection with 2 − s q ≤ s xy < 0 is empty. It remains the case when s xy ≤ min(s q , 2 − s q ): these inequations, combined with the ones obtained exchanging the roles of V x and V y , have no common intersection. This closes the case s xy < 0.

Figure 19. Layout of the bounds of the stability areas in V for the twisted D 2 Q 4 MRT scheme.
We exhibit now the conditions for which (21) has at least a solution V . The eventual stability area is located between the two hyperboles, plus the ones obtained exchanging the roles of V x and V y . Let us note that this maximum is always nonnegative because (γ − 1) 2 is nonnegative. This area is delimited by the points A, B, C, D on the figure 19. The inequalities (21) have a solution V if and only if the left pole P of the hyperbole centered in 2λγ has a nonnegative abscissa (figure 19). When the maximum is (γ − 1) 2 , the abscissa is equal to 2λ(γ − |γ − 1|), which is nonnegative if and only if s xy ≤ 2s q . Finally, when the maximum is (γ + 1) 2 − 4/s xy , the abscissa is equal to 2λ(γ − ((γ + 1) 2 − 4/s xy ) 1/2 ); the positivity of this coefficient is equivalent to s xy ≤ 2(2 − s q ).
It remains to determine the maximum of (21) in the case where there is at least one velocity V stable: three cases, represented on the figure 18, appear. If s xy ≤ s q ≤ 2 − s xy , the maximum is γ 2 − 1; if s xy ≥ 2 − s q and s q ≥ 1, it is (γ + 1) 2 − 4/s xy ; if s q ≤ min(s xy , 1), it is (γ − 1) 2 .
From the proposition 3.1, we deduce the simpler case of the BGK scheme corresponding to one single relaxation parameter.
Corollary 3.1 (The BGK case). Let V ∈ R 2 , u ∈ R 2 , s ∈ R, consider the twisted D 2 Q 4 scheme relative to u, BGK of parameter s, associated with the equilibrium (6,7).
• If 0 ≤ s ≤ 1, the scheme is L ∞ stable for all V such that |V | ∞ ≤ λ.
• If 1 ≤ s ≤ 4/3, the scheme is L ∞ stable on (23a,23b).
• If s > 4/3, no V corresponds to a L ∞ stable scheme.
In particular, the parameters s corresponding to a non empty area of L ∞ stability in V are included in [0, 2].
Remark 3.1. The BGK scheme does not depend on the velocity field u since the relation (3) is verified.
Proof. In this context, the parameter γ is equal to 1. Thus the inequalities (21) reduce to a single family, plus the inequality obtained exchanging the roles of V x and V y . If s ≤ 1, it is equivalent to |V | ∞ ≤ λ; otherwise we obtain (23a,23b).
The inequalities (23a,23b) admit a non empty set of stable velocities V if the abscissa of the left pole of the hyperbole is nonnegative. This abscissa, equal to 2λ(1 − 2(1 − 1/s) 1/2 ), is nonnegative if and only if s ≤ 4/3.

We now focus on the scheme relative to u = V with a non intrinsic diffusion and two relaxation parameters. We show that the optimal area of stability spreads on some TRT schemes, contrary to the MRT scheme.

Proposition 3.2 (L ∞ stability areas for a relative velocity scheme). Let V ∈ R 2 , (s q , s xy ) ∈ R 2 , consider the twisted D 2 Q 4 scheme relative to u = V associated with the relaxation parameters (0, s q , s q , s xy ), and the equilibrium (6,7) m eq (V ) = ρ (1, 0, 0, 0). Then:
• if s q ≤ s xy ≤ min(1, 2s q ) (area ABC on the figure 20), the scheme is L ∞ stable on (24).
• if s xy < 2s q and max(s q , 1) ≤ s xy ≤ 2(2 − s q ) (area BCED on the figure 20), the scheme is L ∞ stable on the intersection of (24) with an additional velocity condition.
• if 0 ≤ s xy ≤ min(s q , s q (2 − s q )) (area ACF on the figure 20), the scheme is L ∞ stable on an explicit area.
• if s q (2 − s q ) ≤ s xy ≤ min(s q , 2(2 − s q )) (area CEF on the figure 20), the scheme is L ∞ stable on (25a,25b).
• For all other s, no V corresponds to a L ∞ stable scheme.
• if s xy = 2s q and 0 < s xy ≤ 2 (ray ]AD] on the figure 20), the scheme is L ∞ stable on the intersection of (24) and (27).
• if s xy = s q = 0 (A on the figure 20), the scheme is unconditionally L ∞ stable.
In particular, the parameters (s q , s xy ) corresponding to a non empty area of L ∞ stability in V are included in the square [0, 2] 2 .
Proof. We determine the sufficient conditions for the positivity of the matrix sending f on f * . A formal calculus software leads to the inequalities (28) to (30), plus the inequality coming from (29) when the roles of V x and V y are exchanged. We begin by the study of (28). Case 1: 2s q < s xy . These inequalities are then equivalent to conditions that no V verifies: such a set of V would be the intersection of four areas that do not intersect. We must assume that s xy ≤ 2s q to eventually have a non empty stability area in V .
If s xy = 0, all the inequalities are obviously verified and the scheme is unconditionally stable in V . If s xy < 0, we would need λ ± V x ≤ 0, which is impossible. If s xy > 0, (29) becomes (λ ± V x )λ ≥ 0, plus the analogous inequality in V y : this is equivalent to (24). The inequalities (30) then reduce to (27). The final stability area is the intersection of (24) and (27). Three cases, represented on the figures 21, 22 and 23, are then possible.
Figure 21. L ∞ stability area for the scheme relative to u = V with a non intrinsic diffusion for s xy = 2s q . The point P is of abscissa 2λ(2 − s xy )/s xy ≥ 2λ.

Figure 22. L ∞ stability area for the scheme relative to u = V with a non intrinsic diffusion for s xy = 2s q . The point P is of abscissa λ ≤ 2λ(2 − s xy )/s xy ≤ 2λ.

Figure 23. L ∞ stability area for the scheme relative to u = V with a non intrinsic diffusion for s xy = 2s q . The point P is of abscissa 2λ(2 − s xy )/s xy ≤ λ.

Let s xy ≤ 1: then 2λ(2 − s xy )/s xy ≥ 2λ and the stability area is given by (24) (figure 21). Let 1 ≤ s xy ≤ 4/3: then λ ≤ 2λ(2 − s xy )/s xy ≤ 2λ and the stability area is the intersection of (24) and (27) (figure 22). Let s xy ≥ 4/3: then 2λ(2 − s xy )/s xy ≤ λ and the stability area is given by (27) (figure 23). This area is reduced to V = 0 when s xy = 2.
The inequalities (28) reduce to (24). The inequalities (29) can be written, in terms of the coefficient γ = s xy /(2s q − s xy ), as (31). The inequalities (31) have solutions if γ ≥ 0, that is equivalent to s xy ≥ 0. If s xy ≥ s q , (31) contains (24), which corresponds to the same constraints as those imposed by (28). If s xy ≤ s q , then the stability area reduces to (26).
It remains to study the influence of (30) on the stability area. These identities read, after division by 2s q − s xy , as quadratic forms in V . We change the frame, add the analogous inequality exchanging the roles of V x and V y , and center these equations to obtain (25a) and (25b).

Figure 24. Layout of the areas given by (26) and (32).
The inequality (25a) reads as (32). We represent the areas corresponding to the inequations (26) and (32) on the figure 24. The relation (26) is equivalent in the new frame to |V | 1 ≤ 2λγ. It is contained in the area delimited by (32) (figure 24). The final stability area is then given by (24) if γ ≥ 1 and by (26) otherwise.
The inequality (25a) has a solution V if the abscissa of the left pole of the hyperbole centered in 2λγ is nonnegative; call this condition (33). Assuming that s xy ≥ 2, then s xy ≥ 2|s q − 1| + 2 ≥ 2s q would be required, which gives an empty L ∞ stability area in V (refer to the beginning of the proof). As a consequence s xy ≤ 2. If s q ≤ 1, the inequality (33) is equivalent to s xy ≤ 2s q , otherwise to s xy ≤ 2(2 − s q ).

Figure 25. Layout of the bounds of the L ∞ stability areas of the scheme relative to u = V with a non intrinsic diffusion for γ ≤ 1 and s q (2 − s q ) ≤ s xy ≤ 1.
These conditions on s ensure the existence of a non empty stability area in V .
To obtain the final stability area in V given by the set of inequations from (28) to (30), we compare those associated with (25a) to (24) if γ ≥ 1, or to (26) if γ ≤ 1. The two areas (24) and (26) read respectively |V | 1 ≤ 2λ and |V | 1 ≤ 2λγ in the frame (0, V x ′ , V y ′ ). If γ ≤ 1, the stability area is included in the one given by (26).

Figure 26. Layout of the bounds of the L ∞ stability areas of the scheme relative to u = V with a non intrinsic diffusion for γ ≥ 1 and s q (2 − s q ) ≤ s xy .

We now compare the areas in V associated with u = 0 and u = V for three s that do not verify s q ≤ s xy ≤ min(1, 2s q ). We want to know if u = V provides a better L ∞ stability behaviour than u = 0. The associated inequalities are presented in the table 3.
Table 3. L ∞ stability areas of the schemes with a non intrinsic diffusion for λ = 1 and the choices of s: (0, 1, 1, 1/2), (0, 1, 1, 3/2), (0, 3/2, 3/2, 3/4).

We cannot say that a scheme is better than the other in terms of L ∞ stability: generally the areas just intersect. In the section 2.3, we saw that the velocity field u = V improves the stability with respect to the blow-up of the scheme, which corresponds to a L 2 notion rather than to a L ∞ notion. We confirm this fact theoretically in the section 4.
The intrinsic diffusion case
As for the non intrinsic case, we first deal with the MRT scheme corresponding to u = 0.

Proposition 3.3 (L ∞ stability areas for the MRT scheme). Let V ∈ R 2 , (s q , s xy ) ∈ R 2 , consider the twisted D 2 Q 4 MRT scheme ( u = 0) associated with the relaxation parameters (0, s q , s q , s xy ), and the equilibrium (6,8) m eq (0) = ρ(1, V x , V y , 0). Then:
• if 0 ≤ s xy ≤ min(s q , 2 − s q ) (area BCD on the figure 30), the scheme is L ∞ stable for all V such that |V | 1 ≤ λγ. (35)
• if s q ≤ min(1, s xy ) and s xy ≤ 2s q (area ABC on the figure 30), the scheme is L ∞ stable for all V satisfying (36).
• if s q ≥ max(1, 2 − s xy ) and s xy ≤ 2(2 − s q ) (area ACD on the figure 30), the scheme is L ∞ stable for all V satisfying (37).
• if s xy = s q = 0 (point B on the figure 30), the scheme is unconditionally L ∞ stable.
• For all other s, no V corresponds to a L ∞ stable scheme.
In particular, the parameters (s q , s xy ) corresponding to a non empty area of L ∞ stability in V are included in the square [0, 2] 2 .
Proof. The positivity of the matrix sending f on f * leads to the inequalities (38). If s q = 0, then the inequalities (38) impose s xy = 0 and there is unconditional stability in V . Otherwise these inequalities are equivalent to (39). The conditions to have a non empty set of V are to be determined: for that we must study when the minimum in (39) is nonnegative.
We now suppose that these conditions are verified and we determine the minimum according to the choice of s: if s xy ≤ min(s q , 2 − s q ), the area is given by (35); if s q ≤ min(1, s xy ), it is given by (36); if s q ≥ max(1, 2 − s xy ), we obtain (37). This closes the proof.
The areas are represented on the figure 30. We can deduce the L ∞ stability areas for the BGK scheme.
Proposition 3.4 (The BGK case). Let V ∈ R 2 , s ∈ R, consider the twisted relative velocity D 2 Q 4 scheme, BGK of relaxation parameter s, and of equilibrium given by (6,8).
• if 0 ≤ s ≤ 1, the scheme is L ∞ stable for all V such that |V | 1 ≤ λ.
• if 1 ≤ s ≤ 4/3, the scheme is L ∞ stable for all V in a smaller, s-dependent area.
• For all other s, no V corresponds to a L ∞ stable scheme.
In particular, the parameters s corresponding to a non empty area of L ∞ stability in V are included in [0, 2].
We obtain a constant area of stability for s ≤ 1. When s becomes larger than one (overrelaxation), the area decreases as s increases. For s greater than 4/3, there is no L ∞ stable velocity. We now do the same work for the TRT scheme with u = V .
Proposition 3.5 (L ∞ stability areas for a relative velocity scheme). Let V ∈ R 2 , (s q , s xy ) ∈ R 2 , consider the twisted D 2 Q 4 scheme relative to u = V associated with the relaxation parameters (0, s q , s q , s xy ) and the equilibrium (6,8). Note γ 1 = (2s q − s xy )/(s q − s xy ) and γ 2 = s xy /(s q − s xy ). Then:
• if s q < s xy ≤ min(2s q , 2(2 − s q )) (area ACD on the figure 31), the scheme is stable for all V satisfying an explicit pair of hyperbolic inequalities, plus the analogous inequalities exchanging V x and V y .
• if s xy ≤ min(s q , 2(2 − s q )) (area ABD on the figure 31), the scheme is stable for all V satisfying a second explicit pair of hyperbolic inequalities, plus the analogous inequalities exchanging V x and V y .
• if s xy = s q (segment [AD] on the figure 31), the L ∞ stability area is given by the proposition 3.4.
• For all other s, no V corresponds to a L ∞ stable scheme.
In particular, the parameters (s q , s xy ) corresponding to a non empty area of L ∞ stability in V are included in the square [0, 2] 2 .
We now compare the L ∞ stability areas of the schemes relative to the velocities u = 0 or u = V with an intrinsic diffusion for different s. We draw the areas given by the propositions 3.3 and 3.5 on the figures 32 and 33.
The results are similar to the ones obtained in the non intrinsic case. We cannot distinguish one scheme from the other in terms of L ∞ stability. The MRT scheme is better on the figure 32, but on the figure 33 the areas just intersect.
Weighted L 2 stability
In this section, we present some theoretical weighted L 2 stability results for the relative velocity D 2 Q 4 schemes. These results, based on the notion of stability proposed by Yong et al in [3], confirm the phenomena numerically observed in the section 2.3: in particular, taking u equal to the advection velocity V ("cascaded like" scheme) improves the stability. The general framework of the relative velocity D d Q q schemes, d, q ∈ N * , is chosen to present the stability notion introduced in [3] and studied in [21,25].
An iteration of a lattice Boltzmann scheme splits into a transport step plus a relaxation step that is here linear since the equilibrium is linear (6,7,8). Given f (., t) the matrix composed of the distribution vectors taken on all the lattice points, we have f (., t + ∆t) = T R( u)f (., t), where R( u) is the relaxation matrix and T the linear non local transport operator. The size of the matrix f (., t) is equal to the number of velocities q multiplied by the number of lattice points. In the case of the relative velocity D d Q q scheme, the collision matrix reads R( u) = I q + J ( u), where J ( u) is given by J ( u) = M ( u) −1 DM ( u)(B − I), where D = diag(s) is the diagonal matrix of the relaxation parameters and B defines the linear equilibrium thanks to f eq = Bf .
Computing the spectrum of the amplification operator T R( u) is difficult because of the matrix size, so the idea is to study the transport and the collision separately: we want T and R( u) to be bounded by one in a well-chosen norm. The amplification operator T R( u) is then bounded by one and the scheme is stable for this norm. In the following, we say that an operator is stable in a given norm if the norm of this operator is bounded by one.
The idea proposed in [3] consists in weighting the L 2 norm so that the transport is an isometry and the collision operator is stable in the weighted norm. The classical L 2 norm keeps the transport isometric but the collision norm is difficult to evaluate. Introducing a weight allows to overcome this difficulty. First, we define a norm on R q depending on an invertible matrix P ∈ M q (R), |x| P = |P x| 2 . We can then define a norm for a matrix g of the size q multiplied by the number of lattice points with |g| 2 P,L = Σ x∈L |g(x)| 2 P , where g(x) is the column of g associated with the node x. The necessity to define such a non local norm is due to the non locality of the transport. The matrix P is chosen such that t P P is diagonal. Thus the transport is an isometry for the norm | · | P,L for the periodic and bounceback boundary conditions [21]. Indeed, the hypothesis of "quasi orthogonality" ( t P P diagonal) carried by P cancels all the cross terms in the calculation of |f (., t)| P,L . Then the isometry of the transport is obtained thanks to a simple change of variable: it uses the bijection between the nodes of the mesh and the nodes after the transport at a given velocity.
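The role of the quasi-orthogonality hypothesis can be checked directly; in this small sketch (our code, arbitrary test data), per-velocity periodic shifts preserve | · | P,L when t P P is diagonal, and generically fail to do so otherwise.

```python
import numpy as np

def weighted_norm(f, P):
    """|f|_{P,L} for f of shape (q, nx, ny): sum over nodes of |P f(x)|_2^2."""
    g = np.einsum("ij,jxy->ixy", P, f)
    return np.sqrt((g**2).sum())

def transport(f, shifts):
    """Per-velocity periodic shifts: the D2Q4 transport on a periodic lattice."""
    return np.stack([np.roll(f[j], shifts[j], axis=(0, 1))
                     for j in range(f.shape[0])])

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 16, 16))
shifts = [(1, 0), (0, 1), (-1, 0), (0, -1)]
H = 0.5 * np.array([[1, 1, 1, 1], [1, -1, 1, -1],
                    [1, 1, -1, -1], [1, -1, -1, 1]])      # orthogonal matrix
P_qo = H @ np.diag([1.0, 2.0, 3.0, 4.0])                  # t(P)P = diag(1,4,9,16)
P_any = rng.standard_normal((4, 4))                       # no quasi-orthogonality
print(np.isclose(weighted_norm(transport(f, shifts), P_qo),
                 weighted_norm(f, P_qo)))                 # True: isometry
print(np.isclose(weighted_norm(transport(f, shifts), P_any),
                 weighted_norm(f, P_any)))                # generically False
```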
Contrary to the transport, the collision is a local operator, so that its study reduces to the vectorial norm | · | P . If the matrix R( u) is stable in a particular node x of the lattice L for | · | P , it is stable for | · | P,L . That is why we use the local operator norm for the collision.
The matrix P is requested to diagonalize the collision: ||R( u)|| P is then easy to evaluate. These requirements define a notion of stability structure first evoked in [3].
Definition 4.1 ([25]).
A matrix N ∈ M q (R) has a pre-structure of stability if there exist an invertible matrix P ∈ M q (R) and some vectors s, p ∈ R q so that P N P −1 = −diag(s) and t P P = diag(p), where diag(p) is the diagonal matrix whose diagonal is constituted of the coefficients of p. The pre-structure of stability becomes a structure of stability if 0 ≤ s k ≤ 2, 0 ≤ k ≤ q − 1. (42)

Suppose that J ( u) has a pre-structure of stability and consider the collision in norm | · | P . Knowing that the eigenvalues of R( u) are 1 − s k for 0 ≤ k ≤ q − 1, we get ||R( u)|| P = ||P R( u)P −1 || 2 = max k |1 − s k |, where || · || 2 is the operator norm associated with | · | 2 . Thus the collision is stable for || · || P if the condition (42) is verified. Under these conditions, the scheme is stable in the norm | · | P,L since the transport is isometric.
We present a theorem from [25] giving a necessary and sufficient condition of existence of a pre-structure of stability. In the following, this theorem is the tool used to obtain some stability results for our relative velocity schemes.

Theorem 4.1 ([25]). A matrix N ∈ M q (R) has a pre-structure of stability if and only if there exists a diagonal positive definite matrix Λ ∈ M q (R) such that N Λ = Λ t N. (43)
This theorem gives a practical criterion of existence of a pre-structure of stability through the resolution of a linear system in the coefficients of Λ. The size of this linear system is q(q − 1)/2 since the matrix N Λ − Λ t N is antisymmetric.
The matrix P defining the weighted norm is explicitly derived in the proof of this theorem [25]. We exhibit P through the proof of the sufficient condition of pre-structure. The identity (43) is equivalent to the fact that Λ −1 N Λ is symmetric with respect to the scalar product (x, y) Λ = t x Λ y, x, y ∈ R q . This implies the existence of an orthonormal matrix Q in the sense of (·, ·) Λ diagonalizing Λ −1 N Λ. Then, the matrix P = (ΛQ) −1 diagonalizes the collision. The matrix t P P is diagonal because of the orthonormality of Q for the scalar product (·, ·) Λ .
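Both steps — solving the linear system (43) for a diagonal positive definite Λ, then deriving P — can be scripted. The sketch below is our code, valid under the theorem's assumptions; it uses the equivalent symmetrization Λ −1/2 N Λ 1/2 , whose orthogonal eigenbasis O gives exactly P = (ΛQ) −1 = t O Λ −1/2 .

```python
import numpy as np

def find_lambda(N, tol=1e-9):
    """Diagonal positive definite solution of N Lambda = Lambda N^T, if the
    nullspace of the q(q-1)/2 x q linear system is one-dimensional and signed."""
    q = N.shape[0]
    rows = []
    for i in range(q):
        for j in range(i + 1, q):
            r = np.zeros(q)
            r[j] += N[i, j]          # (N L - L N^T)_{ij} = N_ij l_j - N_ji l_i
            r[i] -= N[j, i]
            rows.append(r)
    _, sv, Vt = np.linalg.svd(np.array(rows))
    null = [Vt[k] for k in range(Vt.shape[0]) if k >= len(sv) or sv[k] < tol]
    for lam in null:
        if (lam > tol).all():
            return lam
        if (-lam > tol).all():
            return -lam
    return None                      # no pre-structure found this way

def stability_weight(N, lam):
    """P with P N P^{-1} diagonal and t(P) P diagonal, built from Lambda."""
    root = np.sqrt(lam)
    S = N * root[None, :] / root[:, None]    # Lambda^{-1/2} N Lambda^{1/2}
    _, O = np.linalg.eigh(S)                 # S is symmetric when (43) holds
    return O.T / root[None, :]               # P = O^T Lambda^{-1/2}
```

Applied to N = J (V ), this gives a numerical probe of the propositions of the next subsection.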
Stability results for the twisted relative velocity D 2 Q 4 scheme
In this section, we apply the stability criterion given by the theorem 4.1 to obtain weighted L 2 results for the twisted relative velocity D 2 Q 4 scheme. Our motivation is to compare the cases u = 0 and u = V . To do so, the scheme must have at least two different relaxation parameters because the BGK scheme does not depend on the relative velocity since (3) is verified. We begin by studying the MRT scheme ( u = 0), whose results are limited. We then obtain more results for the cascaded like scheme ( u = V ).
The MRT scheme
We focus on the MRT scheme corresponding to u = 0. We restrict to V = 0 because when u = 0 and V ≠ 0, there is no pre-structure of stability. The framework does not distinguish the non intrinsic and intrinsic cases because for V = 0, the equilibria (7) and (8) are identical.
Proposition 4.1 (L 2 stability for the MRT scheme). Consider the twisted D 2 Q 4 scheme relative to u = 0, of equilibrium m eq (0) = (ρ, 0, 0, 0), associated with the relaxation parameters s = (0, s q , s q , s xy ) ∈ R 4 . The associated matrix J (0) has a pre-structure of stability. Moreover, if 0 ≤ s q , s xy ≤ 2, then J (0) has a structure of stability. The collision matrix R(0) is then stable for || · || P and the scheme is stable in norm | · | P,L .
Proof. Since u = V = 0, the matrix J (0) is explicit and the pre-structure of stability follows directly.

We now focus on the choice u = V , beginning by the non intrinsic case.

Proposition 4.2 (L 2 stability for a non intrinsic relative velocity scheme). Consider the twisted D 2 Q 4 scheme relative to u = V = (V x , V y ) ∈ R 2 , of equilibrium m eq (V ) = ρ(1, 0, 0, 0), associated with the relaxation parameters s = (0, s q , s q , s xy ) ∈ R 4 . The matrix J (V ) has a pre-structure of stability if and only if |V | ∞ < λ. Moreover, if 0 ≤ s q , s xy ≤ 2, then J (V ) has a structure of stability. The collision matrix R(V ) is then stable for || · || P and the scheme is stable in norm | · | P,L .
Proof. According to the theorem 4.1, the existence of a pre-structure of stability for N = J (V ) is equivalent to the existence of a diagonal positive definite matrix Λ such that J (V )Λ = Λ t J (V ). An explicit diagonal solution of this linear system exists; it is positive definite if and only if |V | ∞ < λ, which closes the first part of the proof. The eigenvalues of J (V ) are 0, −s q , −s q , −s xy , as can be read on the matrix M (0)J (V )M (0) −1 . We deduce that P J (V )P −1 = diag(0, −s q , −s q , −s xy ), and that J (V ) has a structure of stability if 0 ≤ s q , s xy ≤ 2. The norm || · || P of the collision operator R(V ) is equal to 1 and the scheme is stable in norm | · | P,L .
These results extend the L ∞ stability ones obtained in the proposition 3.2. The proposition 3.2 gives the L ∞ stability area |V | ∞ ≤ λ for s q ≤ s xy ≤ min(1, 2s q ). The weighted L 2 notion generalizes it to 0 ≤ s q , s xy ≤ 2. This result was expected because the numerical experiments of the section 2.3 showed the independence of the area of L 2 stability from the relaxation parameters. The fact that we cannot obtain the same type of result for the MRT scheme as for u = V is further evidence of the good behaviour of the latter.
Let us also note that these stability conditions are defined by open sets. The numerical experiments seem to show that the scheme is still L 2 stable on the closure of these sets. However, it is not possible to prove it with this notion because the matrix Λ becomes singular on this closure. This proposition leads to a natural corollary for a single relaxation parameter (BGK) that has already been proved in [25]. In the BGK case, the scheme does not depend on u; in particular, MRT and cascaded like schemes are identical. That is why the following result is valid whatever the relative velocity u.
Corollary 4.1 (The BGK non intrinsic case [25]). For V = (V x , V y ) ∈ R 2 , consider the twisted relative velocity D 2 Q 4 BGK scheme of equilibrium (6,7) associated with the relaxation parameter s ∈ R. The matrix J ( u) has a pre-structure of stability if and only if |V | ∞ < λ. Moreover, if 0 ≤ s ≤ 2, then J ( u) has a structure of stability. The collision matrix R( u) is then stable for || · || P and the scheme is stable in norm | · | P,L .
Proposition 4.3 (L 2 stability for an intrinsic relative velocity scheme). Consider the twisted D 2 Q 4 scheme relative to u = V = (V x , V y ) ∈ R 2 , of equilibrium (6,8), associated with the relaxation parameters s = (0, s q , s q , s xy ) ∈ R 4 . Suppose that V x = 0 (resp. V y = 0); then the associated matrix J (V ) has a pre-structure of stability if and only if |V y | < λ (resp. |V x | < λ). Moreover, if 0 ≤ s q , s xy ≤ 2, then J (V ) has a structure of stability. The collision matrix R(V ) is then stable for || · || P and the scheme is stable in norm | · | P,L .
Proof. A non null solution of the equation (43) exists if and only if one of the two components of V is null. Suppose that it is V y ; then an explicit diagonal solution of (43) exists, and it is positive definite if and only if |V x | < λ. The reasoning to get a structure of stability is identical to the one of the proposition 4.1.
This proposition extends the L ∞ stability results of the proposition 3.5 for the directions V x = 0 and V y = 0. The L ∞ notion provides areas that decrease as s q and s xy move away from each other. The | · | P,L notion gives a constant optimal area in V for these directions while these parameters stay between 0 and 2. This phenomenon was observed numerically on the figures 11 to 16.
We close the section by a proposition for the BGK case that is not a corollary of the proposition for the intrinsic cascaded like scheme.
Proposition 4.4 (The BGK intrinsic case). For V = (V x , V y ) ∈ R 2 , consider the twisted relative velocity D 2 Q 4 scheme of equilibrium (6,8), BGK of relaxation parameter s ∈ R. The associated matrix J ( u) has a pre-structure of stability if and only if |V | 1 < λ. Moreover, if 0 ≤ s ≤ 2, then J ( u) has a structure of stability. The collision matrix R( u) is then stable for || · || P and the scheme is stable in norm | · | P,L .
We note that this proposition generalizes the L ∞ stability results in the BGK case (proposition 3.4). This weighted L 2 notion extends the set of s corresponding to the stability area |V | 1 < λ to 0 ≤ s ≤ 2.
Interpretation of the results
When we compare these propositions to the numerical results obtained in the section 2.3, we notice that this stability notion is only usable when the parameters s and V are decoupled. For the scheme relative to u = V with an intrinsic diffusion, we have theoretical results only for V x = 0 or V y = 0, corresponding to a case where the areas in V are independent of s. Instead, when the area in V is a function of s, it is not possible to build a pre-structure of stability any more. For example, there is no pre-structure of stability when V x and V y are both different from 0 for the D 2 Q 4 scheme relative to u = V with an intrinsic diffusion: for this scheme, the numerical results of the section 2.3 exhibit a link between V and s. Similarly, the MRT scheme presents stability areas in V depending on s and it is impossible to build a pre-structure of stability for V ≠ 0.
The origin of this limitation seems to be the hypothesis of diagonalization of the collision. Indeed, the spectrum of this operator being equal to 0, −s q , −s q , −s xy , the existence of a pre-structure of stability with such a diagonalization automatically implies the stability of the scheme for 0 ≤ s q , s xy ≤ 2. Then V and s cannot be linked because the former hypothesis is too demanding. The linear system of six equations and four unknowns for the D 2 Q 4 scheme is an evidence of the constraints imposed by the notion. The existence of a pre-structure of stability requires its rank to be three, which imposes V and s to be decoupled here. For example, when u = 0 and V ≠ 0, there is no positive definite solution to the equation (43), because this rank is equal to four. That is why it is not possible to show results similar to those for u = V , which requires the resolution of a rank three linear system.
However, this notion gives some promising results, such as those presented here and in [21,25]. It seems to describe well the blow-up limit of a scheme. The main purpose now is to obtain theoretical stability results in more complex cases. Particularly, we want to complete this study for the D 2 Q 4 scheme, especially when u = 0. To do so, we must be able to relax the constraint of diagonalization of the collision without penalizing the isometry of the transport. It may be possible to find some matrix P so that the norm | · | P,L of the collision operator is computable without diagonalizing it.
Conclusion
We have studied the stability of a four velocities relative velocity scheme with two different equilibria for a linear advection equation. The discussion is based on two notions of stability, L ∞ and weighted L 2 . It bears on the choice of the relative velocity equal to 0 (MRT scheme) or to the advection velocity ("cascaded like" scheme). The main conclusions are the following: comparing MRT and relative velocity schemes, no scheme is "better than the other" in terms of L ∞ stability; the stability areas generally just intersect. Instead, in terms of L 2 norm, the scheme relative to the advection velocity is more stable than the MRT scheme. This improvement is correlated to the cancellation of some dispersive terms of the equivalent equations for the scheme relative to the advection velocity. These terms remain for u = 0 and are linked to instabilities at low diffusion. This result has been justified theoretically for the scheme relative to the advection velocity thanks to the weighted L 2 notion of stability. Further work needs to be done to obtain more precise theoretical results for the MRT scheme and for systems of conservation laws. The study also needs to be generalized to an arbitrary relative velocity u, the present work being adapted to the two choices u = 0 and u = V .
Appendix A. Link between the twisted D 2 Q 4 and the D 2 Q 4 schemes

We prove that the L ∞ and L 2 stability areas of the twisted D 2 Q 4 and D 2 Q 4 relative velocity schemes with a non intrinsic diffusion correspond through the composition of a rotation and a homothety. The same result is still true in the intrinsic case.
If we note v the velocity set of the D 2 Q 4 scheme, the twisted set of velocities is given by Rv, where R is the matrix with rows (1, −1) and (1, 1): it is the composition of a rotation and a homothety of scale factor √2. This transformation allows to link the relaxation of both schemes.
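A two-line check (our code) confirms that this matrix maps the axis-aligned D 2 Q 4 velocities onto the diagonal twisted ones and that t RR = 2I, so |Rx| = √2 |x|:

```python
import numpy as np

R = np.array([[1, -1],
              [1, 1]])                               # rotation + homothety sqrt(2)
v = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])     # D2Q4 velocities (lambda = 1)
print(v @ R.T)      # rows (1,1), (-1,1), (-1,-1), (1,-1): the twisted velocities
print(R.T @ R)      # 2 * identity
```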
Lemma A.1 (Relation between the relaxation operators). Let V ∈ R 2 , u ∈ R 2 , (s q , s xy ) ∈ R 2 : the relative velocity D 2 Q 4 scheme associated with the equilibrium (44) for the relaxation parameters (0, s q , s q , s xy ), and the relative velocity twisted D 2 Q 4 scheme associated with the equilibrium (6,7) and with the relaxation parameters (0, s q , s q , s xy ), have the same relaxation phase.
Proof. We have defined a transformation R sending the velocities of the D 2 Q 4 scheme on the velocities of the twisted D 2 Q 4 scheme. This naturally leads to the following application also denoted by R.
where R(P ) denotes the image of the moment polynomial P under this transformation. The images of the moments 1, X, Y, X 2 − Y 2 by R are computed explicitly in (45,46). Providing these images, we can write the transformation of the relaxation of the D 2 Q 4 relative velocity scheme by R. We here choose to keep the polynomial notations to express the moments. Introducing the relations (45,46) into the relaxation of the D 2 Q 4 scheme, we obtain the relaxation of the twisted D 2 Q 4 relative velocity scheme. Note that this last step uses the fact that X and Y have the same relaxation parameter s q .
Proposition A.1 (Relation between the stability areas). Let (s q , s xy ) ∈ R 2 , note S nt ⊂ R 2 (resp. S t ⊂ R 2 ), the set of the velocities V ∈ R 2 such that the relative velocity D 2 Q 4 scheme (resp. twisted relative velocity D 2 Q 4 scheme) of equilibrium given by (44) (resp. (6,7)) and of relaxation parameters s = (0, s q , s q , s xy ) is L ∞ or L 2 stable. We have S t = RS nt .
Proof. Note R t (V , u, s) ∈ M 4 (R) the relaxation operator of the twisted relative velocity D 2 Q 4 scheme of equilibrium (6,7), and R nt (V , u, s) ∈ M 4 (R) its analogue for the D 2 Q 4 scheme associated with the equilibrium (44). The lemma A.1 means that these two operators correspond through the transformation R (relation (48)). The transport does not influence the L ∞ stability because it only exchanges the particle distributions. Thus S nt is the set of the velocities V ∈ R 2 so that the matrix R nt (V , u, s) is nonnegative. According to the relation (48), it corresponds to the velocities V ∈ R 2 so that RV belongs to the L ∞ stability area of the twisted scheme: that is equivalent to the relation (47) and the result is proven in the L ∞ case.
Instead, the transport plays a role for the L 2 stability. It becomes local in the Fourier space: the transport matrices associated with the D 2 Q 4 scheme (A nt ) and the twisted one (A t ) are both diagonal there. The transport matrix A nt (k) is equal to A t (Rk/2) because Rv j · Rk/2 = v j · k, since R is the composition of a rotation and a homothety of scale factor √2. The amplification matrices L nt and L t of both schemes then satisfy the identity (49). The L 2 stability needs to evaluate the maximum of the spectral radius r of this matrix for all the wavenumbers k ∈ R 2 . According to (49), these maxima coincide, the last equality coming from a variable change in R 2 . As for the case of the L ∞ stability, only the velocities V finally matter: the relation between the L 2 stability areas is then obtained in the same way.
Appendix B. Theoretical L ∞ stability results for the D 2 Q 4 scheme

B.1. For a non intrinsic diffusion

Proposition B.1 (L ∞ stability areas for the MRT scheme). Let V ∈ R 2 , (s q , s xy ) ∈ R 2 , consider the D 2 Q 4 MRT scheme with the relaxation parameters (0, s q , s q , s xy ), associated with the equilibrium (44). Note γ = s q /s xy .
• if 0 < s xy ≤ min(s q , 2 − s q ), the scheme is L ∞ stable for all V in an explicit area.
• if s q ≤ s xy ≤ 2s q and s q ≤ 1, the scheme is L ∞ stable for all V in an explicit area.
• if 2 − s q ≤ s xy ≤ 2(2 − s q ) and s q ≥ 1, the scheme is L ∞ stable for all V in an explicit area.
• if s xy = 0 and 0 < s q ≤ 2, the scheme is L ∞ stable for V = 0.
• if s xy = s q = 0, the scheme is unconditionally L ∞ stable.
• For all other s, there is no V corresponding to a L ∞ stable scheme.
Proposition B.2 (L ∞ stability areas for a relative velocity scheme). With the same notations, for the scheme relative to u = V :
• if s xy = 2s q and 1 < s xy ≤ 2, the scheme is L ∞ stable on the intersection of (50) with an additional velocity condition.
• if s xy = s q = 0, the scheme is unconditionally L ∞ stable.
• For all other s, no V corresponds to a L ∞ stable scheme.
B.2. For an intrinsic diffusion

Proposition B.3 (L ∞ stability areas for the MRT scheme).
• if 0 ≤ s xy ≤ min(s q , 2 − s q ), the scheme is L ∞ stable for all V in an explicit area.
• if s q ≤ min(1, s xy ) and s xy ≤ 2s q , the scheme is L ∞ stable for all V in an explicit area.
• if s q ≥ max(1, 2 − s xy ) and s xy ≤ 2(2 − s q ), the scheme is L ∞ stable for all V in an explicit area.
• if s xy = s q = 0, the scheme is unconditionally L ∞ stable.
• For all other s, no V corresponds to a L ∞ stable scheme.
Proposition B.4. Let V ∈ R 2 , (s q , s xy ) ∈ R 2 , consider the D 2 Q 4 scheme relative to u = V associated with the relaxation parameters (0, s q , s q , s xy ), and the equilibrium (44). Note γ 1 = (2s q − s xy )/(s q − s xy ) and γ 2 = s xy /(s q − s xy ).
• if s q < s xy ≤ min(2s q , 2(2 − s q )), the scheme is stable for all V satisfying an explicit pair of hyperbolic inequalities, plus the analogous inequalities exchanging V x and V y .
• if s xy ≤ min(s q , 2(2 − s q )), the scheme is stable for all V such that (V x ± λγ 2 /2) 2 − (V y ) 2 ≤ λ 2 /(4(s q − s xy ) 2 ) (4s q 2 − 2s q s xy − s xy 2 + 8(s xy − s q )), plus the analogous inequalities exchanging V x and V y .
• if s xy = s q ≤ 1, the scheme is L ∞ stable for all V in an explicit area.
• if 1 ≤ s xy = s q ≤ 4/3, the scheme is L ∞ stable for all V in a smaller explicit area.
• For all other s, no V corresponds to a L ∞ stable scheme.
Proof. We do the change of variable V x ′ = V x + V y , V y ′ = V x − V y . Noticing that γ 1 − γ 2 = 2, the inequalities (57a) to (57d) are equivalent to centered inequalities for i = 0, in addition to the ones obtained exchanging V x and V y , corresponding to i = 1. We center these inequalities to get the identities from (41a) to (41d). The case s q < s xy is obtained by changing the sense of the inequalities; it is characterized by the inequalities from (40a) to (40d).
We begin by treating the case s q < s xy , corresponding to the inequalities from (40a) to (40d). The case s q > s xy will be considered further. The reasoning first eliminates the couples (s q , s xy ) leading to no stable velocity and then concentrates on the stable areas. First, we assume that s xy > max(0, 2s q ) and show that there is no stable velocity V . Since s xy > s q , we have γ 1 γ 2 ≤ 0 and the equation (40a) defines a hyperbolic area, plus the two ones obtained exchanging V x and V y . The intersection of these four hyperbolic areas is empty.
The last inequality contradicts our hypothesis and there is no stable velocity when s q < s xy < 0.
It remains to treat the case s q < s xy ≤ 2s q , for which γ 1 γ 2 ≥ 0. The equation (40a) has some solutions V if and only if the abscissa of the right pole P of the hyperbole centered in λγ 1 is nonnegative (59). The area satisfying these inequations corresponds to the points A, B, C, D on the figure 35. It increases as the abscissa of P increases. The inequality (59) is equivalent to γ 1 ≤ 0, that is verified since s q < s xy ≤ 2s q .
For the equation (40d), there are two cases. If 4s q 2 − 2s q s xy − s xy 2 + 8(s xy − s q ) ≤ 0, then the reasoning is the same as the one made in the case s xy > max(0, s q ) to show that there is no stable velocity. If 4s q 2 − 2s q s xy − s xy 2 + 8(s xy − s q ) ≥ 0, then the abscissa of the right pole of the hyperbole centered in λγ 2 must be nonnegative: λγ 2 + λ/|s q − s xy | (4s q 2 − 2s q s xy − s xy 2 + 8(s xy − s q )) 1/2 ≥ 0.

Figure 35. Layout of the bounds of the L ∞ stability areas corresponding to (40a) and (40d) for the twisted D 2 Q 4 scheme relative to u = V with an intrinsic diffusion.
We now summarize the results we have obtained for s q < s xy . We have proven that if s q < s xy , the inequalities (40a) and (40d) have solutions if s xy ≤ min(2s q , 2(2 − s q )). It remains to prove that (40c) and (40b) also have solutions in this case.
We now focus on the case s xy < s q , corresponding to the inequalities from (41a) to (41d). Let us begin by eliminating the case 2s q ≤ s xy < s q . In this case (41a) has solutions if and only if both poles of the hyperbole centered in −λγ 1 (γ 1 ≤ 0) have nonnegative abscissas, which reads −λ(γ 1 + (γ 1 γ 2 ) 1/2 ) ≥ 0. Because of the negativity of γ 1 , this is equivalent to s q ≤ s xy , which contradicts our hypothesis. Second, we eliminate the area 2(2 − s q ) ≤ s xy < s q . This is done by showing that there is no solution to (41d). In this area, we have 4s q 2 − 2s q s xy − s xy 2 + 8(s xy − s q ) ≥ 0 (figure 34), so that (41d) has a solution if and only if the abscissas of both poles of the hyperbole centered in λ|γ 2 | are nonnegative. This reads λ|γ 2 | − λ/|s q − s xy | (4s q 2 − 2s q s xy − s xy 2 + 8(s xy − s q )) 1/2 ≥ 0, that is equivalent to s xy ≤ 2(2 − s q ). This contradicts our hypothesis.
It remains to eliminate the area s xy < min(0, s q , 2(2 − s q )), corresponding to γ 1 ≥ 0 and γ 2 ≤ 0. If s q ≤ 0, then (γ 1 + γ 2 )/2 ≤ 0 and the inequality (41c) has solutions if the abscissas of the intersections of the hyperbole centered in (−λ(γ 1 + γ 2 )/2, λ) with V y = 0 are both nonnegative. This is equivalent to −λ(γ 1 + γ 2 )/2 − λ(γ 2 2 + 1) 1/2 ≥ 0, that is to s xy (s xy − s q ) ≤ 0, which is verified for s xy ≥ s q since s xy < 0. This enters in contradiction with our hypothesis. If s q > 0, the reasoning is symmetric with respect to the ordinates axis (V x = 0) and leads to the same contradiction.
To close the proof, it remains to guarantee the existence of at least one stable velocity in the triangle defined by 0 ≤ s xy ≤ min(s q , 2(2 − s q )). The equation (41a) has a non empty stability area if and only if the two poles of the hyperbole centered in λγ 1 have a nonnegative abscissa: this case, illustrated by the figure 19, is equivalent to γ 1 ≥ 0, which is true for s xy ≤ s q .
For the equation (41d), we distinguish two cases: the first is 4s q 2 − 2s q s xy − s xy 2 + 8(s xy − s q ) ≥ 0. There is a non empty stability area if and only if the two poles of the hyperbole centered in λγ 2 have nonnegative abscissas (represented on the figure 19): this reads λγ 2 − λ/|s q − s xy | (4s q 2 − 2s q s xy − s xy 2 + 8(s xy − s q )) 1/2 ≥ 0. This inequation always has solutions: the area of stability in V is analogous to the one represented on the figure 24.
"year": 2015,
"sha1": "1ea3220a0e38d822e00260dac3fb8f8acba4a764",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1ea3220a0e38d822e00260dac3fb8f8acba4a764",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Cariprazine for acute and maintenance treatment of adults with schizophrenia: an evidence-based review and place in therapy
Cariprazine is an oral antipsychotic approved in the US and EU for the treatment of schizophrenia. Cariprazine differs from other antipsychotics in that it is a dopamine D3- and D2-receptor partial agonist, with tenfold higher affinity for D3 receptors than for D2 receptors. Cariprazine is metabolized in two steps by CYP3A4 to didesmethyl-cariprazine (DDCAR). DDCAR has a long half-life of 1–3 weeks and is the predominant circulating active moiety. Efficacy and safety in persons with acute schizophrenia were assessed in four similarly designed, short-term, randomized, placebo-controlled clinical trials in nonelderly adults, with three studies considered positive and yielding a number needed to treat vs placebo for response (change from baseline ≥30% in Positive and Negative Syndrome Scale total score) of ten for the approved dose range of cariprazine 1.5–6 mg/day. The most common adverse reactions were extrapyramidal symptoms (15% and 19% for 1.5–3 and 4.5–6 mg/day, respectively, vs 8% for placebo) and akathisia (9% and 12.5% for 1.5–3 and 4.5–6 mg/day, respectively, vs 4% for placebo). For the approved dose range, rates of discontinuation because of an adverse event were lower overall for patients receiving cariprazine vs placebo (9% vs 12%). Weight and metabolic profile appear favorable. Cariprazine does not increase prolactin levels or prolong the electrocardiographic QT interval. Cariprazine has also been found to be effective for the maintenance treatment of schizophrenia by delaying time to relapse when compared with placebo (HR 0.45). A 26-week randomized clinical trial evidenced superiority of cariprazine over risperidone for the treatment of predominantly negative symptoms in patients with schizophrenia. Cariprazine is also approved in the US for the acute treatment of manic or mixed episodes associated with bipolar I disorder in adults and is being studied for the treatment of bipolar I depression and major depressive disorder.
Introduction
Schizophrenia, a relatively common and chronic psychotic disorder, is notable for its marked heterogeneity in disease course and response to treatment, as well as differences among currently available psychopharmacological interventions. [1][2][3] New medications are welcomed, in the hope that they can address the shortcomings of prior drugs in terms of both therapeutic targets 4 and tolerability profile. 5 Cariprazine is an antipsychotic medication that received initial approval by the US Food and Drug Administration (FDA) in 2015 6 and approval by the European Medicines Agency in 2017. 7 Although cariprazine is the third dopamine-receptor partial agonist antipsychotic to become generally available, it differs from the other two, aripiprazole and brexpiprazole.
Pharmacology, mode of action, and pharmacokinetics of cariprazine
Current FDA-approved pharmacological interventions for schizophrenia focus on antagonism or partial agonism at the dopamine D 2 receptor and, in the case of second-generation (atypical) antipsychotics, antagonism at the serotonin 5HT 2A receptor. 4 However, the antipsychotics differ in terms of their pharmacodynamic profile by secondary binding characteristics at other receptors, with some of these affinities often being more robust (ie, associated with a lower binding constant or K i ) than for dopamine D 2 or serotonin 5HT 2A receptors. 65 Cariprazine is a dopamine D 3 - and D 2 -receptor partial agonist with tenfold higher affinity for D 3 receptors than for D 2 receptors (K i for D 3 receptors 0.085 nM vs 0.49 nM and 0.69 nM for the two types of D 2 receptors assayed). 10,13,30 Intrinsic activity of cariprazine at dopamine D 2 receptors is numerically lower than that for aripiprazole. 33 Additional pharmacodynamic characteristics include partial agonism at the serotonin 5HT 1A receptor (K i 2.6 nM) and antagonism at 5HT 2B and 5HT 2A receptors with high and moderate binding affinity (K i 0.58 and 18.8 nM, respectively) and moderate affinity for the histamine H 1 receptor, also as an antagonist (K i 23.2 nM). Lower binding affinity has been noted for the serotonin 5HT 2C and α 1A -adrenergic receptors (K i 134 and 155 nM, respectively), and no appreciable affinity has been noted for cholinergic muscarinic receptors (IC 50 >1,000 nM). The three commercially available dopamine-receptor partial agonists (aripiprazole, brexpiprazole, and cariprazine) have differing receptor-binding profiles, making them distinct molecular entities (see Table 2 located in reference 8). 8 A substantial amount of preclinical data is available supporting the potential therapeutic benefit of targeting dopamine D 3 receptors. 32,35,36,39,44,46,47 Theoretically, antagonism at D 3 autoreceptors can enhance dopaminergic neurotransmission, especially in such brain areas as the prefrontal cortex, where dopamine release appears to be controlled by D 3 receptors. 66 With disinhibition of dopamine release, cortical circuits can be "tuned" to improve cognition, mood, and negative symptoms. 67 In this process, acetylcholine release in the prefrontal cortex may be enhanced as well, which could also contribute to procognitive actions. 66

The pharmacokinetic profile of cariprazine is markedly different from that of other currently marketed antipsychotics. Although extensive metabolism by CYP3A4 (and to a lesser extent by CYP2D6) is not unusual, the ultimate active metabolite, didesmethyl-cariprazine (DDCAR), has a long half-life, described in product labeling as 1-3 weeks. 10 Therefore, DDCAR is the predominant circulating active moiety. Following a single dose of 1 mg cariprazine, DDCAR remains detectable at 8 weeks postdose. This has important implications in terms of dosing and interpretation of clinical trial results, which will be discussed later. DDCAR has an in vitro receptor-binding profile similar to cariprazine (K i 0.057 nM for dopamine D 3 receptors and 1.41 and 2.64 nM for the two types of D 2 receptors assayed); 6,10 however, intrinsic activity at the dopamine D 2 receptor for DDCAR has been reported to be about half that for cariprazine. 33
Additional details regarding the pharmacokinetic profile of cariprazine can be found in a report of a multicenter, randomized, open-label, parallel-group, fixed-dose (3, 6, or 9 mg/day) study of 28-week duration (≤4-week observation, 12-week open-label treatment, and 12-week follow-up), where cariprazine was administered once daily to 38 adult patients with schizophrenia. 18
Steady-state concentrations were reached in approximately 1 week. Exposure was dose proportional over the range of 3-9 mg/day. The product label notes that mean concentrations of DCAR and DDCAR are approximately 30% and 400%, respectively, of cariprazine concentrations by the end of 12-week treatment. 10 Figure 1 illustrates the key points regarding concentrations of cariprazine, DCAR, and DDCAR over time.
Time to peak concentration of cariprazine is 3-6 hours. 10,18 Administration of a single dose of 1.5 mg cariprazine with a high-fat meal does not significantly affect maximum concentration or area under the concentration curve of cariprazine or DCAR. 10 Cariprazine and its major active metabolites are highly bound (91%-97%) to plasma proteins. 10 Drug-drug interactions involving a strong CYP3A4 inhibitor will necessitate reduction of the cariprazine dose by half (for patients already taking 4.5 mg/day, dosage should be reduced to 1.5 or 3 mg/day, and for patients taking 1.5 mg daily, the dosing regimen should be adjusted to every other day). 10 When initiating cariprazine in a person already on a strong CYP3A4 inhibitor, the patient should be administered 1.5 mg on days 1 and 3, with no dose administered on day 2. From day 4 onward, the dose should be administered at 1.5 mg/day, then increased to a maximum dose of 3 mg/day. When the CYP3A4 inhibitor is withdrawn, cariprazine dosage may need to be increased. 10 Concomitant use of cariprazine and a CYP3A4 inducer has not been evaluated and is not recommended, because the net effect on the active drug and metabolites is unclear. 10 No dosage adjustment is required in the presence of CYP2D6 inhibitors or in persons who are poor CYP2D6 metabolizers. 10 For patients with mild-moderate hepatic or renal impairment, no dosage adjustment is required. Cariprazine has not been studied in patients with severe hepatic or renal impairment, and is thus not recommended for such patients. 10 No dosage adjustment for cariprazine is required based on age, sex, race, or smoking status. 10 Although doses of cariprazine ≤12 mg/day have been assessed in the clinical trials described herein, the recommended maximum dose is 6 mg/day, because of a dose-related increase in certain adverse reactions, particularly at doses >6 mg/day. 10
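For illustration only (not clinical guidance), the co-administration rule above can be written as a small lookup; the function name and return strings are ours, and only the cases quoted from the label are encoded.

```python
def cariprazine_dose_with_strong_cyp3a4_inhibitor(current_mg_per_day: float) -> str:
    """Toy encoding of the label's 'reduce the dose by half' rule quoted above."""
    if current_mg_per_day == 1.5:
        return "1.5 mg every other day"
    if current_mg_per_day == 4.5:
        return "1.5 or 3 mg/day"
    return f"{current_mg_per_day / 2:g} mg/day"

# Initiation while already on a strong CYP3A4 inhibitor, per the label:
# 1.5 mg on days 1 and 3, none on day 2, then 1.5 mg/day (maximum 3 mg/day).
```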
Efficacy and safety in acute schizophrenia
Four similarly designed, short-term, randomized, placebocontrolled clinical trials in nonelderly adults with acute exacerbations of schizophrenia have been conducted and published, [14][15][16][17] of which three are included in section 14 of the product label and considered supportive of efficacy. 10,[15][16][17] Table 1 provides an overview of all four studies. For consistency, statistical outcomes based on the mixed-effect model for repeated measures are presented for all studies, even though it was the primary method of analysis for only the two phase III studies. 16,17 In the three supportive pivotal trials, 15-17 the mean age of participants was 38 years, ~70% were male, and ~40% were white. Mean body-mass index was 26 kg/m 2 . A little more than 50% of subjects were in the US. The mean baseline Positive and Negative Syndrome Scale (PANSS) total score was about 97. All tested doses of cariprazine -1.5, 3, 4.5, 6, 3-6, and 6-9 mg/day -were superior to placebo on reduction in PANSS total score, the primary outcome measure for each of the trials. Patients were also assessed using the Clinical Global Impression -severity (CGI-S) score, which was the key secondary end-point measure. Cariprazine was consistently superior to placebo on this outcome as well. A pooled analysis of CGI-S scores examining shifts, such as from extremely or severely ill (CGI-S $5) to mildly ill or better (CGI-S #3), demonstrated an advantage for cariprazine over placebo (OR 3.4, 95% CI 1.5-7.9). 26 Of clinical relevance are observed effect sizes for antipsychotic response, as defined by change from baseline $30% in PANSS total score. [15][16][17] Pooling together data for the approved dose range of cariprazine (1.5-6 mg/day) revealed a number needed to treat (NNT) vs placebo of ten (95% CI 7-19); 11 however, in one trial the NNT vs placebo for response was as robust as six. 15 Indirect comparisons with other antipsychotics Figure 1 Plasma concentration at trough (mean ± SE)-time profile during and following 12-weeks of treatment with cariprazine 6 mg/day. Notes: Reproduced from the product label. 10 11, respectively Cariprazine was initiated at 1.5 mg on day 1 and 3 mg on days 2 and 3. The 3-6 mg/day group remained at 3 mg until the end of week 2 of double-blind treatment. Starting on day 4, the 6-9 mg/day group received 6 mg until the end of week 2 of double-blind treatment. in cases of inadequate response (,20% improvement from baseline on PANSS total score and Clinical Global impressions -severity score $4), cariprazine dose was increased at the end of week 2. in the 3-6 mg/day group, patients received 4.5 mg/day for days 14-15 and 6 mg/day thereafter. in the 6-9 mg/day group, patients received 7.5 mg/ day for days 14-15 and 9 mg/day thereafter. Patients who did not qualify as inadequate responders or had significant tolerability issues did not receive a dose increase. Dosage was fixed from the end of week 3 to week 6. Patients were required to be hospitalized for at least 4 weeks after randomization. Aes reported in $5% of patients in either cariprazine group and at least twice the rate seen with placebo were akathisia (both groups), restlessness (6-9 mg/day), ePS (both groups), dyspepsia (6-9 mg/day), constipation (3-6 mg/day), tremor (both groups), vomiting (6-9 mg/day), weight increase (6-9 mg/day), and diarrhea (3-6 mg/day). A 48-week open-label treatment extension was available to completers (NCT01104792; RGH-MD-11). 22 Notes: All trials 6 weeks' duration. 
a Response reported using the definition of ≥20% improvement from baseline in PANSS total score at week 6 in Durgam et al 14 and using the definition of ≥30% improvement from baseline in PANSS total score at week 6 for the other three studies. [15-17] Abbreviations: AEs, adverse events; EPS, extrapyramidal symptoms; LS, least squares; MMRM, mixed-effect model for repeated measures; NNT, number needed to treat; PANSS, Positive and Negative Syndrome Scale.
Indirect comparisons with other antipsychotics are hampered by the lack of a uniform definition for response; however, similar criteria used for the assessment of NNT vs placebo for aripiprazole and brexpiprazole (response defined as a change from baseline ≥30% in PANSS total score or a CGI - improvement score of 1 [very much improved] or 2 [much improved]) in similar acute studies yielded NNT values of eight for aripiprazole and seven for brexpiprazole, with overlap of 95% CIs. 8 A tutorial on NNT can be found in a prior review. 11 Cariprazine has also been evaluated post hoc for specific antihostility effects in patients with schizophrenia. 25 Data were pooled from the three positive acute studies, [15-17] and the principal outcome was mean change from baseline to week 6 on the PANSS hostility item. Significantly greater improvement in hostility was seen in favor of cariprazine-treated patients compared with placebo-treated patients. The improvement associated with cariprazine appeared to be partially independent of improvement in PANSS positive symptoms, such as hallucinations and delusions, was independent of the presence or absence of sedation, and was greater in magnitude in patients with higher levels of hostility at baseline.
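Since NNT and NNH figure throughout this review, a small helper shows the arithmetic: the NNT is the reciprocal of the absolute risk difference, with a CI obtained by inverting a Wald-type CI on that difference (one common convention; published values may use other methods). The function name and example counts are our own.

```python
import math

def nnt_with_ci(events_treat, n_treat, events_ctrl, n_ctrl, z=1.96):
    """NNT (rounded up) with an approximate 95% CI obtained by inverting
    the Wald CI of the absolute risk difference. Sketch only."""
    p_t = events_treat / n_treat
    p_c = events_ctrl / n_ctrl
    arr = p_t - p_c  # absolute risk difference (benefit if > 0)
    if arr <= 0:
        raise ValueError("no benefit: NNT undefined for this convention")
    se = math.sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
    lo, hi = arr - z * se, arr + z * se
    if lo <= 0:
        raise ValueError("CI crosses zero: upper NNT bound is unbounded")
    # CI bounds of the NNT are the reciprocals of the ARR bounds
    return math.ceil(1 / arr), (math.ceil(1 / hi), math.ceil(1 / lo))

# Hypothetical responder counts, purely for illustration:
print(nnt_with_ci(115, 230, 70, 233))
```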
Two meta-analyses are available that have examined the efficacy of cariprazine in acute schizophrenia, 63,64 using the four available studies. [14-17] In one meta-analysis, 63 low and high (≥6 mg/day) doses were tested separately, and both high and low cariprazine doses demonstrated superiority to placebo in all symptom domains (PANSS total score, PANSS positive, PANSS negative, PANSS response, Schizophrenia Quality of Life Scale - revision 4, and CGI-improvement). No differences were found between high and low doses on these measures. The standardized mean difference vs placebo showed a modest impact on overall symptoms compared with meta-analytic results for other antipsychotics (effects similar to lurasidone, asenapine, ziprasidone, and aripiprazole, but less than for risperidone, quetiapine, and olanzapine). The other meta-analysis 64 also demonstrated superiority of cariprazine over placebo for PANSS total, PANSS positive, and PANSS negative score changes from baseline.
Safety and tolerability data collected during the four acute trials in schizophrenia included the incidence of spontaneously reported adverse events (briefly summarized in Table 1 by study), body weight, laboratory measurements, vital signs, electrocardiography, and movement-disorder scales. For cariprazine doses of 1.5-6 mg/day, rates of discontinuation because of an adverse event were overall lower for patients receiving cariprazine vs placebo (9% vs 12%). 8 As per the product label, there was no single adverse reaction leading to discontinuation that occurred at a rate ≥2%
in cariprazine-treated patients and at least twice the rate of placebo. 10 A pooled analysis of safety and tolerability is available using three modal daily dose groups (ie, the most frequent dose taken by a patient during double-blind treatment): 1.5-3, 4.5-6, and 9-12 mg/day. 24 The overall incidence of treatment-emergent adverse events vs placebo was similar for cariprazine 1.5-3 mg/day, but higher for cariprazine 4.5-6 and 9-12 mg/day, with a dose-response relationship observed for akathisia, extrapyramidal symptoms (EPS), and diastolic blood pressure. Regarding the latter, a shift from normotensive to stage I hypertension was observed in 2.0% of patients receiving placebo compared with 1.1%, 2.8%, and 6.8% of patients receiving 1.5-3, 4.5-6, and 9-12 mg/day of cariprazine, respectively. Patients in the modal dose >6 mg/day group showed a higher likelihood of weight increase, as well as higher rates of CPK and transaminase elevations. These observations on doses >6 mg/day resulted in the FDA approving a recommended dose range for schizophrenia of 1.5-6 mg/day. 10,24 From the pooled data, mean changes in metabolic parameters were generally similar in cariprazine-treated and placebo-treated patients. 24 No prolactin-level increase or QTc value >500 ms was noted. The incidence of orthostatic hypotension was similar for placebo (12.3%) and cariprazine (13.4%). No syncopal episodes were reported. Weight increase with cariprazine overall was 1.1 kg compared with 0.3 kg for placebo-treated patients. Weight increase ≥7% occurred in 9.2% of cariprazine-treated patients and 4.7% of placebo-treated patients at any time during double-blind treatment, for a calculated number needed to harm (NNH) vs placebo of 23 (95% CI ...). Within the recommended dose range of 1.5-6 mg/day, mean weight gain was ≤1 kg for cariprazine, and proportions with ≥7% increase in weight were 7.6% and 7.7% for the 1.5-3 and 4.5-6 mg/day groups, respectively, yielding NNH values vs placebo of 35 (95% CI 18-1,248) and 34 (95% CI 18-443), respectively.
Of note, both mean weight change and shifts in weight ≥7% were larger in the 9-12 mg/day dose group, with a rate of 17.2% and a resulting calculated NNH vs placebo of 8 (95% CI 6-15); however, there was no corresponding alteration in fasting triglycerides, as noted by a shift rate from fasting triglycerides normal/borderline (<200 mg/dL) to high (≥200 mg/dL) of 14.2% for patients receiving placebo compared with 11.9%, 10.8%, and 11.8% of patients receiving modal doses of cariprazine 1.5-3, 4.5-6, and 9-12 mg/day, respectively. Similarly, shift rates for total, low-density lipoprotein (LDL), and high-density lipoprotein (HDL) cholesterol were not higher for cariprazine compared to placebo. Shift rates for fasting glucose normal (<100 mg/dL) to high (≥126 mg/dL) were 6.7% for placebo compared with 7.4%, 9.8%, and 2.7% of patients receiving modal doses of cariprazine 1.5-3, 4.5-6, and 9-12 mg/day, respectively, and thus did not follow a dose response. Difficult to interpret without more context is an absolute increase in fasting glucose ≥10 mg/dL, found in 35.2%, 41.3%, 49.8%, and 50.3% of patients receiving placebo or modal doses of cariprazine 1.5-3, 4.5-6, and 9-12 mg/day, respectively. As found in a large epidemiological study, an elevated fasting glucose level within the normal range can be an independent predictor of cardiovascular disease in men and of type 2 diabetes mellitus in both women and men. 68 Table 2 lists spontaneously reported adverse events associated with the use of cariprazine (incidence ≥5% in any single cariprazine modal dose group and cariprazine incidence greater than placebo) observed in acute trials in schizophrenia and reported in product labeling 10 and the published pooled analysis, 24 together with their respective NNH values vs placebo. As per the product label, the most common adverse reactions (incidence ≥5% and double or more the rate of placebo) for patients with schizophrenia were EPS and akathisia. Within the dose range of 4.5-6 mg/day, NNH values vs placebo were as strong as 9 (95% CI 7-14) for EPS and 12 (95% CI 9-18) for akathisia. The product label notes that adverse events may first appear several weeks after the initiation of treatment, probably because plasma levels of cariprazine and its major metabolites accumulate over time. 10 Published tolerability data focusing on only cariprazine 1.5 mg/day are limited to one of the acute clinical trials. 15 Table 3 provides NNH values vs placebo for adverse events reported in that trial using two methods: comparison with the placebo group from that study alone and comparison with placebo groups pooled across the four acute studies. 24 In general, cariprazine 1.5 mg/day appears well tolerated. The strongest NNH values observed were for constipation when compared with placebo from the single study (NNH 16, 95% CI 9-133) and akathisia when compared with placebo data pooled from all four available studies (NNH 19, 95% CI 10-209). The NNH value for akathisia at the 1.5 mg/day dose is about the same as for the 1.5-3 mg/day modal dose group (Table 2).
A meta-analysis is available of the tolerability and safety profile of cariprazine in the management of any mental disorder. 62
This analysis included the four acute studies in patients with schizophrenia, [14-17] the three pivotal studies for acute mania, 53-55 and a study in bipolar depression, 60 as well as a study in major depressive disorder, 61 for a total of 4,324 subjects. Consistent with the data already presented for short-term studies in acute schizophrenia, the risk of discontinuation due to adverse events for cariprazine was similar to that for placebo (RR 1.13, 95% CI 0.77-1.66). Across all the studies, cariprazine was associated with higher risks of EPS-related events than placebo, including risk of akathisia (RR 3.92, 95% CI 2.83-5.43), tremor (RR 2.41, 95% CI 1.53-3.79), and restlessness (RR 2.17, 95% CI 1.38-3.40). The cariprazine-treatment group was more likely to have clinically significant weight gain (RR 1.68, 95% CI 1.12-2.52), but no statistically significant differences were found in other metabolic parameters or cardiovascular-related events. There were no statistically significant effects on prolactin level.
Longer term safety
A single-arm, open-label extension study evaluated the long-term safety and tolerability of cariprazine 1.5-4.5 mg/day in 93 patients with schizophrenia. 21 Participants had completed the acute study that examined fixed doses of cariprazine 1.5, 3, and 4.5 mg/day, 15 and were also required to have responded to treatment (as defined by CGI-S ≤3 and ≥20% reduction in PANSS total score) in that study. Approximately 50% of patients completed the 48 weeks of open-label treatment. Cariprazine 4.5 mg/day was the final dose for 70% of patients and was also the modal dose in 67.7%; 24.7% and 7.5% of patients had modal daily doses of 3 and 1.5 mg/day, respectively. Common adverse events included akathisia (14%), insomnia (14%), and weight increase (12%); 11% discontinued due to adverse events, none because of akathisia or weight gain, and one patient discontinued because of insomnia. Mean changes in metabolic parameters were generally small and not clinically relevant. No patients shifted from normal/borderline levels of total or LDL cholesterol to high levels. Shifts from normal HDL-cholesterol levels to low levels occurred in about 23% of patients. About 14% of patients shifted from normal/borderline to high levels of triglycerides, and about 4% with normal fasting glucose levels at baseline shifted to high levels. About 29% of patients had an increase in fasting glucose ≥10 mg/dL. Mean body weight increased by 1.9 kg from the start of the lead-in study to the end of the extension study. Potentially clinically significant weight gain (≥7% increase from lead-in baseline) was experienced by 33% of patients, and 5% experienced weight increase ≥15%. Most patients who experienced ≥7% weight increase were normal or underweight at baseline. There were no discontinuations associated with changes in metabolic parameters or body weight. Prolactin elevation or clinically significant changes in cardiovascular parameters were not observed. No patient had a QTc increase ≥60 ms or a postbaseline value >500 ms. There were no clinically significant changes in ophthalmologic parameters, including intraocular pressure, color discrimination, visual acuity, or lens opacity. An adverse event of cataract was noted; however, this resolved during the study, was not felt to represent an actual pathological event, and was likely due to variability on the part of the examiner. A second open-label study evaluated the long-term safety and tolerability of cariprazine 3-9 mg/day in 586 patients with schizophrenia. 22 Participants included both new patients (n=235) and patients who had completed one of the two phase III acute studies (n=351). 16,17 Approximately 39% of patients completed the 48 weeks of open-label treatment. The most frequent modal daily dose was cariprazine 6 mg/day (50.9%), followed by 9 mg/day (25.3%) and 3 mg/day (22.9%). Common adverse events included akathisia (16%), headache (13%), insomnia (13%), and weight increase (10%); 12.5% discontinued due to adverse events, with <1% discontinuing due to akathisia. Mean cholesterol and triglyceride levels decreased; however, an increase of almost 5 mg/dL in glucose was observed.
Shifts from normal/borderline to high cholesterol levels were observed in about 5% of patients, shifts from normal/borderline to high LDL-cholesterol levels in about 3%, shifts from normal to low HDL-cholesterol levels in about 12%, shifts from normal/borderline to high triglyceride levels in about 8%, and, for fasting glucose, shifts from normal/impaired to high levels in about 6% of patients. Mean body weight increased by 1.5 kg from the start of the lead-in study to the end of the extension study. About 26% of patients had a ≥7% increase from baseline in body weight, with patients categorized as underweight at baseline having the highest percentage of clinically significant weight gain (40%).
Prolactin elevation or clinically significant changes in cardiovascular parameters were not observed. No retinal toxicity or cataracts were observed. One (0.2%) patient had a postbaseline QTcF value >500 ms, and three (0.5%) patients had postbaseline QTcB values >500 ms. An increase from baseline >60 ms in QTcF or QTcB values occurred in two (0.3%) and seven (1.2%) patients, respectively. Pooled data from both 48-week open-label safety studies 21,22 are reported separately in a third publication. 23 The pattern of results remains the same as described earlier.
Relapse prevention
A supplemental new-drug application for cariprazine for the maintenance treatment of adults with schizophrenia was approved by the FDA in November 2017, 69 based on a placebo-controlled, randomized-withdrawal study in nonelderly adult patients with schizophrenia. 19,20 See also Table 4. To participate in the trial, patients were required to be acutely ill at screening. During the 20-week open-label treatment phase, patients received cariprazine 3-9 mg/day (starting at 1.5 mg/day on day 1). In order to be randomized to either continue cariprazine or receive placebo, patients were required to meet prespecified stability criteria. Once randomized, the double-blind phase consisted of 26-72 weeks of fixed-dose treatment. The primary efficacy outcome was time to first relapse during the double-blind phase. Relapse was defined as meeting any of several operational criteria (worsening of symptom scores, psychiatric hospitalization, aggressive/violent behavior, or suicidal risk). A total of 264 of 765 patients (34.5%) completed the open-label phase, and 200 patients were randomized. Demographic and baseline characteristics of the participants entering the open-label phase were similar to those of the acute short-term trials described earlier. Baseline PANSS score at the start of the double-blind phase was 51. At randomization, 14 patients were taking cariprazine 3 mg/day, 37 patients were taking 6 mg/day, and 50 patients were taking 9 mg/day. Based on Kaplan-Meier analysis, time to relapse was significantly longer for patients who continued cariprazine than for patients randomized to placebo (HR 0.45, 95% CI 0.28-0.73). Observed relapse rates were 24.8% for cariprazine vs 47.5% for placebo, for an NNT of five (95% CI 3-11). The study protocol had the provision that subjects should meet the specified relapse criterion at a second assessment conducted 4-7 days after first meeting the criterion; however, the principal investigator had the discretion not to perform this second assessment for safety reasons. Therefore, a sensitivity analysis that ensured that the first date of relapse was consistently applied was conducted in response to an FDA request, 20 and its results are contained in product labeling 10 (HR 0.52, 95% CI 0.33-0.82; Figure 2). Revised observed relapse rates were 29.7% for cariprazine vs 49.5% for placebo, for an NNT of six (95% CI 3-16). These NNT values are similar to what has been reported for other first-line second-generation antipsychotics. 70 As reported in the original analysis, 19 the 25th percentile for time to relapse was 92 days in the placebo group and 224 days in the cariprazine group. The 50th percentile (median) was 296 days for the placebo group and could not be calculated for the cariprazine group because of the low number of relapse events. Of note, between-group separation of the curves did not occur until around day 50, possibly because the long half-life of cariprazine (and specifically DDCAR) lends some extended protection against the risk of relapse, similar to what was observed in similarly designed randomized-withdrawal studies conducted for oral paliperidone extended release vs 1-month paliperidone palmitate vs 3-month paliperidone palmitate. 71
Table 4 Completed, longer-term, randomized, controlled, phase III double-blind clinical trials of cariprazine for schizophrenia
Study | Randomized, n | Cariprazine dose (and dose of active control, if applicable) | Comments, including dose titration and tolerability
Durgam et al, 19 NCT01412060, RGH-MD-06 | 200 | Flexible-dose range 3-9 mg/day (about 50% of all cariprazine patients were receiving 9 mg/day at randomization, 37% 6 mg/day, and 14% 3 mg/day) | The aim of this study was to assess longer-term maintenance treatment with cariprazine. The study included 20 weeks of open-label treatment with cariprazine for all patients, followed by a variable-length randomized phase where stable patients received either cariprazine or placebo. Cariprazine was started at 1.5 mg/day and increased to 3 mg/day on day 2. For patients with inadequate response and no significant tolerability issues, dosage increases were allowed on day 6 (6 mg/day, with an interim increase to 4.5 mg/day on day 4) and day 10 (9 mg/day) if needed. Dose decreases to 3 or 6 mg/day were allowed for significant tolerability issues. During double-blind treatment, cariprazine was administered at the same fixed dose as in the stabilization phase. Patients were required to be hospitalized during screening and for the first 2 weeks of the run-in phase.
The most commonly observed adverse events during the open-label phase were akathisia (19%), insomnia (14%), and headache (12%). During open-label treatment, akathisia and other EPS adverse events (excluding akathisia or restlessness) each led to discontinuation in ~1% of patients, while no EPS-related adverse events led to discontinuation during double-blind treatment. Changes from baseline in lipid parameters at the end of open-label and double-blind treatment were generally not clinically relevant. There were no clinically relevant mean changes in blood pressure, and no patient had a QTc >500 ms. Weight gain ≥7% was reported in 11% of open-label patients, and in 32% of placebo-treated patients and 27% of cariprazine-treated patients during double-blind treatment.
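For readers who want to reproduce the shape of such a time-to-relapse analysis, the sketch below fits Kaplan-Meier curves to synthetic, illustrative data (not the study data) using the lifelines package. With a long-tailed treatment arm, the fitted median can be undefined, mirroring the report that the cariprazine median could not be calculated.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(7)
# Synthetic relapse times (days) -- illustrative only, NOT study data;
# shapes loosely echo the reported pattern of later relapses on drug.
t_car = rng.exponential(scale=700.0, size=101)
t_pbo = rng.exponential(scale=300.0, size=99)
end = 504.0  # censor at a nominal 72-week double-blind maximum

kmf = KaplanMeierFitter()
for label, t in [("cariprazine", t_car), ("placebo", t_pbo)]:
    observed = t <= end  # relapse observed before administrative censoring
    kmf.fit(np.minimum(t, end), event_observed=observed, label=label)
    print(label, "median time to relapse:", kmf.median_survival_time_)
```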
Negative symptoms
Cariprazine's European product label, the summary of product characteristics, 72 includes support for cariprazine's efficacy in the treatment of predominantly negative symptoms in patients with schizophrenia, based on a 26-week double-blind randomized study comparing cariprazine 4.5 mg/day with risperidone 4 mg/day in 460 nonelderly adult patients. 27 See also Table 4. There was no placebo control. In this study, patients were required to have a PANSS negative-factor score 73 (NFS) ≥24, with single-item scores of at least moderate on selected symptoms, such as blunted affect, passive/apathetic social withdrawal, and lack of spontaneity and flow of conversation. Excluded were patients with a hospital admission or an acute exacerbation of schizophrenia within the 6 months prior to the study, a PANSS positive-factor score >19, significant positive- or negative-symptom fluctuations (ie, instability) during the prospective lead-in period, treatment with clozapine in the 12 months prior to the study, moderate-severe depressive symptoms, clinically relevant parkinsonian symptoms, or treatment with antidepressant medications and/or anticholinergic medications used to treat abnormal movements. After randomization, patients were uptitrated over 2 weeks to the target dose of cariprazine 4.5 mg/day or risperidone 4 mg/day, but at the end of week 3 and at every subsequent visit, the dose of the double-blind study medication could be decreased to 3 mg/day in cases of poor tolerability. In cases of impending psychotic deterioration, the dose could be increased to 6 mg/day.
Mean age was 40 years, and 57% of patients were male. Overall, 77% of patients completed the double-blind treatment period. The mean daily dose for cariprazine was 4.2 mg and that for risperidone 3.8 mg. The modal dose (excluding the uptitration period) was the target dose (ie, 4.5 mg/day
for cariprazine and 4 mg/day for risperidone) for 95% of patients treated with either antipsychotic. Cariprazine was superior to risperidone on both the primary (PANSS-NFS) and key secondary outcomes (Personal and Social Performance Scale). PANSS-NFS responder rates at week 26 (defined by a ≥20% decrease in baseline score) were 69% for cariprazine vs 58% for risperidone, resulting in an NNT of nine (95% CI ...). For outcome variables analyzed to assess pseudospecific effects (change from baseline for PANSS positive symptoms, depression, and parkinsonian symptoms), changes were small and similar for cariprazine and risperidone. These results thus excluded indirect effects related to positive, depressive, or EPS improvement as a factor in negative-symptom improvement. Quality-adjusted life-year (QALY) gain was also modeled, resulting in an estimated gain of 0.029 QALYs per patient after 1 year of treatment. 28 The most common adverse event for cariprazine was akathisia (8%), which was also observed in 5% of persons randomized to risperidone.
Conclusion
Cariprazine is notable for being a dopamine-receptor partial agonist with a tenfold higher affinity for dopamine D3 vs D2 receptors; 10,13,30 this differs from other available antipsychotics and has theoretical advantages in people with schizophrenia based on preclinical data. This is also supported by a single study 27 demonstrating superiority over risperidone in the treatment of predominantly negative symptoms in patients with schizophrenia; however, the effect size was small and the study requires replication. Cariprazine is also approved in the US for the acute treatment of manic or mixed episodes associated with bipolar I disorder in adults. 10,26,[53-59] An active clinical development plan includes studies in bipolar I depression 60 and major depressive disorder. 61 Cariprazine also differs from other oral antipsychotics in the extended half-life of its major active metabolite, DDCAR. 10,18 DDCAR is the predominant circulating moiety, representing about 70% of total exposure. An extended half-life carries important implications for dosing, as rapid increases in dose may be premature and possibly result in poorer tolerability. A long half-life also makes the interpretation of the short-term acute trials more difficult, as steady state is not reached for some time and changes in dose are not fully reflected in plasma for several weeks. 10 However, a long half-life may provide a degree of protection when doses are occasionally missed. In the randomized-withdrawal relapse study, 19 between-group separation of the curves did not occur until around day 50. Of note, cariprazine is the only antipsychotic with instructions allowing dosing every other day (1.5 mg/day for coadministration with a CYP3A4 inhibitor). Under usual circumstances for patients with schizophrenia, the recommended dose range is 1.5-6 mg once daily, with a starting dose of 1.5 mg once daily with or without food. The product label does not provide guidance as to a preferred time of day for administration. As described in detail earlier, the maximum
recommended daily dose is 6 mg, based on observations made during the short-term controlled trials, where dosages >6 mg daily did not confer increased effectiveness sufficient to outweigh dose-related adverse reactions. Overall tolerability is promising, with the rate of discontinuation due to adverse events lower than that observed for placebo in the acute trials for schizophrenia. [14-17],24 Elevations in prolactin were not observed, and no clinically relevant effects on the electrocardiographic QT interval were evident. As with all second-generation antipsychotics, monitoring individual patients for alterations in weight and metabolic shifts is necessary. 74 Using data from prior analyses, 8,11 Table 5 provides a ranking of NNH values vs placebo for clinically relevant weight gain, adverse events of somnolence, and adverse events of akathisia for all first-line second-generation oral antipsychotics, as observed in short-term studies in adults with schizophrenia and calculated from product labeling. Except for akathisia, cariprazine appears to have favorable (ie, higher) NNH values than some of the other agents. Of interest are the NNH values for somnolence, where cariprazine appears best in class with a ranking of 1. When contrasting the three available dopamine-receptor partial agonists, the order of propensity for weight gain appears to be brexpiprazole > aripiprazole > cariprazine; for somnolence, aripiprazole > brexpiprazole > cariprazine; and for akathisia, cariprazine > aripiprazole > brexpiprazole. These indirect comparisons will need to be confirmed by appropriately designed head-to-head clinical trials. 8 The averages of the ranking values are also shown (lower numbers are best) and give an idea of the overall tolerability of each antipsychotic. Although most of the antipsychotics have similar average ranking values, an individual patient's concerns about each of the different tolerability items would likely influence the choices considered acceptable, weighed in any event against efficacy considerations for that patient. 3 An additional caveat is that the adverse-event rates reported here do not take into account the severity or duration of the event: short-lived and easily manageable adverse events are less likely to be problematic than adverse events such as weight gain, which can be persistent and more difficult to ameliorate. Another caveat is that adverse-event rates, and thus the NNH values derived from them, can vary by therapeutic indication. 75,76 In general, despite the availability of many antipsychotics for the treatment of schizophrenia, this disorder is complex and often difficult to treat. As noted, antipsychotics vary in terms of tolerability and safety concerns, 2 and patients themselves differ in terms of preexisting risk factors and comorbidities, which makes drug selection challenging. 5,77 Having an additional choice is welcome.
| 2018-11-01T18:46:31.628Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "5b691c4c684e0fbf41a43dbf71c4f709d6064025",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=45015",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "22cae72e9a1a1afb4a71d711a32269732822b55b",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
213931029 | pes2o/s2orc | v3-fos-license | Cut and fill analysis of Palu Bay seabed topography pre and post-tsunami
Palu Bay is an earthquake-prone area, in September 2018 tectonic earthquake occurred in Donggala District, Central Sulawesi, which caused tsunami due to underwater landslides. One of the impacts caused by the tsunami was changes in bathymetry. In tsunami post-disaster reconstruction, further studies regarding seafloor topography information of affected areas are needed. This study examines and analyzes the underwater landslide in Wani, Palu Bay 2018 before and after the underwater landslide using bathymetry, and coastline data to determine cut and fill volume. The analysis was conducted on the three components: contouring, slope/gradient, and volumetric (cut and fill). The contouring results show significant depth changes at the 200-meter contour on Tanjung Labuan, Palu. The volume calculation from two surfaces shows 59,512,720.790 m3 cut and 12,466,252.630 m3 fills in the landslide area.
Introduction
The Palu Koro fault divides Sulawesi into two, running from the waters of the Sulawesi Sea through the Makassar Strait to Bone Bay. This fault is very active, with movement reaching 35 to 44 mm per year [1]. Palu Bay is prone to earthquakes due to its position in the active Palu Koro fault zone, which has high seismic activity. On September 28th, 2018, a strike-slip tectonic earthquake of magnitude 7.4 occurred in Donggala District, Central Sulawesi, which resulted in a tsunami due to a landslide of the seafloor [2]. According to the Navy's Hydro-Oceanographic Center (PUSHIDROSAL), the underwater landslides that caused the tsunami in Palu Bay were located at Tanjung Labuan, Wani, Central Sulawesi [3].
Possible impacts of the tsunami include changes in bathymetry, destruction of vital public facilities and infrastructure in coastal areas, and the transport of material from land. Bathymetry, or underwater topography, is one of the main components affecting the hydrodynamics of the seabed [4]. For post-disaster reconstruction, further studies are needed on the seafloor topography of the affected areas.
In this study, pre- and post-tsunami seafloor topography information is analyzed in three components: contour, slope, and volume. Contours are analyzed to identify topographic changes on the seabed. Slope changes are often associated with tsunamis [5]; here, slope maps provide an overview of the topographic changes caused by the underwater landslide and the tsunami. The change in topographic volume is calculated to determine the landslide material, using the cross-section method.
Study area
The location used as a case study in this study is Tanjung Labuan, Wani Sea, Palu Bay, Central Sulawesi Province, Indonesia.
Data survey
The survey data used to support this research are bathymetry data at a scale of 1:10,000 for the Wani Sea, Palu Bay, obtained in 2012 from the Indonesian Navy's Hydrographic and Oceanographic Center, and bathymetry data at the same scale for the Wani Sea, Palu Bay, obtained in 2018 from the same source.
Data processing
Contouring is done to identify topographic changes on the seabed using the 2012 and 2018 Wani Sea bathymetry data at 1:10,000 scale. The first step in making a contour is to interpolate depth data from known points using the IDW method. The Inverse Distance Weighted (IDW) method is an interpolation technique that assumes each sample point has a local influence that diminishes with distance [6]: the interpolated value resembles nearby sample data more closely than more distant data, with weights decreasing with distance [7]. The interpolated depths are then processed into contour data, and the 2012 and 2018 contours are overlaid.
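As a sketch of this interpolation step (assuming, as is standard, inverse-distance weights of the form 1/d^p rather than a strictly linear decrease), a minimal NumPy implementation might look like the following; all names are illustrative.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2, eps=1e-12):
    """Inverse Distance Weighted interpolation (minimal sketch).
    xy_known: (n, 2) sounding positions; z_known: (n,) depths;
    xy_query: (m, 2) grid nodes at which depth is estimated."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power       # weights decay with distance
    w /= w.sum(axis=1, keepdims=True)  # normalize weights per query node
    return w @ z_known
```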
The slope calculation is carried out on the pre- and post-tsunami bathymetry data to identify slope changes; the interpolated depths are processed into slope data and then classified into several classes [8]. The volume calculation determines the bathymetry changes caused by the tsunami. The principle is to compute the cut and fill volume between the two depth surfaces using the cross-section method, with the post-tsunami depth data evaluated against the pre-tsunami depth data [9].
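For the volumetric idea itself, a simple grid-differencing sketch conveys the cut/fill logic (our own illustration; the paper uses the cross-section method, sketched later in the Volume Analysis section).

```python
import numpy as np

def cut_fill(z_pre, z_post, cell_area):
    """Volumetric change between two gridded bathymetry surfaces.
    With depths positive downward, dz > 0 means deepening (cut/erosion)
    and dz < 0 means shoaling (fill/deposition)."""
    dz = z_post - z_pre
    cut = dz[dz > 0].sum() * cell_area
    fill = -dz[dz < 0].sum() * cell_area
    return cut, fill
```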
Contour analysis
From the contour overlay, significant topographic changes are evident in the Wani waters, indicating an underwater landslide in the area. Depth increased at the 150 m and 200 m contours, but at the 250 m contour siltation occurred in the western and northwestern areas.
Slope analysis
Slopes are natural features caused by significant elevation differences between two places. Slope is a topographic element and a factor in erosion. In the 2012 calculation, the slope varies from 0% to 100%; based on the classification in Table 1, the topography of the Wani Sea is dominated by moderate slopes (8%-25%) [10]. In the 2018 calculation, the slope varies from 0% to 78% and the topography is likewise dominated by moderate slopes (8%-25%) [10]. A change in slope occurred in the northern area: in 2012, the slope in the coastal area was in the high category and low in the deepest area.
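A minimal sketch of the slope computation and the classification band mentioned above follows; only the 8%-25% moderate band is given in the text, so the other break point and all names are assumed.

```python
import numpy as np

def slope_percent(depth_grid, cell_size):
    """Slope (%) of a gridded bathymetry surface via central differences."""
    dzdy, dzdx = np.gradient(depth_grid, cell_size)
    return 100.0 * np.hypot(dzdx, dzdy)

def classify_slope(s):
    """Map a slope value (%) to a class; 8%-25% 'moderate' follows the
    scheme cited from [10], the remaining break is our assumption."""
    if s < 8:
        return "low"
    if s <= 25:
        return "moderate"
    return "high"
```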
Volume Analysis
After the area of topographic change is known, volume calculations are carried out using the cross-section method with a spacing of 50 meters per section to determine the volume of pre- to post-tsunami topographic change; the area and volume of each section are obtained from the cross-sections. From the calculation of the volume between the two depth surfaces, the volume of eroded (cut) material is 59,512,720.79 m3 and the deposited (fill) material is 12,466,252.63 m3. The largest accumulation of material occurs at the western end of the area, as shown in Figure 3. The significant difference between the material volumes may be caused by landslide material that accumulated in an area more than 350 meters deep or outside the observation area.
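The cross-section calculation can be sketched with the average end-area formula, V = sum of (A_i + A_{i+1})/2 * L over adjacent sections, with L = 50 m as in the text; the per-section areas would come from the pre/post depth profiles, and the function name is our own.

```python
def end_area_volume(section_areas, spacing=50.0):
    """Average end-area (cross-section) volume between adjacent sections,
    using the 50 m section spacing stated in the paper."""
    return sum((a1 + a2) / 2.0 * spacing
               for a1, a2 in zip(section_areas, section_areas[1:]))
```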
Conclusion
a. After the tsunami, the seabed topography of the Wani Sea shows increased depth at the 150 m and 200 m contours, but at the 250 m contour in the western area there is siltation due to an accumulation of material reaching 90 m.
b. The pre-tsunami slope of the northern coastal area is steeper than the post-tsunami slope, and in the western area, the slope changes from high class to moderate class.
c. The underwater topography volume calculation between the pre- and post-tsunami surfaces shows a volume increase (fill) of 12,466,252.63 m3 and a volume decrease (cut) of 59,512,720.79 m3 after the tsunami. | 2019-12-19T09:16:37.322Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "dec4f508beeda66e11a17b2f1d4849cbe26c4460",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/389/1/012025",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "eb739a14018ea02cc3c57639d3957adfcf353cb0",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Physics",
"Geology"
]
} |
244142614 | pes2o/s2orc | v3-fos-license | Verifying single-mode nonclassicality beyond negativity in phase space
While negativity in phase space is a well-known signature of nonclassicality, a wide variety of nonclassical states require their characterization beyond negativity. We establish a framework of nonclassicality in phase space that addresses nonclassical states comprehensively with a direct experimental evidence. This includes the negativity of phase-space distribution as a special case and further analyzes quantum states with positive distributions effectively. We prove that it detects all nonclassical Gaussian states and all non-Gaussian states of arbitrary dimension remarkably by examining three phase-space points only. Our formalism also provides an experimentally accessible lower bound for a nonclassicality measure based on trace distance. Importantly, this foundational approach can be further adapted to constitute practical tests in two directions looking into particle and wave nature of bosonic systems, via an array of nonideal on-off detectors and coarse-grained homodyne measurement, respectively. All these tests are practically powerful in characterizing nonclasssical states reliably against noise, making a versatile tool for a broad range of quantum systems in quantum technologies.
I. INTRODUCTION
Describing a quantum state of light or matter in phase space, e.g., by the Wigner function [1], is profoundly important for studying quantum dynamics. It is a crucial tool for delineating the boundary between classical and quantum physics, widely used in quantum optics [2], continuous-variable (CV) quantum informatics [3,4], and other fields of quantum science [5]. As a classical phase-space distribution takes non-negative values like a probability distribution, a negativity emerging in a quantum distribution is regarded as a signature of nonclassicality. However, negativity is just one aspect of multifaceted nonclassicality, characterizing only a subset of nonclassical states. There exist quantum states with positive distributions that can nevertheless be classified as nonclassical, e.g., a squeezed state of light, a key resource for CV quantum informatics [4], and single photons under a high-loss channel, elementary information carriers for quantum informatics [6,7].
Nonclassical states are essential resources for quantum informatics broadly: generating entangled states [8-12], providing advantage for quantum metrology [13-15] and quantum computation [16,17], and more. A recent resource theory identified all quantum non-Gaussian states, even with positive Wigner functions, as a resource for quantum tasks, e.g., subchannel discrimination [18], which was further generalized to all CV nonclassical states [19]. It is thus crucial to establish a framework that can widely analyze nonclassical states beyond negativity. Specific properties have often been used to characterize nonclassical states, such as squeezing and sub-Poissonian photon-number statistics [20-23], extended also to multi-mode cases [24,25]. Distillation of nonclassicality can also be used to verify the nonclassicality of an initial state, however requiring multiple copies of the same nonclassical state and postselection [26-28]. More broadly, quantum state tomography may be used to obtain complete information on a state, thereby confirming nonclassicality [29]. However, it requires extensive measurements for sufficient data and, more seriously, an optimization process to find a physical state closest to the obtained data; the data itself does not directly represent a legitimate quantum state, which weakens its significance. It is necessary to characterize nonclassicality by examining phase space in a faithful and resource-efficient way.
Adhering to negativity as a nonclassical feature, some works proposed to display negativity by modifying phase-space distributions, e.g., a regularized P-function under a filtering process [30,31]. Other distributions closely related to the so-called s-parametrized functions [2] were also studied in view of photon statistics from on-off detectors [32,33]. Phase-space inequalities were also obtained by combining different s-parametrized functions, useful to some extent [34]. Nevertheless, it is worth asking whether the original Wigner function contains substantial information on nonclassicality beyond negativity. In this respect, Banaszek and Wódkiewicz proposed a Bell test examining four phase-space points to manifest the nonlocality of two-mode states with positive Wigner functions [35], which was extended to generalized quasiprobability distributions [36] and genuine multipartite nonlocality [37-39]. The works in [40,41] demonstrated Bell-like tests also for single-mode nonclassicality and quantum non-Gaussianity. While conceptually remarkable and practically useful, these methods do not address a broad range of nonclassical states; e.g., squeezed states with purity < 0.86 are out of reach.
In this article, we propose a hierarchy of nonclassicality criteria in phase space that yields an efficient and broadly applicable test for CV systems. Our formalism addresses the Wigner function at n(n+1)/2 phase-space points progressively (n = 1, 2, ...). It includes the negativity of the Wigner function at n = 1. Remarkably, it can detect all nonclassical Gaussian states and all non-Gaussian states of arbitrary dimension at the next level, n = 2, i.e., by looking into three phase-space points only. This opens a new possibility for a faithful and efficient test. We show that our foundational approach can constitute two practical tests characterizing nonclassical states reliably and efficiently from particle and wave points of view, respectively. This makes our method a versatile tool for a wide range of CV systems in quantum physics and technologies. We illustrate the practical power of our approach by examples. Our proposed approach is fruitful in other aspects as well. It provides an experimentally accessible lower bound for the nonclassical distance defined via the trace norm [42], which is hard to obtain even theoretically. It can also be further extended to identify quantum non-Gaussianity [43-57] under an energy constraint.
II. CRITERIA
Let us start with a general condition on classicality. A classical state, i.e., a mixture of coherent states, must satisfy ⟨: f̂†f̂ :⟩_ρc = ∫ d²α P_ρc(α) |f(α)|² ≥ 0 for an arbitrary f(α), since its Sudarshan-Glauber P-function P_ρc(α) is positive definite [58,59]. Our aim is to establish criteria that deal with the Wigner function at discrete phase-space points by choosing f(α) properly. Not only providing a fundamental insight, the Wigner-function approach also leads to two general practical tests broadly applicable to CV systems, as shown later. To this aim, we choose f(α) as a superposition of Gaussian kernels attached to phase-space points β_i with coefficients c_i, and invoke the convolution between the P-function and the s-parametrized function [2]. For the classicality to hold for arbitrary c_i's, we deduce the following theorem: for every classical state, the n × n matrix M^(n), whose diagonal elements are W(β_j) and whose off-diagonal elements have magnitude W((β_j + β_k)/2) e^{-|β_j - β_k|²/2} (up to phase factors), must be positive semidefinite, M^(n) ⪰ 0.
III. HIERARCHY
By its construction, M^(n+1) ⪰ 0 implies M^(n) ⪰ 0, since the matrix M^(n+1) includes M^(n) as a submatrix. That is, there naturally arises a hierarchy of criteria with increasing n. If nonclassicality is confirmed at level n, it must also be confirmed at levels n + 1 and beyond, but the converse is not always true.
Our formulation includes the negativity of the Wigner function at the lowest level n = 1: M^(1) = (π/2) W(β) ≥ 0. It is then fundamentally interesting, and practically important, to know how many phase-space points are required to verify nonclassicality for states with positive Wigner functions. We prove below that our method can detect nonclassical states comprehensively using only three points {β1, (β1+β2)/2, β2} on a line, i.e., via the 2 × 2 condition M^(2) ⪰ 0, with W(β1) and W(β2) on the diagonal and an off-diagonal element of magnitude W((β1+β2)/2) e^{-|β1-β2|²/2} (a phase factor does not affect positivity at this level).
IV. GEOMETRIC INTERPRETATION
Before demonstrating its usefulness, let us briefly discuss the meaning of the classicality condition M^(2) ⪰ 0. One readily finds that all coherent states saturate the condition, det M^(2) = 0; a convex mixture of the corresponding positive-semidefinite matrices then yields the general classicality condition M^(2) ⪰ 0 for a mixture of coherent states. For a classical state, we thus see that the Wigner function at the midpoint (β1+β2)/2 must be bounded by the geometric mean of the Wigner functions at the end points β1 and β2, importantly with a scaling factor e^{-|β1-β2|²/2}: W((β1+β2)/2) e^{-|β1-β2|²/2} ≤ √(W(β1) W(β2)). In fact, this factor results from the commutator [â, â†] = 1 representing the size of the vacuum fluctuation.
Gaussian states. Every single-mode Gaussian state can be expressed as a displaced squeezed thermal state, σ = D̂(β) Ŝ(γ) ρ_th(n̄) Ŝ†(γ) D̂†(β), where Ŝ(γ) with γ = r e^{iφ} is a squeezing operator with strength r and angle φ of the squeezing axis, and ρ_th(n̄) = Σ_{n=0}^∞ n̄^n/(n̄+1)^{n+1} |n⟩⟨n| is a thermal state with mean number n̄. We can readily show a violation of M^(2) ⪰ 0 by taking three points along a squeezed axis [Fig. 1(a)], with the two end points at a distance 2d and the middle point at the origin. Our test turns out to be successful for a wide range of d, as shown in Fig. 1(c). Without loss of generality, we consider an x-squeezed thermal state (γ = r, φ = 0), whose Wigner function is given by W_σ(q, p) = (2μ/π) e^{-2e^{2(r-r_c)} q²} e^{-2e^{-2(r+r_c)} p²}, with purity μ = (1 + 2n̄)^{-1} and critical squeezing r_c = -(1/2) log μ. Section S1 of the Supplemental Material (SM) [60] derives the optimal choice of points and shows that the least eigenvalue of M^(2) becomes negative whenever r > r_c.
[Figure 1 caption, panel (d): R(d) = W_ρ(0) W_ρ(2d) / [W_ρ²(d) exp(-4d²)] for a Fock state |n⟩ under an 80% loss channel, for n = 1 (red solid), n = 2 (gray dashed), and n = 3 (black dot-dashed). R < 1 (shaded region) confirms nonclassicality for a wide range of displacement d.]
Non-Gaussian states: More importantly, the three-points test M^(2) ⪰ 0 can detect a broad range of non-Gaussian states. We first demonstrate its success for all non-Gaussian states of arbitrary truncation in Fock space; this includes, as examples, all noisy Fock states having positive Wigner functions. In Sec. S3 of SM [60], we further demonstrate that it extends to states of practical relevance having infinite Fock-state components.
The Wigner function of an arbitrary Fock-space truncated state (FSTS), ρ = Σ_{j,k=0}^N ρ_jk |j⟩⟨k|, takes the form W_ρ(α) = Σ_{j,k=0}^N ρ_jk W_{|j⟩⟨k|}(α), with W_{|j⟩⟨k|}(α) given in Sec. S2 of SM [60]. As the case of negative Wigner functions is already treated at n = 1, we focus on the case of positive Wigner functions. Choosing β1 = 2d e^{iϕ} and β2 = 0 gives det M^(2) ∝ W_ρ(0) W_ρ(2d e^{iϕ}) - W_ρ²(d e^{iϕ}) e^{-4d²}, so we examine the ratio R(d) = W_ρ(0) W_ρ(2d e^{iϕ}) / [W_ρ²(d e^{iϕ}) e^{-4d²}], whose value less than 1 verifies nonclassicality. R(d) is a continuous function of d satisfying R(0) = 1. For the FSTS, we always find lim_{d→∞} R(d) = 0, with details in Sec. S2 of SM [60]. Therefore, there must be a finite d satisfying R(d) < 1, confirming nonclassicality. Remarkably, this works regardless of ϕ, i.e., it is insensitive to the axis of the three points.
As an illustration, in Fig. 1 we plot the ratio R for (c) squeezed states and (d) Fock states under an 80%-loss channel. We confirm nonclassicality, R < 1, for a broad range of displacement d.
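To make the three-point test concrete, the short script below evaluates the ratio R(d) for a single photon after an 80% loss channel, whose everywhere-positive Wigner function follows from mixing |1⟩ and vacuum. The helper names are our own, and the R(d) expression is the one reconstructed above.

```python
import numpy as np

def W_lossy_fock1(alpha, T):
    """Wigner function of a single photon after a loss channel with
    transmission T: rho = T|1><1| + (1-T)|0><0| (positive for T < 1/2)."""
    a2 = np.abs(alpha) ** 2
    return (2 / np.pi) * np.exp(-2 * a2) * (T * (4 * a2 - 1) + (1 - T))

def R(d, W):
    """Three-point ratio R(d) = W(0) W(2d) / [W(d)^2 e^{-4 d^2}];
    R < 1 certifies nonclassicality (classical states give R >= 1)."""
    return W(0.0) * W(2 * d) / (W(d) ** 2 * np.exp(-4 * d ** 2))

Wf = lambda a: W_lossy_fock1(a, T=0.2)  # 80% loss, as in Fig. 1(d)
print(R(1.5, Wf))  # below 1 once d^2 > 1.5, flagging nonclassicality
```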
V. NONCLASSICALITY DISTANCE
It is also of great interest to quantify the degree of nonclassicality of a given state ρ. A typical approach measures the distance between ρ and its closest classical state ρ_c, N_d(ρ) ≡ (1/2) min_{ρc∈C} ||ρ - ρ_c||₁, with ||·||₁ the trace norm and C the set of classical states. This is, however, very hard to obtain even when the state is completely known. Our formalism provides a lower bound for this nonclassical distance [42,61], enabling its practical estimation. With details in Sec. S4 of SM [60], we obtain a lower bound, Eq. (7), in terms of λ_min, the least eigenvalue of M^(n) at level n. At n = 1, Eq. (7) shows that a negative value in phase space directly provides a reliable estimate of the nonclassical distance. We can further estimate the nonclassical distance of a state with a positive Wigner function by using M^(2): for instance, for a general Gaussian state σ, using Eq. (6), we obtain a bound that goes beyond the results in Refs. [42,61], which address only pure Gaussian states. We also establish a connection between our approach and the nonclassical depth [62] in Sec. S9 of SM [60].
VI. QUANTUM NON-GAUSSIANITY
With details in Sec. S5 of SM [60], if the least eigenvalue of M^(2)_ρ for a state ρ with energy E satisfies λ_min < B(E), with B(E) the bound derived there for Gaussian states and their mixtures at energy E, it confirms QNG. In Fig. 2, we plot Δλ_min = B(E) - λ_min for a non-Gaussian state of the form Ŝ(r)[f|2⟩⟨2| + (1-f)|0⟩⟨0|]Ŝ†(r). As seen from Fig. 2(a), our criterion detects QNG with f < 1/2 (positive Wigner function) for squeezing r ≳ 0.237. Note that the squeezing operation does not create QNG, as it is a Gaussian operation; in this context, the result also represents the QNG of f|2⟩⟨2| + (1-f)|0⟩⟨0| without squeezing. A recent ion-trap experiment realized a measurement in the squeezed Fock basis {Ŝ(r)|n⟩ : n = 0, 1, ...} [69]. This can be adopted to verify QNG of states Ŝ(r)ρ_nG Ŝ†(r) without performing squeezing on a non-Gaussian state ρ_nG, enhancing the range of QNG detection. Fig. 2(b) gives another example of a positive Wigner function with its QNG verified.
VII. PRACTICAL TESTS
The Wigner function corresponds to the number parity after displacement, i.e., W_ρ(α) = (2/π) tr[D̂†(α) ρ D̂(α) (-1)^n̂]. It is routinely measured in various systems, e.g., ion traps [40,71] and circuit QED [72], for which our proposed test M^(2) can thus directly characterize nonclassicality. On the other hand, we can also derive alternative, practical schemes out of the Wigner-function framework, which can test nonclassicality reliably and efficiently against experimental imperfections. First, we present a generalized formalism using an array of on-off detectors registering photons without photon-number resolving (PNR).
[Figure 3 caption fragment: (red solid) against n with binning size σ = 0.1; n: bin number for quadrature q = nσ. Grey shades represent the size of the error due to finite data, (a) ∼10^5 and (b) ∼10^6.]
Second, we also
present a marginal version of the Wigner-function test, i.e., using M(q) = ∫ dp W_ρ(q, p), which can be measured by homodyne detection, well established for a wide variety of quantum systems including quantum optics [29], trapped ions [73], atomic ensembles [74], circuit cavity QED [75,76], and optomechanics [77,78]. Both of our proposed tests are powerful against noise, with wide applicability.
On-off detector array. When an input light is equally divided via beam splitting to impinge on N on-off detectors, the probability of k detectors clicking is [79] p_k[ρ] = tr[ρ : C(N,k) (1 - e^{-ηn̂/N})^k e^{-η(N-k)n̂/N} :], where C(N,k) is the binomial coefficient, η is the detector efficiency, and :Ô: denotes normal ordering.
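As a quick numerical check of the click-counting statistics just quoted, the sketch below evaluates p_k for a Fock state |n⟩ using the normal-ordered identity ⟨: e^{-λn̂} :⟩ = (1 - λ)^n, a standard result; the function name is our own.

```python
from math import comb

def click_probs(n_photons, N, eta):
    """Click statistics p_k of an N-detector on-off array for a Fock
    state |n>, expanding the click-counting formula binomially and
    using <:exp(-lambda n̂):> = (1 - lambda)^n for Fock states."""
    p = []
    for k in range(N + 1):
        s = sum((-1) ** j * comb(k, j)
                * (1 - eta * (N - k + j) / N) ** n_photons
                for j in range(k + 1))
        p.append(comb(N, k) * s)
    return p

# Single photon on one detector of efficiency 0.7: p = [0.3, 0.7]
print(click_probs(1, N=1, eta=0.7))
```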
The counting statistics p_k above can also be obtained via the time-multiplexing approach using a single detector [33]. We first generalize our criterion to the s-parametrized function W_ρ(α; s) [2] in order to use the counting statistics: our classicality condition readily generalizes to M^(s,n) ⪰ 0 for an arbitrary s, using the elements in Eq. (11). We find the connection between the s-parametrized functions and the counting statistics p_k in Sec. S6 of SM [60]: each of the N distributions W_ρ(α; s_m), with s_m = 1 - 2N/[(N-m)η] (m = 0, ..., N-1), is determined as a linear combination of the counting statistics {p_0, ..., p_N} obtained for the displaced state D̂†(α)ρD̂(α) [Eq. (12)]. Furthermore, we prove in Sec. S2 of SM [60] that M^(s,n=2) (the three-points test) can detect all nonclassical Gaussian and non-Gaussian states (FSTSs), importantly for an arbitrary s. This broader applicability beyond the Wigner function makes our test robust against noise.
Let us illustrate the case of testing a Fock state |n⟩ under a 50% loss channel, mixed with a thermal photon number n̄ = 0.05, using only N = 1 on-off detector of efficiency η = 0.7 [80]. We further consider the error due to finite data acquisition, ∼10^5 samples (Sec. S8 of SM [60]). As shown in Fig. 3(a), for our three-points test there exists a range of displacement detecting nonclassicality while substantially beating the error; for instance, the signal-to-noise ratio is (1 - R)/ΔR = 2.48 at d = 1.1. We also demonstrate successful detection for other noisy Fock states, with error analysis, in Sec. S8 of SM [60].
Homodyne test. We next present a test using the marginal distribution M(q) = ∫ dp W_ρ(q, p). Homodyne detection measuring M(q) is highly efficient but requires careful analysis, because the actual homodyne data are coarse-grained due to finite binning, which may lead to a false detection of nonclassical effects [81-83]. Let σ be the binning size of the homodyne data; then all data in the range [nσ - σ/2, nσ + σ/2] belong to the same bin, yielding a coarse-grained distribution M^σ_ρ[n] ≡ ∫_{-σ/2}^{σ/2} dδ M_ρ(nσ + δ) (n: bin number representing mean quadrature q = nσ). A classicality condition M^(H) ⪰ 0 then emerges, with its elements built from the coarse-grained marginals, where k can be either 0 or 1, with details in Sec. S7 of SM [60]. We prove in SM [60] that this marginal test, even with coarse-grained information, detects all nonclassical Gaussian and non-Gaussian states (FSTSs). In Fig. 3(b), we show the result for a squeezed state (r = 0.3) subjected to phase diffusion, ρ → (2πΔ²)^{-1/2} ∫ dφ e^{-φ²/(2Δ²)} e^{in̂φ} ρ e^{-in̂φ}, which leaves no squeezing at Δ = 1.2. Our homodyne test under coarse-graining (σ = 0.1) clearly manifests nonclassicality over 7 standard deviations, (1 - R)/ΔR = 7.11. We also illustrate other cases in Sec. S8 of SM [60].
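A minimal sketch of the coarse-graining step described above: homodyne quadrature samples are histogrammed into bins of width σ to estimate M^σ_ρ[n]. The names and the Gaussian toy data are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 0.4, size=10**6)  # toy quadrature data only

def coarse_grained_marginal(q_samples, sigma=0.1, n_max=40):
    """Estimate M^sigma[n] over bins centered at q = n*sigma."""
    edges = (np.arange(-n_max, n_max + 2) - 0.5) * sigma
    counts, _ = np.histogram(q_samples, bins=edges)
    return counts / q_samples.size  # empirical bin probabilities

M_sigma = coarse_grained_marginal(samples)
print(M_sigma.sum())  # ~1, up to samples outside the binned range
```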
VIII. CONCLUSION
A phase-space approach usually provides us with valuable insight into quantum physics [84]. While negativity is one manifestation of nonclassicality, recent studies have made it clear that all nonclassical states, even without negativity, are valuable resources for quantum information science [8-11,18]. It is thus critically important to establish a comprehensive framework addressing nonclassical states with and without negativity, covering a wide range of quantum systems. We have introduced a hierarchy of nonclassicality conditions that can address nonclassicality beyond negativity effectively and efficiently. Our approach makes it possible to analyze all nonclassical Gaussian states and non-Gaussian states using three phase-space points. Our formalism further provides a lower bound for the nonclassical distance and a criterion to detect quantum non-Gaussianity with positive Wigner functions. Remarkably, our foundational approach also constitutes two practical tests looking into particle nature (number parity) and wave nature (marginal distribution), making it a versatile tool for CV systems broadly. We illustrated the practical power of our tests adopting nonideal on-off detectors without photon-number resolving and coarse-grained homodyne detection, respectively.
We hope our work will further stimulate studies of nonclassical effects from both fundamental and practical perspectives. Our approach clearly indicates that the information on nonclassicality is sufficiently embedded in phase space even at a few points. Our geometric interpretation of classicality has stipulated a relation among the values of the Wigner function that is fundamentally associated with quantum fluctuation, represented by a commutation relation or uncertainty principle. This seems worthwhile to pursue further in studying nonclassicality for quantum multipartite systems as well. In the near term, we anticipate our framework can be useful for both theoretical and experimental analysis of quantum systems. In particular, our proposed tests can address all different CV systems, including quantum optics, nano- or opto-mechanics, atomic ensembles, and circuit cavity QED.
Note added. We recently became aware of a closely related work by Bohmann, Agudelo and Sperling [85]. We note that our main idea and some results were earlier presented at an international conference [86].
S1. Optimal phase-space test for a Gaussian state
We first consider a 2 × 2 matrix A whose matrix elements are built, as in Eq. (S1), from a Gaussian function F(x, y) = e^{-ax²-by²} with a > c > b > 0, evaluated at two points (x₁, y₁) and (x₂, y₂). We then show that the minimum lowest eigenvalue of the matrix A is given by Eq. (S2). For a given function F(x, y) = e^{-ax²-by²}, we can show that the points (x₁, y₁) and (x₂, y₂) minimizing the lowest eigenvalue of A must be (1) on the x-axis, i.e., y₁ = y₂ = 0, and (2) symmetric with respect to the origin, i.e., x₁ + x₂ = 0. First, the lowest eigenvalue of A is given by Eq. (S3). For fixed values of A₁₁ = u and A₂₂ = v, the set of points (x₁, y₁) and (x₂, y₂) satisfying F(x₁, y₁) = u and F(x₂, y₂) = v forms two ellipses having the same center, directrix, and major axis. Under the conditions a - c > 0 and b - c < 0, the ratio R is maximized when both points lie on the x-axis. We now set y₁ = y₂ = 0 and rewrite Eq. (S3) accordingly as Eq. (S6). Now that λ in Eq. (S6) is a function of the two variables A₁₁ = u and A₂₂ = v, we further fix their product A₁₁A₂₂ = uv ≡ w; this also fixes the value of x₁² + x₂² = -(1/a) log w. Note that X - √(X² + Y) with Y > 0 decreases as X decreases or Y increases. Using the relation between the arithmetic mean and the geometric mean, we observe that A₁₁ + A₂₂ and A₁₁A₂₂(R - 1) are minimized and maximized, respectively, when |x₁| = |x₂| and u = v for a given uv = w.
All things considered, together with |x_1| = |x_2| = x, we now need to optimize the lowest eigenvalue over the single variable x. Examining its first derivative, we obtain the minimum lowest eigenvalue of A in Eq. (S2). The Wigner function of the x-squeezed thermal state is given by W_σ(α = q + ip) = (2μ/π) e^{-2e^{2(r-r_c)} q^2} e^{-2e^{-2(r+r_c)} p^2}, with purity μ = (1 + 2n̄)^{-1} and critical squeezing r_c = -(1/2) log μ. In view of Eq. (S1), its parameters read a = 2e^{2(r-r_c)}, b = 2e^{-2(r+r_c)} and c = 2, satisfying the condition a > c > b > 0 for a nonclassical state r > r_c. Therefore, its least eigenvalue of M^{(2)} follows from Eq. (S2) with these parameters and is negative for all r > r_c.

S2. Detection of all nonclassical Gaussian states and Fock-space truncated states

We here show that all nonclassical Gaussian states and non-Gaussian states of arbitrary Fock-space truncation can be detected via the matrix M^{(s,2)} criterion for any s. This naturally includes the case of the Wigner function test, s = 0.
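The structure of this optimum is easy to check numerically. A minimal sketch follows, assuming the element convention A_ij = F((b_i + b_j)/2) e^{-c|b_i - b_j|^2/4} with b_i = (x_i, y_i), which is our reading of Eq. (S1) and is consistent with the c = 2 Wigner-function case quoted above:

```python
# Numerical check of the S1 optimization (element convention is an assumption).
import numpy as np
from scipy.optimize import minimize

a, b, c = 3.0, 1.0, 2.0          # must satisfy a > c > b > 0

def F(x, y):
    return np.exp(-a * x**2 - b * y**2)

def lam_min(v):
    x1, y1, x2, y2 = v
    A11, A22 = F(x1, y1), F(x2, y2)
    A12 = F((x1 + x2) / 2, (y1 + y2) / 2) \
        * np.exp(-c * ((x1 - x2)**2 + (y1 - y2)**2) / 4)
    return np.linalg.eigvalsh([[A11, A12], [A12, A22]])[0]

rng = np.random.default_rng(1)
best = min((minimize(lam_min, rng.normal(size=4)) for _ in range(50)),
           key=lambda r: r.fun)
x1, y1, x2, y2 = best.x
print(best.fun, x1 + x2, y1, y2)  # optimum: y1 = y2 = 0 and x1 = -x2
```

Running the sketch confirms that the unconstrained optimum indeed lands on the x-axis with symmetric points, in agreement with the analytical argument.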
Gaussian states: The s-parametrized quasiprobability function of an x-squeezed thermal state takes the same Gaussian form as Eq. (S1), with s-dependent parameters a and b, where r and r_c represent the squeezing strength and the critical squeezing strength for nonclassicality, respectively. Using the minimum lowest eigenvalue of M_σ^{(s,2)} in Eq. (S2) of S1 with the parameters a and b for the Gaussian states, we see that it becomes negative for all nonclassical states r > r_c regardless of s < 1.
Non-Gaussian states: The s-parametrized quasiprobability function of an arbitrary Fock-space truncated state (FSTS), ρ = Σ_{j=0}^{N} Σ_{k=0}^{N} ρ_{jk} |j⟩⟨k|, is written as a finite sum of the functions W_{|j⟩⟨k|}(α; s), which for j ≥ k [1] are expressed in terms of generalized Laguerre polynomials. We thus investigate the ratio R(d) of Eq. (S16), a function of the displacement d for a given 1 - s, whose value less than 1 confirms nonclassicality. R(d) is a continuous function of d satisfying R(0) = 1. For an FSTS, we always find lim_{d→∞} R(d) = 0, with the dominant contribution given by Eq. (S17). Therefore, there must be a finite d satisfying R(d) < 1, i.e., det M^{(2)} < 0, for all s < 1.
Remarkably, the above proof works regardless of φ, i.e., it is insensitive to the axis of the three points. We illustrate this with an example of a non-rotationally-symmetric state in phase space, namely the superposition state (|0⟩ + |1⟩)/√2 under a 50% loss. In Fig. S1, we show the results of testing it by choosing three points along the x-axis (red solid), the (x + p)/√2-axis (gray dashed) and the p-axis (black dot-dashed), respectively. We can clearly see that the test is successful for a wide range of displacement d whatever axis is taken. [Fig. S1: the ratio R(d), with denominator W_ρ(d)^2 exp(-4d^2), plotted against d, the distance from the origin along the x-axis (red solid), the (x + p)/√2-axis (gray dashed) and the p-axis (black dot-dashed), respectively. R < 1 confirms the detection of nonclassicality for a wide range of d in each case.]
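A sketch of this check under stated assumptions: QuTiP supplies the lossy state and Wigner values, and the ratio convention R(d, φ) = W(0) W(2d e^{iφ}) / [W(d e^{iφ})^2 e^{-4d^2}], built from three collinear points 0, d, 2d and the coherent-state bound, is our reading of the figure rather than a formula quoted from the text:

```python
# Three-point Wigner test for (|0>+|1>)/sqrt(2) under 50% loss (cf. Fig. S1).
import math
import numpy as np
import qutip as qt

N = 30
psi = (qt.basis(N, 0) + qt.basis(N, 1)).unit()
rho0 = psi * psi.dag()

eta = 0.5                                          # transmissivity (50% loss)
damp = (0.5 * math.log(eta) * qt.num(N)).expm()    # eta^{n/2}
a = qt.destroy(N)
rho, op = 0 * rho0, qt.qeye(N)                     # op accumulates a**k
for k in range(N):                                 # amplitude-damping Kraus sum
    K = math.sqrt((1 - eta)**k / math.factorial(k)) * damp * op
    rho = rho + K * rho0 * K.dag()
    op = op * a

parity = qt.Qobj(np.diag((-1.0) ** np.arange(N)))

def W(alpha):                                      # W(alpha) = (2/pi) <D P D^dag>
    D = qt.displace(N, alpha)
    return (2 / np.pi) * (rho * D * parity * D.dag()).tr().real

W0 = W(0)
for phi in (0.0, np.pi / 4, np.pi / 2):            # x, (x+p)/sqrt(2), p axes
    u = np.exp(1j * phi)
    Rmin = min(W0 * W(2 * d * u) / (W(d * u)**2 * np.exp(-4 * d**2))
               for d in np.linspace(0.2, 1.2, 11))
    print(phi, Rmin)                               # R < 1 flags nonclassicality
```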
S3. Non-Gaussian states with infinite Fock-state components
We here illustrate the detection of non-Gaussian states having infinite Fock-state components, which are of current practical interest for CV quantum information processing.
A. Dephased cat states

Setting β_1 = 0 and β_2 = iπ/(4γ), we observe that the test value becomes negative, which manifests that our method can detect all nonclassical dephased cat states regardless of γ and f.
B. Photon added coherent states
Another infinite-dimensional quantum state of current interest is the photon-added coherent state, |Ψ⟩ ∝ â†|γ⟩. As |Ψ⟩ is a pure non-Gaussian state with negativity, let us treat a realistic situation under a loss channel, i.e., ρ = Tr_E[Û_BS |Ψ⟩⟨Ψ| ⊗ |0⟩⟨0|_E Û_BS†], where Û_BS represents a beam-splitting operation. We first note that â†|γ⟩ = â†D̂(γ)|0⟩ = D̂(γ)(â† + γ*)|0⟩ = D̂(γ)(|1⟩ + γ*|0⟩). That is, |Ψ⟩ is nothing but a displaced FSTS. As a local displacement followed by beam splitting can be reordered as beam splitting followed by another displacement, we see that the mixed state ρ is just a lossy FSTS followed by a displacement. We have already proved that any FSTS can be detected under our formalism. As the final displacement can be incorporated into the choice of the three phase-space points, it follows that all photon-added coherent states under a loss channel can be detected as well. The same idea readily extends to the multiply photon-added coherent states, |Ψ⟩ ∝ â†^m|γ⟩ = D̂(γ)(â† + γ*)^m|0⟩, under a noisy channel.
S4. Nonclassical distance
The nonclassical distance is defined as N_d(ρ) ≡ (1/2) min_{ρ_c∈C} ||ρ - ρ_c||_1, with ||·||_1 the trace norm and C the set of classical states. If N_d(ρ) = D, it allows a decomposition ρ = ρ_c + D(ρ_+ - ρ_-), where ρ_c is its nearest classical state under the trace measure, and Dρ_+ and -Dρ_- represent the mixtures of eigenstates of ρ - ρ_c with positive and negative eigenvalues, respectively (tr ρ_+ = tr ρ_- = 1). For a general operator Ô, we then have tr[ρÔ] = tr[ρ_cÔ] + D tr[(ρ_+ - ρ_-)Ô] ≥ tr[ρ_cÔ] - 2D max_{σ∈Q}|tr[σÔ]|, where Q represents the set of all quantum states. We set Ô = Σ_{i,j=1}^n u_i* u_j M̂_ij, with its connection to M_ij of Eq. (3) in the main text given by M_ij = tr[ρ M̂_ij]. Here u = {u_1, u_2, ..., u_n} is specifically taken to be the eigenvector for the lowest eigenvalue λ of M, so that tr[ρÔ] = λ. From tr[ρ_cÔ] ≥ 0, we obtain λ ≥ -2Dn, i.e., the lower bound N_d(ρ) ≥ -λ/(2n). We have used above |tr[σÔ]| ≤ n for any state σ ∈ Q, which follows from |tr[σM̂_ij]| ≤ 1 and Σ_{i=1}^n |u_i| ≤ √n.
S5. Quantum non-Gaussianity test
Our formalism can further identify quantum non-Gaussianity (QNG), i.e., states beyond a mixture of Gaussian states. Eq. (S9) gives the lowest eigenvalue of M^{(2)} for a Gaussian state, which can be rewritten in terms of the energy E = tr[ρn̂]. We verify QNG if the least eigenvalue for a given state is smaller than those of all Gaussian states with the same E. We here derive this Gaussian bound B(E) in the following steps. (i) To begin with, note that the phase-space matrix M^{(n)} is linear with respect to states, i.e., M^{(n)}_ρ = Σ_i p_i M^{(n)}_{σ_i} for ρ = Σ_i p_i σ_i. In this case, the smallest eigenvalue λ of M^{(2)}_ρ cannot be less than the weighted sum of the smallest eigenvalues λ_i of M^{(2)}_{σ_i}, i.e., λ ≥ Σ_i p_i λ_i. This can be readily seen by considering the eigenvectors corresponding to the least eigenvalues, |λ⟩ and |λ_i⟩, i.e., ⟨λ|M^{(2)}_ρ|λ⟩ = λ and ⟨λ_i|M^{(2)}_{σ_i}|λ_i⟩ = λ_i, respectively. Then, from M^{(2)}_ρ = Σ_i p_i M^{(2)}_{σ_i}, we obtain λ = ⟨λ|M^{(2)}_ρ|λ⟩ = Σ_i p_i ⟨λ|M^{(2)}_{σ_i}|λ⟩ ≥ Σ_i p_i λ_i, due to the condition ⟨λ|M^{(2)}_{σ_i}|λ⟩ ≥ ⟨λ_i|M^{(2)}_{σ_i}|λ_i⟩ = λ_i.
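The mixing bound λ ≥ Σ_i p_i λ_i used in step (i) holds for any Hermitian matrices, and is easy to verify numerically; a quick sanity check:

```python
# For M = sum_i p_i M_i (Hermitian), lambda_min(M) >= sum_i p_i lambda_min(M_i).
import numpy as np

rng = np.random.default_rng(0)
def rand_herm(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

p = rng.dirichlet(np.ones(4))                  # random convex weights
Ms = [rand_herm(3) for _ in p]
M = sum(pi * Mi for pi, Mi in zip(p, Ms))
lhs = np.linalg.eigvalsh(M)[0]
rhs = sum(pi * np.linalg.eigvalsh(Mi)[0] for pi, Mi in zip(p, Ms))
print(lhs, rhs, lhs >= rhs - 1e-12)            # inequality always holds
```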
(ii) A pure Gaussian state is a displaced squeezed state, σ = D̂(γ)Ŝ(r, φ)|0⟩⟨0|Ŝ†(r, φ)D̂†(γ), which has the energy E = sinh^2 r + |γ|^2. On the other hand, as explained in the main text, the displacement does not affect the least eigenvalue of M^{(2)}_σ, which is given by λ_min,σ = -2e^{-r coth r} sinh r in Eq. (S9). This means that the squeezed state without displacement is energy-efficient for achieving the same level of least eigenvalue. We thus consider only the squeezed states with energy E = sinh^2 r, which, with r = sinh^{-1}√E, gives the expression λ^E_min,σ for the least eigenvalue.
(iii) Importantly, we note that λ^E_min,σ above is a convex function of E (Fig. 1). Thus, among all Gaussian states with the same average energy E, the pure squeezed state achieves the smallest eigenvalue. If a state is a mixture of pure squeezed states with E = Σ_i p_i E_i, its least eigenvalue cannot be less than that of a single pure squeezed state with energy E, due to the convexity of the function in Eq. (S21). Therefore, λ^E_min,σ represents the smallest possible eigenvalue of the matrix M^{(2)}_σ among all Gaussian states σ with energy E.
Combining (i), (ii) and (iii), we obtain a QNG criterion: if the least eigenvalue of M^{(2)}_ρ for a state ρ with energy E is less than B(E) ≡ λ^E_min,σ, the state cannot be a mixture of Gaussian states, confirming QNG.
In Fig. S2, we plot Δλ_min = B(E) - λ_min for a non-Gaussian state Ŝ(r)[f|2⟩⟨2| + (1 - f)|0⟩⟨0|]Ŝ†(r). For a fraction f > 1/2, the state has negativity of the Wigner function, which is trivial evidence of QNG. As can be seen from Fig. S2, our criterion can detect QNG with f < 1/2 for a squeezing r ≳ 0.237. Note that the squeezing operation does not change QNG, as it is a Gaussian operation. In this respect, the result in Fig. S2 also represents the QNG of f|2⟩⟨2| + (1 - f)|0⟩⟨0| without squeezing.
A recent ion-trap experiment realized a measurement in the squeezed Fock basis {Ŝ(r)|n⟩ : n = 0, 1, ...} by controlling the interaction between the spin and the motional states [2]. This measurement can be adopted to verify QNG of states Ŝ(r)ρ_nG Ŝ†(r) without performing squeezing on a non-Gaussian state ρ_nG. Fig. S2(c) gives another example of a positive Wigner function with its QNG verified: the QNG of a mixed cat state f|C⟩⟨C| + (1 - f)|0⟩⟨0|, with a cat state |C⟩ ∝ |γ⟩ + |-γ⟩, is confirmed up to the vacuum fraction 1 - f for each γ. [In Fig. S2(c), the black solid curve represents the value of 1 - f at which the Wigner function becomes positive, above which our criterion detects QNG for the mixed cat state (red dashed); the yellow-filled region thus represents the detection of QNG for a non-Gaussian state with a positive Wigner function.]
S6. Connection between the s-parametrized functions and the counting statistics from on-off detectors
We start with the counting statistics in Eq. (10) of the main text and a relation between the expectation value of a normally ordered operator for a quantum state ρ and its Glauber P function, i.e., tr[ρ : f(n̂) :] = ∫ d^2β P_ρ(β) f(|β|^2) for an arbitrary well-defined function f [3]. We first express the counting statistics in terms of the P function. Using P_{D̂†(α)ρD̂(α)}(β) = P_ρ(α + β), we obtain the corresponding expression for a displaced state.
Employing the convolution relation in Eq. (2) of the main text and ∫ d^2β P_ρ(β) = 1, we obtain an expression with s_m = 1 - 2N/[(N - m)η] and a matrix T with elements T_km. We thus see that the counting statistics p_k for a displaced state ρ_α ≡ D̂†(α)ρD̂(α) is composed of quasiprobability functions via a triangular matrix T. The relation can be represented in matrix form as p = T·w, which leads to w = T^{-1}·p using the inverse matrix T^{-1}. A triangular matrix is invertible if and only if all elements on its principal diagonal are non-zero [4]. We find that T_ii ≠ 0 for all i, which means that we can always obtain N different s-parametrized quasiprobability distributions from the photocounting statistics of N on-off detectors via Eq. (S29).
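A minimal sketch of the inversion step; T below is a hypothetical placeholder (the actual entries T_km depend on N, η and m and are not reproduced here), and only its lower-triangular shape with a nonzero diagonal matters for invertibility:

```python
# Recovering quasiprobability values w from click statistics p = T w.
import numpy as np
from scipy.linalg import solve_triangular

T = np.array([[0.8, 0.0, 0.0],        # placeholder lower-triangular matrix
              [0.1, 0.6, 0.0],
              [0.1, 0.4, 1.0]])
p = np.array([0.55, 0.30, 0.15])      # measured click-counting probabilities
w = solve_triangular(T, p, lower=True)
print(w)                              # s-parametrized distribution values
```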
S7. Power of criteria using marginal distributions
Marginal test: Choosing β_i = q_i + ip with the same p for all i in Eq. (3) of the main text and integrating over p yields a classicality test in terms of the marginal distribution M(q). Under coarse-graining with binning size σ, we choose test points as q_i = (2m_i + k)σ + δ with integer m_i, k = 0 or 1, and δ ∈ [-σ/2, σ/2]. Integrating over δ, we obtain a condition on M_σ[n], the coarse-grained marginal distribution with bin number n. Our whole procedure corresponds to a classicality condition on a matrix with elements given in Eq. (S30), which must be nonnegative for classical states. In the above, k = 0 (1) corresponds to the case of choosing all even (odd)-numbered bins, making two different tests. If one chooses an even-numbered (n_1 = 2m_1) and an odd-numbered (n_2 = 2m_2 + 1) bin together, the middle bin ((n_1 + n_2)/2) is not well-defined. Therefore, the matrix test in Eq. (S30) has been established according to the quadrature values q_i = (2m_i + k)σ + δ for a fixed k (0 or 1), while m_i may vary, from the beginning.
Marginal distribution of Gaussian state
Without loss of generality, we again deal with the case of an x-squeezed thermal state. Its marginal distribution is given by M(q) = √(2/π) e^{r-r_c} e^{-2e^{2(r-r_c)} q^2}, with μ its purity, r the squeezing strength and r_c = -(1/2) log μ the critical squeezing strength for nonclassicality. Considering the ratio R(d) = M(d)M(-d)e^{4d^2}/M(0)^2 = exp[-4(e^{2(r-r_c)} - 1)d^2], we find R < 1 for all nonclassical Gaussian states r > r_c, which indicates that they can all be detected with an arbitrary non-zero d. More practically, we now examine a coarse-grained marginal distribution of Gaussian states.
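A small numerical sketch of this ratio, assuming the three-point form R(d) = M(d)M(-d)e^{4d^2}/M(0)^2, whose reference factor e^{4d^2} is saturated by coherent states:

```python
# R(d) for the x-squeezed thermal marginal M(q) ∝ exp(-2 e^{2(r-rc)} q^2);
# the normalization of M(q) cancels in the ratio.
import numpy as np

def R(d, r, rc):
    A = np.exp(2 * (r - rc))             # inverse-width scale of M(q)
    return np.exp(-4 * (A - 1) * d**2)   # = M(d) M(-d) e^{4 d^2} / M(0)^2

for r in (0.05, 0.20):
    print(r, R(1.0, r, rc=0.1))          # R < 1 exactly when r > rc
```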
Coarse-grained marginal distribution of Gaussian state
The coarse-grained marginal distribution of the Gaussian state is given by M_σ[n] = (1/2){erf[√2 e^{r-r_c}(n + 1/2)σ] - erf[√2 e^{r-r_c}(n - 1/2)σ]}. Here, erf(z) = (2/√π)∫_0^z e^{-t^2} dt is the error function, whose complement admits the asymptotic expansion erfc(z) = 1 - erf(z) = (e^{-z^2}/(z√π)){1 + O(z^{-2})} for z ≫ 1. Setting z_1 = √2 e^{r-r_c}(n - 1/2)σ and z_2 = √2 e^{r-r_c}(n + 1/2)σ, we have erfc(z_2)/erfc(z_1) ≪ 1 for r > r_c and |n| ≫ 1, so that M_σ[n] ≈ (1/2) erfc(z_1) in this regime. If we investigate the corresponding three-point ratio R[n], with z = √2 e^{r-r_c}(2n - 1/2)σ, we thus see R → 0 for r > r_c and sufficiently large n. That is, R[n] < 1 in a certain range of large n. This indicates that all nonclassical Gaussian states can be witnessed using the coarse-grained marginal distribution with an arbitrary binning size σ.
Marginal distribution of FSTS
The marginal distribution of an arbitrary FSTS, ρ = Σ_{j=0}^{N} Σ_{k=0}^{N} ρ_{jk}|j⟩⟨k|, is given by the corresponding sum of the functions M_{|j⟩⟨k|}(q) in Eq. (S38). Looking into the three-point ratio R, we obtain R ∝ d^{-2N} for large d ≫ 1. Thus, R → 0 for very large d. This means that the nonclassicality of every FSTS can be verified, R < 1, in a certain range of large d. More practically, we now examine a coarse-grained marginal distribution of an FSTS.
Coarse-grained marginal distribution of FSTS
The coarse-grained marginal distribution of an FSTS is obtained by binning its marginal distribution, using M_{|j⟩⟨k|}(q) in Eq. (S38). We are going to deal with M^σ_ρ[n] for large n, so with Eq. (S40) in mind, we first look at the relevant integration. Here Γ(a, x) = ∫_x^∞ e^{-t} t^{a-1} dt is the incomplete Gamma function [5], which can be expanded asymptotically, yielding the behavior of M^σ_ρ[n] for n ≫ 1. Using Eqs. (S40), (S41) and (S45), we thus obtain the leading behavior for n ≫ 1. Considering the three-point ratio R[n], we see R ∝ (nσ)^{1-2N} for n ≫ 1, which indicates that the nonclassicality of every FSTS can be verified using the coarse-grained marginal distribution with an arbitrary finite binning size σ.
S8. Error analysis and practical examples
The data acquisition procedure can be thought of as picking outcomes randomly from a multinomial distribution. Let us assume that the total number of possible outcomes is k and the probability of obtaining the i-th outcome is p_i with Σ_{i=1}^k p_i = 1. If we pick outcomes N_s times and the number of observations of the i-th outcome is X_i, the variance of the probability estimator X_i/N_s and the covariance between the probability estimators X_i/N_s and X_j/N_s are given by p_i(1 - p_i)/N_s and -p_ip_j/N_s, respectively [6]. For the test based on on-off detectors, we obtain s-parametrized quasiprobability distributions by a linear transformation of the measured click-counting probabilities, as described in Sec. S6. For a linear combination Z = Σ_i a_i X_i, the variance of Z is given by (ΔZ)^2 = Σ_i a_i^2(ΔX_i)^2 + Σ_{i≠j} a_i a_j Cov(X_i, X_j), where Δx and Cov(x, y) represent the standard deviation of x and the covariance between x and y, respectively. Using the propagation of uncertainty, we can estimate the statistical uncertainty of the obtained s-parametrized quasiprobability distributions.
Furthermore, the statistical uncertainty of the ratio R = AB/C^2 can be estimated by propagating the uncertainties of A, B and C. We can also estimate the statistical uncertainty of the ratio for the test based on coarse-grained homodyne detection by a similar error analysis. We now illustrate the power of the two practical tests using on-off detectors and homodyne detection, respectively, including the analysis of errors due to finite data acquisition.
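A sketch of both propagation steps; the ΔR expression below neglects covariances between A, B and C, which is an additional simplifying assumption:

```python
# Variance of Z = sum_i a_i X_i / Ns for multinomial counts, and a first-order
# estimate of Delta R for R = A B / C^2.
import numpy as np

def var_linear(a, p, Ns):
    # Cov(X_i/Ns, X_j/Ns) = [diag(p) - p p^T] / Ns
    cov = (np.diag(p) - np.outer(p, p)) / Ns
    return a @ cov @ a

def delta_R(A, B, C, dA, dB, dC):
    R = A * B / C**2
    return abs(R) * np.sqrt((dA / A)**2 + (dB / B)**2 + 4 * (dC / C)**2)

p = np.array([0.5, 0.3, 0.2])                       # outcome probabilities
print(var_linear(np.array([1.0, -2.0, 1.0]), p, Ns=1e5))
print(delta_R(0.20, 0.18, 0.25, 0.003, 0.003, 0.004))
```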
On-off detector test: We first show the results for the Fock states |n⟩ under a 50% loss channel, which makes all Wigner functions positive definite. In Fig. S3, we demonstrate the detection of nonclassicality using N = 2 on-off detectors with a data number N_s ~ 10^5. The red curve represents the ratio R as a function of the displacement d, with the grey shades representing the error size ΔR for each d. We see that there always exists a range of d for each state in which the value R is well below 1 with the error ΔR included, that is, 1 > R + ΔR. For instance, we have (1 - R)/ΔR = 2.21 at d = 1.87 for the noisy Fock state |3⟩, so the signal beats the error by over 2 standard deviations. Interestingly, the signal-to-noise ratio increases with n, as seen from the figures. Adopting the same strategy, we analyze the case of a lower detector efficiency η = 0.75 in Fig. S4. We again confirm the successful detection of nonclassicality, reliably against noise, with a lower detector efficiency for a broad range of displacements.
Homodyne test: We now demonstrate the detection of nonclassicality using a coarse-grained marginal distribution with a binning size σ = 0.1 and a data number N_s ~ 10^6. In Fig. S5, we show the results for the same noisy Fock states as in Fig. S3. Black circles represent the ratio R at each value of m with m_1 = 10, which corresponds to the choice of three quadratures q_1 = 2m_1σ, q_2 = (m_1 + m)σ, and q_3 = 2mσ for the test. The grey shades represent the error size ΔR due to finite data N_s ~ 10^6. We again see that there always exists a range of m for each state achieving 1 > R + ΔR. For instance, we have (1 - R)/ΔR = 21.34 at m = 4 for the noisy Fock state |3⟩, so the signal beats the error by over 21 standard deviations. For the other Fock states |4⟩, |5⟩, and |6⟩, the signal-to-noise ratio is (1 - R)/ΔR = 17.73, 11.84, and 6.48 at m = 5, 6, and 7, respectively.
As seen from the above examples, the coarse-grained homodyne test is very robust against experimental imperfections. We further demonstrate its practical power with other examples. [Figure caption: m (n) designates a bin number corresponding to quadrature q = mσ (q = nσ). Grey shades represent the size of the error due to finite data ~10^6. R < 1 confirms nonclassicality. The results clearly manifest nonclassicality beating the error in the orange region, where the error bar is negligible and thus not appreciable in the plot.] Fig. S6(a) confirms the case of a noisy single-photon state f|1⟩⟨1| + (1 - f)|0⟩⟨0| at f = 0.3, and Fig. S6(b) a single-photon state under a 50% loss channel mixed with a thermal photon number n̄ = 0.1. For comparison, some protocols have been proposed to distill squeezing, by which one may confirm nonclassicality on the postselected (distilled) copies. For example, R. Filip remarkably proposed a distillation of squeezing for a noisy single-photon state [8]. Its single-copy distillation (Fig. 1 of [8]) entails a low probability of success, e.g. < 10^{-2} for the case of f = 0.3, with the distilled squeezing too low to detect. In contrast, our method does not require postselection of data and manifests nonclassicality robustly against practical errors. Next we consider another practical noise, i.e., phase diffusion of an optical signal. Phase diffusion can be described as a random phase shift with a Gaussian distribution of standard deviation Δ. It affects the marginal distribution of a quantum state ρ as a Gaussian-weighted average over M_ρ(x_θ), where M_ρ(x_θ) represents the marginal distribution of ρ for the quadrature x̂_θ = (â e^{-iθ} + â† e^{iθ})/2.
As shown in Fig. S7(a), phase diffusion destroys squeezing at Δ > 1.074, Δ > 0.901 and Δ > 0.696 for squeezed states with squeezing parameter r = 0.1, r = 0.2 and r = 0.4, respectively. The squeezing can be distilled after phase diffusion, as shown in Fig. S7(b), e.g. by applying the protocol in [9], which generally requires multiple copies of the same nonclassical state and postselection. Our homodyne test directly confirms nonclassicality for these phase-diffused squeezed states. Fig. S8 shows a strong result from our test for the case of Δ = 1.2, at which all considered states completely lose squeezing, robust against coarse-graining of homodyne data and finite data acquisition.
Comparison with moment test: The above examples illustrate that our homodyne test verifies nonclassicality when the usual squeezing test by homodyne detection fails. We may further compare our test with moment-based homodyne tests. Suppose that one obtains a marginal distribution M(q) = ∫ dp W_ρ(q, p) from a homodyne measurement, where W_ρ(q, p) is the Wigner function. The distribution M(q) can be related to the marginal distribution M̃(q̃) = ∫ dp P_ρ(q̃, p) of the Sudarshan-Glauber P function P_ρ(q̃, p). That is, M(q) = √(2/π) ∫ dq̃ M̃(q̃) e^{-2(q-q̃)^2}. If the state is classical, the distribution M̃(q̃) must be positive definite. This can be checked in terms of moments ⟨q̃^k⟩ ≡ ∫ dq̃ M̃(q̃) q̃^k: an n × n matrix Q, whose elements are given by Q_ij = ⟨q̃^{i+j}⟩, must be positive definite. For example, the violation of the condition Q ≥ 0 at n = 2 represents the usual quadrature squeezing.
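A sketch of the moment test from samples of M(q). The moment deconvolution uses the Gaussian kernel variance 1/4 implied by e^{-2(q-q̃)^2}, and the zero-based Hankel indexing of Q is our choice; samples narrower than the vacuum width 1/2 mimic a squeezed state:

```python
# Moment test from homodyne samples. Deconvolving the Gaussian kernel gives
# <q~> = <q>, <q~^2> = <q^2> - 1/4, <q~^3> = <q^3> - (3/4)<q>,
# <q~^4> = <q^4> - (3/2)<q^2> + 3/16.
import numpy as np

q = np.random.default_rng(0).normal(0.0, 0.4, size=10**6)  # stand-in samples
m = [np.mean(q**k) for k in range(5)]
mt = [1.0, m[1], m[2] - 0.25, m[3] - 0.75 * m[1],
      m[4] - 1.5 * m[2] + 3.0 / 16]
Q = np.array([[mt[i + j] for j in range(3)] for i in range(3)])
print(np.linalg.eigvalsh(Q)[0])   # negative => nonclassical (here: squeezing)
```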
We have performed this analysis on the phase-diffused squeezed states using the Q-matrix test. The test fails to detect nonclassicality at n = 2 (squeezing) and n = 3 for a severely decohered state, so we move to the level n = 4. Fig. S9 shows the results for the squeezed states with r = 0.4 and r = 0.5 against the phase diffusion Δ. For a fair comparison, we include the effects of finite data ~10^6 and coarse-graining σ = 0.1 in the error analysis. To confirm nonclassicality, the least eigenvalue (red solid) of the matrix Q must be negative. Considering the error level (black dashed), we see that the test fails at large phase diffusion, Δ > 1.76 (r = 0.4 case) and Δ > 1 (r = 0.5 case). One may try to enhance the test by going up to higher levels of n, which, however, is not favorable in the presence of errors due to finite data and coarse-graining. In contrast, as we show in Fig. S10, our homodyne test clearly manifests nonclassicality even at the large phase diffusion Δ = 2 for the same squeezed states.
S9. Estimating the nonclassical depth
Our formulation also provides an estimate for another nonclassicality measure, i.e. nonclassical depth [10], as follows.
If the nonclassical depth of a quantum state ρ is τ, its s-parametrized quasiprobability function is a regular nonnegative distribution for all s ≤ 1 - 2τ. A violation of our classicality test constructed at the level s therefore implies τ > (1 - s)/2, providing a lower bound on the nonclassical depth.
"year": 2020,
"sha1": "d1a126331c97d53010e69f112812866040d5f71d",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.3.043116",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "d1a126331c97d53010e69f112812866040d5f71d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Do relevant markers of cancer stem cells CD133 and Nestin indicate a poor prognosis in glioma patients? A systematic review and meta-analysis
Background: CD133 and Nestin, as markers of cancer stem cells, have recently been reported frequently in the pathogenesis and development of human gliomas. However, the prognostic role of CD133 and Nestin in gliomas remains controversial. In this study, we aimed to evaluate the association between the expression of CD133 and Nestin and the outcome of glioma patients by conducting a systematic review and meta-analysis. Methods: We systematically performed electronic and manual searches of the PubMed and Embase databases (up to December 25, 2014) for titles and abstracts investigating the relationships between CD133 or Nestin expression and the outcome of glioma patients. A systematic review and meta-analysis was executed to generate pooled hazard ratios (HRs) with 95% confidence intervals (CIs) for overall survival (OS) and progression-free survival (PFS). Results: A total of 1,490 patients from 32 studies (13 articles) were included in the analysis; 19 studies investigated the correlation between CD133 expression and survival in gliomas and 13 studies investigated Nestin. Our results showed that high CD133 expression in patients with glioma was associated with poor prognosis in terms of OS (HR 1.69; 95% CI, 1.16-2.47; P = 0.006) and PFS (HR 1.64; 95% CI, 1.12-2.39; P = 0.010). In addition, high Nestin expression was associated with worse OS (HR 1.75; 95% CI, 1.19-2.58; P = 0.004) but had no significant association with PFS (HR 1.55; 95% CI, 0.96-2.51; P = 0.074). Even more important, the results of the subgroup meta-analyses showed that high CD133 expression was associated with worse prognosis in terms of OS and PFS in patients with WHO IV glioma but not WHO II-III. On the other hand, high Nestin expression was associated with worse prognosis in terms of OS and PFS in patients with WHO II-III glioma but not WHO IV. Conclusion: High CD133 expression tends to correlate with worse OS and PFS in glioma patients, especially WHO IV gliomas, and high Nestin expression tends to correlate with worse OS in glioma patients, especially WHO II-III, suggesting that both cancer stem cell markers may serve as potential pathological prognostic markers for glioma patients.
Introduction
Glioma is the most common primary brain tumor and among the most malignant. Although the diagnosis and treatment of gliomas have made great progress in recent years, the prognosis of patients with glioma remains poor [1]. There is an urgent need to find a reliable marker to predict the prognosis of glioma, thereby providing the basis for choosing a reasonable, individualized treatment plan [2].
Cancer stem cell theory holds that tumors arise from a special population of cells, called cancer stem cells, which share characteristics with embryonic stem cells such as self-renewal, unlimited proliferation, multi-directional differentiation and resistance to chemoradiotherapy [3,4]. Because of these characteristics, traditional treatments such as radiation and chemotherapy cannot effectively eliminate cancer stem cells; the remaining tumor stem cells continue to proliferate and differentiate, leading to tumor recurrence [4,5]. A variety of markers are used to isolate glioma stem cells; CD133 and Nestin are the two most commonly used and are widely expressed in various tumors, such as malignant glioma, liver cancer, ovarian cancer, colon cancer and lung cancer [6-11]. In recent years, a number of studies have analyzed the relationship between the cancer stem cell markers CD133 and Nestin and the prognosis of patients with glioma, but owing to differences in research methods, sample sizes and study populations, the findings of any single study are difficult to extend to the entire population and the conclusions obtained have been inconsistent. This study used meta-analysis to systematically evaluate the literature on the relationship between the expression of the glioma stem cell markers Nestin and CD133 and the prognosis of patients with glioma.
Search strategy and study selection
A systematic literature search of the PubMed and Embase databases was conducted for studies evaluating the effect of the cancer stem cell markers CD133 and Nestin on glioma patient survival. Our search strategy included the terms ("Glioma" or "Glioblastoma") and ("CD133 antigen" or "AC133 antigen" or "prominin-1" or "PROML1" or "Nestin") and ("Survival" or "Mortality" or "Prognosis"). The literature search was conducted on 25 December 2014 and updated on 5 January 2015. Furthermore, a manual search of the reference lists of relevant original articles and review articles was performed for additional relevant publications.
Two reviewers (Xia L and Wu B) independently inspected all candidate articles. Discrepancies were resolved by discussion. Studies meeting all of the following inclusion criteria were included in the review: (i) the diagnosis of glioma was made based on pathological examination; (ii) the association of CD133 or Nestin expression with OS or PFS in gliomas was reported; (iii) the study provided a direct estimate of the hazard ratio (HR) with its 95% confidence interval (CI), or the data could be calculated from p values and other reported data; (iv) if the same glioma patient population overlapped among publications, we included the study with the largest sample size.
Definitions and data extraction
OS (overall survival) was defined as the time interval between medical treatment and the death of the patient or the last follow-up. PFS (progression-free survival) was calculated as the time interval between the date of treatment and the detection of tumor recurrence or death from any cause. Both reviewers independently carried out data extraction from the included studies, and any discrepancies were resolved by discussion between the two. The following data were extracted from all included studies: first author's name, year of publication, country, sample size, patient age, WHO grade, detection method for CD133 or Nestin expression, cut-off level, follow-up period, survival analysis and prognostic outcomes (PFS and OS).
Quality assessment of primary studies
Quality assessment of the included primary studies was independently executed by the two reviewers (Xia L and Wu B) using the Newcastle-Ottawa Quality Assessment Scale (NOS). NOS scores of ≥6 defined high-quality studies. Any disagreement was settled by joint discussion.
Statistical analysis
All analyses were performed using Stata 12.0 statistical software (Stata Corporation, College Station, TX, USA). Hazard ratios (HRs) and 95% confidence intervals (CIs) were obtained directly from each study or estimated from Kaplan-Meier survival curves according to the methods of Parmar et al. An HR less than one indicated a better prognosis for glioma patients with high marker expression, whereas an HR more than one indicated a poor prognosis. If several HR estimates were presented in the same study, the most powerful one was chosen (multivariate analysis was considered superior to univariate analysis, which in turn was weighted over unadjusted Kaplan-Meier analysis).
The heterogeneity of the included studies was assessed by Cochran's Q statistic for each meta-analysis. We carried out both fixed-effects (Mantel-Haenszel method) and random-effects (DerSimonian-Laird method) models to produce the pooled HRs. Owing to a priori assumptions about the likelihood of heterogeneity across primary studies, the random-effects model was chosen. In addition, subgroup analyses were performed to investigate potential causes of heterogeneity according to study country, sample size, patient age, follow-up period, detection method for CD133 or Nestin expression, cut-off level and WHO grade.
Publication bias was first investigated with funnel plots and then assessed for each of the pooled study groups using Begg's test. All p values were two-sided and the significance level was set at 5%.
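For concreteness, a minimal sketch of the pooling procedure with illustrative numbers (not the study data): an inverse-variance fixed-effect estimate, Cochran's Q, I², and the DerSimonian-Laird random-effects pooled HR with its 95% CI:

```python
# Fixed-effect and DerSimonian-Laird random-effects pooling of log hazard ratios.
import numpy as np

logHR = np.log(np.array([1.9, 1.2, 2.6, 0.9, 1.8]))  # per-study log HRs
se    = np.array([0.30, 0.25, 0.40, 0.35, 0.28])     # standard errors of logHR

w = 1 / se**2
fixed = np.sum(w * logHR) / np.sum(w)                # inverse-variance estimate
Q = np.sum(w * (logHR - fixed)**2)                   # Cochran's Q
df = len(logHR) - 1
I2 = max(0.0, (Q - df) / Q) * 100                    # I^2 in percent
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
wr = 1 / (se**2 + tau2)                              # random-effects weights
pooled = np.sum(wr * logHR) / np.sum(wr)
se_pooled = np.sqrt(1 / np.sum(wr))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"HR {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f}), "
      f"Q={Q:.2f}, I2={I2:.0f}%")
```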
Results
The study inclusion procedure and study characteristics

The selection procedure for eligible studies is presented in Fig. 1. In brief, a total of 1,153 studies were identified from our initial electronic search. Of these, we eliminated 306 studies owing to overlapping data sets, and the remaining 847 abstracts were considered relevant and reviewed in detail as full texts. By the end of the review, 13 articles on glioma [12-24] (9 on CD133 and 6 on Nestin; 1,490 patients) met our inclusion criteria for meta-analysis and provided sufficient data for extraction.
The baseline characteristics of the enrolled articles are summarized in Tables 1 and 2. Thirty-two studies were included, of which 19 investigated the association between CD133 expression and the outcome of glioma patients and 13 investigated Nestin. All studies were published between 2008 and 2014. The majority of the studies were conducted in Europe (n = 23); the others were conducted in Asia (n = 8) and the USA (n = 1). The total sample size across all studies was 1,490, individual sample sizes ranged from 24 to 379 patients, and the median age ranged from 37.5 to 60.1 years. Thirteen studies evaluated grade II-III gliomas and 17 examined grade IV gliomas. HRs and 95% CIs for OS or PFS could be extracted directly in 27 studies and were derived by Kaplan-Meier analysis for the 5 remaining studies. The most frequently used cut-off values for high versus low (or present versus absent) expression of CD133 or Nestin were the median (n = 15) and values calculated using several semiquantitative methods.
CD133 expression and OS in gliomas
A total of 11 studies addressed the association between CD133 expression and OS of glioma patients, among which statistically significant heterogeneity was observed (I² = 77.8%). Therefore, a random-effects model was used to calculate the pooled HR and 95% CI. The combined analysis showed that, compared with patients with low CD133 expression, patients with high CD133 expression had a significantly poorer OS (HR = 1.69, 95% CI: 1.16 to 2.47; P = 0.006) (Fig. 2).
To address the influence of heterogeneity, further subgroup analyses were conducted, stratified by study origin, sample size, follow-up period, patient age, test method, cut-off level and WHO grade. The association remained significant in most subgroups; in particular, high CD133 expression was associated with worse OS in WHO IV but not WHO II-III gliomas (Table 3). Regarding publication bias, we found no funnel plot asymmetry. Furthermore, Begg's test was applied to provide statistical evidence for funnel plot symmetry; as expected, the P value of Begg's test was 0.350 (Fig. 3). Hence, there was no evidence of significant publication bias in this meta-analysis.
CD133 expression and PFS in gliomas
Eight studies provided information concerning the association between CD133 expression and PFS of glioma patients. Similarly, a random-effects model was used to calculate the pooled HR and 95% CI, since significant heterogeneity was observed across the pooled studies (I² = 80.5%). The combined analysis exhibited a significant association between increased expression of CD133 and poor PFS (HR 1.73; 95% CI, 1.06-2.83; p = 0.027) (Fig. 4).
Further subgroup analyses were conducted, stratified by study origin, sample size, follow-up period, patient age, test method, cut-off level and WHO grade. The results showed that increased expression of CD133 predicted significantly worse PFS in the following subgroups: sample size ≤ 50, follow-up period > 12 months or not reported, median/mean age reported, IHC test, and cut-off level other than the median. We did not discover any significant association in the other subgroups (Table 3). Similarly, the subtotal analysis stratified by WHO grade showed that high CD133 expression was associated with worse PFS in WHO IV but not WHO II-III gliomas. At the same time, no funnel plot asymmetry was found across the studies, and Begg's test did not show any evidence of publication bias (P = 0.902; Fig. 3).
Nestin expression and OS in gliomas
Eight eligible studies provided estimates of the HR and 95% CI for the correlation between Nestin expression and OS of glioma patients, among which statistically significant heterogeneity was observed (I² = 75.8%). Therefore, a random-effects model was used to calculate the pooled HR and 95% CI, and the combined analysis showed that, compared with patients with low Nestin expression, patients with high Nestin expression had a significantly poorer OS (HR = 1.75, 95% CI: 1.19 to 2.58; P = 0.004) (Fig. 5).
Subgroup analyses were conducted, stratified by study origin, sample size, follow-up period, patient age, test method, cut-off level and WHO grade. The results showed that increased expression of Nestin predicted significantly worse OS in the following subgroups: sample size ≤ 50, follow-up period > 12 months or not reported, median/mean age reported, IHC test, and cut-off level other than the median. We did not discover any significant association in the other subgroups (Table 4). Even more important, in the subtotal analysis stratified by WHO grade, four studies of WHO II-III glioma exhibited a significant association between increased expression of Nestin and poor OS (HR 3.11; 95% CI, 1.45-6.67; p = 0.004; I² = 57.2%). However, we did not discover any significant association in the subgroup of WHO IV gliomas (HR 1.09; 95% CI, 0.83-1.44; p = 0.518; I² = 0.0%).
Publication bias was assessed and no funnel plot asymmetry was found. We then applied Begg's test to provide statistical evidence for funnel plot symmetry; as expected, the P value of Begg's test was 0.266 (Fig. 3). Hence, there was no evidence of significant publication bias in this meta-analysis. [Fig. 4: A forest plot of HRs and 95% CIs for the association between CD133 expression and PFS of gliomas.]
Nestin expression and PFS in gliomas
The combined analysis of the five studies did not exhibit a significant association between increased expression of Nestin and poor PFS (HR 1.55; 95 % CI, 0.96-2.51, p = 0.074) (Fig. 6).
Further subgroup analyses were conducted, stratified by study origin, sample size, follow-up period, patient age, test method, cut-off level and WHO grade. The results showed that increased expression of Nestin predicted significantly worse PFS in the following subgroups: sample size ≤ 50, follow-up period > 12 months or not reported, median/mean age reported, IHC test, and cut-off level other than the median. We did not discover any significant association in the other subgroups (Table 4). Similarly, in the subtotal analysis stratified by WHO grade, two studies of WHO IV glioma exhibited a significant association between increased expression of Nestin and poor PFS (HR 2.34; 95% CI, 1.68-3.27; p = 0.000; I² = 0.0%).
At the same time, no funnel plot asymmetry was found across the studies, and Begg's test did not show any evidence of publication bias (P = 0.221; Fig. 3).
Discussion
Cancer stem cells (CSCs), a small population of cells with stem cell characteristics, are present in tumor tissue. Owing to their capacity for self-renewal and multi-directional differentiation, they are the root from which tumor cells of differing degrees of differentiation form [1,3,25]. CSCs were first identified in hematological malignancies [26]. Recently, with the development of flow cytometry and in vitro tumor formation assays, cancer stem cells have been isolated and identified in a variety of solid tumors [27,28]. The cancer stem cell theory has given us a new understanding of the biological behavior of tumors: a tumor is not only a genetic disease, but also a disease of stem cells. Stem cells become cancer stem cells after gene mutations, and these are the root of tumor recurrence and metastasis [29].
Glioma stem cells can be sorted using their markers in order to deliver targeted chemotherapy against them, which can effectively improve the specificity of chemotherapy and reduce side effects on normal tissues and cells, thereby providing a new avenue for cancer treatment [30]. The most extensively studied markers of glioma stem cells include CD133, Nestin, HMGA1 and A2B5 [31-34]. Continuous in-depth study of these markers has produced some breakthroughs, but much controversy remains.
At present, a number of studies have shown that the glioma stem cell markers CD133 and Nestin are closely related to the prognosis of patients with glioma, but some individual studies have found no clear relationship. We used meta-analysis to systematically evaluate the literature on the relationship between the glioma stem cell markers Nestin and CD133 and the prognosis of patients with glioma, in order to accurately and objectively evaluate the prognostic value of CD133 and Nestin in glioma.
The current meta-analysis is the first to systematically estimate the association between cancer stem cell markers and glioma survival. In this study, the results showed that the CSC marker CD133 was associated with worse OS and PFS in glioma patients, and Nestin was associated with worse OS but not PFS. In particular, the subgroup analyses showed that overexpression of CD133 had a more significant predictive value for glioma patients with WHO grade IV disease, whereas Nestin was more predictive for WHO grade II-III.
Limitations of this study include the relatively small number of cases in some analyses, especially within subgroups. Above all, we found that high CD133 expression may be an independent risk factor for the prognosis of glioma patients, especially those with WHO IV gliomas, and high Nestin expression may be an independent risk factor for the prognosis of glioma patients with WHO grade II-III disease. Based on the current findings, assessing CD133 and Nestin expression could provide better prognostic information for patients with glioma and serve as a basis for novel therapeutic targeting. Further large-scale cohort studies are needed to validate our results.
"year": 2015,
"sha1": "6ceb6f782c1bde290e97fd77e33ee73ecae70950",
"oa_license": "CCBY",
"oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/s13046-015-0163-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ceb6f782c1bde290e97fd77e33ee73ecae70950",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Bi-allelic loss of human CTNNA2, encoding αN-catenin, leads to ARP2/3 over-activity and disordered cortical neuronal migration
Neuronal migration defects, including pachygyria, are among the most severe developmental brain defects in humans. Here we identify bi-allelic truncating mutations in CTNNA2, encoding αN-catenin, in patients with a distinct recessive form of pachygyria. CTNNA2 was expressed in human cerebral cortex, and its loss in neurons led to defects in neurite stability and migration. The αN-catenin paralog, αE-catenin, acts as a switch regulating the balance between α-catenin and Arp2/3 actin filament activities [1]. Loss of αN-catenin did not affect β-catenin signaling, but recombinant αN-catenin interacted with purified actin and repressed ARP2/3 actin-branching activity. The actin-binding domain (ABD) of αN-catenin or ARP2/3 inhibitors rescued the neuronal phenotype associated with CTNNA2 loss, suggesting ARP2/3 de-repression as a potential disease mechanism. Our findings identify CTNNA2 as the first catenin family member with bi-allelic mutations in human, causing a new pachygyria syndrome linked to actin regulation, and uncover a key factor involved in ARP2/3 repression in neurons.
Keywords
alpha-catenin; actin remodeling; Arp2/3; neuronal migration; centrosome; pachygyria; polarity; primary neurite; Wnt signaling; beta-catenin

The Lissencephaly (LIS) spectrum is characterized by defects in the folding pattern of the cerebral cortex, resulting in a reduced number or complexity of the gyri that characterize the human brain [2]. LIS encompasses a continuum of malformations ranging from complete agyria to pachygyria, in which gyral formation is diminished but not absent. Patients present clinically with failed motor and cognitive development followed by intractable epilepsy. Most patients are severely intellectually impaired, and are unable to walk or care for themselves.
Projection neurons of the cerebral cortex are born in a zone adjacent to the ventricle, and then migrate to the future six-layered neocortex. Neuronal migration is achieved through a rearrangement of cytoskeletal components in response to extracellular cues, mediated by numerous intracellular signaling pathways 3 . Coordinated regulation of monomeric (G-actin) and polymerized actin microfilaments (F-actin) is critical for neuronal growth cone morphogenesis, axon pathfinding, and neurite extension during migration 4,5 . Positive regulators of F-actin assembly include families of nucleating proteins (Formins, Arp2/3) and their co-activators (Rho GTPases, WASP, WAVE), but negative regulators remain elusive, particularly in neurons 6 .
We recruited three families with seven affected individuals showing neurodevelopmental delay (Fig. 1a, Table 1). All were diagnosed before age two years and displayed acquired microcephaly, hypotonic cerebral palsy, inability to ambulate or speak, and intractable seizures (Table 1). Magnetic resonance imaging (MRI) demonstrated pachygyria with dramatic cortical gray matter thickening up to 3-4 cm and paucity of gyri, without an obvious posterior-anterior gradient or focal dysplasias. There was an absent anterior commissure, hypogenesis of the corpus callosum, and cerebellar hypoplasia (Fig. 1b). This phenotype is distinct from LIS1-pachygyria or DCX-pachygyria, which show posterior or anterior gradients, respectively. All subjects were enrolled in institutional review board (IRB)-approved protocols and provided consent for study. We performed whole exome sequencing [7] on at least one member of each family [8]. We prioritized rare (<0.2% allele frequency) damaging variants (GERP score >4 or SIFT <0.05), and focused on homozygous variants due to parental consanguinity. Parametric linkage analysis from genome-wide SNP arrays and homozygosity mapping [9,10] showed identical-by-descent haplotypes (Supplementary Fig. 1a). Aligning variants with the corresponding homozygous intervals identified putative nonsense variants in CTNNA2 in all three families (Family 1101, c.2664C>T p.Arg882*; Family 1263, c.2341C>T p.Arg781*; Family 4727, c.1480C>T p.Arg494*) (Fig. 1c, d, Supplementary Fig. 1b). Each of the three variants was observed only once, in the heterozygous state, in the public databases ExAC and gnomAD. Sanger sequencing confirmed segregation according to a strict recessive mode of inheritance, with full penetrance, in all genetically informative available family members, suggesting that bi-allelic CTNNA2 loss-of-function mutations underlie pachygyria in these patients.
CTNNA2 is the ancestral α-catenin gene and is conserved in all Metazoa, but is predominantly expressed in brain in mammals 11 . CTNNA1 is the most widely expressed, but is absent from populations of migrating neurons 12 , whereas CTNNA3 is expressed predominantly in myocardium. We confirmed CTNNA2 expression in human neural tissue ( Supplementary Fig. 2a), and found protein co-expression with migration markers Dcx and Tuj1 in murine embryonic day (e) 13.5 brain (Supplementary Fig. 2b). As reported in mouse, a rim of αN-catenin was expressed in the apically localized progenitors of the ventricular zone 12 . In 20-week gestation human fetal brain αN-catenin was mostly restricted to regions expressing DCX and TUJ1 in developing cortical plate and marginal zone ( Supplementary Fig. 2c).
There are two mouse lines harboring loss-of-function mutations of the ortholog to human CTNNA2 (Catna2). Cerebellar deficient folia (cdf) mice have a spontaneous C-terminal deletion [13][14][15] , and the conventional knockout removed the first exon 16 . These mutants share multiple phenotypes including impaired lamination of a subset of Purkinje and hippocampal neurons [13][14][15][16] , hippocampal dendritic spine morphogenesis 16,17 , axon projections, positioning of subsets of nuclei-specific neurons, and midline axonal crossing 18 . Of note, many of the phenotypes present in Catna2 mice are shared with CTNNA2 patients, including cerebellar hypoplasia and midline defects, however, neither line showed evidence of an overt neocortical phenotype 15 . This was not surprising given that mouse models for human cortical migration defects typically show no neocortical defects.
In order to investigate migration in a human model, we generated iPSC and neuronal derivatives from the affected and unaffected member of Family 1263 (1263A and Control, respectively), an individual with LIS due to Miller-Dieker syndrome (MDS, deletion of chromosome 17p11.3) as well as targeted the CTNNA2 gene in the H9 hESC line (herein referred to as CTNNA2 KO ) 19 (Supplementary Fig. 3). Western blot of the neuronal progenitor cells (NPCs) demonstrated absent αN-catenin protein in the patient and knockout compared to Control ( Supplementary Fig. 4).
To study migration, we adopted a neurosphere assay [20] and measured neuronal cell body distances from the sphere edge after 48 hrs. We first confirmed that cells exiting the neurosphere expressed postmitotic neuronal markers and were negative for other lineages (Supplementary Fig. 5a, b). GFAP-positive cells were only observed at the edge of the plated spheres in all of the lines tested (Supplementary Fig. 5a, b). Cell bodies of Control neurons showed a migration front at 514 μm (Fig. 2a). Consistent with prior reports [21], the distribution of distances of MDS-, CTNNA2 patient- and CTNNA2 KO-exited neurons was significantly reduced (Fig. 2a, Supplementary Fig. 6, Supplementary Fig. 7). We conclude that loss of CTNNA2 results in a neuronal migration defect in vitro.
Time-lapse phase microscopy was performed to study the mechanism of defective migration. Control cells displayed bipolar morphology, with leading neurite length on average 130 μm. CTNNA2-mutant cells showed shortened (ave. length 18 μm) disorganized leading processes, enlarged growth cones, proximal ectopic filopodia and lamellipodia, and failed bipolar morphology (Fig. 2b, Supplementary Movie 1-3). We confirmed that the neurite length and migration defect were due to absence of CTNNA2 using lentiviral transduction rescue with GFP-tagged CTNNA2. Transduced cells had uniform expression of a full-length αN-catenin-GFP at 129 kDa and a processed product at 102 kDa ( Supplementary Fig. 8a).
Previous studies have shown mouse αE-catenin, encoded by Catna1, is required for preimplantation epithelial integrity 22 , whereas conditional deletion in embryonic brain disrupts adherens junctions, leading to dysregulated proliferation 23 . To test whether loss of CTNNA2 affects neuroepithelium polarity, we generated neural rosettes from Control and 1263A iPSCs and used immunostaining to examine apical polarization. Despite the absence of αN-catenin at the apical surface of the rosette, the tight junction marker ZO-1 and the adherens junction markers N-Cadherin and β-Catenin were localized in a polarized fashion near cilia ( Supplementary Fig. 9). These results suggest loss of CTNNA2 does not adversely affect apical polarization of neuroepithelial cells in culture.
The presence of a putative β-catenin binding domain suggests αN-catenin loss may alter Wnt signaling 24 . We therefore compared gene expression profiles between MDS, 1263A and Control NPCs by RNA-sequencing. We found no consistent, differential expression changes, arguing against a major transcriptional effect ( Supplementary Fig. 10 a-c). Furthermore, established canonical Wnt target genes were not changed ( Supplementary Fig. 10d), suggesting loss of CTNNA2 does not measurably impact Wnt-mediated transcription.
We thus focused on potential αN-catenin regulation of the neuronal cytoskeleton. αN-catenin contains a putative F-actin binding domain (ABD) at the C-terminus. To assess the ability of αN-catenin to directly bind and bundle actin, we performed actin binding and bundling assays in vitro. Similar to the α-actinin control, full-length recombinant αN-catenin co-sedimented with F-actin (Fig. 3a, Supplementary Fig. 11). Full-length αN-catenin appeared to weakly bundle F-actin filaments (Fig. 3b), although more work would be necessary to assess direct bundling activity. There was no noticeable change in the G-actin/F-actin ratio in NPCs from control or affected individuals (Supplementary Fig. 12), arguing against an overall effect on actin stabilization. We next generated lentiviral constructs lacking or containing amino acids 671-905, corresponding to the αN-catenin ABD [25], and confirmed that the encoded protein was stable in NPCs (Supplementary Fig. 8b, c). CTNNA2 ΔABD could not rescue migration defects in the CTNNA2 KO neurons and had no effect on Control cells (Fig. 3c, Supplementary Fig. 6, 7). In contrast, expression of the CTNNA2 ABD alone rescued migration in CTNNA2 KO and enhanced migration in Control neurons (Fig. 3c, Supplementary Fig. 6, 7). We conclude that the ABD of CTNNA2 is necessary and sufficient for its effect on neuronal migration in CTNNA2-mutant cells.
The reduction in leading process stability we observed in CTNNA2-mutant neurons was similar to a phenotype recently described in mouse Arpc2-mutant radial glia [26], suggesting αN-catenin might regulate Arp2/3 activities. Recombinant αE-catenin controls actin-filament organization and represses Arp2/3-mediated actin polymerization in vitro [25]. Thus, we reasoned that the excessive filopodia formation, accompanied by repeated, rapid retraction of the leading process in CTNNA2-mutant neurons, might result from failure to suppress ARP2/3-mediated actin branching. To initiate actin branching, the ARP2/3 complex binds to the side of an existing F-actin filament to nucleate a new filament at a distinctive 70° angle; consequently, ARP2 and ARP3 are incorporated into the microfilament structure [27]. To test for increased F-actin branching, we assessed the amount of ARP3 protein associated with F-actin filaments. Migrating neurons from the MDS patient showed ARP3 association with actin equal to or less than Control, whereas neurons from the 1263A and CTNNA2 KO lines showed almost a 50% increase in association (Fig. 4a, c, Supplementary Fig. 13a, b). We conclude that CTNNA2-deficient cells show excessive association between ARP2/3 and actin.
ARP2/3 initiates actin branching in the presence of the Wiskott-Aldrich syndrome family protein (WASP) VCA domain by the nucleation of F-actin filaments. We thus tested whether αN-catenin was sufficient to inhibit the effect of ARP2/3 + VCA on actin polymerization.
Recombinant VCA domain of human WASP (400 nM) showed minimal actin polymerizing activity. This was more than doubled upon addition of 10 nM ARP2/3 complex. Increasing concentrations of recombinant αN-catenin resulted in a dosage-dependent inhibition on the ARP2/3 + VCA effect on actin, reduced nearly back to baseline (Fig. 4b). We conclude that αN-catenin is sufficient to suppress the effect of ARP2/3 + VCA on actin polymerization in vitro.
Since αN-catenin repressed ARP2/3 activity in vitro, we next tested whether this activity was mediated by the ABD, using CTNNA2 KO neurons. We ectopically expressed the αN-catenin ABD and assessed the amount of ARP3 associated with actin by Western blot. In CTNNA2 KO neurons we observed a near doubling of ARP3 associated with F-actin, whereas expression of the αN-catenin ABD potently reduced the amount of ARP3 bound to actin (Fig. 4c, Supplementary Fig. 13b), suggesting the ABD of αN-catenin is sufficient to regulate the ARP2/3-actin interaction. We also tested the ability of the recombinant αN-catenin ABD to repress ARP2/3-mediated actin polymerization in vitro. The ABD of αN-catenin was sufficient to repress ARP2/3-mediated actin polymerization in a dosage-dependent manner (Supplementary Fig. 13c), albeit to a lesser extent than full-length αN-catenin.
If αN-catenin mediates its effects on neuronal morphology through suppression of ARP2/3, then inhibition of ARP2/3 should at least partially compensate for the loss of αN-catenin in neurons. To test this, we used two different cell-permeable ARP2/3 inhibitors, CK-666 or CK-869 28 , which inhibit ARP2/3 at different sites. After generating a dose-response curve, we analyzed neurite length in Control, MDS, patient, and CTNNA2 KO migrating neurons after 24 hrs. A majority of Control neurons showed long, extended bipolar neurites with ave. length > 100 μm. Both inhibitors showed a negative effect on neurite length in Control neurons, but CTNNA2-mutant cells increased neurite length nearly 10-fold, restoring the distribution to near Control levels (Fig. 4d, Supplementary Fig. 7, 14). MDS neurons showed neurites on average 18 μm in length, without improvement upon ARP2/3 inhibitor treatment. We conclude that ARP2/3 inhibitors can largely rescue the neurite length defect associated with loss of CTNNA2.
In summary, we have identified a new neuronal migration disorder due to bi-allelic truncating mutations in CTNNA2. The involvement of an actin regulator in pachygyria was surprising, given that previous genetic studies have focused on microtubules [29]. Microtubules scaffold the cytoskeleton for the repeated cycles of migration [30,31]. The main effect of actin appears to be in sensing and responding to extracellular guidance cues through changes in cell morphology, as well as guiding microtubules within the primary neurite [32]. This is mediated at least in part by calcium influx and enhanced neuronal motility through LIS1-dependent regulation of Rho GTPases [33], but also through transmembrane proteins, which can alter cell morphology through effects on actin [34].
Actin structure is critically regulated by Arp2/3, which controls the decision to initiate branching. We show that Arp2/3 over-activity, resulting from loss of αN-catenin, leads to excessive branching, which impairs neurite growth and stability, possibly by controlling microtubule advance into the growth cone [32,35]. αN-catenin suppresses actin branching; in vivo this likely occurs through binding F-actin. This activity is mediated by the ABD, which we show competes with ARP2/3 for nucleation sites on F-actin. α-actinin, fascin and αE-catenin can sort actin-associated proteins by altering the conformation of actin [1,25,36]. It will be interesting to determine whether αN-catenin also shares this property, and whether it extends to other ABD-containing proteins. Thus, as a more general principle, actin-binding proteins may serve unrecognized roles in actin regulation by controlling F-actin polymerization as well as dictating actin conformation locally within a cell.
Online Methods
Please see the accompanying Life Sciences Reporting Summary for further detailed explanation of experimental design and analysis used in this study.
Patient Recruitment
Patients were enrolled and sampled according to standard local practice under approved human subjects protocols at the respective institutions. Each subject was evaluated, and had MRI images reviewed, by one of the authors. Excluded were cases with overlapping conditions such as asymmetrical brain dysplasia, primary white matter disease, or evidence of altered metabolism such as elevations in lactate or abnormal peaks on standard clinical serum tandem mass spectroscopy.
Exome Sequencing
For each sample, DNA was extracted from peripheral blood leukocytes by salt extraction. Exon capture was performed with the Agilent SureSelect Human All Exome 50 Mb Kit with paired-end sequencing on an Illumina HiSeq2000 instrument resulting in >94% recovery at > 10x coverage.
Sequences were aligned to the human genome (hg19) with Burrows-Wheeler Aligner (BWA) and variants delineated using the Genome Analysis Toolkit (GATK) software and SAMTools algorithms for both SNPs and insertion/deletion polymorphisms 8. Variants were filtered for the following criteria: 1] occurring in coding regions and/or splice sites, 2] non-synonymous, 3] found at less than 0.1% frequency in control populations (our in-house exome data set of 12,000 individuals, dbSNP and the Exome Variant Server), 4] homozygous in consanguineous families, 5] within linkage intervals or blocks of homozygosity. Variants were ranked by the type of mutation (nonsense/splice/indel > missense), amino acid conservation across species, and damage prediction programs (PolyPhen and Grantham score).
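The filtering cascade above is straightforward to express in code. The sketch below is a minimal, hypothetical reimplementation of criteria 1-5 and the severity ranking; the field names, and the assumption that variants arrive pre-annotated, are ours rather than part of the authors' pipeline.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    effect: str             # e.g. "nonsense", "splice", "indel", "missense", "synonymous"
    in_coding_or_splice: bool
    control_freq: float     # highest allele frequency seen across control datasets
    is_homozygous: bool
    in_linkage_interval: bool

def passes_filter(v: Variant, consanguineous: bool = True) -> bool:
    """Criteria 1-5: coding/splice, non-synonymous, rare (< 0.1%),
    homozygous in consanguineous families, within linkage/homozygosity blocks."""
    return (v.in_coding_or_splice
            and v.effect != "synonymous"
            and v.control_freq < 0.001
            and (v.is_homozygous or not consanguineous)
            and v.in_linkage_interval)

# Rank surviving variants: truncating changes ahead of missense.
SEVERITY = {"nonsense": 0, "splice": 0, "indel": 0, "missense": 1}

def rank(variants):
    return sorted((v for v in variants if passes_filter(v)),
                  key=lambda v: SEVERITY.get(v.effect, 2))
```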
Linkage Analysis
All informative members of Family-1101 were genotyped with the Infinium iSelect24 mapping panel (Center for Inherited Disease Research) and analyzed with easyLINKAGE-Plus software 37 . Parameters were autosomal recessive with full penetrance and disease allele frequency of 0.001. Genomic regions with LOD scores under -2 were excluded as loci, over 2 were considered as candidate loci, and over 3.3 as statistical evidence for genome-wide significance. Linkage simulations were performed with Allegro 1.2c under the same parameters, with 5,000 markers at average 0.64 cM intervals, codominant allele frequencies, and parametric calculations 37 .
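The LOD-score thresholds quoted above amount to a simple decision rule. A hedged one-function sketch (this is our illustration, not output of the easyLINKAGE software itself):

```python
def classify_locus(lod: float) -> str:
    """Thresholds as stated in the text: > 3.3 genome-wide significant,
    > 2 candidate locus, < -2 excluded; anything else is uninformative."""
    if lod > 3.3:
        return "genome-wide significant"
    if lod > 2:
        return "candidate locus"
    if lod < -2:
        return "excluded"
    return "uninformative"
```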
Tissue Culture
Fibroblasts were generated from unaffected and affected dermal biopsy explants and cultured in MEM (Gibco), supplemented with 20% FBS (Gemini). iPSCs, neural progenitor cells and neurons were obtained as previously described 19,38. MDS fibroblasts were obtained from the Coriell Biorepository (GM09209) and similarly reprogrammed.
Bright field images were taken on Axiovert.A1, or AxioObserver inverted microscopes (Zeiss), in addition to an EVOS microscope (Life Technologies) and processed with Photoshop CS5 (Adobe Systems). Time-lapse movies were acquired on a Zeiss Axio Observer and compiled with Zeiss Zen software. Fluorescent confocal images of neural rosettes were taken on an LSM 880 (Zeiss) and processed with FIJI/ImageJ software. Images are maximum z-projections of the entire depth of signal for the cell of interest.
cDNA synthesis and RT-PCR
cDNA synthesis of 1 μg patient RNA was performed with Superscript III First-Strand RT-PCR (Life Technologies). The reaction products were then used for quantitative real-time PCR (qRT-PCR) or cloning. qRT-PCR was performed in triplicate on 10 ng human cDNA for CTNNA2, AXIN2, ID2, LEF1, and GAPDH (see Supplementary Table 1 for primer sequences). CT values were normalized to GAPDH as a loading control and fold change calculated relative to the reference (control) condition.
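The normalization described here is consistent with the standard 2^-ΔΔCt method; a minimal sketch, assuming that is indeed the calculation used (the text does not spell it out):

```python
def fold_change(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """2^-ddCt: normalize the target gene's CT to GAPDH within each sample,
    then express the treated sample relative to the reference sample."""
    dd_ct = (ct_target - ct_gapdh) - (ct_target_ref - ct_gapdh_ref)
    return 2.0 ** (-dd_ct)

# Example with made-up CT values: fold_change(24.1, 18.0, 26.5, 18.2) ≈ 4.6
```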
Neurosphere Assays
Neurosphere assay methods were modified from previously described protocols 42 . Three different stem cell lines (clones) per patient or condition were used as biological replicates for each experiment. The iPSC-derived NPCs were dissociated and incubated, shaking at 95 RPM, in NPC media. The following day, the media was changed to DMEM:F12 with 1x N2, 1x B27, shaking at 95 RPM for 12 hrs, the resultant neurospheres plated on PLO/laminin plates, and imaged at 48 hrs to record distance each neuronal cell body traveled from the edge of the neurosphere using FIJI/ImageJ (NIH) per clone. Data from each clone was collected and analyzed together with data from the other 2 clones per patient or condition to facilitate statistical tests.
Neurite Quantification
Three different stem cell lines (clones) per patient or condition were used as biological replicates for each experiment. To assess leading process length, stem cell derived migrating neurons were imaged. The length of the primary neurite from the edge of the cell body to the tip of the growth cone was measured in microns. For ARP2/3 inhibition, 0.2 μM CK-666, 0.2 μM CK-869, or an equivalent dilution of DMSO was added to the neurons 6 hrs after plating 28 . Neurons were analyzed after 24 hr. Data from each clone was collected and analyzed together with data from the other 2 clones per patient or condition to facilitate statistical tests.
RNA sequencing
Total RNA was extracted from two NPC lines per patient with TRIzol Reagent (Gibco). Full-length mRNA was captured with the TruSeq Stranded mRNA kit (Illumina) using the standard protocol and submitted for paired-end 100-nucleotide sequencing on a MiSeq (Illumina). Roughly 30 million reads per sample were aligned to the 1000 Genomes Project's version of GRCh37 using standard TopHat2 (http://ccb.jhu.edu/software/tophat/) v2.0.11 with paired-end read options and allowing for intron-spanning reads as defined by transcripts in Illumina iGenomes NCBI build 37.2. Pairwise Euclidean distances between all samples were calculated from the filtered (median FPKM > 1) and log2-transformed gene expression values, and Pearson's correlation was determined. Differential expression at the gene level was tested with cuffdiff 59 v2.1.1 using default options. Genes were reported as significantly differentially expressed between a pair of conditions when they met the cuffdiff threshold of a 0.05 false-discovery-rate-corrected p-value.
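The sample-to-sample comparison step reduces to a few matrix operations. A minimal sketch, assuming a log2 pseudocount of 1 (the pseudocount is our assumption; the text only specifies the median-FPKM filter and log2 transform):

```python
import numpy as np

def sample_relationships(fpkm: np.ndarray):
    """fpkm: genes x samples matrix of FPKM values. Keep genes with
    median FPKM > 1, log2-transform, then compute pairwise Euclidean
    distances and Pearson correlations between samples."""
    expr = np.log2(fpkm[np.median(fpkm, axis=1) > 1] + 1)
    n = expr.shape[1]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dist[i, j] = np.linalg.norm(expr[:, i] - expr[:, j])
    corr = np.corrcoef(expr.T)   # Pearson correlation matrix, samples x samples
    return dist, corr
```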
Histology
Animal use followed NIH guidelines and was approved by IACUC at UCSD and Rockefeller University. Wild type mice (C57BL/6N) were obtained from Jax and intercrossed to produce embryonic day (e) 13.5 embryos. The embryos were fixed in 4% PFA, cryoprotected then sectioned. Human fetal brain was obtained from the UCSD autopsy service, fixed in 4% PFA and embedded in paraffin. Microtome sections were deparaffinized, blocked with 4% donkey serum, incubated in primary antibody overnight, washed, incubated in secondary antibody for 1 hr, post-fixed in 4% PFA, and counterstained with DAPI or Nissl for imaging.
Actin Assays
Actin binding and bundling assays were performed according to the manufacturer's recommendations (Cytoskeleton, Inc., BK001). Actin polymerization assays were performed with pyrene actin (Cytoskeleton, Inc., AP05), α-actinin, the GST-tagged VCA domain of human WASP protein (#VCG03), and Arp2/3 protein complex from porcine brain (Cytoskeleton, Inc., RP01P) according to the manufacturer's recommendations, with the addition of recombinant αN-catenin. G:F actin ratios were determined in triplicate from patient-derived NPCs using the G-Actin/F-Actin In Vivo Assay Biochem Kit (Cytoskeleton, Inc., BK037). CK-666 and CK-869 block an activating conformational change by binding to different sites 28. CK-666 stabilizes the inactive state of the complex, blocking movement of the Arp2 and Arp3 subunits into the activated filament-like (short pitch) conformation, while CK-869 binds to a serendipitous pocket on Arp3 and allosterically destabilizes the short pitch Arp3-Arp2 interface.
Statistics
Neuronal migration and neurite length were graphed in MS Excel as box plots with quartile 1, median, and quartile 3 displayed. Whiskers represent the maximum and minimum observed values for each comparison group across all replicates. Western blot densitometry and fold-change gene expression were plotted as bar graphs with the average of at least 3 replicates displayed, or the values displayed on the figure. Error bars represent the standard error of the mean. An unpaired, two-tailed Student's t test was used to calculate significance (* P < 0.05, ** P < 0.001, *** P < 0.0001, n.s. not significant). Fig. 2a: Control vs MDS, P = 3.29 × 10⁻³⁴; Control vs 1263A, P = 1.20 × 10⁻²⁷; Control vs CTNNA2 KO, P = 1.01 × 10⁻¹⁸.
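The significance convention used throughout the figures maps directly onto a small helper; a sketch using SciPy (the thresholds are exactly those stated above, but the function itself is illustrative, not the analysis code used):

```python
from scipy import stats

def significance(a, b):
    """Unpaired, two-tailed Student's t test with the star convention above."""
    t, p = stats.ttest_ind(a, b)   # independent samples, two-sided by default
    if p < 0.0001:
        return p, "***"
    if p < 0.001:
        return p, "**"
    if p < 0.05:
        return p, "*"
    return p, "n.s."
```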
Data Availability
The datasets generated and analyzed for the current study have been deposited in the database of Genotypes and Phenotypes (dbGaP) with accession numbers phs000288 and phs000744 and in the Gene Expression Omnibus (GEO) repository with accession number GSE72994.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
(a) αN-catenin binds to F-actin in co-sedimentation assays. The majority of F-actin was present in the pellet (P) upon a 100,000g spin. α-actinin was observed primarily in the supernatant (S) when centrifuged alone, but shifted to P when sedimented with F-actin. BSA remained in S even when sedimented with F-actin. αN-catenin was exclusively in S when centrifuged alone, but when sedimented with F-actin the full-length band shifted to P. Repeated in duplicate.
(b) αN-catenin weakly promotes bundling of F-actin in an assembly assay. Upon centrifugation at 10,000g, F-actin was split between S and P. α-actinin alone was exclusively in S, but when co-pelleted with F-actin it promoted a shift of actin to P. BSA showed no bundling activity. Similar to α-actinin, αN-catenin alone was exclusively in S, but promoted a slight shift of actin to P. Repeated in duplicate.
Table 1 Clinical Phenotypes. Patients display acquired microcephaly, hypotonic cerebral palsy, inability to ambulate or speak, and intractable seizures. HC, head circumference; SD, standard deviation below the mean; B/L, bi-lateral; VEP, visual evoked potential; ERG, electroretinogram; EEG, electroencephalogram.
"year": 2018,
"sha1": "3c18c51ff0a86bc4470bde2d5535f1fecc0a4e09",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc6072555?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "984a1e672947bbf958ee0c497e3da0ca3f372555",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Impact of comorbidity on waiting list and post-transplant outcomes in patients undergoing liver retransplantation.
AIM
To determine the impact of the Charlson comorbidity index (CCI) on waiting list (WL) and post-liver retransplantation (LRT) survival.
METHODS
Comparative study of all adult patients assessed for primary liver transplant (PLT) (n = 1090) and patients assessed for LRT (n = 150), 2000-2007 at our centre. Demographic, clinical and laboratory variables were recorded.
RESULTS
Median age for all patients was 53 years and 66% were men. Median model for end-stage liver disease (MELD) score was 15. Median follow-up was 7 years. For retransplant patients, 84 (56%) had ≥ 1 comorbidity. The most common comorbidity was renal impairment, in 66 (44.3%). WL mortality was higher in patients with ≥ 1 comorbidity (76% vs 53%, P = 0.044). CCI (OR = 2.688, 95%CI: 1.222-5.912, P = 0.014) was independently associated with WL mortality. Patients with MELD score ≥ 18 had inferior WL survival (Log-Rank 6.469, P = 0.011). On multivariate analysis, CCI (OR = 2.823, 95%CI: 1.563-5.101, P = 0.001), MELD score ≥ 18 (OR = 2.506, 95%CI: 1.044-6.018, P = 0.04), and requirement for organ support prior to LRT (P < 0.05) were associated with reduced post-LRT survival. Donor/graft parameters were not associated with survival (P = NS). Post-LRT mortality progressively increased with the number of transplanted grafts (Log-Rank 18.455, P < 0.001). Post-LRT patient survival at 1, 3 and 5 years was significantly inferior to that of PLT: 88% vs 73%, P < 0.001; 81% vs 71%, P = 0.018; and 69% vs 55%, P = 0.006, respectively.
CONCLUSION
Comorbidity increases WL and post-LRT mortality. Patients with MELD ≥ 18 have increased WL mortality. Patients with comorbidity or MELD ≥ 18 may benefit from earlier LRT. LRT for ≥ 3 grafts may not represent appropriate use of donated grafts.
INTRODUCTION
Liver retransplantation (LRT) represents the only viable option for survival for some patients who develop graft failure following primary liver transplant (PLT). Published reports on cohorts of patients who underwent LRT indicate inferior post-transplant survival in these patients [1-5]. There has been an increase in the number of patients awaiting PLT which has not been matched by an increase in donated organs [6]. Although transplant programmes have tried to compensate for this increase in demand by more liberal use of marginal grafts, there is evidence that death on the waiting list (WL) for patients listed for PLT remains high [7]. The combination of increased WL mortality and increasing demand for PLT, coupled with the known inferior outcomes of LRT, therefore raises concerns and generates ethical debate in the transplant community on the use of the scarce resource of donated organs for LRT [8]. This debate has motivated researchers to identify predictors of survival following LRT to improve the selection of patients who might benefit most from LRT. Model for end-stage liver disease (MELD) score > 25, recipient age, creatinine level, bilirubin level, indication for retransplantation, the urgency of LRT, coma episodes, haemoglobin (Hb) level and the number of fresh frozen plasma units transfused were identified as factors associated with reduced post-LRT survival in a number of studies [3,5,9,10]. Death or graft loss was shown to increase gradually following LRT according to the timing of LRT, with a marked increase in risk between 4 and 38 d following LRT [11-13]. Inferior survival was also observed with increasing numbers of transplanted grafts [13]. Comorbidity as defined by the Charlson comorbidity index (CCI) was found to adversely affect post-transplant survival in patients who underwent PLT [14]. Thuluvath et al [7] analysed the data of the Scientific Registry of Transplant Recipients (SRTR) in the United States from 1999 to 2008. The prevalence of comorbidities such as diabetes mellitus (DM), renal impairment (RI) and obesity was found to have steadily increased in candidates listed for liver transplantation over the ten-year period [7]. However, the prevalence and impact of comorbidity on WL and post-transplant survival in patients listed for LRT have not been studied previously.
The aims of this study were threefold: firstly, to identify the prevalence of comorbidity according to the CCI in patients listed for LRT; secondly, to study the impact of comorbidity on WL and post-LRT mortality; and finally, to identify other factors associated with reduced WL and post-LRT survival.
Patients and design
This is a retrospective study of all patients referred to the liver unit at King's for LRT assessment between January 2000 and December 2007. There were 151 assessments for LRT on 137 patients over the 8-year period. One patient was excluded because of incomplete information. Data analysis was performed on 150 LRT assessments. For comparison of the outcomes of PLT and LRT, we utilized a cohort of patients who underwent PLT over the same time period (n = 1332). Patients assessed for acute liver failure (n = 175) or familial amyloid polyneuropathy (n = 43), and 24 patients with incomplete information, were excluded. We analysed data on 1090 patients with end-stage liver disease (ESLD) who were assessed for PLT.
Data
All patients assessed for liver transplantation at our centre had their clinical, laboratory, radiological and histological data, as well as the outcome of transplant assessment, entered into a prospective electronic database at the time of liver transplant assessment. This database was analysed, in addition to electronic patient records and clinical notes, to record the demographic, clinical and laboratory variables of this cohort. Prognostic scores such as the MELD and United Kingdom model for end-stage liver disease (UKELD) scores were calculated at the time of assessment and at the time of transplantation. MELD was calculated according to the UNOS adjustment [15]. The UKELD score was calculated according to Barber et al [16]. Donor and graft variables were collected and the donor risk index was calculated according to Feng et al [17]. Patient survival was recorded according to survival status in our hospital information system and further confirmed using the National Health Service electronic portal. This is a United Kingdom-wide national database in which patient survival status is updated according to the generation of death certificates in the United Kingdom.
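As a point of reference, the UNOS-adjusted MELD calculation referred to above can be sketched as follows; the coefficients and the usual lower-bound/cap adjustments are the widely published ones, so treat this as an illustration rather than the study's own scoring code (the UKELD formula of Barber et al is omitted here):

```python
import math

def meld_unos(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float,
              on_dialysis: bool = False) -> int:
    """MELD with the usual UNOS adjustments: lab values below 1.0 are set
    to 1.0; creatinine is capped at 4.0 (and set to 4.0 if on dialysis)."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = 10 * (0.957 * math.log(cr)
                  + 0.378 * math.log(bili)
                  + 1.120 * math.log(inr)) + 6.43
    return round(score)

# e.g. meld_unos(bilirubin_mg_dl=3.0, inr=1.5, creatinine_mg_dl=1.8) -> 21
```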
Definitions of outcome measures
WL outcome was defined for this study as death on the WL or delisting because of significant deterioration or hepatocellular carcinoma (HCC) progression beyond the Milan criteria whilst awaiting LT. To study the influence of comorbidity and other variables on listing outcome, we used transplant-free survival (defined as time from listing to death, time to delisting, or time to transplant) to eliminate the artificial impact of transplantation on the survival of this cohort. Post-transplant patient survival was defined as time from transplantation to death, and if alive, censored on 01/11/2011. Graft survival was defined as time from transplantation to retransplantation or death, and if alive, censored on 01/11/2011. Patients who were lost to follow-up were censored as being alive at the date of their last follow-up. Post-LRT patient survival was defined as time from the second or subsequent transplant to death, and if alive, censored on 01/11/2011. Post-LRT graft survival was defined as time from the second or subsequent transplant to further retransplantation or death, and if alive, censored on 01/11/2011. One-year post-transplant patient survival was defined as time from LRT to death, and if alive, censored at 12 mo following transplantation. One-year post-transplant graft survival was defined as time from transplantation to retransplantation or death, and if alive, censored at 12 mo following transplantation. Marginal grafts were defined as grafts with a Donor Risk Index > 1.8 [7]. Cut-off values for MELD scores of 18 and 25 were chosen according to Rosen et al [18] and Edwards and Harper [19].
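These censoring rules translate directly into a time-to-event representation; a minimal sketch, with the function name and record layout as our own illustrative choices:

```python
from datetime import date
from typing import Optional, Tuple

CENSOR_DATE = date(2011, 11, 1)  # study-wide censoring date (01/11/2011)

def survival_time(start: date, death: Optional[date] = None,
                  last_followup: Optional[date] = None) -> Tuple[int, bool]:
    """Return (days, event_observed) under the rules above: death is an
    event; survivors are censored at the study date; patients lost to
    follow-up are censored alive at their last follow-up."""
    if death is not None:
        return (death - start).days, True
    end = last_followup if last_followup is not None else CENSOR_DATE
    return (end - start).days, False
```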
Comorbidities
Nine comorbidities were prospectively defined according to Volk et al [14]. These included congestive heart failure, coronary artery disease, DM, peripheral vascular disease, cerebrovascular disease, chronic pulmonary disease, connective tissue disease, and renal impairment.
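Operationally, the CCI is a weighted sum over a patient's comorbidities. The sketch below uses the classic Charlson weight of 1 for most conditions and 2 for renal disease; the exact weights of the Volk et al version are not given in the text, so these values are illustrative only:

```python
# Illustrative Charlson-style weights; the study's exact weights may differ.
WEIGHTS = {
    "congestive_heart_failure": 1,
    "coronary_artery_disease": 1,
    "diabetes_mellitus": 1,
    "peripheral_vascular_disease": 1,
    "cerebrovascular_disease": 1,
    "chronic_pulmonary_disease": 1,
    "connective_tissue_disease": 1,
    "renal_impairment": 2,
}

def cci(comorbidities: set) -> int:
    """Sum the weights of the comorbidities present in a patient."""
    return sum(WEIGHTS.get(c, 0) for c in comorbidities)

# Example: a candidate with diabetes and renal impairment scores 1 + 2 = 3.
```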
Patient characteristics
One hundred and fifty assessments for LRT were examined and compared to a control group of 1090 patients assessed for PLT. Median follow-up was 7 years (range: 3-12). Of the 150 LRT assessments, there were 124 assessments for a second transplant, 21 for a third transplant, 3 for a fourth transplant, and 1 each for a fifth and a sixth transplant. Of these 150 assessments, six patients were not listed for LRT (2 because of early referral, 1 because of alcohol abuse, 1 who declined relisting, 1 with complete porto-mesenteric thrombosis, and 1 who died during the assessment process). Only 121 of the 144 listed patients received LRT. Twenty-three patients were delisted for the following reasons: 12 died awaiting a graft, 6 had significant clinical improvement, and 5 had significant clinical deterioration whilst on the WL. Information regarding mechanical ventilation, renal replacement therapy, vasopressor support and patient location [home, hospital or intensive care unit (ICU)] prior to LRT was available for 113 patients. Thirty-two patients (28%) received renal replacement therapy, 21 (19%) received mechanical ventilation and 20 (18%) received vasopressor support prior to LRT. Forty-four patients (36%) were transplanted from the hospital ward, 40 (33%) from the ICU and 28 (23%) from home. Table 1 summarises baseline characteristics according to PLT and LRT. LRT patients were significantly younger and were less likely to have ascites. However, this group had higher median serum sodium (Na), creatinine, bilirubin, MELD and UKELD values (P < 0.05). There were no significant differences in the proportion of patients with encephalopathy or in median INR between groups (P = NS).
Comorbidities
There were 84 patients (56%) with ≥ 1 comorbidity as defined by the CCI. The most common comorbidity was RI, in 66 (44.3%), followed by DM in 25 (Table 2). Only DM and RI were included in the Cox model as individual comorbidities because of the infrequency of other comorbidities in this cohort. RI (HR = 3.802, 95%CI: 1.147-12.603, P = 0.029) was independently associated with WL mortality. WL mortality in patients with any comorbidity was higher than in those without comorbidities, as shown in Figure 1.
WL mortality
Sixteen of 144 patients (11%) died awaiting a graft. Eight had disease recurrence (of whom 5 had HCV recurrence), 3 had vascular complications, 4 had graft rejection and 1 had another indication for LRT. None of the patients with early graft dysfunction died awaiting a graft. WL mortality for PLT was significantly higher than for LRT (24% vs 11%, P < 0.001). However, median waiting time was significantly shorter for LRT than for PLT (16 d, range: 0-1118 d vs 100 d, range: 1-922 d, P < 0.001). Table 2 summarises variables associated with WL mortality on univariate and multivariate analysis. Only age > 60 years and the presence of ascites were included as fixed variables in the multivariate model, to prevent interaction of variables with similar clinical relevance (such as creatinine, MELD, RI, and comorbidity). Factors independently associated with WL mortality were age > 60 years, RI, creatinine level, the presence of comorbidity, CCI, MELD score and UKELD score.
Post-transplant outcomes
The 1-, 3- and 5-year post-transplant patient and graft survival rates were significantly lower for patients who had LRT than for those who had PLT. Figure 3 summarises these findings. In retransplanted patients, patient and graft survival differed significantly according to the number of grafts transplanted, as analysed by the Kaplan-Meier method (Figure 4).
DISCUSSION
The CCI was originally developed and validated as a tool to predict hospital outcome in general medical patients [20]. Composed of medical conditions with varying assigned weights, versions of the CCI were found to predict outcomes in multiple clinical settings [21-26]. In this study, we reported on 150 episodes of assessment for LRT from a single centre. We demonstrated that comorbidity as defined by the CCI is common (56%) in patients assessed for LRT, and more common than reported for PLT (40%) [14]. This high prevalence of comorbidity is mainly attributable to the high prevalence of renal impairment (44%) in this cohort. It is difficult to estimate the rate of renal dysfunction in LRT patients from previously published studies [3,4,12,18]. RI was seen in 33% of candidates listed for PLT according to the data of the SRTR [7]. Renal dysfunction is a well-recognised complication in patients with ESLD, in critical illness and after PLT [27-30].
Renal impairment is known to have a detrimental impact on the survival of patients with ESLD [31,32]. The increased prevalence of RI in our cohort can therefore be explained by the fact that patients listed for LRT have more severe liver dysfunction, reflected by higher MELD scores compared to PLT patients, and by the large proportion of patients who were transplanted from the ICU (33%), reflecting the severity of their illness. Furthermore, the standard immunosuppression agents used routinely following liver transplantation to prevent rejection, calcineurin inhibitors such as Ciclosporin or Tacrolimus, are known to cause or at least contribute to renal impairment following liver transplantation [33]. Other comorbidities, apart from DM, were rare, which may be explained by the relatively young median age of patients listed for LRT compared to PLT. The younger age of LRT patients compared to PLT patients is consistent with previous reports [18,34]. This is the first study to demonstrate the impact of comorbidity on WL mortality in LRT patients. The presence of any comorbidity defined by the CCI was independently associated with a greater than five-fold risk of death on the waiting list. Furthermore, this study has shown that the presence of any comorbidity was associated with twice the risk of post-LRT patient death. Similarly, comorbidity was associated with a three-fold increased risk of patient death and a two-fold increased risk of graft loss within 12 mo post-LRT. The only study to date to investigate the effect of comorbidity on post-liver transplant outcome showed that the presence of any comorbidity was associated with a 21% increase in patient death following PLT [14]. Comorbidity was also found to predict post-transplant outcome in patients who received renal and allogeneic stem cell transplantation [35-38].
We have demonstrated in this study that the median MELD score for patients assessed for LRT was significantly higher than for PLT patients. We have also shown that the increase in MELD among LRT candidates was attributable to the high median bilirubin and creatinine levels but not to an increase in INR, which is consistent with UNOS data (Table 1) [34]. We have also shown that the established models for assessing the severity of hepatic impairment (MELD and UKELD) were independently associated with WL mortality. Furthermore, a MELD score at a cut-off as low as 18 was associated with a more than four-fold increase in WL mortality. This suggests that patients listed for LRT with a MELD score ≥ 18 may benefit from prioritization on the WL and earlier transplantation to improve LRT outcome.
Our data showed increased WL mortality in LRT patients with a MELD score of 18 or higher. In a report from the University of Nebraska, Watt et al [39] showed that MELD score was predictive of WL mortality in 63 patients listed for a second transplant. WL mortality was also shown to increase with increasing MELD scores, especially at the lower range of MELD [34]. None of the other previously reported studies examined the performance of MELD in predicting WL mortality in LRT patients. Instead, these reports focused on factors predictive of post-LRT outcomes [2,3,5,11,18,40,41]. Surprisingly, WL mortality was lower for LRT patients (11% vs 24%, P < 0.001), discordant with previous reports [34]. This can be explained by the fact that patients listed for LRT had a significantly shorter median waiting time (16 d vs 100 d, P < 0.001), which may indicate an informal prioritization mechanism for patients listed for LRT in our hospital. Our report also suggests that the UKELD score retains its predictive capacity for WL mortality in patients listed for LRT, with a 12% rise in WL mortality with every point increase in the UKELD score. Another important finding of the current study is that recipient age > 60 years was independently associated with death on the WL in LRT patients, consistent with previous studies that identified advanced recipient age as a risk factor for WL mortality in patients listed for PLT [28,42,43]. We have shown that 1-, 3- and 5-year patient and graft survival were inferior in patients who underwent retransplantation, consistent with previously published reports [5,13]. This inferior post-transplant survival in our cohort is mainly attributable to the poor post-LRT survival in patients who received ≥ 3 grafts. Patients who had a second graft had slightly lower patient survival compared to PLT. Although these findings contrast with the outcome of patients who underwent retransplantation in 1984-2001 at the University of California Los Angeles, the improved survival of patients who had a second transplant in our cohort may be explained by a different era of transplantation, advances in immunosuppression, and local patient selection processes [13]. Our findings also suggest that a second liver transplant may represent an acceptable use of donated organs in selected patients. However, taking into consideration the rule of 50% survival benefit at 5 years post-transplant, our findings suggest that a third or subsequent graft may not represent an appropriate use of donated organs, except in rare instances [6]. Published reports suggested that the time interval between PLT and LRT has an influence on post-transplant outcome. Reports from two transplant programs indicated that LRT 4-30 d or 8-30 d following the first transplant carries a worse post-transplant survival [11,13,40]. Our data showed inferior survival in patients who were transplanted within 30 d of a previous liver transplantation, irrespective of whether LRT occurred in the first 7 d or between 8-30 d. In our cohort, the most common indication for LRT within the first 7 d following a previous transplant was early graft dysfunction, whilst vascular complications (thrombotic and non-thrombotic graft infarction) were the primary indication in patients who had LRT 8-30 d following a previous transplant. This increased post-LRT mortality in patients who receive early LRT may be explained by the severity of illness and intense immunosuppression, and hence an increased risk of infections [2,44].
Our findings are consistent with those of Rosen et al [18], who reported significantly inferior long-term survival in patients who had LRT for PNF and vascular complications. In both the United States and the United Kingdom, in recognition of the severity of illness and the high mortality associated with PNF and early HAT without LRT, urgent priority for LRT is given [45,46]. Regarding post-LRT survival, we demonstrated that the CCI, RI, MELD score ≥ 18 and requirement for organ support were independent factors associated with 1-year post-LRT patient and graft survival, consistent with the reported literature in which MELD, or individual components of MELD, were associated with post-LRT outcome [2,3,9,10,12,40,41]. Similarly, requirements for mechanical ventilation and renal replacement therapy were found to negatively impact post-LRT outcome, in agreement with the reported literature [2,5,12,40]. Interestingly, we identified pre-LRT vasopressor support as the only factor associated with long-term graft outcome. Vasopressor use was also an independent factor associated with 12-mo post-transplant patient and graft survival. This finding has not been reported in previous studies. The requirement for vasopressors may therefore reflect the severity of recipient illness with hemodynamic instability, and it may indirectly suggest a negative impact of graft ischemia on patient and graft survival.
Despite our detailed analysis of donor and graft variables, we found no association between graft quality and post-LRT outcomes. This is likely to reflect our local donor-recipient matching practices, demonstrated by the limited use of marginal grafts in this cohort and a low median donor age of 44 years, which is well within the confines of non-extended-criteria donor parameters. Few studies have analyzed the impact of graft and donor factors on post-LRT survival. Whilst Pfitzmann et al [5] found no correlation between graft survival and donor variables, others identified donor age, ethnicity and warm ischemia time as factors independently associated with inferior outcome [5,10,40]. Limitations of this study are, firstly, that it represents a single-centre experience; therefore, the applicability of the findings to other cohorts may be limited. Secondly, data on immunosuppression were not included in our analysis, although standard immunosuppression was used in all cases except for patients with eGFR < 50 mL/min, for whom a renal-sparing regimen of low-dose Tacrolimus, an interleukin-2 (IL-2) blocker and prednisolone was used preferentially. Indeed, the choice of immunosuppression can influence not only post-transplant outcomes in patients who underwent PLT but also the rate of complications related to immunosuppression, such as RI, which we found to be an important factor associated with inferior patient and graft survival [33,47,48]. Thirdly, we used a version of the CCI tested in a liver transplant cohort [14]. Therefore, the impact on post-transplant survival of other comorbidities, such as inflammatory bowel disease, peptic ulcer disease, valvular heart disease or obesity, which have been found to affect patient survival, remains unknown given that they were not incorporated in the model [20,49]. Lastly, although the definitions of individual comorbidities were consistent with previous reports, the clinical applicability of these definitions may be limited.
In conclusion, our data indicate that the presence of comorbidity in liver retransplant candidates increases mortality on the WL and following LRT. The severity of recipient liver disease was associated with WL mortality. The MELD score was able to discriminate between survival and death whilst on the WL at a lower cut-off value of 18, which suggests that patients undergoing LRT should be transplanted at lower MELD scores. Post-transplant mortality progressively increased according to the number of transplanted grafts; however, the greatest adverse impact was seen after transplanting ≥ 3 grafts, with only 40% 5-year survival seen in this group. Graft and donor variables were not found to influence patient or graft survival in this study, which may reflect centre-related donor-recipient matching.
Background
The prevalence and impact of comorbidities on waiting list (WL) and post-transplant survival in patients undergoing liver retransplantation (LRT) are not known. This study evaluates the impact of comorbidity on these outcomes.
Research frontiers
Model for end-stage liver disease (MELD) score > 25, recipient age, creatinine level, bilirubin level, indication for retransplantation, the urgency for LRT, coma episodes, haemoglobin level and the number of fresh frozen plasma units transfused were identified as factors associated with reduced post-LRT survival in a number of studies.
Innovations and breakthroughs
Comorbidity in liver retransplant patients increases mortality on the WL and following LRT. MELD score of ≥ 18 was associated with increased risk of death on WL and within 12 mo following retransplantation. Post-transplant mortality progressively increased according to the number of transplanted grafts.
Applications
Patients undergoing LRT should be transplanted at lower MELD scores. Assessment of comorbidity in LRT candidates can provide important prognostic information. Third or subsequent grafts may not represent an appropriate use of donated organs, except in rare instances.
"year": 2017,
"sha1": "0ddb32ed604c0db501d5409f227d41d335663db3",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4254/wjh.v9.i20.884",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ddb32ed604c0db501d5409f227d41d335663db3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Genome-wide profiling identifies a subset of methamphetamine (METH)-induced genes associated with METH-induced increased H4K5Ac binding in the rat striatum
Background
METH is an illicit drug of abuse that influences gene expression in the rat striatum. Histone modifications regulate gene transcription.
Methods
We therefore used microarray analysis and genome-scale approaches to examine potential relationships between the effects of METH on gene expression and on DNA binding of histone H4 acetylated at lysine 5 (H4K5Ac) in the rat dorsal striatum of METH-naïve and METH-pretreated rats.
Results
Acute and chronic METH administration caused differential changes in striatal gene expression. METH also increased H4K5Ac binding around the transcriptional start sites (TSSs) of genes in the rat striatum. In order to relate gene expression to histone acetylation, we binned genes of similar expression into groups of 100 genes and related each bin's expression to its H4K5Ac binding. We found a positive correlation between gene expression and H4K5Ac binding in the striatum of control rats. Similar correlations were observed in METH-treated rats. Genes that showed acute METH-induced increased expression in saline-pretreated rats also showed METH-induced increased H4K5Ac binding. The acute METH injection caused similar increases in H4K5Ac binding in METH-pretreated rats, without affecting gene expression to the same degree. Finally, genes that showed METH-induced decreased expression exhibited either decreases or no changes in H4K5Ac binding.
Conclusion
Acute METH injections increased the expression of genes that showed increased H4K5Ac binding near their transcription start sites.
Background
Methamphetamine (METH) is an illicit psychostimulant that is abused throughout the world. The drug causes behavioral abnormalities that include the development of tolerance and dependence, paranoid states, and psychotic symptoms in human addicts [1,2]. In animals, METH causes behavioral sensitization, is self-administered, and causes structural plasticity in the brain [3-6]. The acute behavioral effects of the drug are mediated by METH-induced increases in dopamine (DA) release in brain synapses [7,8] and by subsequent stimulation of DA receptors in various brain regions [9]. Activation of these receptors by direct or indirect agonists such as cocaine and amphetamine induces acute changes in the expression of several immediate early genes (IEGs) in the striatum [10,11]. These observations have prompted suggestions that the enduring behavioral and cognitive effects of these psychostimulants might depend on transcriptional changes in the rat brain [12,13]. Similarly, acute METH injections cause significant increases in the expression of several IEGs in the rat brain [14-17]. These genes include c-fos and Egr1, among others [9,16].
In contrast to the acute METH-induced transcriptional changes, chronic METH administration produces differential changes in IEG responses and blunts the effects of an acute single METH injection on the expression of several IEGs in the striatum [18]. These observations suggested that chronic METH exposure might alter the molecular machinery that controls the acute transcriptional effects of the drug. These blunting effects might be consequent to alterations in the complex interactions of factors that regulate gene transcription [19,20]. During resting states, DNA is compacted in ways that interfere with the binding of transcription factors, whereas DNA becomes more easily accessible during activation of cells by various stimuli [21-23]. DNA is indeed packaged into chromatin, whose fundamental subunit, the nucleosome, is made of the 4 core histones H2A, H2B, H3, and H4, which form an octamer (2 of each histone) around which 146 bp of DNA are wrapped [24,25]. Biological processes are regulated, in part, via post-translational modifications of these histones, modifications that include acetylation, methylation, phosphorylation, and ubiquitination [26-30]. Lysine residues of histone tails can be reversibly acetylated and deacetylated by several histone acetyltransferases (HATs) and histone deacetylases (HDACs), respectively [31-33], and these modifications promote alterations in gene expression by enabling or inhibiting the recruitment of regulatory factors onto DNA regulatory sequences [31,33,34].
In order to understand the relationship between METH-induced changes in gene expression and histone acetylation on a genome-wide scale, we used two unbiased approaches, namely microarray analysis and chromatin immunoprecipitation (ChIP) followed by massively parallel sequencing [35,36]. In the case of psychostimulants including cocaine, investigators have focused their studies, for the most part, on the effects of illicit drugs on histone H3 modifications [37,38]. However, we chose to investigate METH-induced changes in histone H4 acetylated at lysine residue 5 (H4K5Ac) because a strong link exists between gene activation and acetylation of lysine residues (K5, K8, K12, and K16) of histone H4 [39-42]. We thus used ChIP-Seq to identify sites of H4K5Ac binding throughout the rat genome. We found that H4K5Ac binding is ubiquitous in the rat dorsal striatum and occurs mainly around transcription start sites (TSSs). There was also a positive correlation between global H4K5Ac binding and striatal gene expression. Moreover, acute METH-induced increases in gene expression were associated with METH-induced increased H4K5Ac binding on genes with increased expression in rats chronically pre-exposed to either saline or METH. Thus, our results document a relationship between H4K5Ac binding and increased gene expression on a global scale.
Results
Acute and chronic METH administration causes differential changes in striatal gene expression
We used microarray analysis (RatRef-12 Expression BeadChip arrays, 22,523 probes, obtained from Illumina Inc., San Diego, CA) to provide a panoramic view of the effects of METH on gene expression in rats chronically exposed to either saline or METH. Using the data obtained from these analyses, we sought to identify the molecular and cellular functions of the genes with the highest baseline expression in the striatum of control rats. Towards that end, we picked the top 10 percent of genes with the highest expression and ran them through Ingenuity Pathway Analysis (IPA). We found that these genes were involved in nucleic acid metabolism (126 genes), post-translational modification (72 genes), protein folding (26 genes), and cell death and survival (525 genes). They also participate in nervous system development and function (303 genes), mediation of behaviors (166 genes), as well as neurological (447 genes) and psychological (181 genes) diseases. Top canonical pathways include genes involved in mitochondrial functions, EIF2 signaling, protein ubiquitination, mTOR signaling, CDK5 signaling, dopamine-DARPP32 feedback in cAMP signaling, the NRF2-mediated oxidative stress response, and synaptic long-term potentiation. The high number of genes involved in these pathways is consistent with the role of dopamine in the striatum [43] and with the energy demand and production during brain functions, as well as the role of mitochondrial dysfunction in neurodegenerative disorders [44]. The high expression of these genes in this brain structure supports the notion that the striatum is very sensitive to mitochondrial oxidative dysfunction [45].
Acute injection of METH (5 mg/kg) in METH-naïve rats (SMvSS) caused significant changes in the expression of 86 genes, with 60 being upregulated and 26 downregulated (Figure 1). IPA analysis revealed that these genes are involved in the control of gene expression, participate in cell signaling, regulate cellular growth and proliferation, control organ morphology, and participate in the manifestation of behaviors. Figure 2 shows networks of genes that are involved in the control of gene expression, cellular compromise, and endocrine system development. Upregulated genes found in these networks include several immediate early genes and transcription factors, namely Arc, c-fos, Crem, Egr1, Egr2, Egr4, junB, Npas4, Nptx2, and Nr4a3 (NOR-1) (Figure 2 and Additional file 1: Table S3). The expression of some of these is known to be influenced by illicit drugs, including cocaine [46] and METH [9,14]. Other genes of interest that are also upregulated include Dusp14, neurotensin, and orexin-A (hypocretin, HCRT) (Additional file 1: Table S3). Top canonical pathways involving these genes include GADD45 signaling, TGF-beta signaling, acute phase response signaling, and the NRF2-mediated oxidative stress response.
In contrast, acute injection of METH in METH-pretreated rats (MMvSS) caused significant alterations in the expression of 71 genes, with only 18 upregulated and 53 downregulated. The genes affected by the acute METH administration in the chronically METH-treated rats are listed in Additional file 1: Table S4. These genes are involved in cellular development, cell-to-cell signaling and interaction, and carbohydrate metabolism. Top canonical pathways include glioma invasiveness and Rac signaling. The list includes Npb and Nr4a3, which are upregulated, and BMP2, which is downregulated by the acute METH injection in rats pre-exposed to the drug. Figure 3 shows networks of genes involved in the cell cycle, drug metabolism, tissue development, and reproductive system development and function. Figure 4 shows quantitative PCR validation of the METH-induced changes in the expression of some genes of interest in METH-naïve and METH-pretreated rats: DnajB5, Egr1, Nptx2, Nts, and Npb.
Genome-wide analysis of H4K5Ac binding in the rat striatum after METH exposure
Although we have consistently shown that acute administration of various doses of METH can cause substantial alterations in gene expression [14-16,47], the epigenetic events involved in these changes have yet to be characterized. Gene expression in the central nervous system is regulated, in part, by epigenetic alterations that include post-translational modifications of histone tails, including histone acetylation and methylation [48]. Changes in large-scale DNA binding by modified histones and other proteins after various manipulations are now being investigated using ChIP-Seq [35,36,49,50]. We reasoned that a similar approach might help us to identify epigenetic alterations that participate in the acute effects of METH on gene expression in the rat dorsal striatum of METH-naïve and METH-pretreated rats. As shown in Figure 5, genome-wide analysis of H4K5Ac binding reveals that H4K5Ac binds around the transcription start sites (TSSs) of genes in the control (Figure 5A), SM (Figure 5B), MS (Figure 5C), and MM (Figure 5D) groups. However, there were additional H4K5Ac binding sites in the SM (87,089 H4K5Ac binding sites corresponding to 10,463 annotated genes), MS (50,031 binding sites, 9,877 genes), and MM (74,856 binding sites, 10,301 genes) groups in comparison to the control animals, which showed 22,262 H4K5Ac binding sites corresponding to 8,203 annotated genes in the rat striatum (Figure 6). The majority of genes with H4K5Ac binding in the SS group were also found in the SM, MS, and MM groups (Figure 6). As shown in the figure, 99% of the genes with H4K5Ac binding sites in the control rats (SS) were also found in the METH-naïve rats that received an acute METH injection. Similarly, the majority (97%) of the genes with H4K5Ac binding sites in the control rats were also found in the chronic METH-treated groups, while 99% of the genes in the control group were also found in the MM group. Taken together, these data suggest that both acute and chronic treatment with METH caused the appearance of de novo H4K5Ac binding sites in a large number of genes expressed in the striatum. Figure 6A also reveals that the vast majority of genes with H4K5Ac binding sites in the groups that had received either acute or chronic METH treatment were shared: 9,731 genes in SM and MS, 9,643 genes in MS and MM, 10,090 genes in SM and MM, and 9,543 genes in the 3 METH groups. Figure 6B also shows that the majority of METH-induced additional H4K5Ac binding sites were located on genes that were commonly found (1,627 annotated genes) in the 3 METH groups. In addition, 1,776 genes were common to the SM and MS groups, 1,996 genes to the SM and MM groups, and 1,683 genes to the MS and MM groups. These results indicate that METH administration exerts consistent effects on H4K5Ac binding in the rodent brain.
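The gene-level counts above come from attributing each ChIP-Seq binding site to its nearest annotated TSS. A minimal sketch of that assignment step, with data structures of our own choosing (this is not the pipeline the authors used):

```python
import bisect
from collections import defaultdict

def assign_peaks_to_tss(peaks, tss_by_chrom):
    """peaks: iterable of (chrom, position) binding sites.
    tss_by_chrom: chrom -> list of (tss_position, gene), sorted by position.
    Returns gene -> number of peaks whose nearest TSS belongs to that gene."""
    positions = {c: [p for p, _ in sites] for c, sites in tss_by_chrom.items()}
    counts = defaultdict(int)
    for chrom, pos in peaks:
        sites = tss_by_chrom.get(chrom)
        if not sites:
            continue
        i = bisect.bisect_left(positions[chrom], pos)
        # The nearest TSS is either the neighbor to the left or to the right.
        nearest = min(sites[max(0, i - 1):i + 1], key=lambda t: abs(t[0] - pos))
        counts[nearest[1]] += 1
    return counts
```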
Pathway analyses revealed that genes with novel H4K5Ac binding in the SM group are involved in protein synthesis (93 genes), cellular growth and proliferation (539 genes), cell death and survival (582 genes), nervous system development and function (304 genes), behaviors (188 genes), and neurological diseases (358 genes). Top canonical pathways include the OX40 signaling pathway, acute phase response signaling, death receptor signaling, and Huntington's disease signaling. The genes with novel H4K5Ac binding in the MM group participate in the control of cell death and survival (552 genes), nervous system development and function (264 genes), and neurological diseases (356 genes). Top canonical pathways included OX40 signaling, acute phase response signaling, death receptor signaling, G-protein-coupled receptor signaling, cAMP-mediated signaling, and Huntington's disease signaling. The data on the involvement of cAMP and G-protein-coupled receptor signaling are consistent with the known effects of METH on neurotransmitters and their receptors [9].
We next chose the top 10% of genes with the highest H4K5Ac binding in the SS, SM, and MM groups for further pathway analyses because we thought that they might play important roles in the functions of the striatum in the absence or presence of METH exposure. IPA revealed that the top 10 percent of genes with high H4K5Ac binding in the SS group are involved in neurological diseases (61 genes), cancer (58 genes), and developmental disorders (32 genes). Molecular and cellular functions in which they participate include cell cycle regulation (32 genes) and lipid metabolism (10 genes). They are also involved in tissue development (44 genes) and nervous system development and function (41 genes). Top canonical pathways include cAMP-mediated signaling, the protein ubiquitination pathway, the NRF2-mediated oxidative stress response, G-protein-coupled receptor signaling, and tRNA splicing. The top 10 percent of genes with high H4K5Ac binding in the SM group are involved in neurological diseases (208 genes) and developmental disorders (91 genes). They also participate in the control of cellular assembly and organization (159 genes), nervous system development and function (198 genes) and behavior (109 genes). Top canonical pathways include protein kinase A signaling, G-protein-coupled receptor signaling, CDK5 signaling, ERK/MAPK signaling, axonal guidance signaling, and dopamine-DARPP32 feedback in cAMP signaling. Finally, in the MM group, the top 10 percent of genes with high H4K5Ac binding participate in the control of gene expression (188 genes), cellular function and maintenance (209 genes), and cell morphology (207 genes). They are also involved in neurological diseases (216 genes), developmental disorders (103 genes), and nervous system development and function (227 genes). In addition, top canonical pathways in the MM group include molecular mechanisms of cancer, Wnt/beta-catenin signaling, dopamine-DARPP32 feedback in cAMP signaling, and G-protein-coupled receptor signaling. Together, these observations are consistent with the idea that acute and chronic METH administration can influence histone acetylation in the brain.
Global striatal gene expression levels correlate with H4K5Ac binding
In order to test whether H4K5Ac binding correlated with striatal gene expression, we carried out regression analyses in which the gene expression and ChIP-Seq data were compared as described previously [51]. Figure 7 shows that there were positive correlations between the levels of striatal H4K5Ac binding and gene expression in the control (Figure 7A), SM (Figure 7B), MS (Figure 7C), and MM (Figure 7D) groups. These data obtained from the rat brain provide further support for the notion that H4K5 acetylation is an important factor in gene transcription, as reported previously using other models [39-41,52].
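This comparison reduces to sorting genes by expression, averaging within bins of 100, and regressing binned tag counts on binned expression. A minimal sketch under those assumptions (the bin size is from the text; array layouts and normalization details are ours):

```python
import numpy as np

def binned_correlation(expression_z, tss_tag_counts, bin_size=100):
    """Sort genes by expression Z score, bin into groups of `bin_size`,
    average expression and normalized H4K5Ac tag counts per bin, then fit
    a linear regression across bins and report R^2."""
    order = np.argsort(expression_z)
    expr = np.asarray(expression_z, dtype=float)[order]
    tags = np.asarray(tss_tag_counts, dtype=float)[order]
    tags /= tags.sum()                       # normalize to total tag count
    n_bins = len(expr) // bin_size
    x = expr[:n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    y = tags[:n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return slope, intercept, r2
```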
Acute METH-inducible genes are accompanied by METH-induced increased H4K5Ac binding
As reported above, the acute METH injection caused increased expression of 60 genes and decreased expression of 26 genes in METH-naïve rats (SM group). We thus wanted to know whether METH-induced increased H4K5Ac binding might be related to the increased gene expression caused by the drug. Towards that end, we compared H4K5Ac binding between the SS and SM groups among the genes that showed acute changes after the METH injection. We found that 29 of 32 annotated genes present in both the array and ChIP-Seq data showed acute METH-induced increased gene expression together with increased H4K5Ac binding, while the other 3 genes showed no changes (Table 1). These genes include Arc, c-fos, Egr1, and Crem, which are known to be involved in the actions of psychostimulants, including cocaine, in the brain [46,53-55]. Together, these observations support a common role for these genes in drug-induced neuroadaptations in the brain. In contrast, 3 of the 5 down-regulated genes identified on both platforms showed no changes while 2 genes showed decreased H4K5Ac binding (Table 1).
[Figure 5 legend fragment: (C) rats chronically exposed to METH and then treated acutely with saline (MS); (D) rats chronically exposed to METH and then given an acute injection of METH before being euthanized (MM). The pattern of H4K5Ac binding in the brain was influenced by neither acute nor chronic METH exposure.]
Figure 8 shows networks of genes that are involved in embryonic development and cellular compromise (Figure 8A), and in the control of nervous system development and behavior (Figure 8B).
ChIP-PCR confirmed the ChIP-Seq data and showed that acute METH caused significant increases in H4K5Ac binding at these genes. We also measured the protein expression of Arc and c-fos (Figure 10). Acute METH caused significant increases in Arc (3.67-fold, p = 0.0003) (Figure 10A) and c-fos (3.31-fold, p = 0.0008) (Figure 10B) expression in saline-pretreated rats. Repeated exposure to METH also caused increased Arc (3.7-fold, p = 0.0003) and c-fos (2.72-fold, p = 0.0049) protein expression. Moreover, there were significant increases in Arc (3.04-fold, p = 0.0017) and c-fos (3.76-fold, p = 0.0003) protein expression in the MM group.
As reported above, the acute administration of METH to METH-pretreated rats caused changes in the expression of 71 genes, with most genes being downregulated (Figure 1). Table 2 shows that 4 of the 5 upregulated genes identified on both platforms showed increased H4K5Ac binding, whereas one gene showed no change in binding. These genes included Npb and Nr4a3. IPA shows that they are involved in cellular development (Nr4a3), cell morphology (Gpr143), tissue development (Nr4a3 and Gpr143), hereditary disorders (Gpr143 and Ppef2), and reproductive system development and function (Npb). Figure 11A shows that these genes are involved in networks that participate in carbohydrate and lipid metabolism. In contrast, 7 of the 14 downregulated annotated genes identified on both platforms showed no changes while the other 7 showed decreased H4K5Ac binding (Table 2). We also used quantitative PCR to confirm the METH-induced increases in Nr4a3 mRNA in the SM (6.4-fold, p < 0.0001) and MM (9.3-fold, p < 0.0001) groups (Figure 11B). ChIP-PCR also confirmed the changes in H4K5Ac binding in the SM (2-fold, p = 0.0004) and MM (1.7-fold, p = 0.011) groups (Figure 11C).
Discussion and conclusion
Our study provides, for the first time, a comprehensive map of H4K5Ac binding throughout the rat genome and documents the presence of thousands of these sites in genes expressed in the rat striatum. We also show that H4K5Ac binding occurs around TSSs and that the pattern of binding is not affected by METH treatments. In addition, both acute and chronic METH administration caused significant changes in H4K5Ac binding, with additional binding sites being observed in more genes after the acute METH injections. Moreover, levels of gene expression correlated with genome-wide H4K5Ac binding in the striatum. The microarray analysis further revealed that acute METH also caused increased expression of 60 of 86 genes in saline-pretreated rats, whereas there was mostly decreased gene expression (53 of 71 genes) after an acute METH injection to METH-pretreated rats. Importantly, the vast majority of genes with increased expression also showed increased H4K5Ac binding, while the genes with decreased expression showed either decreases or no changes in H4K5Ac binding. The finding that METH-induced increased H4K5Ac binding is associated with increased expression of a set of genes after the acute METH injection in METH-naïve rats is consistent with the report that all-trans-retinoic acid caused increased histone H4 acetylation and increased gene expression during leukemic cell differentiation [56]. Our data are also consistent with the observation that deletion of the histone deacetylase, RPD3, produced increased H4K5Ac binding at the promoters of several genes in Saccharomyces cerevisiae [41].
Figure 7: H4K5Ac binding positively correlates with gene expression in the rat dorsal striatum. ChIP-Seq and expression data were compared as described previously [51]. In short, genes were sorted based on gene expression values (Z scores) and binned into groups of 100 genes. The average gene expression value for each bin was then calculated. H4K5Ac tags were assigned to the nearest promoter region of genes and normalized to the total tag counts for that sample. The mean tag counts of the above-mentioned bins were also calculated. The averaged binned gene expression values were then graphed against the mean tag counts for each bin. The values in the insets represent the regression coefficients (R²).
This conclusion is consistent with those of other investigators who have reported that individual activators can cause differential patterns of histone acetylation, with some causing increased H4 acetylation but others causing variable effects on H4 acetylation and gene expression [57]. Together, these results suggest that, under the chronic METH condition, METH-induced increased H4K5Ac binding is not sufficient to cause METH-induced increased expression of the majority of genes in the dorsal striatum. These data implicate the existence of other epigenetic factors that might serve to regulate, in conjunction with H4K5Ac binding, the expression of genes that show substantial changes in H4K5Ac binding. This discussion provides a partial explanation for our observation that the acute METH injection caused mostly downregulation of gene expression in the METH-pretreated rats.
When taken together with the observations of the existence of epigenetic ensembles that control gene expression in human cells [50,58], our results suggest that combinatorial epigenetic influences [21] might also be responsible for the acute transcriptional changes observed after an acute METH injection to METH-naïve or METH-pretreated rats, with H4K5Ac binding playing a contributory role. We also used qRT-PCR and ChIP-PCR in order to confirm some of the changes observed using the two discovery platforms. We picked Arc, Crem, Egr2, and Nr4a3 because they are implicated in synaptic plasticity [59-62]. Crem mRNA expression was increased in comparison to the control group only after the acute METH injection to METH-naïve rats. H4K5Ac binding around the Crem TSS was also increased after the acute METH administration in METH-naïve rats but not in METH-pretreated rats. These observations suggest that chronic METH might have caused additional epigenetic modifications that had rendered Crem expression refractory to the acute effects of the drug. These observations are somewhat dissimilar to our observations of the effects of METH on Egr2 expression. Specifically, the acute METH injection caused substantial increases in Egr2 mRNA in saline-pretreated rats. In contrast, there was attenuation of the acute METH-induced effects on Egr2 expression in the METH-pretreated rats. This attenuation occurred in spite of the fact that acute METH caused increased H4K5Ac binding in both METH-naïve and METH-pretreated rats. Egr2 is a member of the Krüppel-like zinc finger transcription factors that include Egr1, Egr3 and Egr4 [62,63]. The Egrs are activated by neuronal activity [62,63] and by METH [14,15]. Egr2 mediates stabilization and maintenance of long-term potentiation (LTP) [64] and regulates attentional processes [65]. Although the role of Egr2 induced by small METH doses is not clear, high METH doses have been shown to cause Egr-dependent activation of Fas ligand (FasL)-mediated neuronal apoptosis [15]. Importantly, the observations that chronic METH pretreatment blunted the acute effects of METH on Crem and Egr2 expression are consistent with data from other investigators who had reported that the acute effects of psychostimulants on IEG expression were blunted in animals previously exposed to either cocaine [46] or the amphetamines [18,66,67]. Altogether, the observation that chronic drug administration blunts the acute transcriptional consequences of psychostimulants suggests that these phenomena might participate in molecular events responsible for drug-induced tolerance [68]. They might also explain, in part, the need for the repeated drug-seeking and drug-taking behaviors that are the sine qua non of drug addiction [69,70].
It is also of interest to discuss the acute and chronic effects of METH on Nr4a3 expression in contrast to the observations with Crem and Egr2 discussed above. Nr4a3 is a member of the Nr4a1/Nur77 family of transcription factors (Nr4a1/Nur77/NGFIB, Nr4a2/Nurr1 and Nr4a3/Nor-1) that belong to the steroid nuclear hormone receptor superfamily [71,72]. They participate in a number of biological functions including cellular proliferation, differentiation, and apoptosis [71,72]. Nr4a3 also regulates axonal guidance and pyramidal cell survival in the hippocampus [73]. As shown above, PCR assays confirmed both the METH-induced changes in Nr4a3 gene expression and in H4K5Ac binding that were identified by the microarray and ChIP-Seq experiments, respectively. Importantly, we found that chronic METH did not produce blunting of the acute METH-induced increase in Nr4a3 expression in the METH-pretreated rats, in contrast to the observations for Crem and Egr2 mRNA expression discussed above. When taken together, our observations hint at a role of these genes as important yet differential regulators of molecular events that are consequent to repeated METH exposure. In addition to the IEGs, acute METH was found to increase neurotensin mRNA levels and H4K5Ac binding around the Nts TSS in the striatum. We also found that the acute effects of METH on Nts expression were somewhat attenuated in METH-pretreated rats. This is of interest because acute METH administration is known to cause increased neurotensin mRNA and protein expression in the rat striatum [74-76]. Similar increases are also observed in animals trained to self-administer the drug [6,77,78]. Our observations thus add to the literature that indicates that METH influences this neuropeptidergic system in the rat brain. It is important to note that a recent study had reported that neurotensin levels are significantly downregulated in the striatum of rats that had undergone extinction training after METH self-administration [79], suggesting differential responses in neurotensin expression after acute METH and during drug withdrawal. In any case, when taken together, these observations suggest that neurotensin might play an important role in the acute behavioral responses to the drug and/or in the maintenance of METH self-administration. The demonstration that METH also increased H4K5Ac binding at the Nts gene promoter provides a partial explanation for the acute effects of the drug on neurotensin expression in the striatum.
Another peptide of interest is neuropeptide B (NPB), which also shows increased mRNA expression after chronic METH treatments. NPB, a neuropeptide of 29 aa residues, was identified as an endogenous ligand for the G protein-coupled receptor, GPR7, whose stimulation causes decreased intracellular cAMP production [80,81]. The NPB transcript is widely distributed in the brain [81,82]. NPB has been implicated in the regulation of pain sensation, endocrine function, as well as feeding behaviors [81,83-86]. For example, intracerebral NPB injection decreases feeding behaviors [81] whereas NPB-knockout mice are obese [85]. These observations are compatible with the known anorectic effects of the amphetamine analogs including METH [87,88] and suggest that NPB might play a role in METH-induced chronic anorexia. The veracity of this argument will need to be tested experimentally. The role of NPB in other behavioral aspects of METH also needs to be considered. In any case, the present observations add to the growing literature that METH can substantially influence the expression of various neuropeptides and implicate these substances in the acute and long-term neuroplastic effects of the drug.
It is also of interest to discuss some of the METH-induced networks that were identified by pathway analysis. The IPA showed that injections of METH induced the expression of genes that are involved in the development of diverse systems. These genes include Egr1, Egr2, c-fos, Nr4a3, and Vgf (Figures 2, 3 and 8). The METH-induced increased expression of the developmental gene Foxa3 (Figure 3), which is a member of the family of forkhead winged-helix transcription factors [89,90], is of interest because, together with the changes in other transcription factors, these observations support the notion that amphetamine and its analogs might recapitulate developmental programs in adult animals [91]. This idea was initially based on the findings of Webb et al. (2009) [92], who had reported that, in zebrafish, amphetamine induced a set of genes enriched with transcription factors that are known to participate in developmental processes. Our observations are also consistent with the idea that drug addiction is dependent on altered synaptic plasticity [93-96] that is regulated, in part, by developmental factors in adult animals [91,97].
In summary, our study has provided a detailed description of the acute and chronic effects of METH on H4K5Ac binding and gene expression in the brain. We found that acute METH-induced increases in H4K5Ac binding were, in part, responsible for a subset of METH-upregulated genes in METH-naïve and METH-pretreated rats. However, given the appearance of many novel H4K5Ac binding sites in the striatum after both acute and chronic METH administration, the observation of METH-induced changes in the expression of only a few genes suggests that the presence of METH-induced novel H4K5Ac binding sites might be necessary but not sufficient to induce transcriptional changes in gene expression. Moreover, because the acute METH injection caused, for the most part, decreased mRNA levels in METH-pre-exposed rats, the possibility exists that repeated METH exposure might have triggered epigenetic modifications which had negatively impacted the expression of METH-responsive genes. This idea is consistent with the combinatorial nature of epigenetic events that control inducible gene expression [21,50]. Finally, given the adverse neuropsychiatric and psychosocial consequences of METH addiction, similar studies are necessary to help to identify specific long-lasting epigenetic effects of repeated METH exposure. The elucidation of these molecular alterations might help to develop alternative pharmacological approaches for the treatment of this common, yet complex, psychiatric disorder.
Animals
Male Sprague-Dawley rats (Charles River Laboratories, Raleigh, NC), weighing 330-370 g at the beginning of the experiment, were used in the present study. Animals were housed in a humidity- and temperature-controlled room and were given free access to food and water. All animal procedures were performed according to the National Institutes of Health Guide for the Care and Use of Laboratory Animals and were approved by the National Institute on Drug Abuse/Intramural Research Program (IRP) Animal Care and Use Committee (NIDA/IRP-ACUC).
Drug treatment and tissue collection
Following habituation, rats were injected intraperitoneally with either (±)METH-hydrochloride (NIDA, Baltimore, MD) or an equivalent volume of 0.9% saline over a period of two weeks as described in Additional file 1: Table S1. The saline- or METH-pretreated animals received a single injection of saline or METH (5 mg/kg × 1) at 16-18 hrs after the last saline or METH pretreatment injection. This dose of METH does not cause any neurotoxic effects, as much larger doses are required for pathological changes to develop in the rodent brain [47]. Striatal tissues from one side were dissected on ice, snap frozen on dry ice, and stored at −80°C until used in microarray and quantitative PCR experiments, whereas the other side was processed for the ChIP experiments detailed below.
RNA extraction
Total RNA was isolated using the Qiagen RNeasy Mini kit (Qiagen, Valencia, CA) according to the manufacturer's instructions. RNA integrity was assessed using an Agilent 2100 Bioanalyzer (Agilent, Palo Alto, CA) and showed no degradation. The RNA extracted from the striatum was used to measure gene expression by microarray analysis, and quantitative PCR was used to confirm the expression of some genes of interest.
Microarray analysis
Microarray hybridization was carried out using RatRef- arrays as described by us [16]. Raw data were imported into GeneSpring and normalized using global normalization. The normalized data were used to identify changes in gene expression after the various patterns of METH injections as described above. A gene was identified as significantly affected if it showed increased or decreased expression according to an arbitrary cut-off of 1.7-fold change at p < 0.01, according to the GeneSpring statistical package. Similar criteria have been successfully used in our previous microarray studies [15,16]. Network analyses were performed using the Ingenuity Pathway Analysis (IPA) software (Ingenuity Systems, Redwood City, CA). The IPA software allows for the identification of networks, canonical pathways, and biological functions that are affected by the drug. We also used the IPA software to graphically show the cellular location of genes significantly affected by METH.
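For illustration, the fold-change and p-value filter described above can be sketched in a few lines; the exact test applied inside the GeneSpring statistical package is not detailed here, so the per-gene t-test below is an assumption and all names are illustrative:

```python
import numpy as np
from scipy import stats

def flag_significant_genes(expr_treated, expr_control, fold_cutoff=1.7, p_cutoff=0.01):
    """Flag genes whose expression changes pass a fold-change and p-value cutoff.

    expr_treated, expr_control: arrays of shape (genes, replicates) holding
    normalized, linear-scale expression values.
    """
    mean_t = expr_treated.mean(axis=1)
    mean_c = expr_control.mean(axis=1)
    # Fold change in either direction captures both up- and downregulation
    fold = np.maximum(mean_t / mean_c, mean_c / mean_t)
    # Per-gene two-sample t-test between treated and control replicates
    _, pvals = stats.ttest_ind(expr_treated, expr_control, axis=1)
    return (fold >= fold_cutoff) & (pvals <= p_cutoff)
```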
Quantification of mRNA by quantitative real-time PCR
Total RNA was obtained individually from 6-8 rats per group and was reverse-transcribed with oligo dT primers and the RT for PCR kit (Clontech, Palo Alto, CA). PCR experiments were done using the Chroma4 RT-PCR Detection System (Bio-Rad, Hercules, CA, USA) and iQ SYBR Green Supermix (Bio-Rad) according to the manufacturer's protocol. Sequences for gene-specific primers corresponding to PCR targets were obtained using the LightCycler Probe Design software (Roche). The primers were synthesized and HPLC-purified at the Synthesis and Sequencing Facility of Johns Hopkins University (Baltimore, MD). The sequences for the IEG primers have been previously published [15,97]; the full list of primers is given in Additional file 1: Table S2. Quantitative PCR values were normalized using OAZ1 (ornithine decarboxylase antizyme 1) based on a previous paper [98]. The results are reported as relative changes calculated as the ratios of normalized gene expression data of each group compared to the SS group.
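The relative changes described here — target gene normalized to OAZ1 and expressed as a ratio to the SS group — are commonly computed from threshold-cycle (Ct) values with the 2^−ΔΔCt method; the paper does not spell out the formula, so the sketch below is one plausible realization with illustrative names:

```python
def relative_expression(ct_gene, ct_oaz1, ct_gene_ss, ct_oaz1_ss):
    """Relative mRNA level via 2^-ddCt, normalized to the OAZ1 reference
    gene and expressed relative to the saline-saline (SS) calibrator group.
    All inputs are mean threshold-cycle (Ct) values.
    """
    d_ct_sample = ct_gene - ct_oaz1    # normalize target to OAZ1 in the sample
    d_ct_ss = ct_gene_ss - ct_oaz1_ss  # same normalization for the SS group
    dd_ct = d_ct_sample - d_ct_ss
    return 2.0 ** (-dd_ct)

# Example: after normalization, a target reaching threshold 2 cycles earlier
# than in the SS group corresponds to a ~4-fold increase.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```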
ChIP-Seq and ChIP-PCR
Striatal tissues were processed for ChIP-Seq and ChIP-PCR. Briefly, brain tissues were minced to ~1 mm-sized pieces and immediately cross-linked in 1% formaldehyde for 15 min at room temperature. The tissues were washed four times in cold PBS containing the proteinase inhibitors in the Roche protease inhibitor cocktail tablet (Roche Diagnostics) and 1 mM PMSF (Sigma). Tissues were rapidly frozen on dry ice. The fixed tissues were resuspended in SDS lysis buffer (EMD Millipore Corp) containing the Roche protease inhibitor cocktail and 1 mM PMSF, and each sample was transferred to a TPX plastic tube (Diagenode Inc., Denville, NJ) and sonicated for 15 cycles of 30 sec ON and 30 sec OFF using a Bioruptor (Diagenode Inc.). Fragmentation was checked by gel analysis to confirm sheared ranges of 300-600 bp. Dynabeads (Life Technologies, Grand Island, NY) were incubated with 5 μg of a specific antibody directed against H4K5Ac for ChIP-Seq. Similarly, samples from another group of animals were incubated with the same antibody to confirm some of the ChIP-Seq data using ChIP-PCR. Sequences for the ChIP-PCR primers are shown in Additional file 1: Table S2. For DNA sequencing, adapters were ligated to the precipitated DNA fragments or the input DNA to construct a sequencing library according to the manufacturer's protocol (Illumina, San Diego, CA). Sequencing images generated were analyzed with the Firecrest program followed by base calling using the Bustard program. The first 41 bases were aligned to the rat reference genome using the Gerald program. Firecrest, Bustard and Gerald are part of the Illumina Analysis Pipeline package. H4K5Ac binding identified by ChIP-Seq was calculated by comparing the control and METH-treated groups after corrections for DNA inputs. The microarray and ChIP-Seq data have been deposited in NCBI under GEO accession number GSE42776. ChIP-Seq and expression data were compared as described previously [51]. In short, genes were sorted based on gene expression values (Z scores) and binned into groups of 100 genes. The average gene expression value for each bin was then calculated. H4K5Ac tags were assigned to the nearest promoter region of genes and normalized to the total tag counts for that sample. The mean tag counts of the above-mentioned bins were also calculated. The averaged binned gene expression values were then graphed against mean tag counts for each bin.
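A minimal sketch of the binning procedure just described, assuming per-gene expression Z scores and promoter-assigned H4K5Ac tag counts are already available (variable names are illustrative):

```python
import numpy as np

def binned_expression_vs_tags(z_scores, promoter_tags, total_tags, bin_size=100):
    """Sort genes by expression Z score, bin into groups of `bin_size`,
    and average expression and normalized promoter tag counts per bin.
    Returns the binned means and the regression coefficient R^2.
    """
    tags_norm = promoter_tags / total_tags  # normalize to total tag count
    order = np.argsort(z_scores)
    n_bins = len(z_scores) // bin_size
    keep = n_bins * bin_size  # drop the incomplete final bin, if any
    z_binned = z_scores[order][:keep].reshape(n_bins, bin_size).mean(axis=1)
    t_binned = tags_norm[order][:keep].reshape(n_bins, bin_size).mean(axis=1)
    r = np.corrcoef(z_binned, t_binned)[0, 1]
    return z_binned, t_binned, r ** 2
```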
Statistical analysis
Statistical analysis was performed using analysis of variance (ANOVA) followed by post-hoc analyses (StatView 4.02, SAS Institute, Cary, NC). Values are shown as means ± SEM. The null hypothesis was rejected at p < 0.05.
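As a sketch, the equivalent one-way ANOVA in Python — the original analysis used StatView, and the per-rat values below are illustrative placeholders, not data from this study:

```python
import numpy as np
from scipy import stats

# Illustrative per-rat measurements for the four treatment groups:
# saline-saline (SS), saline-METH (SM), METH-saline (MS), METH-METH (MM).
ss = np.array([1.0, 1.1, 0.9, 1.0, 1.2, 0.95])
sm = np.array([6.1, 6.8, 5.9, 6.5, 6.3, 6.6])
ms = np.array([1.2, 0.9, 1.1, 1.0, 1.3, 1.1])
mm = np.array([9.0, 9.5, 8.8, 9.6, 9.2, 9.4])

f_stat, p_value = stats.f_oneway(ss, sm, ms, mm)  # one-way ANOVA
print(f"F = {f_stat:.2f}, p = {p_value:.3g}; significant: {p_value < 0.05}")
```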
Additional file
Additional file 1: These include Tables S1 to S4.
performed and analyzed ChIP experiments. BL dissected and helped in the drug injections. FSP performed and analyzed RT-PCR experiments. EL performed microarray and ChIP-Seq experiments. SD performed bioinformatics analysis of ChIP-Seq experiments. KGB supervised the overall microarray and ChIP-Seq experiments. CB analyzed the microarray data. All authors read and approved the final manuscript. | 2017-06-22T11:38:12.757Z | 2013-08-12T00:00:00.000 | {
"year": 2013,
"sha1": "f6ba2f5f24ee0c15e5ce40be08d14b099d0c73dd",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-14-545",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f6ba2f5f24ee0c15e5ce40be08d14b099d0c73dd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
166587330 | pes2o/s2orc | v3-fos-license | An Encounter of Diversity of Building Signage in Traditional Street Character at Melaka Historical Centre
Today, streets functioning as social arenas are often designed with little sensitivity towards the continuity of building appearance, particularly building signage design, thus creating an unattractive setting. Building signage that is designed without considering the whole context of the street and human-scale aspects has created a chaotic ambience. Thus, this paper discusses the continuity of building signage design that contributes to street character by referring to the traditional street model. A mixed method employing a questionnaire survey (n=330), in-depth interviews with street users (n=21), content analysis of archival data, and a visual survey was adopted. This study chose the streets at Melaka Historical City Centre because they represent local character: Jalan Tukang Besi, Jalan Tukang Emas, and Jalan Tokong. The study shows that the continuity of building design created by the diversity of building signage produces an attractive environment that should be considered in new street design. The results can act as a benchmark for designing future or existing streets as public spaces. A street milieu that portrays the spirit of a place also has potential for the future tourism sector by attracting visitors to places with local character.
Introduction
Street is one of the important elements that form a city or town; scholars note that streets make up approximately 80% of the city. Streets also function as important public spaces that act as social arenas where people mingle, meet, spend leisure time, pass through, and conduct business. This is different from the road, whose function is oriented more towards vehicle movement, such as the highway (Jones et al., 2008; Moughtin and Merterns, 2006). Therefore, it is important to design a street with local character that makes people enjoy walking around or relaxing. A street appearance with local character can thus be better appreciated by users and fulfill local needs.
Unfortunately, rapid development has affected street appearance, including the proliferation of building signage. The growth in business and economic activity has created a diversity of trade along the street. Hence, to market their businesses and services, owners place large building signage on the front of the building façade to inform people about the type of business inside. As a result, large-scale building signage appears all along the street, as every company wants to promote its business through large signage. This appearance has covered the architectural style, especially of historical buildings with unique façades, creating a street with a chaotic and unpleasant walkable environment (Ja'afar, 2014; Ja'afar et al., 2014).
Methodology
In order to identify the diversity of appearance of building signage in creating a traditional street character, this study applied a mixed-mode design with fieldwork and survey approaches. The techniques utilized were a questionnaire, observation, in-depth interviews, and document review of archival data. For the quantitative approach, the questionnaire was distributed to 330 street users. The study classified them into two types of user, namely (i) mobile users, who do not depend on the study area (for example, tourists and buyers), and (ii) static users, who are tied to the study area (for example, residents and people who work there) (Alamoush and Ja'afar, 2017). The questionnaire data were analyzed using the Statistical Package for the Social Sciences (SPSS) software via simple statistics such as frequency and percentage, as sketched below.
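As a sketch, the frequency and percentage tabulation described above can be reproduced outside SPSS; the response column and counts below are illustrative only:

```python
import pandas as pd

# Illustrative responses (n = 330) on whether signage marks street uniqueness
responses = pd.Series(["agree"] * 205 + ["disagree"] * 70 + ["neutral"] * 55)

freq = responses.value_counts()                      # frequency
pct = responses.value_counts(normalize=True) * 100   # percentage
print(pd.DataFrame({"frequency": freq, "percent": pct.round(1)}))
```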
The next approach was qualitative, through observation, in-depth interviews, and document review of archival data. This study applied visual analysis for the observation technique, while the other techniques used thematic analysis via a framework and variables determined through the literature review, with the data analyzed in the NVivo computer software. For the interviews, this study drew on the same groups of respondents as the questionnaire — mobile and static users — with 21 informants. This number was selected because, according to previous studies, 20-30 samples are sufficient for a qualitative approach (Alamoush and Ja'afar, 2017; Ja'afar, 2018).
Figure-2. The location of the study area in the Malaysian context
The study area chosen comprises Jalan Tukang Besi, Jalan Tukang Emas, and Jalan Tokong at Melaka Historical City Centre, Malaysia. The combination of these three streets creates a long street of approximately 600 meters. This site was chosen because it is among the traditional streets in Malaysia that portray local character; thus, the ambience of a local environment rich with culture has endured to this day (Ja'afar, 2018; MPMBB Rancangan Khas, 2010).
Results and Discussion
Building signage that contributes to traditional street character was identified through the techniques of questionnaire, interview, observation, and archival data, as shown in Table 1. In this study, building signage is measured through the appropriateness of location, size, and visual quality. The historical review of archival data found that traditional building signage could be found at several locations on a building. These include (i) the wall surface of the building or (ii) the surface of timber that is carved directly. The form of building signage includes an alphabet or a logo. The location of building signage differs for each building façade because there is a diversity of architectural styles from different eras. As mentioned in Figure 2, there are seven locations of building signage, namely the (i) main entrance, (ii) frieze, (iii) space between the shop frontage and frieze, (iv) column at ground floor, (v) tang long, (vi) door, and (vii) window. The signage may appear in Roman or Chinese characters.
According to the survey, 62% of respondents mentioned building signage as one of the elements describing the uniqueness of buildings in the study area (Table 1). The observation supported the historical data, as inherited building signage can still be seen at different locations on each unit of a building. As a result, the assemblage of buildings that creates blocks along the street generates a unique ambience that enhances the original architectural façade. However, the observation also found that modern building signage could be divided into two types of appearance. As shown in Figures 3 and 4, these are (i) large-scale building signage that covers or shields the building's architectural style, and (ii) small building signage that does not shield the architectural style.
Figure-4. Modern building signage that does not obscure the architectural style
The appearance of heritage and today's signage was elaborated in detail by respondents through the interviews. Below are the responses as quoted:

"From my point of view, big building signage is not needed. It looks ugly because it covers the beauty of building façade. A small signage is already enough to recognize building function while walking. This is because I could recognize the building function according to the types of item they display outside the building." (Respondent-1)
"The uniqueness of this place is that you can find a traditional building signage that looks nice with Chinese words. Usually, the builder carves at the column, which mentions the types of business. No need the today's signage, bigger than the building. … As you can see here (showing the door), they also carve at timber door. Today, this type of door is expensive because there is a value of heritage." The responses above have discovered that the modern building signage through big appearance is not applicable because it does not enhance the uniqueness of historical façade. Besides, respondents could recognize building function not because of large building signage. The second statement describes the type and location of traditional building's signage with Chinese characters that enhance the uniqueness of the street compared to large modern signage. The design of this traditional building signage also portrays visibility of building's function.
The observation supported the statements of both respondents above. As shown in Figure 3, the observation found that the location and large size of modern building signage have hidden the building's architectural style and broken the continuity between building façades and blocks along the street. In addition, mural building signage using bright colors was found. According to scholars, the theme-park approach to building signage should be avoided because it visually disrupts the harmony of the street environment (Syed Zainol Abidin Idid, 2008).
The above interpretation shows that it is essential to design building signage with an appropriate location, size, and visual quality. This is parallel with other scholars' view that the design of building signage in a street that functions as a public space should respond to the visibility needs of pedestrians. Because the main users of the street are pedestrians, a size appropriate to the human scale is necessary, and large building signage is not needed. Further, signage that responds to the human scale will enhance the building's architectural style. Thus, the type and location of building signage should consider the orientation of pedestrians along the block of buildings, and several scholars have suggested suitable placements for building signage, as shown in Figure 5 (Gehl et al., 2006; Syed Zainol Abidin Idid, 2008). Other scholars add that building signage designed at the human scale can reduce the competition between public and traffic signage (Bogert, 2011; City of Meridian, 2009). The characteristics of building signage such as type, location, size, and visual quality as suggested by the scholars above can be seen in our traditional building signage (Gehl, 2010; Ja'afar, 2014; Jacobs, 2010; Syed Zainol Abidin Idid, 2008). As explained above, our traditional building signage generates uniqueness by enhancing the architectural style, not just of individual buildings but of the whole block on the street. As a result, the ambience contributes to the visibility of building function and the continuity and unity of the streets. Thus, the design of traditional building signage can become a model or reference for designing today's building signage in the street context.
To sum up, building signage design that upholds architectural continuity and place attraction is crucial to traditional street character. This study has found that the design of a new or existing street should consider two concepts. First, building signage design should be appropriate both to the individual building and to the whole blocks that form the street character, with appropriate location, size, scale, and visual quality; its characteristics should display the visibility of building use, reduce negative visual impact, reduce confusion, and reduce competition with public and traffic signage, thereby creating continuity of the building façade and harmony of the streetscape environment. Second, building signage for pedestrians should support a human-scale environment, in which the appropriateness of type and location is considered.
Conclusion
The study has identified the diversity of building signage that contributes to traditional character at several streets in the Melaka Historical City Centre. Characterizing these attributes depends not only on archival data and observation of the current situation, but also on users' experience and perception. This is because the users are the 'customers' of the street who use the place; thus, their point of view should inform the design of a place with local character, because the place is designed for them.
To summarize, there is much to learn from the diversity of type, size, location, and visual quality of heritage building signage that can be applied to new and existing streets today. In doing so, we can avoid 'giant' or standardized building signage that creates a dull environment.
"year": 2018,
"sha1": "04a16ddf438e617ad5186ff544c9b5c8cd200123",
"oa_license": "CCBY",
"oa_url": "https://arpgweb.com/pdf-files/spi6.180.839-846.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6140cb61a589a5f4e54d18149c43e68673db2d6e",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"History"
]
} |
118870297 | pes2o/s2orc | v3-fos-license | Fitting CMB data with cosmic strings and inflation
We perform a multiparameter likelihood analysis to compare measurements of the cosmic microwave background (CMB) power spectra with predictions from models involving cosmic strings. Adding strings to the standard case of a primordial spectrum with power-law tilt n, we find a 2-sigma detection of strings: f_10 = 0.11 +/- 0.05, where f_10 is the fractional contribution made by strings in the temperature power spectrum (at multipole l = 10). CMB data give moderate preference to the model n = 1 with cosmic strings over the standard zero-strings model with variable tilt. When additional non-CMB data are incorporated, the two models become on a par. With variable n and these extra data, we find that f_10<0.11, which corresponds to G mu<0.7x10^-6 (where mu is the string tension and G is the gravitational constant).
Introduction.-The inflationary paradigm is successful in providing a match to measurements of the cosmic microwave background (CMB) radiation, and it appears that any successful theory of high energy physics must be able to incorporate inflation. While ad hoc single-field inflation can provide a match to the data, more theoretically motivated models commonly predict the existence of cosmic strings [1]. These strings are prevalent in supersymmetric D- and F-term hybrid inflation models (see e.g. [2]) and occur frequently in grand-unified theories (GUTs) [3]. String theory can also yield strings of cosmic extent [4]. Hence the observational consequences of cosmic strings are important, including their sourcing of additional anisotropies in the CMB radiation.
In this letter we present a multi-parameter fit to CMB data for models incorporating cosmic strings. It is the first such analysis to use simulations of a fully dynamical network of local cosmic strings, and the first to incorporate their microphysics with a field theory [5,6]. It yields conclusions which differ in significant detail from previous analyses based upon simplified models: we find that the CMB data [7] moderately favor a 10% contribution from strings to the temperature power spectrum measured at multipole ℓ = 10, with a corresponding spectral index of primordial scalar perturbations n_s ≃ 1. There are also important implications for models of inflation with blue power spectra (n_s > 1). These are disfavoured by CMB data under the concordance model (power-law ΛCDM, which gives n_s = 0.951 +0.015/−0.019 [8]) and previous work seemed to show that this remains largely the case even if cosmic strings are allowed (n_s = 0.964 ± 0.019 [9]). However, with our more complete CMB calculations, we find that the CMB puts no pressure on such models if they produce cosmic strings. Our conclusions are slightly modified when additional non-CMB data are included, with the preference for strings then reduced.
CMB calculations.-In the combined inflation plus strings case, inflation creates primordial perturbations which evolve passively until today but, in the intervening time period, cosmic strings actively source additional perturbations. Given the small size of the observed CMB anisotropies, the perturbations may be treated linearly and any coupling between those seeded by the two mechanisms can be ignored. The string and inflation perturbations can therefore be evolved via separate calculations, yielding two contributions to the CMB power spectrum that are statistically independent and so are simply added together to give the total power spectrum.
Calculating the cosmic string component presents a challenge because string evolution is non-linear and the string width is very much smaller than the string separation at times of importance for CMB calculations. Previous comparisons of the string CMB power spectrum against data have relied upon models which neglect the width, representing local strings as 1D objects and then either evolving them according to the Nambu-Goto equations appropriate for a relativistic string [10] or employing an unconnected segment model (USM) [9,11]. These USMs involve ensembles of unconnected string segments with stochastic velocities and with segments removed to mimic the time dependence of the string density seen in simulations. A third approach is to simulate instead global strings, which do not localize their energy into the string cores. The cores may be left unresolved and field-based CMB calculations [12] have been used elsewhere [13,14].
In [5,6] we used a field-based approach for local strings, via the Abelian Higgs model. We were able to resolve the cores and to reach a string separation of ∼ 100 times their width, which we carefully checked to be sufficient to reach a scaling regime. This regime, in which the statistical properties of the network scale with the horizon size, is of critical importance as it enables the statistical results to be applied to the later times required in CMB calculations. A great advantage of the field theory is that it naturally includes the decay of the string network into Higgs and gauge radiation, and the resulting back-reaction on the network. Thus our CMB calculations for strings are the first to include a consistent mechanism for decay and backreaction.
A feature of field theory simulations is a very low density of string loops [15], in sharp distinction to Nambu-Goto simulations on which the conventional cosmic string scenario is based. Further work is needed to understand the origin of the difference, on which bounds from cosmic rays [15] or gravitational wave production [9] sensitively depend, but CMB calculations depend on the large-scale properties, about which there is broad agreement. Indeed the USM has enough flexibility to approximate our power spectrum: the left hand graph of Fig. 1 of the erratum to [11] is similar to Fig. 13 of [5]. However, the USM does not reproduce the detailed shape of the power spectra, nor can it give limits on the string tension µ without reference to simulations such as ours. Our calculations represent a significant step forward in reliability and accuracy, deserving careful comparison to the data.
Data fitting approach.-The form of the cosmic string contribution to the temperature power spectrum is shown in Fig. 1, where it is compared to observational data and the best-fit standard inflation model. The normalizations of the inflation and string power spectrum components are free parameters, with that for strings being proportional to (Gµ)² (where G is the gravitational constant and µ is the string tension). For Fig. 1 the normalization of the string component has been set to match the data at multipole ℓ = 10, corresponding to Gµ = (2.04 ± 0.13) × 10⁻⁶, a factor of 2-3 higher than the corresponding value from previous work [10,11,16]. Clearly a string component this large is ruled out and we hence introduce the parameter f_10, the fractional contribution from cosmic strings to the temperature power spectrum at ℓ = 10.
Recalculating the inflationary component at a particular cosmology takes only a few seconds, but for the string contribution this takes many hours and it therefore appears that a full Markov chain Monte Carlo (MCMC) multi-parameter fit is unfeasible. However, following [17], we fix the form of the string component and vary only its normalization, via Gµ. Given that any changes in the cosmological parameters are small and that the strings are sub-dominant, this amounts to a small error in the total inflation plus strings prediction, below the uncertainties in the CMB data [27], and the MCMC results are unaffected. We hence use a version of the standard CosmoMC [18] code, modified to incorporate the fixed-form cosmic string component.
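A minimal sketch of how a fixed-shape string template can be combined with the inflationary spectrum inside such a fit; here the string amplitude is parameterized directly through f_10 rather than Gµ, and all names are illustrative:

```python
import numpy as np

def total_cl(cl_inflation, cl_string_template, f10, l_pivot=10):
    """Total temperature power spectrum for inflation plus strings.

    Both inputs are arrays indexed by multipole l. The string template has a
    fixed shape and is rescaled so that it contributes a fraction f10 of the
    total power at l = l_pivot; the two spectra simply add because the
    inflationary and string-sourced perturbations are statistically independent.
    """
    i_p = cl_inflation[l_pivot]
    t_p = cl_string_template[l_pivot]
    # Solve f10 = A*t_p / (i_p + A*t_p) for the template amplitude A
    amp = f10 * i_p / ((1.0 - f10) * t_p)
    return cl_inflation + amp * cl_string_template
```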
We primarily consider four different models: two parameterizations of the primordial power spectrum, both with and without strings. We always allow for variations in the Hubble parameter h, the physical baryon and total matter densities Ω_b h² and Ω_m h², as well as the optical depth to last scattering τ. We then either take Harrison-Zeldovich (scale-invariant) adiabatic primordial perturbations with amplitude A_s or add the additional freedom of a power-law tilt n_s: A_s² → A_s² (k/k_0)^(n_s−1). This yields the two zero-string models which we label as HZ and PL respectively, with PL being the established inflationary concordance model and HZ being a restriction of this: n_s = 1. We add strings to these two models yielding models HZ+S and PL+S, which therefore have the extra parameter (Gµ)². Then, in the later stages of our discussion, we also consider primordial tensor perturbations and a finite running of the scalar spectral index dn_s/d ln k, but we will assume negligible neutrino mass and flat space throughout.

Figure 1. The temperature power spectrum contribution from cosmic strings, normalized to match the WMAP data at ℓ = 10, as well as the best-fit cases from inflation only (model PL) and inflation plus strings (PL+S). These are compared to the WMAP and BOOMERANG data. The lower plot is a repeat but with the best-fit inflation case subtracted, highlighting the deviations between the predictions and the data. Note that the string contribution is identical to that shown in Fig. 14 of [5], but here has a linear horizontal axis for ℓ > 100.
Results using only CMB data.-The results when using measurements from the WMAP, ACBAR, BOOMERANG, CBI and VSA projects [7] are illustrated in Fig. 2. This shows the marginalized 2D likelihood surfaces for f_10 versus h, Ω_b h², A_s² and n_s for both HZ+S (points) and PL+S (contours). For PL+S, there is a significant degeneracy, involving primarily these five parameters, that allows large values of f_10 to fit the data [28]. The result is f_10 = 0.11 ± 0.05, which is a 2σ detection of strings. It also yields n_s = 1.01 ± 0.04, which is significantly larger than in model PL, or the result of n_s = 0.964 ± 0.019 found in [9] for PL+S using the USM. Figure 1 (lower) shows the deviations between the best-fit PL+S case, the best-fit PL case and the CMB data. Given that the best-fit PL+S case is given by f_10 = 0.099 and n_s = 1.00 (see endnote [29] for the other parameter values), it is clear that not only is n_s = 1 under no pressure if cosmic strings are included, but it is able to fit the data moderately better than the n_s = 0.952 best-fit under model PL. Indeed, when the maximum likelihood values L_max are compared via Δχ²_eff = −2 ln(L_max^(PL+S)/L_max^(PL)), we obtain Δχ²_eff = −3.9 at the expense of a single extra parameter. However, as the PL+S best-fit value of n_s is extremely close to one, HZ+S has an almost identical L_max value. Therefore model HZ+S gives Δχ²_eff = −3.9 relative to the concordance model with zero cost in terms of the number of parameters.
A more complete analysis of the freedom in a model is provided by its Bayesian evidence value [19], and Liddle et al. [19] have previously used this statistic to demonstrate that WMAP data do not actually rule out model HZ, despite the n_s = 0.951 +0.015/−0.019 result returned under the standard model PL. Here, we calculate evidence ratios for our four models using the Savage-Dickey method [20] with flat priors of 0 < f_10 < 1 and 0.75 < n_s < 1.25, giving the results shown in the table. We find that the relative evidence of PL+S to PL is barely distinguishable from unity, as expected for merely a 2σ detection of strings. However, model HZ+S has a Bayes factor of 7.3 ± 1.2 relative to PL and is therefore moderately preferred. That is, a finite string component is favored by CMB data over a tilted power spectrum, and the result f_10 = 0.10 ± 0.03 from model HZ+S is therefore of interest.
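For illustration, a sketch of the Savage-Dickey estimate from MCMC samples, assuming a flat prior on the parameter and a simple histogram estimate of the marginal posterior density at the nested point (the window width is an arbitrary choice):

```python
import numpy as np

def savage_dickey_bayes_factor(samples, prior_lo, prior_hi, nested_value=0.0, width=0.02):
    """Bayes factor of the nested model (parameter fixed at nested_value)
    relative to the extended model: the marginal posterior density at the
    nested point divided by the flat prior density there.
    `samples` holds MCMC draws of the parameter, e.g. f_10.
    """
    prior_density = 1.0 / (prior_hi - prior_lo)
    # Density estimate in a window around nested_value, clipped to the
    # prior support (this handles a nested point on the boundary, f_10 = 0).
    lo = max(nested_value - width, prior_lo)
    hi = min(nested_value + width, prior_hi)
    frac = np.mean((samples >= lo) & (samples < hi))
    posterior_density = frac / (hi - lo)
    return posterior_density / prior_density
```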
Use of non-CMB data.-We must also check that these conclusions remain valid when non-CMB data are included, and we hence consider that the Hubble Key Project (HKP) yielded h = 0.72 ± 0.08 [21]. Further, measurements of deuterium abundance in high redshift gas clouds, combined with big bang nucleosynthesis (BBN) calculations, give Ω_b h² = 0.0214 ± 0.0020 [22] and, while similar determinations using other light isotopes do not yield global concordance, it is still interesting to consider this measurement also. Figure 2 shows these two measurements via vertical lines in the relevant plots and it is clear that they each disfavor large values of f_10 in model PL+S. It is also evident that they lower the preference for model HZ+S since the majority of the plotted MCMC points lie at least 1σ from these two results.
With these data included, model PL+S now yields f_10 = 0.05 +0.03/−0.04 or f_10 < 0.11 (95% confidence) and the 2σ detection is removed. However, the result of n_s = 0.97 ± 0.02 still does not rule out n_s > 1 with any confidence (cf. n_s = 0.953 ± 0.015 obtained using the USM with these data [9]). The Bayes factor for model HZ+S relative to PL is reduced, but only to 0.68 ± 0.12, leaving HZ+S on par with the standard model.
We also incorporate galaxy survey data via the matter power spectrum, although there are uncertainties over the use of such data when strings (or other defects) are included [17]. However the CMB constraints, together with our calculated string contribution to the matter spectrum, imply that strings make a negligible contribution to the matter power spectrum on large scales (as is also the case using USM calculations [11]). These are the same scales where the zero-string case needs no corrections for non-linearity, which have been questioned in [23]. We therefore conservatively include SDSS Luminous Red Galaxy data [24] for only k/h < 0.08 Mpc⁻¹, finding that it leaves our results essentially unchanged: f_10 = 0.10 ± 0.04 and n_s = 1.00 ± 0.03 for PL+S, with an evidence value of 7.7 ± 0.7 for HZ+S. Including also data for 0.08 < k/h < 0.2 Mpc⁻¹ and non-linear corrections [25] gives f_10 < 0.11, n_s = 0.97 ± 0.02 and evidence 0.50 ± 0.05, but the use of the non-linear regime makes these results less reliable.
Hence, while we await further updates from the observational community regarding these additional data, even with them included, model HZ+S remains competitive relative to PL.
Tensors and running.-When the freedom for a non-zero primordial tensor contribution is incorporated as a generalization of model PL, tensor modes give a negligible (and possibly zero) improvement in the fit to CMB data. However they do raise the allowed n_s to 0.98 ± 0.03 [8] (CMB only), which is a greater effect than the USM strings of [9]. As an addition to PL+S, tensors are more preferred but again they increase the allowed n_s values. For the CMB+HKP+BBN case we find n_s = 0.99 ± 0.02, hence even the BBN data puts no pressure at all on n_s > 1 when both strings and tensors are included.
Adding finite dn_s/d ln k (running) to model PL does give a marginal improvement to the fit, with CMB data preferring a slight negative running [8]. This lowers small and large scales relative to intermediate ones and may hence be thought to have a similar effect as strings. However, adding strings smooths out the acoustic peaks and in fact there is little correlation between dn_s/d ln k and f_10. Hence we find that the above results are barely affected by finite running.
Conclusion.-By including cosmic strings, we find a 6 parameter model with n_s = 1 that performs better than, or about as well as, the established concordance model, and that the latest data does not necessarily favor n_s < 1. We also find that, when incorporating the (debatable) deuterium BBN result, the cosmic string contribution is constrained to f_10 < 0.11 or Gµ < 0.7 × 10⁻⁶. Even at this level it is likely that cosmic strings will soon be detectable using the B-mode polarization of the CMB [6] and we await future data releases with great excitement.
Finally, we note that our bounds have been derived only for classical Abelian Higgs strings with equal vector and scalar particle masses [5], and that, for example, F-term inflation may be more accurately treated using simulations with different values. Similarly, different CMB predictions for strings may be found with (p, q)-string networks from string theory [4], or from other models such as semi-local strings [26]. A confirmed string detection would open up the challenge of differentiating the models and hence learning a great deal about inflation and high energy physics.
We would like to thank Rob Crittenden, Richard Battye, Andrew Liddle, and David Parkinson for helpful conversations. We acknowledge financial support from PPARC/STFC ( | 2019-04-14T01:48:11.878Z | 2007-02-08T00:00:00.000 | {
"year": 2007,
"sha1": "47caebd5e158a0a8ef243fc0af3aa2baff9d5ba6",
"oa_license": null,
"oa_url": "http://sro.sussex.ac.uk/id/eprint/15288/1/PhysRevLett.100.021301.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7395f2c968ec65e3d0862542c8438603fd6607de",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
229529367 | pes2o/s2orc | v3-fos-license | YIELD OF POTATO CULTIVARS AS A FUNCTION OF NITROGEN RATES
The use of fertilizers at appropriate doses positively impacts production and the environment. Therefore, we aimed to evaluate the influence of nitrogen (N) rates on the crop yields of the potato cultivars Ágata and Atlantic in Unaí, Minas Gerais (MG), and Ágata in Mucugê, Bahia (BA), Brazil. The cultivation of Ágata and Atlantic was conducted in MG from May to August and June to September 2014, respectively. In BA, Ágata was cultivated between September and December 2014. A random block experimental design was used with treatment rates of 0, 30, 70, 120, and 280 kg ha⁻¹ of N. The macro- and micronutrient concentrations in potato leaves were evaluated. At the end of the growth cycle, the production of tubers was also evaluated. In the absence of N application, it was observed that P, K, S, and B were below the adequate levels in Atlantic-MG, the S and Zn levels were lower than the adequate levels in Ágata-MG, and the N, K, Mg, and S levels were less than the adequate levels in Ágata-BA. The other nutrients met the needs of the potato, with the N increase being favorable to the levels of most nutrients in all experiments. The maximum rates of N varied between 138 and 194 kg ha⁻¹ in the high and low cation exchange capacity (CEC) regions, respectively. The knowledge of the interaction among soil attributes, climate conditions and crop specificities allows for the improved prediction of the dosage of N and a reduction in the optimum amount without affecting
INTRODUCTION
The potato (Solanum tuberosum L.) is an important food source due to its high energy and nutritional contents (SHEN et al., 2019). Under favorable conditions of climate, cultivar, and advanced technology, the productivity of the crop can reach 60 t ha⁻¹. However, the characteristics that increase the cost of production, in association with market price fluctuations, may pose a financial risk to the producer (STÜRMER et al., 2014).
The low availability of N in the arable soil layer, in addition to the high demand for the nutrient by plants, means that this essential nutrient is one of the most limiting factors in the productivity of potato crops (BRAUN et al., 2013).
The plants adapt to nutrient fluctuations around the roots by means of the coordinated adjustment of root morphology and gene expression. This happens through the regulation of the rate of absorption by the roots, which decreases with deprivation and increases with the availability of N (ISHIKAWA-SAKURAI; HAYASHI; MURAI-HATANO, 2014). Thus, the supply of N synergistically or antagonistically alters the absorption and use of other nutrients present in the soil, with the genetically controlled absorption of nutrients affected by the genotype-environment-management interaction (MA et al., 2016; MALTAS; DUPUIS; SINAJ, 2018).
Low rates of N result in lower potato production, smaller tubers, early senescence, accumulation of carbohydrates in the leaves, and a higher proportion of carbon allocated to the root (MOKRANI; HAMDI; TARCHOUN, 2018). To avoid this, it is possible to apply excessive quantities of N fertilizers to ensure desirable yields, since the cost of this fertilizer is relatively low compared to the total cost of production. Farmers use large amounts of N fertilizer in an attempt to supply the N required for high crop productivity without a proper study of the soil, compensating with excessive N application, and in the absence of precise methods to guide nutritional management (KONG et al., 2013).
A disparity between the optimum fertilizer amount recommended by the literature and that used by producers, associated with the lack of technical information on the subject (QUEIROZ et al., 2013), justifies the need to research this area. The results should guide the consistent, conscientious use of fertilizers, and ensure a sustainable relationship between the environment and the economy by reducing the cost of production (AYYUB et al., 2019).
The optimization of the use of N, increased potato yield, and reduced nutrient loss depend on the correct selection of the N source, dose, and time of application (CAMBOURIS et al., 2016; SOUZA et al., 2019; MILROY; WANG; SADRAS, 2019), along with the interaction with the genotype, soil moisture level, and soil type (SARAVIA et al., 2016). The development of strategies for aligning agricultural practices to ensure the availability of N in the soil, compatible with potato N demand, is considered challenging by researchers and producers (RENS et al., 2018; SOUZA et al., 2020).
Nutritional management is dynamic and is affected by factors external and internal to the crop, in which adequate rates reflect an improved balance of the system. Thus, the aim of this study was to evaluate the influence of rates of N fertilizer on the yield of the Ágata and Atlantic potato cultivars.
MATERIALS AND METHODS
The cultivation of Ágata and Atlantic potato cultivars was carried out in Unaí, Minas Gerais (MG; 6º21'27" S and 46º54'22" W, 640 m altitude, Awi climate in the Köppen climate classification, and with a clay-textured soil classified as Dystrophic Red Latosol), Brazil, from May to August and June to September in 2014, respectively. The maximum and minimum temperatures from May to August varied between 29-37°C and 9-14°C, respectively, and from June to September varied between 29-40°C and 9-17°C, respectively. The relative humidity ranged from 51-68% and 52-64% in the period from May to August and June to September, respectively. The total rainfall was 50 mm and 57 mm from May to August and June to September, respectively.
In Mucugê, Bahia [BA; 13º00'19" S and 41º22'15" W, 986 m altitude, Cfb climate according to the Köppen classification, and soils of a medium texture that are classified as Red-Yellow Latosol (TEIXEIRA et al., 2017)], an experiment was conducted with the Ágata cultivar from September to December in 2014. The maximum and minimum temperatures during the study period ranged from 25-29°C and 12-16°C, respectively. The relative humidity ranged from 62-80%, and the total rainfall over the study period was 350 mm.
All experiments were conducted in fields used for potato production. Before planting, soil sampling was carried out in the 0-20 cm layer, and the sample was chemically analyzed in accordance with the method described by Teixeira et al. (2017). The values are in Table 1. A random block experimental design was used with five rates of nitrogen application (0, 30, 70, 120, and 280 kg ha⁻¹ of N) and four replicates per N treatment. Each experimental unit consisted of six rows of 6 m in length, with rows spaced 0.8 m apart and plants spaced 0.30 m apart, totaling 28.8 m² per plot. The evaluations were carried out on the two central lines, with the assessed area of each experimental unit totaling 8 m².
A standard dose of the nutrients P and K, fixed at 480 kg ha⁻¹ of P₂O₅ and 220 kg ha⁻¹ of K₂O, was applied following the recommendations of the Commission of Fertility of the Soils of Minas Gerais (RIBEIRO; GUIMARÃES; ALVAREZ, 1999) and based on the soil analysis results. The sources of N, P, and K used were urea (45% N), triple superphosphate (41% P₂O₅), and potassium chloride (60% K₂O), respectively. At the time of planting, micronutrients were applied: 2.2 kg ha⁻¹ of B and Cu and 5.4 kg ha⁻¹ of Mn and Zn in the plots in Unaí-MG, and 2.5 kg ha⁻¹ of B, Cu, and Zn and 5 kg ha⁻¹ of Mn in Mucugê-BA.
Three months before planting, liming was carried out with dolomitic limestone (total relative neutralizing power: 90%) at doses of 1 and 0.9 t ha⁻¹ for Ágata and Atlantic in Unaí, respectively, and 0.6 t ha⁻¹ for Ágata in Mucugê. After the ground was prepared with plowing, harrowing, and opening of the furrows, the fertilizers were applied along with the treatment rates of N. The fertilizers were distributed manually in the planting furrow and were incorporated into the soil with the aid of a hoe.
Of the total N and K applied, 60% was applied at planting, and the remaining 40% was applied as top-dressing at 27 days after planting (DAP), at which time hilling was performed.
The potato crops were irrigated by a pivot system with a sufficient water supply for the full development of the potatoes in the growing period (around 500 mm). In general, the irrigation rates in both areas were 6 mm every 2 days until plant emergence, 10 mm every 2 days in the vegetative development phase, and 12 mm every 3 days in the stolonization and tuberization phases. Phytosanitary care was carried out when needed based on monitoring of pests, diseases, and weeds using products registered for the potato crop and at rates recommended by the manufacturers.
At 35 DAP in each cultivar and location, 20 complete leaves were collected from the third fully-developed trifoliate leaf of plants in each plot (RIBEIRO; GUIMARÃES; ALVAREZ, 1999). The leaves, packed in paper bags, were taken to the laboratory for the foliar analysis of nutrients.
The leaf samples were washed and then dried in an oven with forced air circulation (65ºC ± 5ºC). After drying, the leaves were ground, and the foliar macronutrient concentrations (N, P, K, Ca, Mg, and S) and micronutrient concentrations (B, Cu, Fe, Mn, and Zn) were determined according to the methodology of Möller et al. (1997).
At the end of the experiments (112 and 115 DAP for Ágata and Atlantic in MG, respectively, and 106 DAP for Ágata in BA), the tubers produced in the assessed areas of the plots (disregarding 0.5 m at each end) were manually harvested, classified, and weighed on an electronic scale. From these results, the potato yields were estimated in t ha⁻¹.
The tubers were classified by diameter as: Special (> 5 cm), 1X (4-5 cm), and 2X (< 4 cm). The commercial yield was estimated from these classes, and the total yield from the production of all assessed tubers.
The data were submitted to an analysis of variance to determine significant differences among treatments. The means were compared with the F test, and a polynomial regression analysis was applied to describe the significant responses to the N rates. All statistical analyses were carried out in the SISVAR statistical program (FERREIRA, 2014).
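As a sketch of the regression step, a quadratic response curve can be fitted to yield against N rate and its vertex gives the optimal dose (Xmax); the yield values below are purely illustrative, not data from this study:

```python
import numpy as np

n_rates = np.array([0.0, 30.0, 70.0, 120.0, 280.0])  # kg/ha of N (treatments)
yields = np.array([28.0, 34.0, 39.0, 41.0, 37.0])    # illustrative yields, t/ha

a, b, c = np.polyfit(n_rates, yields, deg=2)  # fit y = a*x^2 + b*x + c
x_max = -b / (2.0 * a)                        # vertex of the parabola (Xmax)
y_max = np.polyval([a, b, c], x_max)          # predicted maximum (Ymax)
print(f"Optimal N rate: {x_max:.0f} kg/ha; predicted yield: {y_max:.1f} t/ha")
```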
Foliar content of macro and micronutrients
The polynomial equations that were suitable for the experimental N treatment rates for the macronutrients (N, P, K, Mg, and S) and micronutrients (Cu, Mn, and B) are presented in Tables 2 and 3.
Nitrogen fertilization mostly influenced the foliar nutrient concentrations in Ágata-BA, where only Ca and Fe did not differ among the N rates. On the other hand, the application of N in Ágata-MG did not significantly affect the Zn concentration or any macronutrient other than N. The concentrations found in Ágata-MG were 2.9-3.2, 42.8-43.6, 1.2-1.3, and 6.2-6.5 g kg -1 dry mass (MS) for P, K, S, and Mg, respectively, and 30.2-35.7 mg kg -1 MS for Zn. Of these, only S was below the appropriate level suggested by Lorenzi et al. (1997), whereas the others were within the range indicated for potatoes (Tables 2 and 3).

Table 2. Polynomial equations adjusted for macronutrient foliar concentrations in Atlantic and Ágata potato cultivars grown in Unaí, Minas Gerais (MG) and Ágata grown in Mucugê, Bahia (BA) under different N rates. The rate of N (Xmax) required to reach the maximum foliar nutrient concentration (Ymax) in the respective potato cultivar is given, and the interval for the appropriate foliar concentration for potatoes was determined according to Lorenzi et al. (1997).

The strongest response of foliar macro- and micronutrient levels to N rates, observed in Ágata-BA, was probably related to the characteristics of the soil, especially its texture. In the sandy BA soil, nutrient dynamics were more pronounced: a larger amount of free nutrients in the soil solution made them more available to plants but also more susceptible to leaching.
The dynamics that occur in the soil are influenced by climatic conditions; in particular, the interaction between temperature and precipitation/irrigation at each site affects soil microbial activity, which can alter nutrient availability through mineralization-immobilization (CHEN et al., 2013). This highlights the importance of considering these regional factors when recommending fertilizer application rates.
Calcium concentrations were not significantly affected by N doses in any cultivar or location. The concentrations ranged from 15.7-17.7, 15.5-17.2, and 12.6-13.5 g kg -1 MS for Atlantic-MG, Ágata-MG, and Ágata-BA, respectively (Table 1), all within the appropriate range of 10-20 g kg -1 MS recommended by Lorenzi et al. (1997).
The increasing dose of N promoted a linear increase in the concentration of N in Atlantic-MG and Ágata-BA. In Ágata-MG, rates over 149 kg ha -1 reduced the concentration of N. The Atlantic-MG cultivar was the least responsive to variations of the N rate in terms of foliar concentrations of nutrients, as the N rate did not affect the Mg, Fe, Cu, Zn, and B concentrations, which were 67-80, 254.2-335, 49-73, 65.7-76, and 17-21.2 mg kg -1 MS, respectively (Tables 1 and 2). Only B was below the appropriate level suggested by Lorenzi et al. (1997).
The P and K concentrations at low rates of N (0 and 30 kg ha -1 ) were below the appropriate range recommended by Lorenzi et al. (1997) in Atlantic-MG. The maximum concentrations of P and K (3.6 and 49 g kg -1 , respectively) were at N rates of 202 and 259.5 kg ha -1 in Atlantic-MG. In Ágata-BA, the maximum concentrations of P and K (3.9 and 37.3 g kg -1 , respectively) were at the N rates of 142.5 and 149.5 kg ha -1 (Table 1).
Increasing the N rate promoted an increase in the Cu concentration in Ágata-MG; at the maximum evaluated N rate (280 kg ha -1 ), the estimated maximum concentration of Cu was 32.2 mg kg -1 MS. The Mn concentration also increased in response to the increased N rate in Ágata-MG and Ágata-BA, as did the B concentration, with maximum concentrations in Ágata-MG of 122.5 and 34.4 mg kg -1 for Mn and B, respectively, and in Ágata-BA of 347.2 and 54.5 mg kg -1 for Mn and B, respectively (Table 2).
There was a quadratic adjustment for the Cu concentration in Ágata-BA (maximum concentration of 20.4 mg kg -1 MS at the N rate of 156.5 kg ha -1 ) and for Mn in Atlantic-MG (maximum concentration of 80.9 mg kg -1 MS at the N rate of 251.2 kg ha -1 ) (Table 2).
In Ágata-BA, K was below the adequate concentration at all N rates, including the maximum rate. This may have occurred because the rates of absorption of cations and anions are unequal, and the presence of available nutrients in the soil solution is no guarantee of their absorption and translocation (MENGEL; KIRKBY, 1987).
The S concentration was below the appropriate range in all locations and cultivars, with levels below 2 g kg -1 MS. This indicates a reduced concentration of S in the soils of MG and BA, making it necessary to apply an S-containing fertilizer or gypsum, a low-cost way to increase the concentrations of S and Ca in the soil.
The concentrations of the micronutrients Cu, Fe, Mn, and Zn fell within the appropriate range for the potato crop (Lorenzi et al., 1997) in all the cultivars and sites evaluated (Table 2). Boron was below the range in Atlantic-MG, within the range in Ágata-MG, and above the range in Ágata-BA. This may be related to differences in nutrient dynamics, absorption efficiency, and translocation between the two cultivars in MG and between the MG and BA soil types.
In a study of the extraction and export of nutrients in different potato cultivars by Fernandes, Soratto and Silva (2011) and Soratto, Fernandes and Souza-Schlick (2011), the foliar concentrations of N, P, K, Mg, S, Cu, Fe, Mn, Zn, and B were higher than those found in this study for Ágata and Atlantic. It is worth noting that the differences between the regions of cultivation, the N rate applied, and general crop management were also responsible for the different responses of the cultivars to the absorption of nutrients between experiments.
When comparing the lower limits of the levels recommended by Lorenzi et al. (1997) with the levels found in plots where no N fertilizer was applied, P, K, S, and B were 32.8, 6.3, 36.8, and 15.1% lower, respectively, than the appropriate concentrations in Atlantic-MG; S and Zn were 50.8 and 27.6% lower, respectively, than adequate in Ágata-MG; and N, K, Mg, and S were 6.5, 13.1, 14.2, and 50.8% lower, respectively, than adequate in Ágata-BA.
All other nutrients met the needs of the potato, in accordance with Lorenzi et al. (1997). Luz, Queiroz and Oliveira (2014) also found adequate N levels in the absence of N fertilizer application, stating that the levels contained in the soil were sufficient for the needs of the cultivar.
The nutrients applied, together with those contained in the soil, were sufficient to ensure good yield even in the absence of N application: yields exceeded the national average (30.5 t ha -1 ) by 7.8, 53, and 15.6% for Atlantic-MG, Ágata-MG, and Ágata-BA, respectively.
Yield of potato tubers
The potato cultivars responded in a quadratic manner to the application of N fertilizer in the production of Special class tubers and in total yield. The estimated maximum N rates were 141.50, 140, and 192.14 kg ha -1 in Atlantic-MG, Ágata-MG, and Ágata-BA, respectively, for yields of 34.27, 42.87, and 52.9 t ha -1 of tubers classified as Special (Figure 1). According to the regression equations, it was inferred that a reduction of 50% of the maximum estimated N rate would reduce the yield by 3, 4.5, and 12.1% in the Atlantic-MG, Ágata-MG, and Ágata-BA cultivars, respectively. Therefore, it was observed that the responses for the maximum yield of Special Class tubers of different cultivars grown in the same region were similar, whereas the same cultivar in different regions showed greater changes in yield and response to N fertilizer application.
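The percentage losses quoted above can be reproduced from any fitted quadratic written in vertex form, y(x) = y_max + a(x - x_max)^2. The sketch below uses placeholder coefficients loosely matched to the Ágata-BA Special-class figures; the actual fitted equations appear only in the figures of the original paper, so the output is illustrative.

```python
# Relative yield loss when the N rate is cut to a fraction f of the optimum,
# for a quadratic response y(x) = y_max + a*(x - x_max)**2 with a < 0.
def relative_loss(a: float, x_max: float, y_max: float, f: float) -> float:
    return -a * ((f - 1.0) * x_max) ** 2 / y_max

a, x_max, y_max = -7e-4, 192.14, 52.9   # placeholder values, roughly Agata-BA-like
for f in (0.5, 0.8):
    print(f"{f:.0%} of optimum N -> {relative_loss(a, x_max, y_max, f):.1%} yield loss")
# prints ~12.2% and ~2.0%, of the same order as the 12.1% and 1.6% reported above
```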
Ágata-BA required an N fertilization rate 27.1% higher than that of Ágata-MG; however, its yield was 18.9% higher. The climate conditions for Ágata-BA are ideal for growing potatoes; this, combined with the high technological level of the region's producers, contributed to the high yield observed in BA.
On the other hand, a 20% reduction in the maximum N rate generated a decrease of only 0.4, 0.6, and 1.6% in the production of Special Class tubers of Atlantic-MG, Ágata-MG, and Ágata-BA, respectively. In this sense, producers should pay attention to their production conditions, consistently analyzing the cost of production and trends in the market price of potatoes to decide how much fertilizer application can be reduced without affecting the final estimated profit. These calculations are especially important for Special Class tubers, given that tuber size is the characteristic most desired by consumers.
As for 1X Class tubers, only Ágata-MG responded to the application of N (Figure 2B), with a minor decrease in the production of this class as the rate increased; the minimum yield of 5.77 t ha -1 was observed at the maximum N rate. For Atlantic-MG and Ágata-BA, the yield of 1X Class tubers varied from 2-2.8 t ha -1 and 3.7-9 t ha -1 , respectively (Figures 2A and 2C).
For 2X Class tubers, the response to N differed between cultivars and growing regions. Atlantic-MG showed production levels between 0.3 and 0.6 t ha -1 . In Ágata-MG, the smallest yield (4.05 t ha -1 ) occurred at the maximum rate of N (280 kg ha -1 ), whereas in Ágata-BA the largest yield (1.55 t ha -1 ) occurred at that same rate (Figure 3). The 2X Class tubers have a low market value; therefore, producers should favor growing conditions that do not promote this class. Although the Ágata-BA cultivar showed greater variation in 2X Class yields between the extreme rates (0 and 280 kg ha -1 of N), it produced half the quantity of this tuber class produced by Ágata-MG. Bangemann, Sieling and Kage (2014) reported that N fertilization significantly influenced the classification of potato tubers across several years, favoring the formation of tubers with larger diameters while not altering the production of tubers with smaller diameters.
The estimated maximum N rates for total yield were 170.75, 138.37, and 194.56 kg ha -1 of N in Atlantic-MG, Ágata-MG, and Ágata-BA, respectively, achieving total yields of 38.58, 54.45, and 65.88 t ha -1 (Figure 4).
The decreased yield in the three experiments at the highest N rates can be attributed to the physiological role of N in photosynthesis: the N absorbed by plants can drive vigorous vegetative growth, but this does not necessarily translate into high tuber yield (LIU et al., 2017). The difference between the optimal rates in the two areas can be explained by the characteristics of the soil, which was sandier in BA than in MG; nutrients in the BA soil were less strongly held on colloids, facilitating nutrient loss through leaching. This was expected, because the soil analysis of the sites showed that MG had a higher CEC than BA. Thus, correctly interpreting the soil analysis can allow the producer to manage fertilization rates sensibly.
Studies have shown that good fertilizer management practices have the potential to minimize future impacts and maximize resource use efficiency (HERATH et al., 2014). This is particularly true for areas with sandier soils that are more prone to leaching, such as BA.
The N level required for the maximum yield is reduced with the reduction in the irrigation rate (AHMADI et al., 2016). A moderate irrigation rate (40-50%) and moderate N (135-150 kg ha -1 N) were shown to achieve yields that tripled the national average yield in the northwest of China; this demonstrates how appropriate management can result in the production of top quality potatoes and the preservation of irrigation water (YANG et al., 2017).
The mild climate of Mucugê-BA greatly favored the development of the Ágata-BA potato, with production 21% higher than that of Ágata-MG, which was grown in a hotter climate. This is because higher temperatures can reduce the efficiency of N use in plants; thus, during a season when the temperature is higher, less N fertilizer should be provided (ZHOU et al., 2017).
In addition, the discrepancies between N rates and observed yield may be related to the influence of other factors such as agronomic conditions and producer management (LUZ; QUEIROZ; OLIVEIRA, 2014). Regarding management aspects, it has been shown that some potato cultivars used for processing require about 150% more N and K than table potatoes (DAS et al., 2015). Similarly, we observed that the Atlantic-MG cultivar (used for processing) needed 23.7% higher N inputs than Ágata-MG in order to achieve high yield; nonetheless, the Atlantic-MG cultivar showed the lower yield.
In Brazil, the fertilization rates with N are variable and can be between 120 and 200 kg ha -1 (RIBEIRO; GUIMARÃES; ALVAREZ, 1999). The results of the current experiment fell within this broad range of recommended N application, but the focus of our work was to emphasize that cultivars respond to N application with higher yields at N rates below the maximum recommendation.
In the literature, it is apparent that there are variations between the optimal N rates in potatoes grown under different management strategies, technological levels, and in different locations. It was found that the yield of the BRS Ana cultivar did not respond to N rates greater than 100 kg ha -1 , which suggested that the recommended rate according to the soil analysis (120 kg ha -1 ) could be reduced by up to 17% (SILVA et al., 2014). Silva et al. (2007) recommended rates of 163-171 kg ha -1 of N, depending on whether the scenario is favorable or unfavorable to the price of the potato. Kawakami (2015) reported that 120 kg ha -1 of N, with half applied at planting and half in coverage, results in high yield.
Banjare, Sharma and Verma (2014) also highlighted the importance of splitting N applications for more sustainable management. Splitting the application of N fertilizer can reduce leaching and allows nutrients to be used more efficiently in space and time, because plants only need a low input of nutrients at the beginning of their development (HERATH et al., 2014). This management practice is particularly essential in regions with medium to sandy soil textures, such as BA.
CONCLUSIONS
The yield of the Special Class and total potato tubers responded to N rates in all cultivars and locations. The maximum N rates for the total yield varied between 138 and 194 kg ha -1 of N in regions of high and low CEC, respectively.
Nitrogen fertilization should be oriented towards the potato production context, considering the soil attributes and the dynamics of the soil's constituents (texture and fertility), the climatic conditions (precipitation and temperature), the specificities of the cultivar (genetic control of nutrient absorption and root system formation), and a cost analysis (economic and environmental). Focusing attention on the interactions among such factors over many years will make it possible to refine the prediction of the required N dosage, reducing the optimum amount of N applied without affecting the market value of the potato crop; this requires site-specific management strategies. | 2020-11-26T09:03:19.039Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "05017067876911dff406ccab4af036c728d1e203",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/rcaat/a/h7KPfzfKJbnQFgj7HLFzppq/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b05720fe3e2e8655806a58b4d1ba4ff34e66cd24",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
10115259 | pes2o/s2orc | v3-fos-license | Estimation of Physical Activity Energy Expenditure during Free-Living from Wrist Accelerometry in UK Adults
Background: Wrist-worn accelerometers are emerging as the most common instrument for measuring physical activity in large-scale epidemiological studies, though little is known about the relationship between wrist acceleration and physical activity energy expenditure (PAEE).
Methods: 1695 UK adults wore two devices simultaneously for six days; a combined sensor and a wrist accelerometer. The combined sensor measured heart rate and trunk acceleration, which was combined with a treadmill test to yield a signal of individually-calibrated PAEE. Multi-level regression models were used to characterise the relationship between the two time-series, and their estimations were evaluated in an independent holdout sample. Finally, the relationship between PAEE and BMI was described separately for each source of PAEE estimate (wrist acceleration models and combined-sensing).
Results: Wrist acceleration explained 44–47% of the between-individual variance in PAEE, with RMSE between 34–39 J•min-1•kg-1. Estimations agreed well with PAEE in cross-validation (mean bias [95% limits of agreement]: 0.07 [-70.6:70.7]) but overestimated in women by 3% and underestimated in men by 4%. Estimation error was inversely related to age (-2.3 J•min-1•kg-1 per 10y) and BMI (-0.3 J•min-1•kg-1 per kg/m2). Associations with BMI were similar for all PAEE estimates (approximately -0.08 kg/m2 per J•min-1•kg-1).
Conclusions: A strong relationship exists between wrist acceleration and PAEE in free-living adults, such that irrespective of the objective method of PAEE assessment, a strong inverse association between PAEE and BMI was observed.
Introduction
Physical activity (PA) is important for the prevention of several chronic diseases such as diabetes, cardiovascular disease, and certain cancers [1]. However, there is uncertainty about the dose-response relationships as well as the prevalence of the exposure, owing to difficulty in assessing habitual physical activity accurately [2]. Several methods now exist but wrist accelerometry is becoming a more common objective measure of habitual physical activity in largescale epidemiological studies [3,4], due to its relative low cost and high acceptability to study participants. This necessitates a better understanding of the relationship between wrist acceleration and other measures of physical activity so that estimates of prevalence and disease relationships can be compared between populations assessed using different methods.
A recent consensus statement expressed the imminent need for harmonisation of accelerometry data collected in free-living adults [5]. The current lack of comparability between measurement modalities limits possibilities of assessing the global prevalence of physical activity, or pooling data from multiple sources to better understand its relationship with disease. For example, a meta-analysis aiming to determine whether physical activity attenuates the effect of the FTO gene on obesity risk was forced to dichotomise physical activity (active or inactive) across the multitude of exposure measures [6]; while this was sufficient to confirm the existence of an interaction, it was not possible to determine what dose of activity was necessary to protect against the deleterious FTO variant.
An important component of physical activity is its associated increase in energy expenditure (PAEE). If PAEE is captured during free-living in high time resolution, this produces intensity time-series data that can be used to describe a person's behavioural profile. A number of previous studies have validated wrist acceleration derivatives against gold-standard measures of energy expenditure, such as the doubly-labelled water (DLW) method [7] and indirect calorimetry from respiratory gas analysis [8]. However, the high cost of DLW has prohibited such work in large population samples, and the nature of the measurement only allows the exploration of total activity volume, rather than the underlying intensity time-series. Breath-by-breath analysis does provide intensity time-series data but the method is not a feasible solution for monitoring energy expenditure in free-living. While laboratory-based comparisons have utility in elucidating the relationships between wrist acceleration and energy expenditure during specific activities, such experiments are unlikely to adequately capture the full spectrum of human activities in representative proportions, and we remain ill-equipped to recognise different activity types in free-living records.
The purpose of this study was to complement existing validation studies by building predictive models of classic PA measures from wrist acceleration derivatives, using both acceleration of the trunk and PAEE collected in free-living as criteria. We then evaluate the derived models by cross-validation in age, sex, and BMI strata, and finally investigate if model performance translates into valid methods of harmonisation by examining their association with obesity, compared to that of the criterion measure.
Methods
This dataset was part of the Fenland Study [9], an ongoing prospective cohort study designed to identify the behavioural, environmental and genetic causes of obesity and type 2 diabetes. Participants were recruited to attend one of three clinical research facilities in the region surrounding Cambridge, UK. All participants provided written informed consent and the study was approved by the local ethics committee (NRES Committee-East of England Cambridge Central) and performed in accordance with the Declaration of Helsinki.
A subsample of 1695 participants were asked to wear two devices simultaneously; a combined heart rate and movement sensor (Actiheart, CamNtech, Cambridgeshire, UK), which measured heart rate and uniaxial acceleration of the trunk in 15-second intervals [10], and a wrist accelerometer (GeneActiv, ActivInsights, Cambridgeshire, UK) worn on the non-dominant wrist, which recorded triaxial acceleration at 60 Hertz. Participants were asked to wear the monitors for 6 complete days and advised that both monitors were waterproof and could be worn continuously including during showering and sleeping.
At the clinic visit, prior to the free-living monitoring period, participants performed a ramped treadmill test to establish their individual heart rate response to a submaximal test [11]. These measurements produced calibration parameters to inform a branched equation model of PAEE [12], which has been validated against instantaneous PAEE (intensity) from indirect calorimetry [13,14]. Following pre-processing of the heart rate data collected during free-living to eliminate potential noise [15], the branched equation model was applied to calculate instantaneous PAEE (J•min -1 •kg -1 ). This methodology has been successfully validated against PAEE from DLW in several populations [16,17], including a sample of UK men and women where it was shown to explain 41% of the variance in free-living PAEE and with no mean bias [18].
The raw triaxial wrist acceleration data was auto-calibrated to local gravitational acceleration (in g) using a method described elsewhere [19]. The calibrated acceleration was then used to calculate Vector Magnitude (VM) per sample: VM = √(x² + y² + z²), where x, y, and z are the three acceleration axes. VM, or Euclidean Norm, can be interpreted as the magnitude of acceleration the device was subjected to at each measurement, including gravitational acceleration. There will also be a potential sensor noise component in the high frequency domain (above the human physiological range), which we filtered out with a 20 Hertz low-pass filter. In the present study, we calculated two derivatives of VM, both aiming to remove the gravity component from the signal in order to isolate the activity-related acceleration component: 1) Euclidean Norm Minus One (ENMO) subtracts 1g from VM and truncates the result to zero at sample level, whereas 2) High-Pass Filtered Vector Magnitude (HPFVM) applies a high-pass filter to the VM signal at 0.2 Hertz, therefore treating gravity as a low-frequency component to be filtered out. These two signals, ENMO and HPFVM, are both plausible approximations of acceleration as a result of human movement [20], and are the primary descriptions of wrist acceleration used in the following analyses.
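A minimal sketch of the two derivatives follows, assuming a calibrated (n, 3) array of 60 Hz triaxial acceleration in g. The 20 Hz low-pass and 0.2 Hz high-pass cut-offs are from the text; the Butterworth design, the filter order, and taking the absolute value of the high-pass residual are assumptions, since the paper does not state them.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 60.0  # Hz, the sampling rate used in the study

def vm(acc):
    """Vector magnitude (Euclidean norm) per sample; acc has shape (n, 3) in g."""
    return np.linalg.norm(acc, axis=1)

def vm_denoised(acc, cutoff=20.0, order=4):
    """VM with sensor noise above 20 Hz removed, as described in the text."""
    sos = butter(order, cutoff, btype="lowpass", fs=FS, output="sos")
    return sosfiltfilt(sos, vm(acc))

def enmo(acc):
    """Euclidean Norm Minus One: subtract 1 g and truncate negatives to zero."""
    return np.maximum(vm_denoised(acc) - 1.0, 0.0)

def hpfvm(acc, cutoff=0.2, order=4):
    """High-pass filtered VM: treat gravity as a <0.2 Hz component and remove it."""
    sos = butter(order, cutoff, btype="highpass", fs=FS, output="sos")
    return np.abs(sosfiltfilt(sos, vm_denoised(acc)))  # abs() is an assumption
```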
Non-wear detection procedures were applied to both the wrist acceleration [7] and combined-sensing traces [18], and any such non-wear periods were excluded from these analyses. Briefly, non-wear in the wrist acceleration data was defined as time periods where the standard deviation of acceleration in each axis fell below 13mg for longer than 1 hour, and non-wear in the combined sensing data was defined as extended periods of non-physiological heart rate concomitantly with extended (>90min) periods of zero movement registered by the accelerometer.
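The wrist non-wear rule can be sketched as below; the paper gives the 13 mg per-axis threshold and the one-hour minimum duration, while the 60 s evaluation window is an assumption.

```python
import numpy as np

def nonwear_mask(acc, fs=60, sd_thresh_g=0.013, min_hours=1.0, win_s=60):
    """Flag samples where the SD of every axis stays below 13 mg for > min_hours.
    acc: (n, 3) array in g. Evaluating SD in 60 s windows is an assumption."""
    n_win = int(fs * win_s)
    n = (len(acc) // n_win) * n_win               # drop the trailing partial window
    windows = acc[:n].reshape(-1, n_win, 3)       # (windows, samples, axes)
    still = (windows.std(axis=1) < sd_thresh_g).all(axis=1)

    need = int(min_hours * 3600 / win_s)          # windows per minimum run
    mask = np.zeros(still.shape, dtype=bool)
    run = 0
    for i, s in enumerate(still):
        run = run + 1 if s else 0
        if run >= need:
            mask[i - need + 1 : i + 1] = True     # mark the whole qualifying run
    return np.repeat(mask, n_win)                 # back to sample resolution
```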
All signals were summarised to a common time resolution of one observation per 5 minutes, an example of which is shown in Fig 1. This was chosen as an appropriate window length based on a variety of competing considerations. Firstly, the time-lagged physiological response of heart rate to movement precludes an instantaneous comparison and necessitates a physiologically appropriate time buffer. Secondly, due to hardware limitations and initialisation conditions, we could not guarantee a perfect time synchronisation between the two monitors. Finally, maintaining the highest possible time resolution within these constraints preserves the most variance in the intensity time-series, and maximises the number of observations in the dataset. The models derived in this work (described in detail below) exclusively use time-invariant signal features such as arithmetic means; this means that they are robust to changes in window size, and it is therefore equally appropriate to use them to estimate hourly or daily outputs from hourly or daily inputs.
Multi-level linear regression models were designed to independently predict PAEE and trunk acceleration from wrist acceleration. Four models were tested: a linear and quadratic model for each of ENMO and HPFVM.
The models were derived in a randomly chosen subset containing 60% of people in the cohort (n = 1050) and evaluated in the remaining 40% (n = 645). Model performance was assessed using within-and between-individual explained variances (Pearson's coefficient) and Root Mean Squared Error (RMSE) metrics, as determined from ANOVA repeated measures modelling specified with random effects at the participant level. After assessing the performance of these models on the test dataset as a whole, we selected the strongest model and tested for differential bias by sex, age and BMI categories of under/normal-weight, overweight, and obese (<25, !25 & <30, !30 kg/m 2 , respectively) within the test dataset. All statistical tests were performed in Stata version 14 (StataCorp, Texas, USA). In order to test the epidemiological utility of the derived models, we examined the associations between BMI and PAEE in the test dataset (n = 645). Using our criterion PAEE measure, we first characterised the linear dose-response relationship with BMI, adjusting for age and sex. We then repeated this analysis using predicted PAEE from each of the derived prediction models, and compared the beta coefficients and 95% confidence intervals to those using criterion PAEE.
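A minimal analogue of the derivation and evaluation pipeline is sketched below, assuming a long-format table with one row per participant per 5-minute window; the file and column names (id, hpfvm, paee) are illustrative, not from the study, and prediction on the holdout set here uses fixed effects only.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("windows.csv")       # hypothetical: id, hpfvm, paee per 5-min window

# 60/40 split at the participant level, as in the study.
train_ids = df["id"].drop_duplicates().sample(frac=0.6, random_state=1)
train = df[df["id"].isin(train_ids)]
test = df[~df["id"].isin(train_ids)]

# Quadratic wrist-acceleration model with a random intercept per participant,
# mirroring the multi-level regression described above.
model = smf.mixedlm("paee ~ hpfvm + I(hpfvm**2)", train, groups=train["id"]).fit()

pred = model.predict(test)            # fixed-effects prediction on the holdout set
rmse = ((test["paee"] - pred) ** 2).mean() ** 0.5
print(f"holdout RMSE: {rmse:.1f} J/min/kg")
```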
Results
A description of the population sample included in this analysis is given in Table 1. In total, 1752287 valid 5-minute windows from 1050 individuals were included in the training dataset; the median number of observations per individual was 1738, equating to just over 6 days. Mean PAEE across the sample was 36.4 J•min -1 •kg -1 , with higher average in men than women (38.1 and 34.7, respectively). Mean wrist acceleration according to both the ENMO and the HPFVM metrics was similar in men and women.
The overall performance of each of the models in predicting both PAEE and trunk acceleration is shown in Fig 2. Between-individual explained variance in trunk acceleration was between 51% and 56% for all models. For PAEE, there were only minor differences between models in terms of explained variance, ranging from 44% to 47%; but there were slightly more pronounced differences in RMSE, ranging from 38.8 J•min -1 •kg -1 at worst to 34.4 at best. (For reference, 1 standard MET is 71 J•min -1 •kg -1 .) Model 4 was the strongest model for discriminating activity intensity levels, as it yielded the lowest RMSE for both criterion measures. The derived PAEE and trunk acceleration equations for each model are listed in the appendix (S1 Table and S2 Table).
Model 1 contains a linear term for ENMO, which is the most common signal derivative in current use for wrist acceleration data; it explained 44% of the between-individual variance in PAEE and has a RMSE of just above 0.5 METs.
The family of models using HPFVM as the wrist acceleration metric generally outperformed their ENMO counterparts by 2 to 3% in predicting both trunk acceleration and PAEE. The quadratic models outperformed their linear counterparts, decreasing RMSE by 2 to 8% implying that the relationships between wrist acceleration and both trunk acceleration and PAEE are curvilinear, rather than linear.
Comparing the predictions of model 4 to PAEE from combined sensing in the cross-validation sample (n = 645) showed a negligible mean bias (0.07) with 95% limits of agreement between -70.6 and 70.7 J•min -1 •kg -1 (Fig 3, panel 1). Stratified by sex, results indicated a 1.2 J•min -1 •kg -1 overestimation in women, and a 1.8 J•min -1 •kg -1 underestimation in men. Age and BMI were centred on their means for this analysis, therefore their coefficients imply a trend from underestimation in the younger and less obese towards overestimation in the older and more obese (0.2 J•min -1 •kg -1 per year relative to mean age, and 0.3 J•min -1 •kg -1 per kg/m 2 relative to mean BMI). The distribution of this estimation error is visualised in Fig 3 using violin plots and overlaying traditional boxplots; the first panel shows the error distribution in the whole test set, and the remaining panels show error distributions within specific groups within the test set for comparison. It can be seen that estimation error was densely concentrated around zero for all groups, that there were no unusual estimation artefacts, and there were no outstanding differences between any of the groups. The association between PAEE and BMI was inverse across all models; the beta coefficients and their respective confidence intervals are visualised in the forest plot in Fig 4. All but one of the point estimates from the prediction models fell within the 95% confidence interval of the combined-sensing beta coefficient, and all confidence intervals from the wrist models overlapped the point estimate from combined sensing. The one outlying point estimate was from model 1, the weakest performing model according to other evaluations; however its quadratic counterpart (model 2) yielded the closest matching beta coefficient of all models.
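The mean bias and 95% limits of agreement quoted above are the standard Bland-Altman quantities; a short sketch, assuming paired per-participant PAEE estimates, is given below.

```python
import numpy as np

def bland_altman(estimate, criterion):
    """Mean bias and 95% limits of agreement between two paired measures."""
    diff = np.asarray(estimate) - np.asarray(criterion)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# e.g. bias, lower, upper = bland_altman(predicted_paee, combined_sensing_paee)
```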
Discussion
To our knowledge, this is the first study to describe the validity of predicting high-resolution free-living PAEE from wrist acceleration in a large population sample of adult men and women, allowing evaluation of model performance in population subgroups.
Simple models of wrist acceleration intensity were found to explain a high proportion of variance in both PAEE and trunk acceleration, with no evidence of significant difference in bias by age or BMI categories but small opposite biases in men and women (underestimation and overestimation of PAEE, respectively).
The derived equations with non-linear terms were not monotonic; however the non-linear terms responsible for these were statistically significant in all cases. The global maxima of these equations (983mg and 1369mg in models 2 and 4, respectively) are likely reflective of the highest observed activity levels within the measured population; the data is naturally densely concentrated in the low end of physical activity intensity, and very sparse at the high end, therefore the downward trend at the high end can be considered an artefact of overfitting to the lower end. In practice, an implementation of these equations should truncate the estimates to the global maxima and minima (or zero) where appropriate.
The slightly better performance of HPFVM models compared to ENMO models suggests that applying a high-pass filter to the VM signal may be a more effective approach to the removal of gravity from an acceleration trace. However, the result of this filtering is likely to be dependent upon various signal properties, such as the machine noise level and sampling frequency, and the rotational frequency of human movement with respect to gravity [20].
A traditional validation study would only be able to report the estimation error structure, leaving readers to speculate whether similar associations between a measured behaviour and an outcome would be observed, irrespective of method. We compared the associations between PAEE and BMI as an example; the beta coefficients in the models for predicted PAEE were strikingly close to the beta-coefficient for PAEE from combined sensing, with a strong overlap of the 95% confidence intervals. This final analysis demonstrates that a similar direction and magnitude of relationship between PAEE and BMI can be observed in this population, regardless of whether PAEE is estimated by wrist acceleration or combined-sensing.
The models explored in this analysis only utilised the magnitude of wrist acceleration for prediction, and still achieved strong results. There is potentially a greater explanatory power to be found in the multitude of signal features that are derivable from three-dimensional acceleration in waveform resolution. Nonetheless, we should be cautiously optimistic that even the basic and robust properties of this easily obtainable and commonly used measure are strongly related to the criterion measure of PAEE from combined sensing.
The validity of these analyses is naturally contingent upon the validity of the criterion measure, individually-calibrated combined sensing of heart rate and trunk acceleration. While it is not considered a gold-standard measurement of PAEE, this estimation method does have established validity for both intensity [13,14] and PAEE during free-living in the population used for the present evaluation [18], and to our knowledge this study currently represents the largest aggregation of simultaneous wrist acceleration and energy expenditure signals in free-living.
An additional potential limitation of this study is that it is neither nationally nor globally representative, but confined to a relatively affluent and culturally homogenous population living in the East of England. The prevalence of many activities, during which wrist acceleration may be more or less representative of PAEE, is likely determined by several factors such as culture, climate, and local landscape, and it is therefore possible that the specific relationships and error structures that we report here may not be universal. Still, our analytical sample comprises both men and women across a wide BMI and activity level range, thus providing a comparative framework for interpreting wrist acceleration data from population studies.
In conclusion, we have demonstrated that a strong relationship exists between PAEE and wrist acceleration. The best performing model explained 47% of the between-individual variance in PAEE with a RMSE of 34 J•min -1 •kg -1 (0.48 METs) and all prediction models produced similar associations with BMI. Further work should aim to improve upon the accuracy of PAEE prediction using a wider range of the signal feature space, and to explore generalizability in other populations.
Supporting Information S1 | 2018-04-03T00:17:14.727Z | 2016-12-09T00:00:00.000 | {
"year": 2016,
"sha1": "d917a054cff84e5658500ca65a7ea9f3325f7c70",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0167472&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d917a054cff84e5658500ca65a7ea9f3325f7c70",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
144277711 | pes2o/s2orc | v3-fos-license | Seminário Internacional de Bibliometria
TransInformação edits thematic issue
The present edition of our periodical TransInformação contains texts selected from the VII International Seminar on Quantitative and Qualitative Studies of Science and Technology. We present below the professors who participated in this issue and whom we once again thank.
International Seminar on Bibliometrics
As happens every two years in April, the International Seminar on Quantitative and Qualitative Studies of Science and Technology "Professor Gilberto Sotolongo Aguilar," now in its seventh edition and held under the auspices of the biennial International Congress on Information, took place at the Palacio de Convenciones in Havana, Cuba. The 2014 Seminar began on April 15th with the inauguration of the poster session and ended on April 17th. The program included 47 presentations: 28 oral presentations and 19 posters. As in previous years, there were many studies from Cuba and Mexico, but other countries were also represented, such as Belgium-Hungary, Brazil, Peru and Spain.
The Call for Papers attracted research studies, review papers, and case studies completed or in progress related to the quantitative and/or qualitative studies of science and technology. Bibliometric, scientometric, informetric, patentometric and webometric studies were considered to be of particular relevance, without discounting the importance of qualitative analytical methods and approaches.
Presentations covered diverse themes, for example university rankings, scientific communication studies, public health, and marketing in information science, astronomy, and neurosciences, among others. Topics of particular concern to Latin American experts attracted special interest, such as the quality of national databases and the representation of national publications in the most well-known databases, such as Web of Science and Scopus. Both aspects were considered significant for the use of mainstream sources for measuring local science. Another widely-commented topic was the decrease in the scientific and technological production of several of the countries in the region in the last three years.
Invited speakers were Dr. Wolfgang Glänzel, KU Leuven, Belgium and Library of the Hungarian Academy of Sciences, Budapest, with a talk entitled "Analysis of coauthorship patterns at the individual level", and Dr. Rogério Mugnaini from the University of São Paulo, Brazil, with a presentation called "Scientific communication in Brazil (1998-2012): Indexing, growth, flow and dispersion". An important outcome of the VII Seminar is this monographic issue of the journal TransInformação with a selection of papers presented during the event. Although the proceedings of the International Congress on Information include short articles in electronic format of the presentations at the Seminar, the publication of an extended version of these studies provides the opportunity not only to publicize the Seminar and encourage greater participation in the event but also for a wider audience to learn of current research in Latin America. In today's world, evaluation and accountability studies are increasingly demanded by research managers and science policy makers. For this reason alone, Latin America needs a body of research and researchers dedicated to advancing knowledge in the different metric specialties of information studies, with specific emphasis on the needs and characteristics of science and technology in the region.

Authors of all papers accepted for oral presentation at the event were invited to submit an extended version of their study for possible publication in TransInformação, giving special attention to theoretical aspects, analytical development and discussion of the results. Submitted manuscripts were externally peer-reviewed by the journal, and 11 articles were selected to make up this special issue.
The first two studies clearly show contrasting levels of data aggregation. While the first by Wolfgang Glänzel looks at co-authorship patterns of individual researchers using a range of bibliometric methods and indicators, the second by Mugnaini and co-authors analyses large data sets of Brazilian articles to understand growth, flow, and dispersion of journals across Bradford zones over a period of 15 years.
Social network analysis was present in a number of studies; among these is the paper by Pinto and Aguilar who analysed Latin American production on this subject from 1990 to 2013, giving special attention to contributing countries, universities and authors.
The study by Collazo-Reyes and co-authors used a qualitative approach to analyse the citation practices of the Mexican production in the field of astronomy from 1952 to 1972. Using references as means for semiotic interpretation, the study looked at the relationship between marks or signs associated with local affiliations and that of modern scientific communication patterns.
Three studies focussed on specific topics. Zacca-González and co-authors developed production, visibility, and collaboration indicators for health research for the period from 2003 to 2011 using the Scopus database. Freitas and co-authors analysed specific concepts in knowledge organization while González-Valiente studied marketing in the field of information disciplines.
The special issue closes with four technological studies. The first by Salguiero and Flores is qualitative in nature and makes a detailed comparison of 13 different open access tools for the retrieval and analysis of information used in a course in the area of Applied Cybernetics. Patentometrics, a subject little studied in Brazil, is the focus of the other contributions; in the case of Arreortúa and co-authors it is applied to vegetable oil combustion; Cabrera and co-authors study water and wastewater treatment technology; and Díaz-Pérez and co-authors study shale gas. All four use the International Patent Classification; Salguiero and Flores, and Cabrera and co-authors also use social network analysis.
The Seminar which will celebrate its eighth edition in 2016 continues to be an essential venue to bring together specialists from Latin America to present and discuss their research on relevant topics for the region. | 2019-05-05T13:04:13.963Z | 2014-11-28T00:00:00.000 | {
"year": 2014,
"sha1": "02137d34a8827b12e7c7452059bf63e90edc29e3",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/tinf/v26n3/0103-3786-tinf-26-03-00225.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "02137d34a8827b12e7c7452059bf63e90edc29e3",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
266804357 | pes2o/s2orc | v3-fos-license | Strengthening National Defense: Quality Management in Indonesian Army (TNI AD) Curriculum Development
ABSTRACT
INTRODUCTION
The enhancement of human resources (HR) in defense can be achieved through the education provided by the Indonesian National Armed Forces (TNI). This aligns with TNI's foundation in safeguarding the sovereignty of the Unitary State of the Republic of Indonesia (NKRI), as stated in the preamble to the 1945 Constitution: "to protect the entire Indonesian nation and the entirety of the Indonesian homeland." The Indonesian Army (TNI AD) is a component of the Indonesian National Armed Forces (TNI) (Sekretariat Negara Republik Indonesia, 2004). This necessitates that TNI AD enhance its professionalism, particularly in developing defense human resources, primarily through education. Consequently, it is expected to address the various threats, disruptions, challenges, and obstacles that arise because of global changes, especially concerning issues related to defense and security on land (Sebastian, 2018).
In its tasks, the Doctrine Development, Education, and Training Command (Kodiklat) of the Indonesian Army (TNI AD) adheres to a vision and mission that must be incorporated into the education curriculum. The concept of Kodiklat TNI AD is to be a professional military education institution within the land forces, while its first mission is to be responsible for the development of doctrine for land force operations, which will serve as a reference throughout the training and operational cycles. Additionally, Kodiklat plays a role in fostering education management and enhancing the human resources capabilities of TNI AD soldiers who uphold the Sapta Marga spirit and adhere to the Education Trilogy: Responsive, Adaptable, and Resilient. Lastly, Kodiklat is also responsible for nurturing the management of TNI AD subordinate units to improve their operational capabilities and readiness. The TNI AD doctrine, known as "Kartika Eka Paksi," is a guiding principle to strengthen the convictions and determination of all TNI AD personnel and is based on philosophical and conceptual foundations to serve as a guideline for everyone (Markas Besar Angkatan Darat, 2001).
The role of curriculum development management in enhancing the quality of education by Kodiklat TNI AD is crucial, especially considering the proliferation of various issues in Indonesia, particularly those related to defense and security, which are becoming increasingly complex and dynamic (Setiawibawa, 2017). As emphasized by Nurmantyo (2016), in today's highly competitive global arena, the destruction of a nation does not always occur through conventional warfare by enemy states; it can also take the form of new, often covert methods of warfare whose presence is difficult to detect but whose effects can be just as devastating as, and sometimes even more devastating than, those of conventional warfare.
However, both military education and general education face challenges in their utilization. These challenges occur both within TNI AD units and among the general population. One of the reasons for this is the increasing number of students in Indonesia. This has prompted educational institutions to adopt better foreign systems (transnational education), altering the public's perception of local education (Winarno, 2014). Therefore, TNI AD education requires changes to ensure its continued existence as an institution representing Indonesia nationally and internationally.
Such changes can be realized if TNI AD can effectively manage and enhance accountability and proportional responsibility in improving the quality of its personnel's education. This aligns with the Indonesian Defense White Paper (Pertahanan, 2015), which states that "military defense development is to develop personnel, organizations, material, and facilities according to the fundamental needs of the TNI." This is expected to address the demands of these changes through the implementation of its core tasks.
This research is intriguing as it has identified the dominant factors, policies, and efforts in curriculum development management that support the success of educational quality at the Cavalry Training Center and Infantry Training Center. This study makes theoretical contributions by generating generalizations, principles, and arguments about implementing quality in education. As for practical benefits, the research findings provide valuable contributions as input and evaluation material in enhancing education delivery. It also serves as a reference for future researchers who will delve into studies on education within Kodiklat TNI AD units.
METHODS
In this research, the researcher employed a qualitative approach. Qualitative research is a research method grounded in post-positivism and is used to investigate natural conditions where the researcher serves as the key instrument. Sampling of data sources is purposive, data collection techniques involve triangulation, data analysis is inductive/qualitative, and the results of qualitative research emphasize meaning over generalization. Qualitative research is based on a holistic natural setting, positions the human as the research instrument, conducts data analysis inductively, prioritizes the process over the results, and its findings are agreed upon by the researcher and the research subjects (Sugiyono, 2009).
The methods used are the survey and descriptive qualitative methods. The approach is grounded in logic, in that the data obtained and the research results must follow reason rather than arise only from the researcher's mind, and the research is corroborated by the researcher's experience, both direct and indirect. Data were collected through interviews, observation, and document studies. In examining curriculum preparation by the Indonesian Army education and training institutions, the interviews were organized around a series of stages, as described in Table 1. At the planning stage, the initial step involves the Branch Centers and Kodiklat TNI AD, which design the various types and forms of education needed to meet the organization's needs. They propose this plan to the Chief of Staff of the Army (Kasad), who then determines the educational curriculum to be integrated into the TNI AD program and budget. In the second phase, based on the resulting work programs and budgets, Kodiklat TNI AD and the Branch Centers form a special working group to compile the curriculum, which can be operationalized after obtaining approval. In the preparation stage, in accordance with the program and budget of the Indonesian Army, Kodiklat and the Branch Centers assign personnel who form a working group for curriculum preparation. This curriculum drafting team is responsible for coordinating with the Field of Technical Power, which acts as the subject-matter supervisor in educational institutions involved in the implementation of education. During the implementation stage, the curriculum is prepared according to the pre-designed plan.
FINDINGS AND DISCUSSION
Management is everything related to expertise in planning, executing, and overseeing the effective and efficient utilization of resources to achieve a desired goal. The opinions expressed by several experts fundamentally revolve around it being a process of planning and implementing activities to achieve an organization's objectives by working collaboratively with others and utilizing available resources. Stoner offers various perspectives on the definition of management, as cited by Wijayanti (2008). Management is a process involving planning, organizing, controlling, and leading multiple efforts by members of an organization. According to Terry (1986), the management process begins with planning, organizing, implementing, and monitoring. Management is a distinctive process consisting of planning, organizing, motivating, and controlling or supervising activities conducted to determine and achieve the organization's goals by utilizing human and other resources, as described by Freeze & Raschke (2007).
Meanwhile, according to White et al. (2018), management encompasses the entire group effort process, whether in the scope of government, public or private, civilian or military, large or small. According to Gie (2000), management is the overall process of coordination in every collaborative effort of a group of people to achieve specific goals. Management is the complete collaboration process between two or more people based on rationality to achieve goals (Rohman, 2012; Tarigan et al., 2020).
Dominant Factors in Curriculum Quality Development Management
According to the Director of Doctrine, several dominant factors impacting education at Kodiklat TNI AD have been identified. These factors can arise within the education center's environment as well as from external influences, and there are also factors associated with the learners themselves. According to the education center staff in the Kodiklat TNI AD hierarchy, the dominant factors influencing curriculum quality development include the following. Within the education centers, not all educational personnel are capable of curriculum development as expected. While some have qualifications as military instructors, their numbers are limited. As a result, the products created in curriculum quality development give the impression of repetitive education, and the methods in the received educational materials do not provide sufficient motivation for learning. A further dominant factor on the part of the education centers is time: synchronizing the curriculum development process with the opening of education can be challenging, so education sometimes commences before the curriculum is available.
A dominant factor frequently encountered by the education centers is the limitation of necessary references. Given the pace of technological advancement, new developments continually need to be addressed in the references. Moreover, it is not only the quantity that is limited: the quality of references provided by instructors or trainers specializing in the subject matter has also not been adequately ensured.
Curriculum Quality Development Management Policy
In observing the curriculum quality development management process, the quality of personnel plays a pivotal role in producing the expected curriculum. It has become an organizational norm that personnel strive to demonstrate competence in their tasks. As previously understood, quality is inherently linked to self-esteem, so high-quality human resources will compete to showcase their strengths. Therefore, Kodiklat TNI AD consistently implements incentive mechanisms for educators and educational personnel.
Organizational changes within TNI AD are efforts to adapt to the increasingly dynamic environment, aiming to enhance the units' capabilities in fulfilling their national defense roles, duties, and functions. In line with this thinking, the policy regarding assessing the need for organizational formation and changes aims to achieve a professional, effective, efficient, and modern (PEEM) organization, enabling it to carry out its tasks and functions more optimally. The success in preparing TNI AD units that are Professional, Effective, Efficient, and Modern (PEEM) is fundamentally determined through an integrated and integrative development cycle of TNI AD functions, particularly in Doctrine, Education, and Training. However, various issues from field development efforts indicate suboptimal and unsynchronized implementation of Doctrine, Education, and Training within TNI AD.
The curriculum has been elaborated at the operational level, which is active and known within Kodiklat TNI AD as the Educational Control Instrument, abbreviated as Katdaldik Operational Level. Information obtained in an interview with the Commandant of the Education Center explains that the Katdaldik Operational Level consists of instructional programs (Progjar), educational calendars (Kaldik), detailed lesson frameworks (RPT), weekly schedules (Jadming), and teaching preparations (Siapjar). The implementation of curriculum-related management policies by Kodiklat TNI AD includes curriculum development or revision, with stages of planning, preparation, execution, and conclusion.
In the planning stage, a curriculum is developed, beginning with curriculum planning. Kodiklat TNI AD, in collaboration with the Branch Centers, formulates a plan regarding the types and forms of education to be conducted. This is done to fulfill the capabilities of soldiers according to the organization's needs, and it is subsequently proposed to the Chief of Staff of TNI AD. Based on these proposals, the Chief of Staff of TNI AD determines the curriculum to be developed and incorporates it into the work program and budget of TNI AD. Following the work program and budget issued by the Chief of Staff of TNI AD, Kodiklat TNI AD and the Branch Centers establish a Curriculum Development Working Group. Once the curriculum developed by the working group is approved, it is implemented in the following fiscal year. Consequently, the Education Center is expected to have sufficient time to prepare the other necessary educational components.
Dominant Factors in Curriculum Development Management
In their explanation during the interview conducted at the Infantry Education Center, the educational operations staff emphasized that a crucial aspect carefully planned for in every educational implementation is the Educational Control Instrument. At the curriculum level, this is also called the operational-level curriculum or the Educational Program (AP). This instrument governs and details several critical aspects of educational execution, including the topics taught to students, categorized according to their respective subjects. Furthermore, the objectives of each topic are outlined as guidelines for achievement. The lesson content for each topic is also included in this instrument, along with a breakdown of the time frame for education, which is detailed through both practical and theoretical methods in the classroom. Additionally, the Educational Program specifies each subject's categorization as essential, necessary, or beneficial. References used in teaching each subject topic are also described in this instrument.
According to the research interviewees, several dominant factors shape the implementation of curriculum quality development in education, with clear consequences for the provision of education within Kodiklat TNI AD and at the education centers. These factors can arise from within the education center's environment or from external influences. Additionally, there are factors associated with the learners themselves.
Therefore, motivating educators and educational personnel should continue to be a priority. In Kodiklat TNI AD, efforts to improve quality are still constrained by the limited number of personnel and the allocation of teaching staff. Nevertheless, education continues to be conducted, even with lecturers having to take on multiple roles in delivering content. The quantity of material assigned to lecturers has not yet been standardized; efforts made so far have not produced standards but remain the results of benchmarking studies from abroad. Kodiklat TNI AD should therefore consider initiating educational standardization. This effort is an integral part of curriculum quality development. Standardization across all education centers provides a common benchmark for education centers within the Kodiklat TNI AD hierarchy in implementing the learning process. Standardization applies not only to the curriculum but also to the teaching staff, so that educators in education centers throughout the organization possess the same capabilities.
Curriculum Quality Development Management Policy
Planning involves setting targets to be achieved in the future; it entails defining actions and assessing various resources and methods (Nudin et al., 2023). In line with the research focus on curriculum quality development, the research covers the management stages, efforts, and challenges encountered in the field. In relation to this focus on educational management, particularly the curriculum, interviews and literature reviews were conducted, along with direct observations at the research site, Kodiklat TNI AD, and its implementing units. Data related to curriculum management were obtained from the research findings. On this basis, the researcher can systematically explain the conclusions, covering planning, organizing, implementing, monitoring, and evaluating. The management process has been implemented and is ongoing to the present day; the data obtained are therefore limited to 2016-2017, in keeping with the study's data limitations.
The explanation about planning has shown that the curriculum quality development management planning conducted by Kodiklat TNI AD is not yet comprehensive. The existing planning is not systematically coordinated with educational institutions within the Kodiklat TNI AD hierarchy, while its implementation involves other agencies from within and outside the country. The one-sided formulation of plans has resulted in education centers not fully understanding the planning process. Although personnel have generally been involved in the curriculum development working groups to support these activities, the planning created has determined the objectives of the education programs. However, neither the user units nor the education centers that implement the education programs have fully understood the goal-setting process within the curriculum, because the education centers only serve as implementers of the curriculum, as explained above. This certainly affects the measurement of how far the expected objectives can be achieved and perceived by the user units. The planning described by the education center staff is not explicitly linked to the vision and mission of the education institution or the overall concept of TNI AD. Thus, the relationship between the planned curriculum quality development program and the vision of TNI AD has not been accommodated and incorporated into the education curriculum.
Thus, it can be confirmed that the Infantry Education Center deeply understands the objectives of curriculum development and enhancement programs. The Cavalry Education Center has expressed the same sentiment. Furthermore, in terms of organization, the staff of Kodiklat TNI AD appreciates the efforts made to improve the quality of educators and educational personnel within the education centers, nurturing them in their respective environments so that they gain insight into how to develop and revise the curriculum as part of curriculum quality development. The programs implemented by the education centers align with Kodiklat TNI AD, and many other activities are conducted jointly with the education centers within its hierarchy, given that Kodiklat TNI AD shares the responsibility of mentoring educators and educational personnel in all education centers. The internal programs conducted by the education centers remind all that Kodiklat TNI AD and its hierarchy will continue to improve the curriculum and enhance the quality of education.
According to a lecturer serving at the Education Center of Kodiklat TNI AD, the operational implementation of the curriculum is heavily influenced by instructional tools and aids, because the materials provided need to be supported by such equipment. However, the condition of this equipment has not kept up with technological advancements as expected in educational goals, which can hinder the achievement of educational objectives. Similarly, the quality and quantity of human resources will undoubtedly impact the curriculum's operational implementation if they do not meet standards. The education process relies greatly on facilities and infrastructure. However, facilities and infrastructure decline in quality over time: from the moment goods are received from the seller, they undergo a decrease in both quality and quantity (Lestari et al., 2015). If facilities and infrastructure are no longer serviceable, they must be replaced with more suitable ones (Ridwanulloh et al., 2023). The management of facilities and infrastructure involves the overall provision of educational facilities and infrastructure, a deliberately planned and earnestly pursued process, as well as the continuous development of educational assets to keep them ready for use in the teaching and learning process, making it more effective and efficient in helping achieve the established educational objectives (Suban & Ilham, 2023).
From the research findings, it can be concluded that curriculum management has been conducted in line with curriculum development guidelines. The planning process carried out by the Education Center begins with submitting a plan for the education to be held in the upcoming year. Based on this, Kodiklat includes that type of education in its work plan and budget. Subsequently, after approval from the Indonesian Army (TNI AD), the next step is to establish the education in the TNI AD work program. In its implementation, this is accomplished by issuing directives or guidelines as a form of policy-level curriculum to ensure that education planning is carried out. This process continues until the commencement of education.
Education Center in managing curriculum quality improvement
Effective learning is a measure of the success of the learning process, so several factors, such as the availability of learning resources, high motivation for learning, student activity, smooth network access, task results, sufficient materials, and supportive locations, need to be considered (Rajab et al., 2023). Additionally, "Management is one of the skills needed in managing a program. The focus of the problem in this study is the program's planning, implementation, and evaluation" (Rahmawati et al., 2023).
One crucial aspect of implementing information and communication technology is the procurement, installation, and configuration of the software and hardware needed to support data and information management within the TNI AD environment. By facilitating the implementation of information and communication technology within TNI AD, Disinfolahtad can help enhance the efficiency and effectiveness of data and information management, expedite decision-making processes, and improve TNI AD's capabilities in executing operational and administrative tasks (Mutaqin et al., 2023).
During the interview with the instructors at the Infantry Education Center, it was mentioned that efforts have been made to make education delivery more comprehensive. These efforts include contributing to the development of educational programs that represent the policy-level curriculum; the suggestions cover the types of education required by infantry units as the end users of the graduates. Recommendations were also made regarding the operational-level curriculum, including the Core Learning Plan and Educational Events. These suggestions have been submitted; however, conditions in the field during education delivery sometimes remain as they were before. This is because of limitations in other aspects of management, such as planning: this stage requires sufficient time, but the gap between the decision to conduct a given course and its actual commencement is minimal. As a result, efforts to improve the operational-level curriculum have not been fully maximized by the Education Center.
From the human resources perspective, the Education Center has also enhanced its quality through formal education attended by instructors, various training activities, and informal mentoring of newcomers by experienced instructors. This is intended to ensure a shared understanding of education delivery. Additionally, education planners are willing to continuously update the curriculum to align with technological advancements and the tactics applied by the end-user units of the graduates, ensuring synchronization between the educational institution and the end users. The Education Center has also conducted benchmarking studies abroad regarding human resource quality. This program aims to ensure that the education cycle performed by the Education Center remains on par with that of other countries, given that the quality of soldiers is a benchmark of a nation's defense strength.
Planning a curriculum quality development program
The submission of curriculum quality improvement program planning, implemented in the form of curriculum development and revision, should be accompanied by a written plan on how the program will be executed. This is an organizational behavior that should be carried out. According to Thoha & Setiawan (2021), organizational behavior begins with the behavior of individuals within the organization; coordination without thorough planning reflects unprofessional individual behavior. Therefore, a written plan is necessary to ensure that relevant parties within Kodiklat TNI AD and the educational centers understand the program's vision for curriculum quality enhancement. Terry & Rue (2010) define management as a process or framework. Thus, Kodiklat TNI AD can base its curriculum quality improvement planning on the previously submitted proposal, ensuring the program is carried out efficiently and effectively to achieve better graduate outcomes. Planning therefore requires interconnection with other elements, such as administrative program development. The relationship between the planning and administration functions is to provide time and space for implementing parties and recipients of activities to adapt and determine their responses to the plans developed by Kodiklat TNI AD. Likewise, for Kodiklat TNI AD, the responses received from educational centers serve as the basis for completing subsequent programs. This ensures that the maturity of the curriculum quality improvement planning process can be achieved. Quality, as part of total quality, is related to education, where curriculum development requires quality as its guideline. According to Pieters and Austin in Sallis (2012), 'quality is related to self-esteem and enthusiasm.' Human resources with the expected competencies will have a different motivation level than others.
The planning prepared for curriculum development is not only intended to anticipate and predict changes during the curriculum implementation phase; clear instructions for field implementers must also be included in the plan. According to Terry (2006), human resources are a critical aspect of management: people are the operators who contribute to achieving organizational goals. Therefore, detailed guidance and direction will provide positive support for the human resources of the curriculum development team in structuring curriculum planning within the Kodiklat TNI AD environment.
In addition to the educational roadmap, support from an academic strategic plan is also necessary. This way, curriculum development management becomes directed according to the program. As previously explained, the program includes objectives, which makes supervision easier. Therefore, the phases of the Kodiklat TNI AD curriculum development plan should be as transparent as possible in their objectives and included in the established roadmap. This will serve as a control tool for achieving the desired educational goals. The importance of control as a form of supervision makes it a benchmark for Kodiklat TNI AD regarding curriculum development. Furthermore, it can also be used as an evaluation tool to assess how well the curriculum aligns with the dynamic needs of learner quality. Fundamentally, the determinant of educational success is the professionalism of the resulting graduates when deployed in operational units.
Terry & Rue (2010) state that a "plan is a document used as a scheme to achieve objectives." Plans typically include resource allocation, schedules, and other crucial actions, and can be categorized based on scope, time frame, specificity, and frequency of use. However, delays often occur in curriculum planning, which concerns educators in educational institutions; indeed, even new curricula are sometimes approved just before the start of the educational program. This is not in line with the theory, which states that the time frame is an important aspect that education planners should be vigilant about. Such delays indicate that the planning aspect has not been well implemented.
Therefore, the organization must implement its plans accurately and foster interaction among units. This helps create effective teamwork in curriculum planning. Handoko (1995) states, "The organization's goals are statements about conditions or situations that do not exist now but are intended to be achieved through organizational activities." Therefore, the body responsible for curriculum preparation should understand that its work serves future activities, not just the present, and the organizational mindset should not perceive its actions as serving only the present moment. An organization's planning capacity thus requires a broad perspective on organizational principles.
According to Kabeyi (2019), organizing involves assigning individuals according to their abilities to achieve the organization's goals. This phase aligns with what Kodiklat TNI AD has implemented by creating an organizational structure for the curriculum development workgroup based on the abilities and expertise within the Kodiklat TNI AD unit. In terms of management, organizing as a functional organization involves aligning human resources, tasks, and resources within the organizational structure. Consequently, this forms a unified organizational behavior in curriculum development that can achieve overall quality. Furthermore, organizational behavior is a means of interconnection involving various aspects of human behavior within an organization or a specific group: the factors it generates influence how individuals are organized, and conversely, those factors can influence individuals within the organization. Thus, the curriculum development workgroup and the curriculum revision or development processes can become more effective.
This research focuses on the development management policies in the Indonesian Army's Infantry Education Center and Cavalry Education Center, specifically in the context of curriculum quality. The research questions that guide this study include: What are the dominant factors, policies, and efforts in curriculum development management that support the success of educational quality at the Cavalry Training Center and Infantry Training Center? How does implementing curriculum quality development in education impact education provision within Kodiklat TNI AD and at the education centers? What are the limitations in improving quality due to the limited number of personnel and the allocation of teaching staff? How can standardization across all education centers be achieved to provide a standard benchmark for education centers within the Kodiklat TNI AD hierarchy in implementing the learning process?
Therefore, the organization planned by Kodiklat TNI AD in the form of the curriculum development workgroup is considered appropriate. The organizing function of management should also be linked to the utilization of other resources in the curriculum quality development program. When forming workgroups, involving human resources from other institutions or fields is necessary to make the organization more comprehensive.
Through the discussion outlined above, the dominant factors in curriculum quality improvement management must be given due attention, as they serve as an evaluation basis for establishing a good curriculum. Interaction within curriculum improvement management between planners, implementers, and evaluators must be continually developed to create synergy in their tasks. Standards for education components, including the curriculum, have been established and should be adhered to in order to achieve the desired curriculum quality. Synchronization occurs when planning based on previous evaluations is consistently applied. The key is to ensure sufficient planning time so that the implementation of the curriculum, especially at the operational level, can align with the planned objectives. Proficiency in curriculum development tasks cannot be separated from the quality of human resources. A sense of pride in this work as a noble endeavor has yet to be cultivated, and the view that curriculum development is just a routine task should be eliminated to emphasize the seriousness of the work.
CONCLUSION
The study concludes that curriculum development management in the TNI AD Training and Education Command (Kodiklat TNI AD) has followed the stages of the management functions, supported by policies aimed at improving the quality of education at the Training Centers. However, the implementation of character values is still dominated by physical and physiological concerns, and the quality of human resources in curriculum development needs improvement. The research also finds that the organization planned by Kodiklat TNI AD in the form of the curriculum development workgroup is appropriate. It emphasizes the need for interaction between planners, implementers, and evaluators within curriculum improvement management to create synergy in their tasks. The study also points out the need to standardize education components, including the curriculum, to achieve the desired curriculum quality. This research has some limitations: quality improvement is constrained by the limited number of personnel and the allocation of teaching staff, and the quantity of material provided to lecturers has not been standardized, as efforts made so far have not produced standards but remain the results of benchmarking studies from abroad. For future research, it is recommended to extend the data collection period to provide a more comprehensive understanding of curriculum development management policies. It would also be beneficial to explore strategies for overcoming the limitations in human resources and the allocation of teaching staff. Further research could also focus on developing a standardized curriculum and teaching materials to enhance the quality of education.
Table 1. Stages of Curriculum Preparation According to Interview Results. [Table body not recoverable from the extracted text.]

Table 2. Data Collection Instrument Lattice. Columns: No.; Research Question; Required Data; Data Source/Respondents; Instruments (Questions, Interview, Observation, Documents). [Table body not recoverable from the extracted text.]
| 2024-01-07T16:55:21.228Z | 2023-12-15T00:00:00.000 | {
"year": 2023,
"sha1": "f3ef3f8b5fcc2e72b14806390a698095a94a02b6",
"oa_license": "CCBYNCSA",
"oa_url": "https://journal.staihubbulwathan.id/index.php/alishlah/article/download/4709/1982",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dd9dcb2358e060e6947a8f6f25c192dd287075aa",
"s2fieldsofstudy": [
"Education",
"Engineering"
],
"extfieldsofstudy": []
} |
212740962 | pes2o/s2orc | v3-fos-license | Learning causal networks using inducible transcription factors and transcriptome‐wide time series
Abstract We present IDEA (the Induction Dynamics gene Expression Atlas), a dataset constructed by independently inducing hundreds of transcription factors (TFs) and measuring timecourses of the resulting gene expression responses in budding yeast. Each experiment captures a regulatory cascade connecting a single induced regulator to the genes it causally regulates. We discuss the regulatory cascade of a single TF, Aft1, in detail; however, IDEA contains > 200 TF induction experiments with 20 million individual observations and 100,000 signal‐containing dynamic responses. As an application of IDEA, we integrate all timecourses into a whole‐cell transcriptional model, which is used to predict and validate multiple new and underappreciated transcriptional regulators. We also find that the magnitudes of coefficients in this model are predictive of genetic interaction profile similarities. In addition to being a resource for exploring regulatory connectivity between TFs and their target genes, our modeling approach shows that combining rapid perturbations of individual genes with genome‐scale time‐series measurements is an effective strategy for elucidating gene regulatory networks.
Thank you for having submitted a manuscript entitled "Learning regulators from IDEA, a set of hundreds of dynamic transcriptome-wide induction experiments" for consideration for publication in Molecular Systems Biology. Your paper has now been seen by Editors of the Journal, and we have decided to return it to you without sending it for extensive peer review.
In this study you present IDEA, a gene expression time-course dataset monitoring the response to the individual induction of hundreds of transcription factors (TFs) in yeast. You further present a computational framework for inferring direct regulatory connections among genes, based on integration of all time-course data in IDEA and computational modeling, without requiring prior information. We appreciate that you identify novel transcriptional regulators and that you experimentally validate three previously unknown transcriptional hubs that are predicted in your model. However, we think that in the absence of a global assessment of the obtained results and a direct comparison of IDEA to existing alternative methods, the validity of the identified regulatory connections and hubs seems somewhat tentative, and the potential of IDEA and the presented computational approach for deriving novel biological insights that would not be possible to obtain using existing workflows remains to be demonstrated. Overall, we are not convinced that the study provides the kind of broadly relevant resource and the kind of decisive methodological advance with demonstrated potential to generate novel biological insight that would be required for publication at Molecular Systems Biology.
Authors' re-submission, 26th October 2019

Based on your comments, we've gone back and benchmarked our whole-cell modeling predictions in a few ways. We've now shown that not only can we predict new regulators, but we can also predict the presence of genetic interactions within two different published networks (YeastNet and Costanzo). This ability to use a model of transcription to predict gene-gene interactions (with AUC ~ 0.7) is quite unprecedented; furthermore, it highlights new biology that we now discuss in great detail. Additionally, we've benchmarked our dataset against ChIP and published gene expression data of mutants, which highlights that dynamic perturbation data are more functionally coherent than other data types with far fewer datasets.
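As a minimal sketch of this benchmarking logic (illustrative only: the names coef and interacts are hypothetical placeholders with random data, not objects from our code, so the AUC here will hover near 0.5 rather than the ~0.7 we observe on the real networks), edge evidence can be scored by coefficient magnitude and compared against known interactions:

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical inputs: coef holds fitted whole-cell-model coefficients
# (regulators x genes); interacts marks whether a genetic interaction is
# reported for the same pair (e.g., in YeastNet or Costanzo).
rng = np.random.default_rng(0)
coef = rng.normal(size=(50, 200))
interacts = (rng.random((50, 200)) < 0.1).astype(int)

scores = np.abs(coef).ravel()  # edge evidence = coefficient magnitude
labels = interacts.ravel()
print("AUC:", roc_auc_score(labels, scores))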
Given the results of these new analyses, we were hoping you would consider a resubmission.
2nd Editorial Decision, 11th December 2019

Thank you again for submitting your work to Molecular Systems Biology. We have now heard back from two of the three reviewers who agreed to evaluate your manuscript. Unfortunately, after a series of reminders we did not manage to obtain a report from reviewer #3. In the interest of time, and since the recommendations of referees #1 and #2 are quite similar, I prefer to make a decision now rather than further delaying the process. As you will see below, the reviewers acknowledge that the presented approach and dataset seem potentially useful for the field. They raise however a series of concerns, which we would ask you to address in a major revision.
I think that the recommendations of the reviewers are rather clear so there is no need to repeat the points listed below. All issues raised by the reviewers need to be convincingly addressed. As you may already know, our editorial policy allows in principle a single round of major revision, so it is essential to provide responses to the reviewers' comments that are as complete as possible. Please feel free to contact me in case you would like to discuss in further detail any of the issues raised by the reviewers.

Reviewer #1:

In this paper, Hackett et al have taken a previously published small-molecule transcriptional induction system in Saccharomyces cerevisiae and applied it to more than 200 transcriptional controllers (transcription factors, chromatin modifiers, etc). The authors constructed strains, performed genome-wide gene expression profiling using DNA microarrays over several short-term timepoints to identify the genes that change in expression as a result of overexpression, and these expression changes were used to solve dynamic models that correspond to regulator-gene connections. Parameters of these solved models were then used as evidence for edges in a gene regulatory network.
General Comments
This is an important and well-executed study. The overall experimental design is excellent and represents a major improvement over comparable methods that typically use gene deletion alleles in steady-state to identify TF targets. The resulting dataset is of very high quality and of considerable immediate value to researchers in the field. The core analysis of this data is rigorous and well done. I expect that this work will be of great interest for developing computational regulatory network inference tools in the short term, and this work will stand as a major advance in our understanding of the transcriptional regulatory network in yeast. I have some comments on the data presentation and analysis which, if addressed, will enhance this work. I would also like to specifically commend the interactive presentation of this data set.
Major

1. The authors state that the target TF is over-expressed 53-fold relative to t=0. Does this mean that there is some leaky expression in the uninduced state? How does this compare to the fold induction of natural inducible systems like the GAL or NCR systems? It would be useful to have some idea of how overexpressed each TF is relative to wildtype levels.
2. The authors don't provide a rationale for why cells were grown in chemostats. Although I am sure that they would argue that this means the cells are in "steady-state", I think a stronger argument for the use of chemostats would help. Moreover, does the condition in which the experiment is performed have an effect on the observed induction? For example, Figure S9 suggests that many genes that lie below the diagonal are metabolic-related TFs. And in Figure S4 it appears that PHO4 has the largest number of impulse responses - is this because the cells are grown in phosphate-limited chemostats? Ideally, the authors should provide experiments that illustrate the extent to which the method is sensitive to the conditions by over-expressing the same TF in multiple conditions.
3. I was surprised that the strain construction did not replace the endogenous regulatory sequence, but inserted the inducible system regulatory sequence downstream of the endogenous sequence. Do the authors have evidence that this doesn't result in any problems e.g. can GAL4 still be induced effectively in rich glucose media?
4. Supplemental figure 11 shows a large number of experiments which have few or no changes in gene expression. In the discussion it states that ~40 TFs had no effect. It would be valuable to explore if those induced genes that give no response have anything in common (e.g. requiring some stoichiometry with other proteins, some highly-controlled posttranslational processing like proteolytic cleavage, etc).

5. The validation experiment is confusing in presentation, and the information content of figure 6 seems to be quite low for a main text figure. The authors' model has predicted network edges associated with 10 potential regulators; it is necessary to include confusion matrices for all of the validation experiments and a reasonable model performance metric (chi-square does not seem appropriate). In addition I would like to see the differential expression heatmaps (as in Figure 5) for all 10 confirmation experiments (as supplemental figures). Induction experiments which result in no gene expression changes are trivially explained as regulators that do not function correctly after induction and overexpression; experiments where there are gene expression changes that are not predicted by the initial model are much more interesting.
6. Functional coherence doesn't seem like a valid way to benchmark gene regulatory networks. The underlying rationale is shaky, especially when using GO slim terms (is an enrichment of 'response to chemical' meaningful?). I also have a general concern about using differences between p-values as statistical evidence outside the context of hypothesis testing. The ROC curves associated with S14 are much more compelling (I am surprised they are not main-text figures).
7. The supplemental methods package related to the model-fitting is excellent (I found the answer to every margin note that I made about the modeling approach while reading the results section). However, the main-text methods section lacks details of a number of analyses that are presented in the manuscript; I believe that as written it would be very difficult to replicate some of the supporting analyses in this work. The authors should provide more detail on these methods.

8. In Figures S3 and S4 there are other very obvious horizontal and vertical banding patterns that are difficult to interpret. It would help to have these better annotated. The figure legend simply points to one of these bands.

9. In all of the heatmaps that the authors present some genes go up and some genes go down. Is this expected - are these TFs known to have repressive activity? Are these all due to secondary biological effects? Can the authors rule out that data normalization doesn't play a role in this effect?
10. The website is very nice. However, I was surprised that the dataset is not presented as a global gene regulatory network "at the scale of the entire genome" (which is the phrase the authors use to motivate their study).
Minor

1. In the introduction the authors state that "by not utilizing prior knowledge, we minimize bias against re-learning known biology". This seems counterintuitive to me - incorporating prior knowledge builds on years of work and effort, and "re-learning" known biology is a means of validating the approach. Is there a way to rephrase this statement?
2. It would be useful to the reader to have an intuitive explanation of the Chechik and Koller kinetic model.
3. The authors use 'timecourse' in different contexts; the intro has timecourses which are experiments ("We generated over two-hundred TF induction timecourses"), and results seems to switch between timecourses which are experiments and timecourses which are gene measurements in an experiment ("The signals from these 100,036 timecourses were retained"). The terminology used needs to be clearly defined and preferably not reused. I found this to be very confusing when first reading the manuscript.
4. It would be helpful to include a brief introduction to Aft1 and the motivation for selecting that gene as the focus.
5. The results section refers to Rpn4 and cites supplemental figure 14; Rpn4 is not in that figure and it's not clear where the Rpn4 claims originate.
6. Figure 4 has annotations for validated & invalidated regulatory nodes without any further explanation. What makes a node valid or invalid? It's also not clear how regulatory pathways (ontologies?) have been integrated into this network.
7. The yeastract citation isn't compiled and the eQTL study (ref 17) is incomplete.
Reviewer #2: Review of "Learning causal regulatory networks with inducible promoter alleles and massively parallelized time series measurements".
Overall I thought this paper was great; it describes a great dataset and an interesting analysis. It hurt a bit to see how disconnected the paper was from prior work, and a major comment is that more work needs to be done to properly frame it in the context of the mature field it naturally fits into. I am quite positive about this paper and only have so many critical comments due to my keen interest in the topic.
A main flaw is that the network inference was not sufficiently well described: I want to hear more about alternatives, why you chose this model, and a proper formulation of the network inference in the methods section. What does "Hyper-ChIP-able" mean? This is much too informal and inexact. Say less, but be specific.
"Additionally, we find that 79% of genes reported as being directly bound by a TF do not exhibit a significant expression response in the corresponding TF's induction experiment" "In IDEA, realized regulation is directly measured and there is stronger agreement between early induction events in IDEA with TF-DNA interactions as assessed with transposon calling cards than by ChIP " No, many TFs need to be post transcriptionally or post translationally modified to be active. This could be due to interactions. TFs do not act alone. Also where are you getting these numbers ?
Binding is not a great proxy for "regulates" - I agree there, as it mostly suffers from false positives. But overexpression dynamics will still have a huge false-negative rate due to post-transcriptional and post-translational modifications not being lined up. Functional coherence is used as a proxy for "regulates". This seems like a very bad idea.
Compare known motifs (CIS-BP and FIMO) to your de novo analysis.
"we fit a Bayesian version of the Chechik & Koller kinetic model to each 5 timecourse [23]" ... please explain this more thoroughly.
On p. 10, while discussing the use of other networks (a genetic interaction network and a functional association network), you state: "To our surprise, the magnitudes of regression coefficients are reasonable predictors of edges in both types of networks (AUC ~ 0.7), while early t_rise times do not have predictive power of genetic interactions (Figure S14) [22,28]."

1st Revision - authors' response, 10th January 2020

Reviewer #1:
Summary
In this paper, Hackett et al have taken a previously published small-molecule transcriptional induction system in Saccharomyces cerevisiae and applied it to more than 200 transcriptional controllers (transcription factors, chromatin modifiers, etc). The authors constructed strains, performed genome-wide gene expression profiling using DNA microarrays over several short-term timepoints to identify the genes that change in expression as a result of overexpression, and these expression changes were used to solve dynamic models that correspond to regulator-gene connections. Parameters of these solved models were then used as evidence for edges in a gene regulatory network.
General Comments
This is an important and well-executed study. The overall experimental design is excellent and represents a major improvement over comparable methods that typically use gene deletion alleles in steady-state to identify TF targets. The resulting dataset is of very high quality and of considerable immediate value to researchers in the field. The core analysis of this data is rigorous and well done. I expect that this work will be of great interest for developing computational regulatory network inference tools in the short term, and this work will stand as a major advance in our understanding of the transcriptional regulatory network in yeast. I have some comments on the data presentation and analysis which, if addressed, will enhance this work. I would also like to specifically commend the interactive presentation of this data set.
Major

1. The authors state that the target TF is over-expressed 53-fold relative to t=0. Does this mean that there is some leaky expression in the uninduced state? How does this compare to the fold induction of natural inducible systems like the GAL or NCR systems? It would be useful to have some idea of how overexpressed each TF is relative to wildtype levels.
We thank the reviewer for making these points and agree they are important to address. To determine how a TF's expression compares between engineered and WT strains, we've added a new informatic analysis. Specifically, we obtained the red and green channel values for each TF from the t = 0 and t = 90 min timepoints of its respective induction experiment. Additionally, for each TF, we calculated its red/green ratio across all experiments to create a distribution for each TF. Since the vast majority of these ratios come from strains where the TF is under its native control, the median of these distributions should provide an estimate of how much the TF level varies between our experimental strains and our universal reference WT strain. For each TF, the median ratio was ~1, indicating that we can directly compare the normalized red and green channels from the t = 0 min sample to estimate the level of leakiness, and from the t = 90 min sample to estimate the fold-change above WT. These data are shown in a newly added Appendix Figure S2. We estimate that the TF expression in 86% of our synthetic-promoter-driven TF strains is lower than that in WT at t = 0 min. At t = 90 min, the median TF is 28.4-fold above WT TF levels. We've added these data to the manuscript near the beginning of the Results section. The legend of the new figure reads, "Appendix Figure S2: Estimating leakiness and inducibility of synthetic promoter-driven TF alleles. For each TF strain, the red (sample) and green (reference) microarray values were obtained from the t = 0 min and t = 90 min samples. The red/green ratio provides an estimate of leakiness for the t = 0 min histogram (blue), in which 86% of synthetic promoter-driven TFs have expression less than WT TF levels. At t = 90 min, the red/green ratio provides an estimate of induction above WT TF levels (red histogram). The median level of TF induction over WT TF levels is 28.4." GAL overexpression can result in a 500-1000-fold increase in expression. Exploring a recently published dataset that looked at how patterns of gene expression change in response to a large pulse of a preferred nitrogen source, we estimate the repressibility of NCR genes to be at least 100-fold. We've added this result to the manuscript as a point of comparison for the interested reader. We now write: "Induction of a target TF is detectable in <5 minutes and reaches saturation within ~10 minutes following β-estradiol addition at a median level 53-fold higher than at t = 0 min (Appendix Figure S1B)."
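A minimal sketch of this analysis (illustrative only: red, green, and samples are hypothetical stand-ins for our normalized microarray tables, not objects from our actual pipeline) would be:

import pandas as pd

def tf_induction_summary(red: pd.DataFrame, green: pd.DataFrame,
                         samples: pd.DataFrame, tf: str) -> dict:
    # red/green: (genes x samples) normalized channel intensities;
    # samples: per-array metadata with induced_tf, timepoint, sample_id columns.
    ratio = red.loc[tf] / green.loc[tf]  # this TF's red/green across all arrays
    own = samples[samples["induced_tf"] == tf].set_index("timepoint")["sample_id"]
    return {
        # the median over the whole corpus, where the TF is mostly natively
        # expressed, should be ~1, validating red/green as a WT-relative estimate
        "corpus_median_ratio": float(ratio.median()),
        "leakiness_vs_wt": float(ratio[own[0]]),         # t = 0 min sample
        "fold_over_wt_at_90min": float(ratio[own[90]]),  # t = 90 min sample
    }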
2. The authors don't provide a rationale for why cells were grown in chemostats. Although I am sure that they would argue that this means the cells are in "steady-state", I think a stronger argument for the use of chemostats would help. Moreover, does the condition in which the experiment is performed have an effect on the observed induction? For example, Figure S9 suggests that many genes that lie below the diagonal are metabolic-related TFs. And in Figure S4 it appears that PHO4 has the largest number of impulse responses - is this because the cells are grown in phosphate-limited chemostats? Ideally, the authors should provide experiments that illustrate the extent to which the method is sensitive to the conditions by over-expressing the same TF in multiple conditions.
We thank the reviewer for these important points. We address each of them in full below.
Although I am sure that they would argue that this means the cells are in "steady-state", I think a stronger argument for the use of chemostats would help.
We completely agree with the reviewer's comment. In the original submission, we only wrote that cells are in steady state and provided no further context for why that actually matters. We've modified the beginning of the results section to emphasize the relevance of the steady-state condition. We've added, "We chose chemostats, in part, because the steady-state condition of chemostat cultures is a particularly useful feature for mathematical modeling. Under steady-state conditions, the levels of molecules and activities of processes are not changing at a culture-wide level. Therefore, following TF induction in a steady-state culture, immediate dynamic changes result from the TF induction itself. The ability to choose a single growth-limiting nutrient also makes the chemostat ideal for exploring how input-output relationships between TFs and target genes vary under different nutritional conditions [29]."
Moreover, does the condition in which the experiment is performed have an effect on the observed induction?
The reviewer makes a very good point in asking if the growth condition affects the expression response. The answer to this question is "yes". We previously performed induction experiments of the methionine TFs (Met4, Met28, Met31, Met32, and Cbf1) under methionine limitation. We found that Cbf1 would switch from an activator (under methionine limitation) to a repressor (under phosphate limitation where cells have excess extracellular methionine). Additionally, there are other targets in the genome where Cbf1 is an activator independent of methionine limitation. For methionine metabolic genes, the regulatory connections exist in both conditions --but the sign of those edges (positive versus negative) is different. We've added the following text to the introduction: "This study also revealed that Cbf1 could act as an activator or a repressor, depending on which promoter it targets [29]. For certain methionine metabolic genes, Cbf1 can act as an activator of target genes when yeast are limited for methionine, but can switch to being a repressor of those same genes when yeast are limited for phosphate and have excess extracellular methionine [29]. These results highlight the ability of TF induction to reveal condition-dependent regulatory connections, and that a TF can act as both a positive and negative regulator of gene expression depending on local DNA context and environmental conditions."
Ideally, the authors should provide experiments that illustrate the extent to which the method is sensitive to the conditions by over-expressing the same TF in multiple conditions.
This is a very interesting point. We previously published a study with the methionine TFs, but didn't discuss it in great detail in our original submission - this was an oversight on our part. We have expanded the introduction to explain those results. TF activity can certainly depend on condition, and our induction approach has been used to reveal such dependencies. In IDEA, we also have induction experiments for multiple nitrogen TFs performed under both phosphate and nitrogen limitation. They are in the dataset, but were also not discussed specifically in our original submission. We now include some discussion of those data, and highlight them for readers particularly interested in nitrogen metabolism. In short, the nitrogen TFs looked remarkably similar in both conditions (unlike the Cbf1 example). In the case of Gln3, we interpreted the similar responses to mean that it wasn't the limitation that mattered (nitrogen versus phosphate), but rather the source of nitrogen itself, since we used ammonium sulfate as a nitrogen source in both experiments. Had we used a poorer nitrogen source (like proline), the expression responses to Gln3 activation may have been quite different. In the Results section we have added, "Finally, IDEA also contains several TFs induced under multiple conditions. Gln3, Dal80, and Gzf3 were induced under phosphate and nitrogen limitation (with ammonium sulfate used as the sole nitrogen source in both cases). For each TF, the resulting expression patterns are strikingly similar in the two tested environments (Appendix Figure S3), suggesting that the activity of nitrogen-related TFs may depend more on the quality of the nitrogen source (proline vs. ammonium sulfate, for example), rather than the choice of growth-limiting nutrient."
We thank the reviewer for this comment, and agree that explaining and justifying insertion versus replacement is important. The promoter constructs we used are ~2 kb in length and include a drug-selectable marker linked to a synthetic promoter that is placed upstream of a TF. In a strain that has been transformed with our construct, the TF's native promoter is 2 kb away from the TF open reading frame (ORF). Since upstream activating sequences are normally only a few hundred basepairs from the ORFs they regulate in Saccharomyces cerevisiae, the conventional wisdom is that the native promoter, in our engineered strain, is too far from the TF ORF to affect its expression - this is quite different from the situation in human cells, for example. The lack of long-range activation of genes in yeast by upstream activating sequences (UASs) has been quantified by Fred Winston's group with several UAS-TATA constructs (Dobi and Winston, 2007).

The lack of activation over long distances is one reason for using the insertion strategy. Additionally, by not removing native DNA, if there is a gene that shares a promoter with the TF and is divergently expressed (such as in the case of GAL1 and GAL10 sharing a common promoter region), its upstream regulatory region should remain unperturbed using the insertion strategy. We've added a reference to the Dobi and Winston paper, as well as the following text to the materials and methods: "Synthetic promoters were inserted into the genome without removing native DNA for two reasons. First, we believed that removing a TF's native promoter could disrupt expression of a divergently transcribed gene. Second, binding sites in S. cerevisiae need to be within a few hundred base pairs of an ORF to be functional [63]. Therefore, in our case, displacement of the native promoter by ~2 kb is likely to remove its regulatory potential over the TF-encoding gene."
The lack of activation over long distances is one reason for using the insertion strategy. But additionally, by not removing native DNA, if there is a gene that shares a promoter with the TF and is divergently expressed (such as in the case of GAL1 and GAL10 sharing a common promoter region), its upstream regulatory region should remain unperturbed using the insertion strategy. We've added a reference to the Dobi and Winston paper, as well as the following text to the materials and methods: "Synthetic promoters were inserted into the genome without removing native DNA for two reasons. First, we believed that removing at TF's native promoter could disrupt expression of a divergently transcribed gene. Second, binding sites in S. cerevisiae need to be within a few hundred base pairs of an ORF to be functional [63]. Therefore, in our case, displacement of the native promoter by ~2 kb is likely to remove its regulatory potential of the TF-encoding gene." The last part of the reviewer's comment refers to Gal4 --specifically, can Gal4 be induced and function in glucose-rich medium? To address this concern, we have included a Gal4 experiment in IDEA, and concluded that the answer is "yes". We found that Gal4 can indeed turn on Gal4 target regions rapidly in our growth conditions (phosphate limitation with 2% glucose). On the IDEA website, if one goes to the "Induction heatmaps" section and types in "GAL4" they can see the genes that respond most strongly to Gal4 induction. GAL7 and GAL1, for example, respond within 15 minutes to Gal4 induction with β-estradiol in our experimental conditions. figure 11 shows a large number of experiments which have few or no changes in gene expression. In the discussion it states that ~40 TFs had no effect. It would be valuable to explore if those induced genes that give no response have anything in common (e.g. requiring some stoichiometry with other proteins, some highly-controlled posttranslational processing like proteolytic cleavage, etc)
Supplemental
We thank the reviewer for this comment. We've modified the text slightly to read "Thirty-eight TFs affected the expression of <50 genes each (Appendix Figure S12A)." Furthermore, we've modified the figure legend to highlight which TFs we are referring to for the interested reader to explore more deeply. We do not see an obvious connection to explain the sparseness of the expression responses following induction of these particular TFs - they regulate diverse biological processes, and a few are annotated to act in complexes.

5. The validation experiment is confusing in presentation, and the information content of figure 6 seems to be quite low for a main text figure.

We thank the reviewer for these important points. We address each of them in the revised manuscript. We've included a new supplementary figure (Appendix Figure S17), which shows heatmaps for all 10 validation experiments. The model edge predictions are also incorporated into this heatmap (shown in blue and red).
We also completely agree with the reviewer on the importance of including confusion matrices between predictions and results. We now include these confusion matrices as Table 5. We also now include an additional statistical assessment of significance using Fisher's exact test. Finally, we've moved Figure 6 to the supplement, based on the reviewer's suggestion.
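For concreteness, the test applied to each experiment's prediction-versus-response table is the standard one-sided Fisher's exact test; the counts below are invented for illustration and are not taken from Table 5:

from scipy.stats import fisher_exact

# Illustrative 2x2 table for one validation experiment: rows = gene was /
# was not a predicted target of the induced regulator; columns = gene did /
# did not respond in the validation timecourse. Counts are made up.
table = [[18, 7],
         [42, 933]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")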
6. Functional coherence doesn't seem like a valid way to benchmark gene regulatory networks. The underlying rationale is shaky, especially when using GO slim terms (is an enrichment of 'response to chemical' meaningful?). I also have a general concern about using differences between p-values as statistical evidence outside the context of hypothesis testing. The ROC curves associated with S14 are much more compelling (I am surprised they are not main-text figures).
Based on comments from both reviewers, we've removed the analysis on "functional coherence" from the revised manuscript. We've moved Figure S14 from the original submission to the main text as the reviewer suggested (it is now Figure 5).
7. The supplemental methods package related to the model-fitting is excellent (I found the answer to every margin note that I made about the modeling approach while reading the results section). However, the main-text methods section lacks details of a number of analyses that are presented in the manuscript; I believe that as written it would be very difficult to replicate some of the supporting analyses in this work. The authors should provide more detail on these methods.
We thank the reviewer for pointing this out, and are glad that the model-fitting methods "ticked all the boxes". We tried to be extremely fastidious about describing the computational methods in detail. Some of the methods were placed in the main text, while others were placed in the Appendix.
We have made several substantive changes that we hope will address the reviewer's concerns. First, in our newly added "Data Accessibility" section, we now include links to the code on GitHub for both the Chechik & Koller curve-fitting model (https://github.com/calico/impulse) and the dynamical systems modeling that is used to solve Equation 1 in the main text (https://github.com/google-research/google-research/tree/master/yeast_transcription_network).
Second, we've moved more of the methods from the Appendix to the main text. Specifically, the summary of the dynamical systems modeling implementation has been moved. We've also added a sentence that points the interested reader to the Appendix for a complete and very technical accounting of the entire dynamical systems modeling approach. That, combined with the code provided above, will allow those who are interested to perform this type of analysis. Additionally, we've moved the entire methods section describing "marginal attribution analysis" used in Figure 3 to the main text as well.
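As a generic illustration of this class of model only (our actual Equation 1 and its implementation live in the repositories linked above, and this sketch should not be read as our method), a linear dynamical model dx/dt = A x can be recovered from a timecourse by ridge regression on finite-difference derivatives:

import numpy as np
from sklearn.linear_model import Ridge

def fit_linear_network(X: np.ndarray, t: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    # X: (timepoints x genes) expression for one induction timecourse;
    # t: sampling times. Returns A with A[i, j] = inferred effect of gene j
    # on gene i under the assumed (illustrative) model dx/dt = A @ x.
    dX = np.diff(X, axis=0) / np.diff(t)[:, None]  # finite-difference derivatives
    Xmid = 0.5 * (X[1:] + X[:-1])                  # expression at interval midpoints
    return Ridge(alpha=alpha, fit_intercept=False).fit(Xmid, dX).coef_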
8. In Figures S3 and S4 there are other very obvious horizontal and vertical banding patterns that are difficult to interpret. It would help to have these better annotated. The figure legend simply points to one of these bands.
We thank the reviewer for pointing this out. We've added more annotations to the figures as well as descriptions within the legends. We now clearly state (in the legend of Figure S3) that the vertical banding patterns are actually TFs that are hubs (they affect the expression of many genes when induced). The TF experiments are sorted alphabetically, and we highlight a subset of them including GAT3, GAT4, GCN4, MSN2, MSN4, SFP1, and UME6. The horizontal banding patterns are due to weak time-dependent signals that accompany most experiments (e.g., ESR-related). We highlight two of the most striking bands. The top horizontal band is due to the high variation of expression observed in ORFs that have no standard ORF ID beyond their chromosome and position (many of these are dubious ORFs). We've now clearly labeled those in both figures. The second large clear band is ribosomal genes, which we also now clearly annotate in both figures. It's also worth pointing out that the color palette saturates at log2 = -1 and log2 = 1. We chose this dynamic range to accentuate some of the features in the raw data. We think that the labels the reviewer requested are a significant improvement, and will be useful to readers in interpreting these heatmaps.
9. In all of the heatmaps that the authors present some genes go up and some genes go down. Is this expected - are these TFs known to have repressive activity? Are these all due to secondary biological effects? Can the authors rule out that data normalization doesn't play a role in this effect?
TFs can act as direct activators and repressors of gene expression (like Cbf1). The full extent to which dual function (activation and repression) of TFs exists is unknown, but in cases where we see a clear binding motif associated with the induced regulator, the motif tends to be overrepresented as either activating or inhibiting. This contrasts with some of the non-TF regulators that we identified from modeling and later validated; Fmp48, for example, is a putative kinase and activates some targets while inhibiting others.
We think that the vast majority of transcriptional changes elicited in this dataset are due to indirect responses to TF induction, and are interpreted as transcriptional cascades. As an example, Gln3, a well-studied strong transcriptional activator involved in nitrogen metabolism, represses many genes through indirect means. We see additional cases of indirect regulation, particularly those manifesting as "perfect adaptation", which often involve acute changes in genes involved in ribosomes or anabolism. We think that some of these effects are mediated through global factors that we do not directly measure such as amino acid pools.
We have attempted to remove signals that are driven by gene- and array-level noise as well as removing stress-dependent patterns. Therefore, we think there are few examples of large dynamic changes which could be attributed to mis-normalization. The retention of signal is partially demonstrated by the replicate heatmaps of different TFs (notably Gln3), which have clear concordance when strong signal occurs. We describe all of the signal processing steps in great detail, and make the data available at all levels of processing for others to explore and develop alternative methods for interrogating signal of interest.
The website is very nice. However, I was surprised that the dataset is not presented as a global gene regulatory network "at the scale of the entire genome" (which is the phrase the authors use to motivate their study).
We thank the reviewer for making this point, and we've corrected this oversight in the revision. We have added an interactive version of our network from Figure 4 to the website (https://idea.research.calicolabs.com/network). The Cytoscape file itself can be downloaded at https://idea.research.calicolabs.com/data.
We expect that future work from us and others will continue to refine how we explore and visualize these data to drive new discoveries. We want to point out that we also did some work to integrate IDEA with the Costanzo genetic interaction network, and this can be found under the "TF effects" panel on the website. We've made movies of how regulator-gene connections form over time on this network. While some TFs regulate clear "galaxies" within the genetic interaction "universe", others do not. Understanding how to better integrate these kinds of data, and especially how different processes are coordinated to give rise to a healthy cellular state, is something we are keen to explore in the future, and believe others will too.
In the introduction the authors state that "by not utilizing prior knowledge, we minimize bias against re-learning known biology". This seems counterintuitive to me -incorporating prior knowledge builds on years of work and effort, and "re-learning" known biology is a means of validating the approach. Is there a way to rephrase this statement?
We agree with the reviewer that this sentence was poorly phrased, and have removed the offending text. We now say: "Our approach implicitly dissects indirect regulation into a series of direct regulatory relationships. Predicted intermediate regulators span canonical transcriptional regulators and genes of unknown function."
It would be useful to the reader to have an intuitive explanation of the Chechik and Koller kinetic model.
We thank the reviewer for this comment. The second reviewer also suggested that we add a more complete explanation of this model in the main text, and we agree that this would be useful for readers. We have left the detailed equations in the supplement, and have added a general explanation of the Chechik & Koller (CK) model to the main text. Specifically, we have added the text shown below in purple: "As is the case in the Aft1 experiment, timecourses with significant signal across IDEA typically exhibit either a sigmoidal or impulse-like response (double sigmoidal); thus, we fit a Bayesian version of the Chechik & Koller (CK) kinetic model to each timecourse [42,43] (see materials and methods for more details on curve fitting; code for implementing CK fits can be found at https://github.com/calico/impulse). The CK model characterizes a timecourse as a double sigmoid but can be reduced to a simpler sigmoid that has fewer parameters. Specifically, the original CK kinetic model contains six parameters, which we reduced to five parameters because the initial amplitude for all timecourses is zero due to normalization. The impulse (double sigmoid) response is ideal for capturing two-transition behavior in biological timecourses. One sigmoid characterizes the onset response and a second sigmoid characterizes the offset response [42]. Parametric fits enable direct comparisons of timecourses by revealing kinetic parameters. Our Bayesian implementation ensures that these parameters are interpretable by penalizing unrealistic and impossible parameterizations (e.g., step-function responses or changes which precede β-estradiol introduction). Since the impulse and single sigmoid models are nested (i.e., the simpler model contains all of the terms within the more complex model), we can, for a given timecourse, use a likelihood ratio test to determine if extra parameters improve the fit sufficiently to justify the more complex model.
Sigmoidal responses are summarized with a half-max time constant (trise), an asymptotic expression level (vinter), and a slope parameter (β). Impulses include two additional parameters: tfall, which describes the time when the response returns halfway to its final level, and vfinal, the asymptotic expression level of the impulse (Figure 2B) [43]."
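For readers who want to experiment with this class of model, the following is a minimal sketch of the sigmoid and impulse (double sigmoid) forms and the nested-model likelihood ratio test described above. It assumes Gaussian noise and omits the Bayesian priors that penalize unrealistic parameterizations; the authoritative implementation is the calico/impulse repository linked above.

```python
# Minimal sketch of sigmoid vs. impulse (double sigmoid) fits with a
# likelihood ratio test for the two extra parameters (assumes Gaussian noise).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

def sigmoid(t, beta, t_rise, v_inter):
    """Single sigmoid: onset response rising to asymptote v_inter."""
    return v_inter / (1.0 + np.exp(-beta * (t - t_rise)))

def impulse(t, beta, t_rise, v_inter, t_fall, v_final):
    """CK-style impulse with initial amplitude fixed at zero by normalization."""
    onset = 1.0 / (1.0 + np.exp(-beta * (t - t_rise)))
    offset = v_final + (v_inter - v_final) / (1.0 + np.exp(beta * (t - t_fall)))
    return onset * offset

def gaussian_loglik(y, y_hat):
    # Profile log-likelihood under i.i.d. Gaussian noise.
    n, rss = len(y), np.sum((y - y_hat) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * rss / n) + 1.0)

def compare_fits(t, y):
    """Fit both nested models; return parameters and the LRT p-value (df = 2)."""
    p_sig, _ = curve_fit(sigmoid, t, y, p0=[1.0, np.median(t), y[-1]], maxfev=10000)
    p_imp, _ = curve_fit(impulse, t, y,
                         p0=[1.0, np.quantile(t, 0.25), y.max(),
                             np.quantile(t, 0.75), y[-1]], maxfev=10000)
    lr = 2 * (gaussian_loglik(y, impulse(t, *p_imp)) -
              gaussian_loglik(y, sigmoid(t, *p_sig)))
    return p_sig, p_imp, chi2.sf(max(lr, 0.0), df=2)
```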
The authors use 'timecourse' in different contexts; the intro has timecourses which are experiments ("We generated over two-hundred TF induction timecourses"), and the Results section seems to switch between timecourses which are experiments and timecourses which are gene measurements in an experiment ("The signals from these 100,036 timecourses were retained"). The terminology used needs to be clearly defined and preferably not reused. I found this to be very confusing when first reading the manuscript.
We thank the reviewer for pointing this out, and completely agree that this needs to be clearer. We have gone through the text and now use "experiment" to refer to the set of all gene expression responses that follow the induction of a single TF. Additionally, we now use "timecourse" to specifically refer to a particular gene's expression response in a single experiment. At the beginning of the Results section we have added, "In this manuscript, an experiment refers to all of the gene expression responses that follow from induction of single TF. A timecourse refers to the kinetic response of a single gene within a single experiment."
It would be helpful to include a brief introduction to Aft1 and the motivation for selecting that gene as the focus.
We thank the reviewer for making this suggestion, and we've added the following text to the manuscript where we introduce Aft1: "Aft1 was originally identified as an activator of genes that uptake iron into the cell [38]. Aft1 responds to defects in iron-sulfur cluster biogenesis [39], and its activity is negatively regulated by Met4, the primary activator of methionine biosynthetic genes [39,40]. We highlight Aft1, in part, because we observe a range of expression responses following its activation." It's also worth noting that we decided to focus on Aft1 due to our modeling predictions, which indicated that Hmx1 (part of the Aft1 regulon) was itself a regulator of gene expression.
The results section refers to Rpn4 and cites supplemental figure 14; Rpn4 is not in that figure and it's not clear where the Rpn4 claims originate.
We thank the reviewer for making this comment. We sought to clarify this result in the revision by adding both text and an additional figure. The observation that Rpn4, which activates proteasomal subunits, genetically interacts with many of its targets came from the ROC analysis shown in what is now Figure 5 (Figure S14 from our original submission). It has long been known that Rpn4 up-regulates proteasomal subunits (and thereby proteasomal activity). In IDEA, Rpn4 robustly activates these genes as well. What hasn't been appreciated is that Rpn4 genetically interacts with many of its targets. The ROC curves (comparing the magnitudes of coefficients from our transcriptional model with two published networks [YeastNet and Costanzo]) revealed this connection. We've added a new supplemental figure (Appendix Figure S15) to show rpn4∆'s strongest interactions from Costanzo (using the interactive tool available at http://thecellmap.org). We also modified the text to explain how we made these observations. Specifically, we added, "Based on the ROC analysis, we next explored the strongest model coefficients that overlapped the genetic interaction profiles from Costanzo et al. This immediately revealed two interesting biological observations."
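As an illustration of this kind of ROC analysis, the snippet below scores absolute model coefficients against a published network treated as binary ground truth. The arrays are synthetic stand-ins, not our model coefficients or the published edge lists.

```python
# Sketch: rank regulator-gene pairs by |model coefficient| and ask how well
# that ranking recovers edges from a published network (synthetic data here).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
known_edge = rng.integers(0, 2, size=1000)   # 1 = pair is an edge in the published network
coef = known_edge * rng.normal(1, 1, 1000) + rng.normal(0, 1, 1000)  # stand-in coefficients

fpr, tpr, thresholds = roc_curve(known_edge, np.abs(coef))
print(f"AUC = {roc_auc_score(known_edge, np.abs(coef)):.2f}")
```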
Figure 4 has annotations for validated & invalidated regulatory nodes without any further explanation. What makes a node valid or invalid? It's also not clear how regulatory pathways (ontologies?) have been integrated into this network.
We thank the reviewer for noting these issues as being unclear in our initial submission. In Figure 4, validated nodes are the genes we induced in our validation experiments for which we saw a significant overlap between responding genes and gene-regulator connections within the model. In the equation, those connections are represented as "alpha" values. We have modified the legend of the figure to clearly state what "valid" and "invalid" mean in this particular context. Specifically, we now write, "Validated nodes (green) are genes where validation experiments confirmed a significant overlap between measured gene-regulator connections and model-predicted coefficients. Invalidated nodes (red) are genes where validation experiments failed to confirm model-predicted coefficients." The GO categories are meant to simplify the network and make it more "human readable". If a regulator is connected to hundreds of targets, we pick the GO term with the highest significance to visualize rather than the hundreds of targets individually. We've modified the legend to include, "Predicted regulators are linked to GO categories based on having a significant overlap with their predicted targets."
The yeastract citation isn't compiled and the eQTL study (ref 17) is incomplete.
We thank the reviewer for pointing out these errors. We've fixed them both. The eQTL reference is shown below.

Reviewer #2:
Review of "Learning causal regulatory networks with inducible promoter alleles and massively parallelized time series measurements".
Overall I thought this paper was great, and describes a great dataset and an interesting analysis. It hurt a bit to see how disconnected the paper was from prior work, and a major comment is that more work needs to be done to properly frame this in the context of the mature field it naturally fits into. I am quite positive about this paper and only have so many critical comments due to my keen interest in the topic.
We greatly appreciate the reviewer's positive view of our work, and as detailed below, we've expanded the text (including the introduction) to better frame this paper in the context of previous work.
A main flaw is that the network inference was not sufficiently well described; I want to hear more about alternatives, why you chose this model, and to have a proper formulation of the network inference in the methods section.
We thank the reviewer for this comment, and we have sought to improve our formulation of the network inference approach in the main text (both in the Results section and the Methods section). The full model derivation, we believe, is beyond the scope of the main text, so we relegate it to the following four sections in the Appendix: Linear Regression, BIC regularization, Hyperparameter Search, and Cross-Validation. But we also now clearly highlight these sections for the reader in the main text, which was missing from our original submission. Additionally, in the revision, we have included a link to code in a public repository for implementing the network inference approach. To the main text Results section, we've added the following text to explain the approach and discuss alternatives, as suggested by the reviewer: "To arrive at this approach, we considered a suite of modeling strategies. We explored modeling dataset-level dynamics using a system of differential equations; however, such a model is both hard to fit and not robust to model mis-specification. Since a model of cellular regulation that exclusively includes transcriptional regulation is inherently incomplete, the parameters of such a model would be inappropriately contorted to compensate for in-expressible regulation. Regression models that express the measured abundance of a gene of interest based on measured abundances of candidate regulators do not suffer from such a problem. As many regression models can be posed, we explored a wide space of model formulations defined by a set of hyperparameters (e.g., modeling in log- or linear-space, allowing for interaction terms, and adjusting regularization strength) (see Appendix for complete details). To arrive at an optimal model formalism we used cross-validation, whereby whole experiments were held out and then predicted using all other experiments (encompassing 50 million regressions in aggregate)." Furthermore, in the main text Methods section, we now include a section called "Dynamical systems modeling overview", which reads: "We pursued a linear regression approach to modeling Equation 1. First, we constructed an estimator of the time derivative of the gene expression response, which is treated as the dependent variable. We then fit a linear model to extract the coefficients of the dynamical system. This works because the time derivatives of the gene expression levels are modeled as linear functions of gene expression levels (possibly with quadratic terms as well). We note that this does not actually correspond to a full solution of the dynamical system, but requires point-wise consistency with the dynamical system description. Selection of regularization levels with cross-validation yielded a model for the transcriptional effects of gene expression levels. This model was interrogated to identify which regulators were most important for predicting observed expression changes in each timecourse. Derivations implementing this modeling approach are presented in the Appendix sections: Linear Regression, BIC regularization, Hyperparameter Search, and Cross-Validation."
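As a minimal sketch of the point-wise regression formulation described above, the code below estimates each gene's expression derivative by finite differences and regresses it on the expression levels of candidate regulators. Ridge regularization stands in for the BIC-based regularization and hyperparameter search detailed in the Appendix, so this is illustrative rather than the actual pipeline.

```python
# Point-wise linear fit of dx/dt ~ A x: derivative estimates are regressed on
# expression levels, one regularized regression per gene.
import numpy as np
from sklearn.linear_model import Ridge

def fit_dynamics(X, t, alpha=1.0):
    """X: (n_timepoints, n_genes) expression for one experiment; t: sampling times.

    Returns A: (n_genes, n_genes); row i holds the predicted regulators of gene i.
    """
    dXdt = np.gradient(X, t, axis=0)  # finite-difference derivative estimate
    n_genes = X.shape[1]
    A = np.zeros((n_genes, n_genes))
    for i in range(n_genes):          # one regularized regression per gene
        model = Ridge(alpha=alpha, fit_intercept=False).fit(X, dXdt[:, i])
        A[i] = model.coef_
    return A
```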
More direct citations are available.
We thank the reviewer for this comment, and we completely agree with it. In our original attempt at brevity, we missed a number of references as well as an opportunity to properly frame our work in the context of previous work on GRNs. We have added many references (in total, the revised manuscript has 66 references, compared to 44 in the original submission), and now give a more detailed description of early work on the inference of GRNs and network motifs. We've also cited more recent work on using mutant libraries and CRISPR tools to identify GRNs in multiple organisms.
We've added the text in purple to the Introduction: "The direct and indirect molecular interactions that achieve a particular cellular state can be described as regulatory edges that collectively form Gene Regulatory Networks (GRNs) [3,4]. As genome-scale datasets started to become available over 20 years ago, work by Alon and colleagues established that certain GRN topologies are enriched in biological systems [4]. Understanding the functional properties of such "network motifs" became the subject of intense experimental and theoretical investigation [4][5][6][7][8][9][10][11][12][13][14]. Combined with genomic tools and extensive prior knowledge, it became possible to identify network motifs/GRNs associated with core cellular processes, with early work in yeast focusing on cell cycle control and the DNA damage response [15,16]. The widespread development and adoption of genome-scale technologies, including the creation of mutant libraries and the power of CRISPR-Cas systems, has further enabled GRN discovery in living organisms, from plants [17], to yeast [18], to humans [19]." We have also added a reference and description of a beautiful recent paper (Solis et al) that used a combination of genomic tools and time series analysis using the "anchor away" method to identify a core GRN within the Hsf1/proteostasis network. Specifically, we've added the following text: "Integration of multiple 'omic technologies combined with time series measurements can help identify direct functional interactions to elucidate GRNs, as was done in a recent study that combined RNA-seq, NET-seq, and ChIP-seq to identify a core regulon for Hsf1 in yeast [28]."

p. 2. How can we develop the regulatory clarity of GRNs at the scale of the entire genome? What is Clarity?
We have removed this sentence, and agree with the reviewer that our original phrasing was confusing. We have modified the offending sentence to say, "How are genomic approaches commonly applied to identify GRNs?"

What does "Hyper-ChIP-able" mean? ... much too informal and inexact. Say less, but be specific.
We thank the reviewer for pointing this out; our phrasing was too colloquial, and not adequately explained. We removed the word "hyper-ChIPable", and modified the text to read, "Target genes with similar ChIP profiles can exhibit opposite expression responses [26], and highly expressed portions of the genome can exhibit strong ChIP signal even amongst unrelated proteins [27]. Interpreting the biological importance of such peaks must be done with sufficient controls to distinguish whether signals are truly biological versus technical in origin, but the challenge remains that ChIP-based approaches alone provide no assessment of TF functionality."

"Additionally, we find that 79% of genes reported as being directly bound by a TF do not exhibit a significant expression response in the corresponding TF's induction experiment". "In IDEA, realized regulation is directly measured and there is stronger agreement between early induction events in IDEA with TF-DNA interactions as assessed with transposon calling cards than by ChIP". No, many TFs need to be post-transcriptionally or post-translationally modified to be active. This could be due to interactions. TFs do not act alone. Also where are you getting these numbers?
We thank the reviewer for these questions and points. We tackle them, one at a time, below.
"In IDEA, realized regulation is directly measured and there is stronger agreement between early induction events in IDEA with TF-DNA interactions as assessed with transposon calling cards than by ChIP ". No, many TFs need to be post transcriptionally or post translationally modified to be active. This could be due to interactions. TFs do not act alone.
We've removed the above statement in the revision, and clarified the text to read, "Additionally, we find that 79% of genes reported as being directly bound by a TF based on published ChIP measurements do not exhibit a significant expression response in the corresponding TF's induction experiment (Appendix Figure S10) [41]. The low recall of reported transcriptional regulation underscores the value of dynamic data. Realized regulation may be impacted by chromatin accessibility and the regulatory context of the extracellular environment, which can result in different post-translational modifications of TFs [29,[44][45][46]."

Also where are you getting these numbers?
We've added a citation to Yeastract in the main text, and clarified that we are calculating the percentage of previously identified binding sites that result in measurable changes in expression of target genes in response to TF induction. The vast majority of these sites don't result in realized regulation in our dataset. We don't want to over-interpret this result, because there are biological and technical reasons a difference could be observed that are beyond the scope of the current study. Therefore, we just write "...79% of genes reported as being directly bound by a TF based on published ChIP measurements do not exhibit a significant expression response in the corresponding TF's induction experiment (Appendix Figure S10) [41]."

Binding is not a great proxy for "regulates"; I agree there, mostly suffering from false positives. But overexpression dynamics will still have a huge false negative rate due to post-X mods not being lined up.
We thank the reviewer for making these important points. We've modified the introduction substantially to address this, and explicitly discuss Cbf1, which some of us previously showed can switch between activating or repressing methionine genes depending on environmental conditions. An interesting future analysis could be to compare binding to induction experiments and look for TFs that have the largest disagreements across the genome between binding and expression effects. This kind of "computational screen" could identify TFs that likely require modification to be active under various conditions. We've also added a supplemental figure for Gln3, which is an activator regardless of whether or not the cells are limited for nitrogen or phosphate. As mentioned above, we've also added some discussion of Solis et al., who combined RNA-seq, NET-seq, ChIP-seq, and time series analysis (using the "anchor away" method) to identify a core GRN for Hsf1/proteostasis. This multi-omic approach was focused on one TF, but also provides a useful framework for exploring functional versus non-functional TF binding.

We thank the reviewer for pointing this out. This figure does indeed contain quite a bit of information. To help the reader navigate this figure, we now include five "speech bubbles" which explain each part of the figure and provide the reader with an order in which to view its different parts (i, ii, iii, iv, and v). For (i), we now specify "Aft1 expression peaks shortly after induction", and point to the purple dot that represents Aft1. Aft1 is strongly activated so its vinter value from the CK model is much greater than 0. Then, in (ii), we now write "Aft1 is predicted to regulate genes which in turn regulate downstream expression of many other genes". This speech bubble points directly at 3 genes (Fet3, Tis11, and Hmx1), which are predicted intermediate regulators. In the third (iii) speech bubble, we write "Hmx1 is the predicted primary regulator of downstream expression of turquoise-colored genes. See inset donut chart for this example, Yhb1." For (iv), we write, "Arn2 is the predicted primary regulator of downstream expression of orange-colored genes, whose differential expression "peaks" later in the timecourse." For (v), we write, "Hmx1 is predicted to make the largest marginal contribution to downstream expression of Yhb1 in this experiment, though Arn2, Fet3, and others contribute as well." We believe that these will help make the figure much easier to understand.
Functional coherence is used as a proxy for regulates. This seems like a very bad idea.
Based on comments from both reviewers, we've removed this analysis from the revised manuscript.
Compare known motifs (Cis-BP and FiMo) to your de novo analysis.
In our original submission, we included a table that included the motifs we discovered with DREME, including matches to known motifs. The known motifs (PWMs) were downloaded from the Yeastract database and matched to the motifs we identified with DREME using the TOMTOM software package (which, like FiMo, is part of the MEME suite of software tools). Based on the reviewer's comment, we've repeated this same analysis using motifs from Cis-BP. These results are nearly identical, and we've included them as part of Table 3 in our revision. We've also added a reference to Cis-BP to the main text (Weirauch et al, 2014).

We thank the reviewer for this comment. Indeed, a better explanation of the Chechik & Koller model was requested by both reviewers. The text we added to fully explain the model is shown above in response to the first reviewer's comments. Additionally, we have added a link to all of the code for performing the sigmoid and impulse fits (https://github.com/calico/impulse).

Thank you for submitting your revised manuscript to Molecular Systems Biology. We have now heard back from the two reviewers who agreed to evaluate your manuscript. As you will see, the reviewers are now overall supportive and I am pleased to inform you that your manuscript will be accepted in principle pending the following essential amendments: 1. Reviewer #1 has expressed concerns about the use of the word "timecourse" in the text; please address this properly. 2. Please address reviewer #2's concern by improving the introduction and/or discussion sections in light of previously published work.
Reviewer #1: The revised manuscript has addressed all of my comments. I have one remaining concern that should be addressed in the text; I don't believe that any additional experimental or analytic work is necessary.
The authors have continued to use 'timecourse' to refer both to whole genome expression changes through an experiment, and to single gene expression changes through an experiment. In the results section, they have defined these terms to be "In this manuscript, an experiment refers to all of the gene expression responses that follow from induction of single TF. A timecourse refers to the kinetic response of a single gene within a single experiment." Figure 2A clearly refers to a timecourse as whole genome expression changes, and figure 2B clearly refers to a timecourse as a single gene expression change through an experiment. Many of the textual uses of 'timecourse' are ambiguous in context and could be read either way (changing the interpretation of several key parts of this work). Based on the definition provided, I read "The Aft1 timecourse is an illustrative example of the value of induction data for revealing intricate regulatory phenomena" as the change of Aft1 expression, but it seems more likely to be the experiment where Aft1 is induced. It is absolutely essential that the terminology used be consistent throughout this work, including figure legends and captions (like Appendix Figure S3, 5, & 6).
Reviewer #2: The authors have responded to most of my comments. They still fall short on connecting this to prior works, but have made minor improvement on that front. Overall I was positive prior and remain positive given the improvements in the paper. I am in favor of publishing this work.
2nd Revision - authors' response, 13th February 2020

Reviewer #1: The revised manuscript has addressed all of my comments. I have one remaining concern that should be addressed in the text; I don't believe that any additional experimental or analytic work is necessary.
The authors have continued to use 'timecourse' to refer both to whole genome expression changes through an experiment, and to single gene expression changes through an experiment. In the results section, they have defined these terms to be "In this manuscript, an experiment refers to all of the gene expression responses that follow from induction of single TF. A timecourse refers to the kinetic response of a single gene within a single experiment." Figure 2A clearly refers to a timecourse as whole genome expression changes, and figure 2B clearly refers to a timecourse as a single gene expression change through an experiment. Many of the textual uses of 'timecourse' are ambiguous in context and could be read either way (changing the interpretation of several key parts of this work). Based on the definition provided, I read "The Aft1 timecourse is an illustrative example of the value of induction data for revealing intricate regulatory phenomena" as the change of Aft1 expression, but it seems more likely to be the experiment where Aft1 is induced. It is absolutely essential that the terminology used be consistent throughout this work, including figure legends and captions (like Appendix Figure S3, 5, & 6).
We thank the reviewer for pointing this out. We've carefully gone through the manuscript and made sure that the language is consistent throughout the main text and appendix. We've edited Figures S3, S5, and S6 to include the word "experiment" and not "timecourse".
Reviewer #2: The authors have responded to most of my comments. They still fall short on connecting this to prior works, but have made minor improvement on that front. Overall I was positive prior and remain positive given the improvements in the paper. I am in favor of publishing this work.
We are pleased that we addressed the majority of the reviewer's comments in the first revision. We had added a number of references to papers on GRN inference and discovery in the previous revision. In this revision, we've significantly expanded the introduction to spell out work from the DREAM project on network inference. Specifically, we added, "Finally, there is a growing literature of computational methods for reconstructing GRNs from high-throughput data [29][30][31][32][33][34][35]. The Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, which is organized around annual challenges, provides a framework to benchmark network inference methods [29]. Network inference performance can depend on implementation as well as the network structure itself [31]. In the DREAM5 challenge, no single inference method performed optimally across multiple datasets. Integrating predictions across all participating teams (35 inference methods in total) to generate "community networks" had the most robust performance [31]." Additionally, we added a reference to Chua et al. in the Introduction: "A seminal paper from Chua et al. revealed that overexpression of a single TF, followed by transcriptome profiling at a single time point, can reveal functional regulator-gene connections that are absent when profiling TF deletion mutants [36]. Following that work, we combined TF activation with dynamic transcriptome profiling to dissect the incompletely understood regulatory connectivity of the yeast sulfur regulon [37]."
"year": 2020,
"sha1": "fadc0cb756fe6b70ae128c353b22bc3fcf3e2f1a",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.15252/msb.20199174",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "23b12a66db072893deba9d457860b1658d82b37c",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Preclinical Characterization of Relatlimab, a Human LAG-3–Blocking Antibody, Alone or in Combination with Nivolumab
Preclinical studies demonstrate that relatlimab specifically blocks the interaction between LAG-3 and its ligands. The data provide a biological rationale for combining relatlimab with the PD-1 antibody nivolumab as an effective cancer immunotherapeutic strategy.
Introduction
Immune-checkpoint blockade (ICB) has revolutionized treatment options for patients with cancer, improving the survival of patients with a range of different malignancies (1). An effective immune response against cancer relies on immune surveillance of tumor antigens expressed on cancer cells, which ultimately results in an adaptive immune response and cancer cell death (2)(3)(4). Tumor immune escape, facilitated in part by the expression of inhibitory ligands, can be reversed by the blockade of receptors such as programmed death-1 (PD-1) and cytotoxic T-lymphocyte-associated antigen-4 (CTLA-4), but there remains a need for additional novel combinations to improve patient outcomes (5)(6)(7).
Previous studies have demonstrated that LAG-3 and PD-1 act in a nonredundant manner to suppress T-cell stimulation. In an in vitro antigen-specific T-cell stimulation system, T cells transduced to express both LAG-3 and PD-1 secrete lower levels of interleukin-2 (IL2) in coculture with APCs expressing programmed death ligand 1 (PD-L1) and MHC II than T cells expressing either receptor alone (16). Preclinical data have demonstrated a synergistic relationship between the inhibitory receptors LAG-3 and PD-1 in regulating immune homeostasis, preventing autoimmunity, and enforcing tumor-induced tolerance (9,16). Importantly, in mice, antibody blockade of both receptors results in more robust immune responses compared with blockade of either individual receptor (17)(18)(19).
Here, we describe the development of relatlimab, a human LAG-3 (hLAG-3)-blocking antibody, and preclinical analyses that demonstrate the binding affinity, specificity, functional activity, and safety of this novel immune-checkpoint inhibitor. In vitro assay results were consistent with published in vivo data showing that LAG-3 blockade combined synergistically with PD-1 blockade to achieve enhanced antitumor and immunomodulatory activity. Aside from MHC II as the canonical ligand of LAG-3, the literature on other reported LAG-3 ligands and their biology as a potential driver of LAG-3-mediated T-cell exhaustion in cancer is limited but represents an evolving area of scientific research. Nonetheless, data on FGL1 as a LAG-3 ligand led to the evaluation of the interaction between FGL1 and LAG-3, and of its modulation by relatlimab as part of the current study (12). We observed a weak, but measurable, interaction in vitro between FGL1 and LAG-3 and confirmed both the inhibitory potential of this interaction and the ability of relatlimab to block it. Our overall findings are consistent with the results from the RELATIVITY-047 study (NCT03470922), the first phase II/III trial evaluating dual administration of relatlimab and nivolumab in patients with previously untreated metastatic or unresectable melanoma. In this trial, the combined blockade of LAG-3 and PD-1 demonstrated superior progression-free survival (PFS) compared with the blockade of PD-1 alone (20,21).
Mice
Generation of relatlimab was done using proprietary transgenic mice bred at Medarex, Inc., comprising germline configuration human immunoglobulin (Ig) miniloci in an endogenous IgH and Igκ knockout background (22,23). For in vivo tumor efficacy studies, female C57BL/6 mice were obtained from Charles River Laboratories and female A/J mice were obtained from Harlan. All animals were provided chow (Prolab Isopro; Dean's Animal Feeds) and drinking water ad libitum. The Animal Use Protocols used for the antibody generation work and tumor efficacy studies were approved by the Medarex Institutional Animal Care and Use Committee prior to the initiation of animal treatments, and all animal husbandry was performed according to Medarex Standard Operating Procedures. The Medarex Animal Facility was accredited by the Association for Assessment and Accreditation of Laboratory Animal Care.
Cell lines
MC38 colon adenocarcinoma and SA1N fibrosarcoma tumor cells, obtained from the Bristol-Myers Squibb (BMS) Master Cell Bank, were maintained in Dulbecco's modified Eagle medium (DMEM; Corning/Cellgro) supplemented with 10% fetal bovine serum (FBS; Hyclone). P3×63Ag8.653 myeloma cells were obtained from the ATCC and maintained in RPMI-1640 (Corning/Cellgro) with 10% FBS (Gemini Bio-Products). Expi-293F cells were obtained from Invitrogen/Thermo Fisher Scientific and maintained in Expi293 expression media (Invitrogen). Human Daudi and Raji B-cell lymphoma lines expressing endogenous MHC II were obtained from the ATCC and maintained in RPMI-1640 and 10% FBS. The RJ225 MHC II-low variant of the Raji cell line was obtained from Dr. Matija Peterlin (University of California San Francisco, San Francisco) and maintained in RPMI-1640 and 10% FBS without selection (24). 3A9-hLAG-3 cells overexpressing full-length human LAG-3 and the LK35.2 APC line were obtained from Dr. Dario Vignali (University of Pittsburgh, Pittsburgh) and were maintained in RPMI-1640 with 10% FBS without selection.
The LK35-FGL1 MT cell line was generated at BMS by electroporation of parental LK35.2 cells with a construct fusing FGL1 to the intracellular domain of the Type II transmembrane protein human FIBCD1, following the example of Wang and colleagues (12), using the manufacturer-recommended protocol (Nucleofector 2b, 2 × 10⁶ cells, 5 μg DNA, Solution V, Program L-013; Lonza). This cell line was archived in the BMS repository and maintained in RPMI-1640 medium (Corning/Cellgro) supplemented with 10% FBS (Gemini Bio-Products).
Cell lines were validated to be free of mycoplasma by PCR (Pro-moKine PCR Mycoplasma Test; VWR). The cell lines were maintained in culture for no more than 3 weeks prior to use in the appropriate in vitro or in vivo studies. The cell lines were not authenticated at the time of use. Cell lines used for in vivo studies were validated to be free of adventitious agents by PCR (Idexx Bioanalytics).
Relatlimab generation and characterization
Generation of human monoclonal anti-LAG-3 (relatlimab)

Proprietary transgenic mice, bred at Medarex (Medarex, Inc.), comprising germline configuration human Ig miniloci in an endogenous IgH and Igκ knockout background (22,23), were immunized at 14-day intervals by i.p./s.c. administration with 10 μg recombinant human LAG-3-Fc protein (hLAG-3-hFc; R&D Systems), consisting of the extracellular domain of LAG-3 (Leu23-Leu450) fused to the Fc portion of human IgG1, together with Ribi adjuvant (Ribi ImmunoChem Research). Spleens were harvested from immunized mice 2 weeks after the final immunization of antigen, and splenocytes were fused with P3×63Ag8.653 myeloma cells (ATCC) and screened for hybridomas producing human monoclonal antibodies (mAb) reactive to hLAG-3-hFc by enzyme-linked immunosorbent assay (ELISA), as described below. Purified antibodies specific for hLAG-3, including clones 25F7, 1C2, 23C9, and 10G7, were analyzed by flow cytometry, as described below, for the potency of binding to transfected CHO-S cells overexpressing hLAG-3 (BMS), but not to parental CHO cells. The variable region sequences of clone 25F7 were cloned and subsequently grafted onto human κ and IgG4 constant region sequences for recombinant CHO cell expression of the resulting derived antibodies, LAG3.1-G4P and LAG3.5-G4P (relatlimab; US patent US9505839B2).
Hybridoma screen via ELISA
Hybridomas derived from the fused spleens of hLAG-3-Fc-immunized mice were screened by ELISA for the production of human antibodies specific for hLAG-3. Hybridoma supernatants, including irrelevant supernatants from cultures of the myeloma fusion partner as control, were screened for binding activity to hLAG-3 by ELISA. Briefly, microtiter plates (Corning/Costar; catalog CLS9018) were coated with recombinant hLAG-3-Fc (R&D Systems) at 1 μg/mL in phosphate-buffered saline (PBS), 50 μL/well, incubated at 4°C overnight, and then blocked with 1% bovine serum albumin (BSA) in PBS buffer, pH 7.4. Hybridoma supernatants were added neat and incubated for 1 hour. The plates were washed with PBS containing 0.05% Tween 20 (Sigma-Aldrich) and incubated with goat-anti-human kappa light chain conjugated with horseradish peroxidase (Bethyl Laboratories), diluted 1:25,000 in 1% BSA in PBS buffer, pH 7.4, for 1 hour. After 3× washing, the plates were developed with ABTS substrate (Moss) and analyzed on a SpectraMax Plus 384 spectrophotometer (Molecular Devices) at optical density (OD) 405 nm. To rule out any nonspecific binding, an irrelevant human Fc-fusion protein control was used to test the hybridoma supernatants.
Epitope characterization
The relatlimab-binding epitope on LAG-3 was determined by several methods, including peptide library binding, binding to LAG-3 truncation mutants, differential chemical labeling, and X-ray crystallography. Detailed descriptions of the methods used are provided in the Supplementary Methods.

Octet biolayer interferometry characterization of relatlimab blockade of LAG-3/MHC II and LAG-3/FGL1 interactions

LAG-3/MHC II interaction and blockade by relatlimab

Recombinant human leukocyte antigen-DR isotype (HLA-DR) with a C-terminal biotin modification was generated at BMS using human HLA-DRA (accession code P01903, residues 26-207) and HLA-DRB1 (accession code P01911, residues 31-221) genes cloned into pTT5 expression plasmids (GenScript). HLA-DRA contains a C-terminal His6 tag and BirA recognition sequence, and HLA-DRB1 has a C-terminal Flag tag and an N-terminal peptide with a GS linker (SGPKYVKQNTLKLATGSGGGSLVPRGSGGGGSG; ref. 25). HLA-DRA and HLA-DRB1 were subsequently coexpressed in Expi293 cells, purified from clarified supernatant using Ni Sepharose Excel (GE Healthcare), and then run on a Superdex 200 16/60 column in PBS. The purified heterodimer was subsequently biotinylated using a BirA biotin-protein ligase bulk reaction kit according to the manufacturer's instructions using the kit-provided buffers (Avidity), followed by buffer exchange via dialysis into PBS without calcium and magnesium (Corning/Cellgro, catalog 21-040-CM) for further experiments. A recombinant hLAG-3 (D1-D4)-hFc fusion protein was produced by cloning hLAG-3 (D1-D4) into a pTT5 expression plasmid (GenScript) with an osteonectin signal peptide on the N-terminus and a C-terminal hFcG1 tag. Clarified supernatants were purified using MabSelect SuRe LX resin (GE Healthcare), washed with PBS, eluted in 100 mmol/L (pH 3.6) sodium citrate buffer, and neutralized with 1 mol/L (pH 8.0) Tris buffer (Corning/Cellgro). hLAG-3 (D1-D4) was further purified on a Superdex 200 column into 1× PBS. The binding of HLA-DR1 to LAG-3 and the evaluation of the effect of various antibodies on this interaction were investigated on an Octet HTX instrument with streptavidin capture (SA) biosensors (ForteBio/Sartorius). HLA-DR1 was captured at 10 μg/mL for 180 seconds in PBS + 0.05% BSA with 0.05% TWEEN20, pH 7.4 (Sigma-Aldrich; PBST-BSA), followed by a 60-second wash with PBST-BSA. hLAG-3 (D1-D4)-hFc (1 μmol/L) alone or in the presence of 2.5-fold excess antibody was tested for binding to the captured HLA-DR protein over 300 seconds. For all biolayer interferometry (BLI) assays, the temperature was 30°C with a shake speed of 1,000 RPM. Data were visualized on ForteBio data analysis software.
LAG-3/FGL1 interaction and blockade by relatlimab
Recombinant human fibrinogen-like protein-1 (FGL1)-mFc fusion protein and recombinant human FGL1-FD-mFc fusion protein [consisting of only the fibrinogen domain of human FGL1 (residues 74-312)] were produced by BMS. Briefly, these were produced by cloning into a pTT5 expression plasmid (GenScript) with an osteonectin signal peptide followed by an mFcG1 tag on the N-terminus. These were expressed in Expi293 cells and clarified supernatants were purified using MabSelect SuRe LX resin (GE Healthcare), washed with PBS, eluted in 100 mmol/L (pH 3.6) sodium citrate buffer, and neutralized with 1 mol/L (pH 8.0) Tris buffer (Corning/Cellgro) and 1 mol/L (pH 8.5) arginine buffer. These were finally run on a Superdex 200 16/60 column. The binding of recombinant FGL1-mFc and FGL1-FD-mFc to recombinant hLAG-3-hFc was investigated on an Octet HTX instrument with anti-human IgG Fc capture (AHC) biosensors (ForteBio). LAG-3-hFc was captured at 10 μg/mL for 300 seconds in PBST-BSA, followed by a 60-second wash with PBST-BSA. Serial dilutions of FGL1-mFc and FGL1-FD-mFc were tested for binding to the captured LAG-3-hFc fusion protein. To assess the blockade of LAG-3/FGL1 engagement by relatlimab, LAG-3-hFc was first captured on AHC biosensors at 10 μg/mL for 600 seconds in PBST-BSA, followed by a 30-second wash with PBST-BSA. The biosensor was next quenched with a cocktail of keyhole limpet hemocyanin (KLH) human IgG1 (hIgG1), KLH-hIgG2, and KLH-hIgG4 antibodies (each at 15 μg/mL, produced at BMS) for 600 seconds in PBST-BSA to block the surface. This was followed by a 30-second wash with PBST-BSA, exposure to buffer alone or relatlimab (200 nmol/L) in PBST-BSA for 1,000 seconds, and a 10-second wash with PBST-BSA. Finally, for biosensors exposed to buffer alone, FGL1-FD-mFc (2 μmol/L) in the absence of relatlimab was tested for binding. For biosensors previously exposed to relatlimab, FGL1-FD-mFc (2 μmol/L) in the presence of excess relatlimab (200 nmol/L) was tested for binding. For all biolayer interferometry assays, the temperature was 30°C with a shake speed of 1,000 RPM. Data were visualized on ForteBio data analysis software.
LAG-3 deletion variants binding to relatlimab
Recombinant human LAG-3 (D1-D2), LAG-3 (D1-D2) Δrelatlimab, and LAG-3 (D1-D2) Δloop insertion variants (deletion designs in the D1 domain described in Supplementary Fig. S9C) with C-terminal His6 tags were generated at BMS by cloning each into pTT5 expression plasmids (GenScript), which were subsequently coexpressed in Expi293 cells, purified from clarified supernatant using Ni Sepharose Excel (GE Healthcare), and polished on a Superdex 200 16/60 column in PBS. Binding of relatlimab to the LAG-3 variants was investigated on an Octet HTX instrument with AHC biosensors (ForteBio). Relatlimab was captured at 2 μg/mL for 180 seconds in PBS + 0.05% BSA with 0.05% TWEEN20, pH 7.4 (Sigma-Aldrich), followed by a 60-second wash with PBST-BSA. The hLAG-3 variants at 100 nmol/L and 200 nmol/L were tested for binding to the captured relatlimab over 300 seconds. For all BLI assays, the temperature was 30°C with a shake speed of 1,000 RPM. Data were visualized on ForteBio data analysis software.
IHC analysis in normal human tissues
IHC analyses were performed in a selected panel of normal human tissues. Fresh frozen normal human tissues were purchased from commercial tissue networks/vendors (Analytical Biological Services, Inc.; Asterand, Inc.; and Cooperative Human Tissue Network). Tissues evaluated (n = 1 unless otherwise indicated) were human spleen, thymus, pancreas, cerebrum, cerebellum, heart, liver, lung, kidney, pituitary (n = 3), and tonsil (n = 2). Stained slides were evaluated using a DMLB brightfield microscope (Leica).
Cryostat sections at 5 μm were fixed with acetone and then washed twice with PBS at room temperature. LAG-3-expressing lymphocytes and LAG-3-negative elements in tonsil sections were used as positive and negative control tissues, respectively. Cryostat sections of CHO-S-LAG-3 cells and CHO-S cells were also used as positive and negative controls. Endogenous peroxidase activity was blocked by incubation with peroxidase block supplied in the Dako EnVision System (Agilent; catalog K4011). Slides were washed in PBS and then incubated with Dako protein block supplemented with 0.5% human γ globulins (Sigma-Aldrich) to block nonspecific binding sites. Subsequently, primary antibodies (relatlimab-FITC or LAG-3.1-G4P-FITC) or isotype control (hIgG4-FITC) were applied onto sections and incubated for 1 hour. After washing, slides were subjected to serial 30-minute incubations separated by repeat washes, with rabbit anti-FITC, peroxidase-conjugated anti-rabbit IgG polymer, and finally 6 minutes with diaminobenzidine (DAB) substrate-chromogen solution (Dako EnVision System; Agilent). Slides were washed with deionized water, counterstained with Mayer's hematoxylin, dehydrated, cleared, and cover-slipped with Permount following routine histologic procedures. Stained slides were evaluated by light microscopy as described above. Grading criteria were as noted in Supplementary Table S4.
To verify the cell type within the pituitary gland to which relatlimab bound, double immunofluorescent staining of LAG-3.1-G4P-FITC with antibodies specific for five pituitary hormones was performed on cryostat sections. Hormone-specific antibodies used were the following: mouse anti-adrenocorticotropic hormone (Dako), anti-luteinizing hormone (Dako), and anti-thyroid stimulating hormone (Abcam), or rabbit polyclonal anti-growth hormone (Dako), and rabbit anti-prolactin (Dako). In the same manner as above, slides were incubated with Dako protein block supplemented with 1% human γ globulins to block nonspecific binding sites. To reduce the background further, the PBS used in immunofluorescence contained high salt and low detergent (300 mmol/L NaCl and 0.01% TWEEN20). Subsequently, LAG-3.1-G4P-FITC with each of the above five hormone antibodies were simultaneously applied onto sections. Human IgG4-FITC, with either mouse IgG1 or rabbit IgG, was used as isotype control. After washing, slides were incubated with secondary antibodies: Alexa Fluor 488-conjugated goat anti-FITC/Oregon-Green (Invitrogen), Cy3-labeled donkey anti-mouse (Jackson ImmunoResearch), and Cy3-conjugated donkey anti-rabbit IgG (Jackson ImmunoResearch). After washing, sections were counterstained with Hoechst 33342 (Invitrogen) and mounted with ProLong Gold Antifade reagent (Invitrogen). To assess the specificity of the LAG-3.1-G4P-FITC staining, preabsorption of the primary antibody with human LAG-3 fusion protein was performed by preincubating LAG-3.1-G4P-FITC with 5- or 10-fold excess of human LAG-3 for 2 hours at room temperature before applying to the sections from two pituitary samples, following respective protocols, for immunoperoxidase and immunofluorescent methods. PBS supplemented with 0.5% human γ globulins and 0.5% BSA was used as a diluent for both primary and secondary antibodies. For more stringent nonspecific blocking conditions, Dako protein block supplemented with 1% human γ globulins or PBS supplemented with 1% human γ globulins was used as a blocking buffer or diluent, respectively. Both the staining intensity and frequency were evaluated using an Imager D1 fluorescence microscope (Zeiss).
Antitumor activity of anti-LAG-3 and anti-PD-1 in mouse models

Syngeneic tumor models

MC38 colon adenocarcinoma and SA1N fibrosarcoma tumor cells from the BMS Master Cell Bank were maintained in DMEM (Corning/Cellgro) supplemented with 10% FBS (Gemini Bio-Products). Cell lines used for in vivo studies were validated to be free of adventitious agents by PCR (Idexx Bioanalytics). Cells displayed a doubling time of 24 hours and were harvested near 80% confluence, washed in FBS-free medium three times, and resuspended in DMEM to provide subcutaneous injections of 1.0 × 10⁶ cells (0.1 mL) into the right flank of each study animal using a 1-cc syringe (Becton Dickinson) and 25-gauge half-inch needle. Tumors were measured in 3 dimensions (l × w × h/2) with a Fowler Electronic Digital Caliper (Model 62379 531; Fred V. Fowler Co.) and recorded. Animals were randomized into study groups with similar tumor size ranges. Mice with palpable tumors on day 7 after tumor implantation (average volume 75-100 mm³) were administered monoclonal antibodies intraperitoneally, as appropriate by group, on days 7, 10, and 14 at a dose of 10 mg/kg, unless otherwise noted. Tumor size and body weight of study animals were monitored and recorded twice weekly until tumors reached endpoint (≤1,500 mm³).
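The volume formula and endpoint rule translate directly into code; in the sketch below, the caliper measurements are hypothetical and the 1,500 mm³ endpoint is taken from the text.

```python
def tumor_volume_mm3(length, width, height):
    """Tumor volume from 3-dimensional caliper measurements: l * w * h / 2."""
    return length * width * height / 2.0

# Hypothetical caliper readings in mm; endpoint at 1,500 mm^3 as in the text.
for l, w, h in [(14.0, 12.0, 10.0), (16.5, 13.0, 14.0)]:
    v = tumor_volume_mm3(l, w, h)
    status = "endpoint reached" if v >= 1500 else "continue monitoring"
    print(f"{v:.0f} mm^3 -> {status}")
```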
Surrogate antibodies used in these studies included anti-mouse PD-1 monoclonal clone 4H2 (6), anti-mouse LAG-3 monoclonal clone C9B7W (4), and anti-mouse LAG-3 monoclonal clone 19C7 [BMS, proprietary chimeric antibody raised against mouse LAG-3 antigen (R&D Systems) with rat IgG variable regions grafted onto a mouse IgG1 Fc region incorporating a D265A mutation for reduced Fc receptor engagement, produced in a transfected CHO cell line]. Purified mouse IgG1 (MOPC-21) isotype control antibody was obtained from Bio X Cell.
Toxicity studies in cynomolgus monkey model

Study design

As part of a 4-week multidose toxicity study to evaluate the potential toxicity of relatlimab alone and in combination with nivolumab when administered intravenously, cynomolgus monkeys (Macaca fascicularis) were administered a single intramuscular bolus of KLH antigen (Thermo Fisher/Pierce) on day 1 and were dosed once weekly (q.w.) for five total doses with relatlimab alone or in combination with nivolumab (both antibodies produced at BMS). KLH, a protein purified from the mollusk Megathura crenulata, can stimulate a T cell-dependent antibody response (TDAR) upon immunization. Antibody formation to KLH can be used to monitor the effect of drug candidates on the humoral immune response. In this study, blood samples were collected at preselected time points to measure systemic exposures to relatlimab and nivolumab, for analysis of TDAR to KLH (IgG, IgM, and IgA collectively) by ELISA, and for analysis of T-lymphocyte subsets by flow cytometry (see Supplementary Methods). At scheduled necropsies following the final antibody dose and following a 6-week recovery period, unfixed spleen sections were collected for analysis of splenic T-lymphocyte subsets by flow cytometry. The study procedures were approved by the Institutional Animal Care and Use Committee, and the monkeys were monitored daily for clinical signs. The clinical evaluation parameters and dosing strategy for this study are described in Supplementary Table S1.
The potential toxicity of relatlimab was also assessed in a 3-month, multidose toxicity study in monkeys administered the drug intravenously q.w. for 3 months. The clinical evaluation parameters and dosing strategy for this study are described in Supplementary Table S2, and full details of the toxicity studies are provided in the Supplementary Methods included in Supplementary Data.
Statistical analyses
Statistical analyses of the peripheral blood lymphocyte phenotyping, KLH-specific antibodies, and splenic T-lymphocyte subset phenotyping data were performed by Global Biometric Sciences Nonclinical Biostatistics. Because the sample sizes within each sex were small, males and females were combined to provide sample sizes of about n = 6/group on day 30, and about n = 4/group on day 72. Homogeneity of group variances was assessed at the 5% significance level using Levene's test. Normality of the group distributions was assessed graphically. If significant heterogeneity of variance was observed (P ≤ 0.05), or if nonnormality was observed, an appropriate transformation of data was utilized. A one-way analysis of variance (ANOVA) model was utilized to analyze the percentage of splenic lymphocytes for each subset population; the Dunnett multiple-comparison t test procedure was utilized to compare the means for the vehicle-control group (group 1) to the means of each of the other groups (groups 2-5). Two-sided test statistics were calculated at the 5% significance level. Analyses were performed using SAS version 9.1.2. Graphs were produced using GraphPad Prism software.
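A minimal sketch of this analysis chain (Levene's test, one-way ANOVA, Dunnett's comparisons against the vehicle-control group) is shown below using SciPy (Dunnett's test requires SciPy ≥ 1.11). The group data are simulated and the original analysis used SAS 9.1.2, so this is illustrative only.

```python
# Levene's test for variance homogeneity, one-way ANOVA, then Dunnett's
# multiple-comparison test of each treated group against vehicle control.
import numpy as np
from scipy.stats import levene, f_oneway, dunnett

rng = np.random.default_rng(1)
vehicle = rng.normal(30, 5, 6)                              # group 1 (vehicle), n = 6
treated = [rng.normal(m, 5, 6) for m in (28, 33, 35, 27)]   # groups 2-5

print("Levene p =", levene(vehicle, *treated).pvalue)       # heterogeneity check
print("ANOVA  p =", f_oneway(vehicle, *treated).pvalue)     # overall group effect
print("Dunnett p =", dunnett(*treated, control=vehicle).pvalue)  # each group vs. vehicle
```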
Additional methods
Additional details on methods and experiments relating to relatlimab binding to human and cynomolgus monkey LAG-3, binding affinity, cell-based bio-and blocking assays, antibody-dependent cellular cytotoxicity (ADCC), PCR analysis, splenic T-lymphocyte subset phenotyping, ex vivo responses to KLH-specific antibodies, antibodies used, cell cultures, and procurement of human tissue samples can be found in the Supplementary Methods.
Data availability
The data generated in these studies are available within the article. The X-ray diffraction coordinates and structure factors are publicly available from the Research Collaboratory for Structural Bioinformatics Protein Data Bank under PDB ID 7UM3. The raw data underlying the included surface plasmon resonance (SPR) results were generated by a core facility and may not be available.
References are included with respect to animal models and reagents used in the study as appropriate. Relatlimab is commercially available as a therapeutic agent in the United States. Other anti-LAG-3 antibodies mentioned in the article can be made available upon reasonable request to the lead author and review by BMS, in compliance with BMS compound, technology, and data sharing policies (further details can be found at https://www.bms.com/researchers-and-partners/independent-research/compound-and-technology-requests.html and https://www.bms.com/researchers-and-partners/independent-research/datasharing-request-process.html).
Hybridoma clone 25F7 was selected for expansion and further characterization based on its activity profile in biochemical and cell-based binding and blocking assays, as well as in functional T-cell activation assays. The variable region sequences of this antibody were cloned and subsequently grafted onto human κ and IgG4 constant region sequences for reduced Fc receptor engagement and a reduced potential for effector T cell-mediated cytotoxicity. The resulting antibody LAG3.1-G4P also incorporated an S228P stabilizing hinge mutation to prevent in vivo and in vitro IgG4 Fab-arm exchange (26).
The sequence of antibody LAG-3.1-G4P contained two potential deamidation sites in the CDR2 region of the heavy chain at residues N54 and N56, which were confirmed by biophysical analysis under forced deamidation conditions. These potential sequence liabilities were addressed by mutating these sites (N54R and N56S), with no deleterious effect observed on the functional activity of the resulting antibody designated LAG-3.5, later renamed BMS-986016 (relatlimab).
Relatlimab-binding specificity
Relatlimab displayed saturable and selective binding to immobilized hLAG-3-hFc by ELISA, with a half-maximal effective concentration (EC50) of 0.49 nmol/L compared with an EC50 of 1.46 nmol/L for binding to CHO cells expressing hLAG-3 (Supplementary Fig. S1A). To confirm that relatlimab recognized native LAG-3, the binding of relatlimab to primary activated human and cynomolgus CD4+ T cells was measured and found to be substantially higher for binding to human (mean EC50, 0.11 nmol/L) than monkey LAG-3 (mean EC50, 29.11 nmol/L; Fig. 1A and B). The parent hybridoma of relatlimab (clone 25F7) did not bind to cells expressing full-length murine LAG-3 (Supplementary Fig. S1B); by extension, relatlimab is also expected to not bind murine LAG-3.
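EC50 values of this kind are typically derived from nonlinear dose-response fits. The sketch below fits a generic four-parameter logistic curve to a simulated titration; it illustrates the fitting procedure, not the analysis code behind the reported values.

```python
# Four-parameter logistic (4PL) fit of a simulated binding titration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """4PL dose-response curve; concentrations and EC50 in nmol/L."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # nmol/L titration
signal = four_pl(conc, 50, 5000, 0.11, 1.0)                    # simulated signal
signal += np.random.default_rng(2).normal(0, 50, conc.size)    # measurement noise

params, _ = curve_fit(four_pl, conc, signal,
                      p0=[signal.min(), signal.max(), 1.0, 1.0])
print(f"fitted EC50 = {params[2]:.2f} nmol/L")
```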
The kinetics of relatlimab binding to LAG-3 were determined by SPR for both intact bivalent relatlimab and its Fab fragment. The apparent affinity of the bivalent antibody was found to be 0.12 nmol/L at pH 7.4 under the experimental conditions detailed in the Supplementary Methods. The monovalent affinity of the Fab fragment measured with SPR was 10 nmol/L (Fig. 1C and D). Binding at pH 6, acidic conditions that likely exist in the tumor microenvironment (TME) (27), showed faster dissociation kinetics with a modest effect on affinity, indicating that relatlimab would still bind under these conditions (Supplementary Fig. S1C).
Disruption of LAG-3 receptor/ligand interactions by relatlimab
The functional potency of relatlimab in blocking the interaction of LAG-3 with MHC II and FGL1 was examined in a series of biochemical and cell-based assays.
MHC II binding
The interaction of LAG-3 with MHC II was confirmed by comparing the binding of mouse and hLAG-3-hFc to the wild-type Raji B lymphoid cell line and to an MHC II-low variant, RJ225 (24), with substantially less binding observed to the mutant cell line compared with the wild-type line (Supplementary Fig. S2). Relatlimab completely blocked detectable binding of hLAG-3-mFc to MHC II+ Daudi B lymphoid cells, exhibiting a half-maximal blockade (IC50) of 0.67 nmol/L compared with isotype control antibody (Fig. 2). Blockade was confirmed by biolayer interferometry (BLI) measurement of the binding of LAG-3-Fc to HLA-DR1 in the presence of either bivalent relatlimab or a Fab fragment of relatlimab (Fab 3.5). The interaction was equivalently blocked by both relatlimab and its Fab, but not by other antibody clones that bind within the D3-D4 extracellular domains (Supplementary Fig. S3).
FGL1 binding
We next evaluated whether relatlimab can block the interaction between LAG-3 and the recently identified ligand, FGL1, using ELISA. Recombinant hLAG-3-hFc fusion protein bound immobilized mFc-hFGL1 protein (3 μg/mL), with an EC50 of 0.085 nmol/L (Fig. 3A) and was blocked by relatlimab, with an IC50 of 0.019 nmol/L, but not by the isotype control antibody (Fig. 3A and B).
Similar studies were conducted using BLI, which confirmed that both full-length FGL1 (mFc-hFGL1), as well as the fibrinogen domain alone (mFc-hFGL1-FD), bound hLAG-3-hFc in a dose-dependent, albeit nonsaturable, manner up to 2,000 nmol/L FGL1. Notably, the binding measurement of dimerized FGL1 to captured LAG-3-hFc is reflective of an avidity-driven interaction. Our data, as well as data presented in the supplement of Wang and colleagues (12), indicated that the interaction between FGL1 and LAG-3 was relatively weak (>1 μmol/L; Fig. 3C and D). This interaction was inhibited by relatlimab and its Fab (Fab 3.5), but not by hLAG-3 antibodies specific to epitopes within the D3 or D4 domains of LAG-3 (clones 23C9, 1C2, and 10G7; Fig. 3E; Supplementary Fig. S3B). Although the mechanism by which soluble FGL1 may productively engage LAG-3 to drive T-cell inhibition is unknown, these results corroborate those of Wang and colleagues, suggesting that the FGL1 fibrinogen domain alone is sufficient to bind LAG-3 (12).
Functional blockade of ligand interaction with LAG-3
The ability of hLAG-3 to inhibit T-cell responses was studied using an antigen-specific T-cell hybridoma, 3A9, specific for hen egg lysozyme peptide (HEL48-62), presented by the MHC II-matched antigen-presenting mouse cell line LK35.2 (see Supplementary Methods: Functional cell-based bioassay of LAG-3/MHC II interaction and blockade by relatlimab; ref. 28). The expression of full-length hLAG-3 on the T cells (3A9-hLAG-3; Supplementary Fig. S4) resulted in attenuated T-cell peptide responsiveness, as demonstrated by lower murine IL2 secretion when cocultured with LK35.2 cells; this secretion could be enhanced by relatlimab in a dose-dependent manner (Fig. 4A).
Using an alternate format of the assay where cells were stimulated with a suboptimal concentration of HEL48-62 peptide in the presence of titrated mAb, relatlimab displayed potent blockade of LAG-3-mediated inhibition (IC50, 1.05 nmol/L) compared with the isotype control antibody (Fig. 4B). Collectively, these results suggest that the observed functional T-cell inhibition seen in coculture with LK35.2 cells and antigen, which is likely mediated by murine MHC II present on the APCs interacting with hLAG-3 on the T cells, can be reversed by relatlimab in a dose-dependent manner.

In contrast to published results (12), we observed only modest T-cell inhibitory activity by soluble mFc-hFGL1 that had been purified to ensure a homogeneous dimeric preparation (Supplementary Fig. S5) and, consequently, we next evaluated whether higher order oligomers of FGL1 could elicit enhanced inhibition. We generated an ordered oligomer of FGL1 by fusion to hexamer-forming hFc (E345R/E430G/S440Y; FGL1-RGY; refs. 29, 30). Although increased T-cell inhibition was observed with this variant relative to the wild-type Fc fusion protein alone, the impact on activity was still modest (Supplementary Fig. S5). For this reason, we sought to determine whether the expression of a membrane-tethered version of FGL1 on the cell surface could allow for a higher avidity interaction of FGL1 with LAG-3, thereby revealing functional inhibitory engagement. A membrane-tethered version of FGL1 (FGL1MT) was generated by fusing the fibrinogen domain of FGL1 with the transmembrane and intracellular domain of type II membrane fibrinogen C domain-containing protein 1 (12), and was expressed on LK35.2 cells (LK35-FGL1MT; Supplementary Fig. S4). 3A9-hLAG-3 cells were more attenuated in their peptide responsiveness in coculture with LK35-FGL1MT cells compared with parental LK35.2 cells expressing MHC II alone (Fig. 4C). This suggests that FGL1-mediated inhibition in this setting is additive to the inhibition resulting from MHC II/LAG-3 engagement. Relatlimab reversed the T-cell inhibition resulting from coculture with either parental LK35.2 or LK35-FGL1MT cells with similar potency (IC50 of 1.39 nmol/L and 0.95 nmol/L, respectively) and to equivalent maximal levels of cytokine production, suggesting that relatlimab can effectively block the simultaneous engagement of LAG-3 by both MHC II and FGL1 (Fig. 4C; Supplementary Fig. S4).
The functional activity of relatlimab in the context of primary T cells was evaluated in human healthy donor peripheral blood mononuclear cell (PBMC) cultures stimulated with superantigen Staphylococcal enterotoxin B. Cultures from 15 of 18 donors showed enhanced IL2 secretion in the presence of relatlimab alone compared with the isotype control and, in most instances, the stimulation was less than that observed for treatment with nivolumab. The combination of relatlimab and nivolumab resulted in higher levels of stimulation compared with a combination of nivolumab and isotype control ( Fig. 4D and E). Finally, relatlimab binding to activated T cells did not mediate significant levels of ADCC compared with the isotype control, whereas binding with positive control nonfucosylated human IgG1 anti-CD30 resulted in robust cell lysis (Supplementary Fig. S6).
These results are consistent with the synergistic in vivo antitumor activity observed from the combination antibody blockade of mouse LAG-3 and PD-1 in syngeneic tumor models. Similar to previously published reports (9), the antitumor activity of anti-mouse LAG-3 mAbs in combination with anti-mouse PD-1 monoclonal antibody 4H2 showed enhanced efficacy compared with LAG-3 or PD-1 single-agent blockade in both MC38 colon carcinoma tumors (Supplementary Fig. S7) and in Sa1N fibrosarcoma tumors (Supplementary Fig. S8). Substantial antitumor activity from the blockade of LAG-3 alone was observed only in the immunogenic Sa1N fibrosarcoma model using the C9B7W mAb. In the MC38 colon carcinoma model, enhanced efficacy was observed for all doses of anti-mouse LAG-3 mAb 19C7 above 1 mg/kg when combined with 4H2 compared with 4H2 alone.
Characterization of the relatlimab-binding site
Previous studies by Triebel and colleagues (31) demonstrated that an extra insertion-loop sequence in the N-terminal extracellular D1 domain of LAG-3 potentially mediates the interaction with MHC II (32). Our initial analysis of hybridoma clones revealed that the most potent antibodies, including the parent clone for relatlimab (LAG3.1-G4P), bound within the N-terminal D1-D2 domain region of LAG-3 (Supplementary Fig. S3C). A peptide consisting of the full-length D1 insertion-loop sequence (residues P60-R75: PGPHPAAPSSWGPRPR) was synthesized to test for antibody reactivity, and LAG3.1-G4P was observed to bind strongly to this peptide by ELISA, with an EC50 of 0.44 nmol/L (Supplementary Fig. S9A and S9B). Next, LAG3.1-G4P was assessed by ELISA for binding a set of overlapping peptides spanning the 30 residues of the LAG-3 insertion loop (Supplementary Fig. S9A); the results indicated that the antibody binds residues within the peptide H63-W70 (HPAAPSSW; Fig. 5A). Carbene chemical footprinting of a complex consisting of the LAG-3 D1-D2 domains and relatlimab Fab (Fab 3.5) confirmed that the epitope was contained within the peptide A59-W70 because this peptide showed the largest decrease in chemical labeling following complex formation compared with LAG-3 D1-D2 alone (Fig. 5B, top). Residue-level labeling of peptide A59-W70 identified multiple amino acids undergoing protection following complex formation contained in the region H63-W70 (Fig. 5B, bottom). Additional peptides were observed with labeling protection that was localized to individual amino acids in the D1 domain. Because the protein structure of LAG-3 with the insertion loop has not been determined, we speculate that these residues are likely to be structurally located near the A59-W70 peptide. Comparison of the insertion-loop sequences of human and cynomolgus monkey LAG-3 proteins showed 75% sequence identity for the epitope (H63-W70), likely accounting for the lower affinity of relatlimab binding to nonhuman primate LAG-3 (Supplementary Fig. S9C).
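The overlapping-peptide scan and the cross-species identity comparison lend themselves to simple scripted checks. The sketch below enumerates sliding-window peptides across the insertion-loop peptide and computes position-wise percent identity; the cynomolgus segment shown is a made-up variant chosen only to reproduce the reported 75% figure, not the actual monkey sequence.

# Sketch: sliding-window peptides across the insertion loop and percent
# identity between two aligned epitope segments (no gaps).
def overlapping_peptides(seq, length=8, step=2):
    """Return windows of the given length across seq, advancing by step."""
    return [seq[i:i + length] for i in range(0, len(seq) - length + 1, step)]

def percent_identity(a, b):
    """Fraction of identical residues at aligned positions, as a percentage."""
    assert len(a) == len(b)
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

loop = "PGPHPAAPSSWGPRPR"  # human LAG-3 D1 insertion-loop peptide (P60-R75)
print(overlapping_peptides(loop))

human_epitope = "HPAAPSSW"  # H63-W70
cyno_epitope = "HPATPSSG"   # hypothetical variant with 6/8 residues identical
print(percent_identity(human_epitope, cyno_epitope), "% identity")  # 75.0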
Structural characterization of relatlimab in complex with the LAG-3 insertion loop
To further characterize relatlimab interactions with LAG-3, the structure of the Fab of LAG3.1-G4P (Fab 3.1) in complex with a 16-residue peptide (Ac-PGPHPAAPSSWGPRPR-amide) derived from the insertion-loop sequence of the LAG-3 D1 domain was determined by X-ray crystallography (Supplementary Table S3). The three-dimensional structure, captured at 2.4 Å resolution, revealed distinct density for 12 of the 16 peptide residues (sequence called out in Fig. 5C) in the Fab 3.1 binding groove, with CDR-H3 and CDR-L3 pushed apart (Fig. 5C). Roughly half (680 Ų) of the total accessible surface area of the peptide (1,350 Ų) was buried by Fab 3.1, with light chain interactions more prevalent than heavy chain interactions (380 Ų vs. 160 Ų, respectively). Protein-binding assays using LAG-3 domain variants further confirmed this core peptide sequence as essential for the interaction between relatlimab and LAG-3, with deletion of the H63-W70 sequence resulting in the abolishment of relatlimab binding (Fig. 5D; Supplementary Fig. S9D). Overall, the peptide binding, BLI, and carbene chemical footprinting data are in good agreement and were corroborated by the X-ray crystal structure data and LAG-3 domain variant-binding data to collectively identify a linear epitope of relatlimab centered on residues H63-W70 of the insertion loop.
Relatlimab tissue-binding properties in normal human tissues
To confirm LAG-3 expression in immune cells, and to assess any unexpected tissue binding, the tissue cross-reactivity of relatlimab and LAG3.1-G4P was evaluated in a panel of normal human tissues (Supplementary Table S4). The tissue-binding patterns of relatlimab and LAG3.1-G4P were very similar. In hyperplastic tonsil tissue, strong positive staining was revealed in a small subset of lymphocytes primarily distributed in the interfollicular area (T-cell region), with few in the mantle zone, and only extremely rarely in the germinal center (Fig. 6A). This lymphocyte staining was expected and consistent with published observations (33). In pituitary tissue, rare to occasional moderate/strong immunoreactivity was displayed in the adenohypophysis. No specific staining was observed in the other tissues examined (Supplementary Table S4). Dual-color immunofluorescence analysis with LAG3.1-G4P confirmed the specificity of this staining, localizing LAG-3 to pituitary gonadotroph cells, which included follicle-stimulating hormone- and luteinizing hormone (LH)-producing cells. Pituitary expression of LAG-3 was also confirmed by PCR (Fig. 6A and B; Supplementary Fig. S10 and S11; Supplementary Tables S5A and S5B).
Preclinical toxicity assessment in cynomolgus monkeys
Because relatlimab was shown to bind to cynomolgus LAG-3, we evaluated the potential toxicity of relatlimab ± nivolumab in cynomolgus monkeys. Additionally, splenic T-cell phenotyping, peripheral blood lymphocyte phenotyping, and T-cell-dependent antibody responses (TDARs) were analyzed (Supplementary Methods).
Repeat-dose toxicity
In a 4-week repeat-dose toxicity study (Supplementary Table S1), relatlimab was clinically well tolerated by cynomolgus monkeys when administered intravenously once weekly (q.w.) at 30 or 100 mg/kg, with no adverse findings. Relatlimab, when administered at 100 mg/kg in combination with nivolumab at 50 mg/kg, was generally well tolerated in eight out of nine monkeys, with no adverse clinical signs; the exception being moribundity in one male monkey attributed to central nervous system (CNS) vasculitis (Supplementary Fig. S12A). Additional minimal histopathologic findings in the combination group were likely the result of enhanced immunostimulatory effects of nivolumab in combination with relatlimab because no treatment-related histopathologic changes were noted with relatlimab alone (Supplementary Table S6). Systemic exposures of relatlimab were evaluated and showed circulating half-lives of 490, 460, and 740 hours for the (relatlimab/nivolumab) 30/0, 100/0, and 100/50 mg/kg dose groups, respectively (Supplementary Table S7; Supplementary Fig. S12B). The CNS vascular findings may have been a result of a loss of tolerance to self-antigens based on the synergistic role of PD-1 and LAG-3 in maintaining self-tolerance. Given the long half-lives of relatlimab and nivolumab, the irreversibility of relatlimab plus nivolumab-related findings is likely a result of continued exposure to the test articles throughout the duration of the recovery period. In a 3-month toxicity study (Supplementary Table S2), relatlimab was generally well tolerated by mature cynomolgus monkeys (4-7 years old) when administered intravenously q.w. up to 100 mg/kg.
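Half-lives such as these can, in principle, be recovered from concentration-time profiles by log-linear regression of the terminal phase (t1/2 = ln 2 / k_el). The sketch below uses hypothetical serum concentrations chosen to yield a half-life near 490 hours; it is not the study's pharmacokinetic analysis.

# Sketch: terminal half-life from a log-linear fit of the elimination phase.
import numpy as np

t_hours = np.array([168.0, 336.0, 504.0, 672.0])  # terminal-phase sampling times
conc = np.array([800.0, 630.0, 500.0, 395.0])     # hypothetical serum levels

slope, intercept = np.polyfit(t_hours, np.log(conc), 1)
k_el = -slope                     # first-order elimination rate constant (1/h)
t_half = np.log(2) / k_el
print(f"t1/2 = {t_half:.0f} hours")  # ~495 hours with these made-up values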
In vivo pharmacodynamic effects of relatlimab administration
As part of the 4-week repeat-dose toxicity study of relatlimab and nivolumab in monkeys, all animals were immunized intramuscularly on day 1 of the study with 10 mg of KLH to permit assessment of in vivo antibody responses, ex vivo recall responses to KLH, and immunophenotypic analyses of peripheral blood and splenic T-lymphocyte subsets. There were no observed relatlimab- or nivolumab-related changes in TDARs to KLH among any of the study groups, and no measurable ex vivo T-cell recall responses to KLH. The comparatively low affinity of relatlimab for cynomolgus LAG-3 may have resulted in incomplete receptor blockade, leading to attenuated lymphocyte responses, including TDARs. Nevertheless, drug-related changes in ex vivo T-cell recall responses to KLH were indicative of an enhanced antigen-specific response. Increases in cytokine responses, especially IFNγ, were measured at day 22 in the mean percentage of both double-positive (CD69+TNFα+; CD69+IFNγ+) and triple-positive (CD69+TNFα+IFNγ+) CD4+CD8− splenic T cells among both male and female study groups that received nivolumab alone, high-dose relatlimab (100 mg/kg) alone, or the combination of the two. Although the differences among these groups did not reach statistical significance, it was nevertheless interesting that the highest increases in both double-positive (Fig. 7A and B) and triple-positive (Fig. 7C) responding CD4+ T cells were observed in the relatlimab plus nivolumab combination group compared with the groups receiving either drug alone. These increases, which waned by day 57 across all groups, are consistent with the pharmacologic mechanisms of action of relatlimab and nivolumab.

Phenotypic analysis was performed for splenic and peripheral blood memory T-cell subsets and for peripheral blood total lymphocytes (T, B, and NK cells). Overall, comparing pretest, day 15, and day 30 phenotypic profiles did not show identifiable trends in drug-related lymphocyte population changes across time points and female and male groups. Statistically significant differences among splenic memory T-cell subsets relative to the vehicle-control group were observed in CD4+ T regulatory cells in the nivolumab and combination groups (Supplementary Fig. S13; Supplementary Table S9). Among peripheral memory T-cell subsets, notable differences were observed mainly in naïve and central memory CD4+ and CD8+ T cells in the combination study group animals, as well as in activated CD25+CD4+ T cells among males, and in naïve and memory CD8+ T cells among females (Supplementary Table S10). No substantial changes were observed in total T, B, and NK cell numbers (Supplementary Table S11).

Figure 7. Flow-cytometric intracellular cytokine staining analysis of splenic CD4+ T-cell populations in male and female monkeys (n = 5/group per sex) from a 1-month toxicity study; animals were treated with a single dose of KLH at study initiation and 4 weekly doses of relatlimab ± nivolumab as indicated. Horizontal bars represent group means.
Discussion
Continuous exposure of T cells to cognate antigen leads to an exhausted, hyporesponsive phenotype characterized, in part, by the expression of multiple inhibitory receptors including PD-1, CTLA-4, LAG-3, and T-cell immunoglobulin and mucin domain-3 (34). LAG-3 has been shown to be expressed in TILs of several tumor types, including melanoma, hepatocellular carcinoma (HCC), and non-small cell lung cancer (NSCLC), often in parallel with increased PD-1 (35-37). Combination immunotherapies can result in improved clinical benefit compared with single checkpoint blockade, as exemplified by the combination of anti-PD-1 (nivolumab) and anti-CTLA-4 (ipilimumab) approved in the United States and other countries for multiple indications, including melanoma, NSCLC, renal cell carcinoma, HCC, colorectal cancer, and pleural mesothelioma (38, 39).
Targeting additional checkpoints, such as LAG-3, is a promising approach for overcoming resistance to ICB by blocking multiple inhibitory receptors. LAG-3 acts in a nonredundant manner relative to PD-1 to suppress T-cell stimulation and presents an exciting new opportunity for combination ICB that may improve clinical responses. Preclinical data presented in recent years illustrate a clear synergy between the inhibitory receptors LAG-3 and PD-1 in controlling immune homeostasis, preventing autoimmunity, and enforcing tumor-induced tolerance (9, 16). Although the focus of this report is the characterization of relatlimab itself, our in vivo results corroborate a large body of existing literature demonstrating that the combined treatment of mice with blocking antibodies against LAG-3 and PD-1 receptors results in more robust immune responses than either single-agent treatment (9, 40). There is a well-developed understanding that the coblockade of LAG-3 and PD-1 acts to reinvigorate exhausted T cells more robustly than single ICB and results in enhanced polyfunctionality (IFNγ and TNFα production and cytotoxic killing) to potentiate antitumor activity (9, 11, 17-19, 41-43). Similarly, in the in vitro primary T-cell superantigen stimulation assays reported here, only modest activity was observed from LAG-3 single-agent blockade with relatlimab compared with the substantially enhanced responsiveness in the context of coblockade of LAG-3 and PD-1 with relatlimab and nivolumab, respectively. Although our studies did not specifically interrogate the individual contributions of CD4+ and CD8+ T cells to the overall T-cell responses observed, superantigens have been demonstrated to stimulate both CD4+ and CD8+ T cells, and our data suggest that combined PD-1 and LAG-3 blockade likely potentiates both subsets (44). Moreover, it has previously been demonstrated that LAG-3 blockade, particularly in combination with PD-1 blockade, can enhance tumor killing by CD8+ T cells in vitro (17). As expected for an IgG4 isotype antibody null for Fc receptor engagement, there was no measurable ADCC mediated by relatlimab.

Toxicologic assessment of relatlimab in cynomolgus monkeys showed it to be generally well tolerated, alone and in combination with nivolumab. Relatlimab displayed no unexpected normal tissue cross-reactivity by IHC, except for observed on-target binding in the pituitary. The role of LAG-3 in the pituitary has not been determined.
Herein, we have demonstrated in vitro functional binding and blocking activity of relatlimab. The antibody binds with a higher affinity to activated human T cells than to activated cynomolgus monkey T cells, most likely due to species differences in the sequence of the D1 domain that is targeted by the mAb. Relatlimab reversed the functional inhibition of T-cell activation by LAG-3 in an in vitro antigen-specific T-cell hybridoma assay, demonstrating that the antibody can functionally block the inhibition of T cells in the context of LAG-3 engagement by peptide-loaded MHC II.
Wang and colleagues recently reported the identification of FGL1 as a new putative ligand of LAG-3, presenting compelling evidence for its functional role in LAG-3 signaling and its potential clinical relevance in certain cancers (12). In our investigation of FGL1-mediated T-cell suppression, we showed that the engineered coexpression of a cell membrane-tethered FGL1 in the context of a mouse MHC II-positive APC resulted in more potent inhibition of T-cell responsiveness compared with the engagement of the receptor by MHC II alone. In biochemical assays, relatlimab blocked the interaction of LAG-3 with both MHC II and FGL1. Consistent with these observations, relatlimab also strongly blocked the enhanced T-cell suppression observed in cocultures with MHC II/FGL1 coexpressing APCs. In these assays, relatlimab restored T-cell cytokine production to a level equivalent to that observed for antibody treatment of T cells in coculture with APCs expressing MHC II alone. Additional work is needed to fully understand the nature of the interactions between these three molecules, but it is possible that a ternary complex of LAG-3, MHC II, and FGL1 could exist that may promote potent T-cell inhibition. Our data indicate that the FGL1/LAG-3 interaction is relatively weak (>1 μmol/L), but in the artificial context of APCs engineered for surface expression of FGL1, the increased valency likely fosters an avidity-driven enhancement of FGL1-mediated T-cell inhibition, as we observed.

The mechanism by which soluble FGL1 can functionally engage LAG-3 in the periphery or intratumorally to inhibit T-cell responses remains unclear, and we note a recent report that contradicts several findings from Wang and colleagues (45). FGL1 is produced by the liver and secreted into the bloodstream, where it plays a role in hepatocyte regeneration and metabolism to suppress environmentally induced inflammation (12, 46-49). FGL1 has been reported to promote invasion and metastasis in gastric cancer and to mediate drug resistance in lung and liver cancers (50-52). It has been suggested that FGL1 may associate with extracellular matrix components to facilitate LAG-3 interactions, similar perhaps to the interactions of the latent transforming growth factor (TGF)-β complex with αv integrins and the subsequent release of active TGFβ (53), but this has yet to be demonstrated. There is some evidence that FGL1 may form high-molecular-weight oligomers with FGL2 (54), but it is unclear whether FGL1 is able to assemble into homogeneous high-molecular-weight complexes alone. Dimeric Fc-tagged FGL1 protein, carefully processed to ensure a homogeneous dimer preparation, failed to inhibit in the 3A9 T-cell hybridoma assay, in contrast to a previous report (12), but we observed some evidence of 3A9 cell inhibition when cells were treated with FGL1 bearing an Fc tag containing mutations that promote hexamer formation. These differences in results may reflect differences in the biochemical properties of the purified proteins used in the different assays. It remains to be determined what the physical oligomerization state of FGL1 is in the periphery and the TME, and it remains unclear what effect soluble LAG-3 in either compartment may have on the ability of tumor-expressed FGL1 to functionally engage LAG-3 on T cells.
Biochemical and biophysical analyses demonstrated that the epitope of relatlimab resides in the insertion loop of the D1 domain of LAG-3 and is centered on the peptide H63-W70. Wang and colleagues showed evidence that the Y77F mutation in the C' strand of D1, which has been shown to abrogate MHC II interaction with LAG-3, does not perturb the interaction with FGL1 (12). These results support the hypothesis that FGL1 and MHC II interaction sites on LAG-3 are independent of one another. Although the mechanism of soluble FGL1-mediated inhibition remains to be determined, our data demonstrate that relatlimab can block its interaction with LAG-3, supporting the potential utility of relatlimab in cancer indications where FGL1 expression is high (e.g., HCC), or where its expression correlates with poor prognosis (e.g., lung adenocarcinoma; refs. 12, 52).
The efficacy and manageable safety profile of relatlimab combined with nivolumab has been demonstrated in the phase II/III clinical trial RELATIVITY-047, where prolonged progression-free survival (PFS) benefit was observed with relatlimab combined with nivolumab compared with nivolumab monotherapy in patients with previously untreated metastatic or unresectable melanoma (20, 21). Collectively, these data support the development of a combination of relatlimab plus nivolumab as a promising therapeutic strategy in clinical oncology that has the potential to enhance antitumor responses and broaden the range of responding cancer types compared with nivolumab monotherapy. | 2022-01-28T16:07:20.777Z | 2022-01-25T00:00:00.000 | {
"year": 2022,
"sha1": "cb1c3b5772a1283672471f1567bd340b719be5d7",
"oa_license": "CCBYNCND",
"oa_url": "https://aacrjournals.org/cancerimmunolres/article-pdf/doi/10.1158/2326-6066.CIR-22-0057/3207560/cir-22-0057.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4cd5669cb6c00e026fdf5c4acf38885bedd3d91a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
213328651 | pes2o/s2orc | v3-fos-license | Knowledge Recombination and Inventor Networks: The Asymmetric Effects of Embeddedness on Knowledge Reuse and Impact
Inventors are triply embedded. They are embedded in a network of knowledge components that they can reuse in future inventions. They are embedded in an inventor network, where internal embeddedness (the strength of relationships between focal inventors and their colleagues upon whose knowledge the team builds) and network centrality influence access to information. Finally, they are embedded in the firm, with its specific routines that favor external or internal knowledge search, what we call search orientation. Using a sample of 39,785 semiconductor patents, we study the pattern of knowledge reuse, or the recombination of technologically similar components, on invention impact. We propose that reuse of internal knowledge affects invention impact in a concave manner and posit that internal embeddedness steepens this relationship while network centrality leads to an inflection point shift. We examine whether these effects differ for subsamples of firms with inward- or outward-looking search orientation. We find that inward-looking firms’ optimal pattern of internal knowledge reuse does not differ markedly from that of outward-looking firms. We find that inward-looking firms are more susceptible to internal embeddedness and that centrality in the collaborative network flattens rather than shifts the relationship between reuse and impact. These findings elevate the theoretical discourse of embeddedness from the effects of network positions on innovation outcomes to similar network positions having asymmetric effects that vary with the firm’s search orientation. Our results contribute to an emergent area in innovation research on how inventor networks shape the inventive process and its outcomes.
The development, acquisition, management, and transfer of knowledge within and across firms has occupied scholars for decades (Appleyard, 1996; Grant, 1996; Kogut & Zander, 1992; Matusik, 2002; Polanyi, 1966, 2009). In general, knowledge evolves through recombinant processes and the exchange of ideas (Johnson, 2011; Nelson & Winter, 1982; Schumpeter, 1934), both within and beyond organizational boundaries, by individuals who are embedded in knowledge and collaborative networks that support the pooling of resources (Guan & Liu, 2016; Uzzi, 1996). During the recombinant process, inventors exploit their social networks in order to gather insights, validate information, and challenge their own vantage points. We focus on a subset of recombinations, namely, those that reuse technologically similar components, and hypothesize that internal knowledge reuse relates curvilinearly (inverted U shape) to invention impact. We examine how these concave relationships are influenced by network characteristics and a firm's search orientation.
The relationship between the reuse of technologically similar components and invention impact is explained by two counteracting latent mechanisms.¹ Absorptive capacity increases with the degree of reuse because the prior use of components creates knowledge in which absorptive capacity is grounded (Zahra & George, 2002; Zou, Ertug, & George, 2018). At the same time, an increasing degree of reuse reduces the novelty creation potential, because teams that rehash similar component combinations exhibit less exploration, which negatively correlates with novelty and eventually with impact. Both absorptive capacity (which influences an invention's usefulness) and novelty are required to create a patentable invention. Thus, we suggest that the concave shape is explained through the multiplicative effects of increasing team absorptive capacity and decreasing novelty creation as the degree of reuse increases (Gilsing, Nooteboom, Vanhaverbeke, Duysters, & van den Oord, 2008; Nooteboom, van Haverbeke, Duysters, Gilsing, & Van den Oord, 2007).
Recombinant processes, however, do not happen in a vacuum. Prior research has established that inventors are doubly embedded in knowledge networks and in networks of collaborative ties (Guan & Liu, 2016; C. Wang, Rodan, Fruin, & Xu, 2014). We investigate the moderating role of two dimensions of the collaborative tie network. Internal embeddedness is the quality of being ingrained in an intraorganizational network of social relationships that enable knowledge exchange through the routinization and stabilization of linkages among organizational members (Gulati, 1998; Uzzi, 1996). This specialized form of embeddedness captures the relative ease (in terms of tie strength) with which the focal team has access to colleagues with domain knowledge. We posit that as internal knowledge reuse increases, internal embeddedness will steepen the concave relation between reuse and impact. Next, we look at the moderating effects of a team's network centrality, which we operationalize through the team members' mean degree centrality in the industry collaboration network. We argue that network centrality will enhance a team's potential for novelty creation thanks to increased access to information. This leads us to suggest that the moderating effect of network centrality will consist of an inflection point shift of the relationship between internal knowledge reuse and invention impact.
Finally, we add a third layer of embeddedness by recognizing that inventor teams are also embedded within their own firms. Such firms, even within an industry, can be highly heterogeneous, and much-studied differentiators are the firm's knowledge base and search behavior (Hoopes, Madsen, & Walker, 2003; March, 1991; H. Wang, Choi, Wan, & Dong, 2016). Search orientation is a characteristic of the firm's knowledge base that reveals the firm's historical tendency to search internally or externally, which could affect the effectiveness with which firms reuse knowledge. We split our sample into two separate groups of internally and externally oriented firms and ask whether the hypothesized effects would differ for either.
This research contributes to an emergent area in innovation research that examines the interplay of the knowledge network from which teams select and reuse components and the collaboration network in which they are embedded (Guan & Liu, 2016; C. Wang et al., 2014). Prior work finds that the two networks are decoupled: collaborative patterns of researchers differ from co-occurrence patterns of components because an industry's knowledge component combinations at least partially precede the current community of active researchers (C. Wang et al., 2014). By investigating interactions between knowledge reuse and collaborative networks at the level of the invention, our study advances our understanding of how knowledge and inventor networks are interlinked and jointly influence the impact of inventions.
Our findings shine new light on the "paradox of embeddedness," which suggests that embeddedness may facilitate as well as hinder knowledge transfer (Asakawa, Park, Song, & Kim, 2017; Uzzi, 1997). While studies have shown that different types of embeddedness can influence knowledge-related outcomes in diverse ways (e.g., Asakawa et al., 2017), we find that the same type of embeddedness (network centrality) can have diverging consequences depending on the firm's search orientation. Thus, we expand the notion that inventors are doubly embedded in networks of knowledge components and knowledge holders (C. Wang et al., 2014) to a third layer of embeddedness in the firm with its idiosyncratic search orientation. Given that we expose diverse moderating effects of network centrality on reuse, our findings suggest that purely structuralist network arguments are insufficient to explain innovation success. This opens avenues for research into the influence of network structure on actor behavior.
Theory Development
We follow Nelson and Winter (1982), who argued that the inventive process "consists to a substantial extent of a recombination of conceptual and physical materials that were previously in existence" (p. 130). The "conceptual materials" of interest are knowledge components that an inventor team uses as inspiration, or as source material, for a new invention. An invention is then the outcome of a process of recombination of a number of knowledge components. We define reuse as a subset of recombination, namely, the extent to which a current invention builds on similar knowledge domains as did its source materials. Source materials that refer to domains that are distinct from the focal invention's domains are also used in the recombinant process, for inspiration, but are not reused. Figure 1 summarizes our hypotheses and guides our theoretical narrative.
Internal Knowledge Reuse and Invention Impact
Invention impact reflects the number of times a specific invention has been recombined in the creation of other inventions. Inventions that inspire many other inventors are influential, much like highly cited academic papers (Keijl, Gilsing, Knoben, & Duysters, 2016). We explain the effects of internal knowledge reuse on invention impact through a multiplicative combination of two mechanisms with opposing effects, leading to a concave (inverted U) relationship (Haans, Pieters, & He, 2016).² The two explanatory mechanisms are absorptive capacity, which correlates positively with reuse and enables teams to come up with useful inventions, and the potential for novelty creation, which correlates negatively with reuse and is evidently linked to novelty. Because both absorptive capacity and novelty creation are positively correlated to invention impact (Arts & Veugelers, 2014; Cohen & Levinthal, 1990; Kaplan & Vakili, 2015), these counterbalancing effects jointly create a concave relationship.
Reusing technological components enhances the usefulness of technologically related knowledge, improving a team's domain-specific absorptive capacity and spurring innovation (L. Kim, 1998). Knowledge reuse is associated with fewer mistakes and higher quality (Argote & Miron-Spektor, 2011; Fleming, 2001), which leads to an improvement in the ability to value, assimilate, and apply the reused knowledge (Cohen & Levinthal, 1990). Moreover, reusing internal knowledge components builds component competence (Henderson & Cockburn, 1994) and is indicative of combinative capabilities that help firms generate inventions from existing knowledge (Kogut & Zander, 1992). This suggests that reusing internal knowledge relates positively to absorptive capacity.
Figure 1 Model Overview and Hypotheses
Three complementary arguments explain why increasing internal knowledge reuse also lowers the potential for novelty creation and thus invention impact. At the component level, high reuse suggests teams may face idiosyncratic constraints because there may objectively be less novelty to explore (Dosi, 1982), due to the creative potential of component combinations being largely exhausted (D.-J. Kim & Kogut, 1996) or because the inventive process exhibits little explorative search (March, 1991). At the team level, when a "new project is like a prior one" (Skilton & Dooley, 2010: 122), the likelihood that teams start following firm-specific, task-related mental models rises with reuse. As these mental models constrain what a team sees or does (D. H. Kim, 1993), they relate negatively to novelty creation and could reduce impact. At the knowledge base level, teams that reuse the firm's existing knowledge necessarily start from a smaller base than teams that search the entire industry knowledge base. Because the technological search landscape is rugged (Fleming & Sorenson, 2004; Levinthal, 1997), starting from a smaller knowledge base (e.g., strong reuse) reduces the team's possible vantage points from which to see new peaks, which will also reduce impact. These three arguments imply a negative relationship between reuse and impact through the novelty creation mechanism.
When combining these mechanisms, we see that at low reuse, the potential to do something novel is great but the capacity to do so is low, resulting in low overall impact. On the opposite side, when a team's invention reuses only technologically similar components, absorptive capacity is high but the potential of doing something creative is reduced due to the tendency to create incremental innovations in familiar domains, leading to lower average impact (Singh & Fleming, 2010; Sørensen & Stuart, 2000). In the middle, when some components are reused while other ones are added, the team has the best chance of creating high impact. All parts of the curve are likely to exist: Teams may apply knowledge components to a technologically dissimilar (reuse = 0) or similar (reuse = 1) domain or anything in between. Moreover, firms may differ in their preferred, and even optimal, levels of reuse. Reuse should thus generally relate curvilinearly to invention impact.
Hypothesis 1: Internal knowledge reuse is curvilinearly (∩ shape) related to invention impact.
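To make the multiplicative logic behind Hypothesis 1 concrete, the toy computation below combines an assumed linearly increasing absorptive-capacity term with an assumed linearly decreasing novelty term; their product peaks at an interior level of reuse, producing the inverted U. The functional forms and coefficients are purely illustrative, not empirical estimates.

# Toy illustration (not an empirical model): impact as the product of an
# increasing absorptive-capacity term and a decreasing novelty term.
import numpy as np

reuse = np.linspace(0.0, 1.0, 101)      # internal knowledge reuse in [0, 1]
absorptive = 0.2 + 0.8 * reuse          # rises with reuse (assumed linear)
novelty = 1.0 - 0.9 * reuse             # falls with reuse (assumed linear)
impact = absorptive * novelty           # multiplicative combination

print(f"Impact peaks at reuse = {reuse[np.argmax(impact)]:.2f}")  # interior maximum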
Embeddedness in Inventor Networks
The embeddedness perspective recognizes that economic action is contained within a social structure that constrains and facilitates action (Gulati, 1998; Uzzi, 1996). Connectivity within a collaborative network facilitates knowledge search and transfer and opens up knowledge-related opportunities (Carnabuci & Operti, 2013; Savino, Messeni Petruzzelli, & Albino, 2017), which explains why embeddedness is also referred to as an "opportunity structure" (Uzzi, 1996). We proffer that the effects of knowledge reuse are influenced by the inventor network within which inventor teams are embedded. Following Haans et al. (2016), we theorize the moderation of a curvilinear effect by explaining how the latent mechanisms that jointly shape invention impact (absorptive capacity and novelty creation) are influenced by the moderator.
Internal embeddedness. We define internal embeddedness as a team's collaborative relationships that enable specialized knowledge exchange within the boundaries of the firm (Gulati, 1998; Uzzi, 1996). Like relational embeddedness, internal embeddedness reflects a history of prior interactions and the strength of collaborative ties among firm colleagues (Nahapiet & Ghoshal, 1998). Inventors can also access internal indirect ties relatively easily because firms facilitate knowledge exchange (Grant, 1996; Zander & Kogut, 1995). Through a combination of direct ties that enable resource and information provisioning and indirect ties that facilitate knowledge transfers, internal embeddedness can contribute to innovation output (e.g., Ahuja, 2000). We argue that internal embeddedness pivots the slope of the absorptive capacity effect upward while it pushes the slope of the novelty creation effect downward such that the net effect is a steepening of the ∩ shape.
Internal embeddedness increases absorptive capacity via facilitating the flow of relevant knowledge. "Knowledge is grounded in the experience and expertise of individuals" (Mabey & Zhao, 2017: 41; see also Gulati, 1998), and much of the knowledge underpinning inventions remains tacit (i.e., not explained by the knowledge components) because codification is hard (Cowan, David, & Foray, 2000; Gore & Gore, 1999). Internal embeddedness facilitates socialization and enhances the willingness to share information (Nonaka, 1994; Reagans & McEvily, 2003), which increases the probability that internally embedded teams can access their colleagues' tacit knowledge. While such knowledge is often difficult to articulate, it can be shared through conversation and shared experiences with knowledgeable colleagues (Zack, 1999). Because internally embedded teams have collaborated with their colleagues before, they already share mutual knowledge, which facilitates knowledge exchange (Kotha, George, & Srikanth, 2013). This suggests that an internally embedded team that is reusing internal knowledge will have the chance and the capacity to exchange ideas with knowledgeable colleagues, thereby improving the team's absorptive capacity.
Aside from its effect on absorptive capacity, internal embeddedness also influences a team's ability to create novelty by reinforcing the mental barriers in the team, which will be stronger as reuse increases (C. Wang et al., 2014; Yayavaram & Ahuja, 2008). Mental models are shared through internal embeddedness, shape and steer perception and search (Gore & Gore, 1999), and can lead to rigidity and reduced creative capacity (Skilton & Dooley, 2010), among other ways by altering search behavior (Knudsen & Srikanth, 2014). This will be especially salient when internal knowledge reuse and internal embeddedness are both high, as the colleagues with which the team is connected are domain experts who are known to struggle with novelty (Schillebeeckx, Lin, & George, 2019). As team members internalize their colleagues' cognitive barriers, their own creative thinking is hampered. Internal embeddedness may then increase knowledge insularity and trap teams "in a negative spiral of self-affirming, marginal innovations that become narrowed in scope" rather than generating more useful inventions (George, Kotha, & Zheng, 2008: 1451). Internal embeddedness will thus pivot the novelty creation mechanism downward. Combined with the upward pivot for absorptive capacity, the expected moderation effect is then a steepening of the concave relationship between internal knowledge reuse and invention impact.

Hypothesis 2a: Internal knowledge reuse's curvilinear (∩-shape) relationship with invention impact will be steepened as internal embeddedness increases.
Network centrality. We make a related argument for the moderating effect of a team's centrality in a network of collaborative ties. Network centrality boosts a team's potential for novelty creation by enabling access to people with complementary domains of expertise that can be useful in the team's ongoing recombinant process. Centrality in external networks or in the interunit network is known to positively influence invention-related outcomes (Tsai, 2001) because social network connectivity improves the quality of ideas (Björk & Magnusson, 2009). Being central is likely to be associated with having boundary-spanning ties that could serve as diverse information sources, and this will enhance the team's potential for novelty creation. It is possible that this effect would strengthen as reuse increases simply because central teams can reap larger benefits from their network as they reuse internal knowledge and look outside for creative ideas. If this holds, we would expect steepening. However, it is perhaps more likely that the contribution network centrality makes to novelty creation is not contingent on whether the team is reusing internal knowledge or recombining other knowledge. This would imply a mere upward shift of the novelty creation mechanism, which is associated with a shift of the concave curve's inflection point. We deem the latter more plausible; therefore, we propose the following:

Hypothesis 2b: The inflection point of internal knowledge reuse's curvilinear (∩-shape) relationship with invention impact will move to the right as network centrality increases.
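In regression terms, Hypotheses 2a and 2b map onto a quadratic specification with interaction terms, y = b0 + b1x + b2x² + b3xz + b4x²z, where x is reuse and z the moderator (Haans et al., 2016): steepening corresponds to the curvature 2(b2 + b4z) becoming more negative as z rises, while the turning point x* = -(b1 + b3z) / (2(b2 + b4z)) shifts when the numerator changes relative to the denominator. The sketch below computes both quantities for hypothetical coefficients; these are not estimates from our data.

# Sketch: turning point and curvature of a moderated quadratic
# y = b0 + b1*x + b2*x^2 + b3*x*z + b4*x^2*z (Haans, Pieters, & He, 2016).
def turning_point(b1, b2, b3, b4, z):
    return -(b1 + b3 * z) / (2.0 * (b2 + b4 * z))

def curvature(b2, b4, z):
    return 2.0 * (b2 + b4 * z)  # more negative = steeper inverted U

b1, b2, b3, b4 = 1.2, -1.5, 0.6, -0.4   # hypothetical coefficients
for z in (0.0, 1.0):                    # low vs. high moderator value
    print(f"z = {z}: x* = {turning_point(b1, b2, b3, b4, z):.2f}, "
          f"curvature = {curvature(b2, b4, z):.2f}")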
Search Orientation
The knowledge base of the firm, or "the set of information inputs, knowledge, and capabilities that inventors draw on when looking for innovative solutions" (Dosi, 1988: 1126), exposes a firm's search orientation. A highly specific knowledge base exemplifies strong local search, whereas a broad knowledge base indicates more distant search tendencies. By looking at what H. Wang and Chen (2010: 146) term "backward-based firm specificity," a firm's search orientation reveals whether it has relied more or less on its internally developed knowledge than its industry peers. Given the importance of search in the inventive process, we investigate whether the processes, routines, and practices in place in a firm (to search chiefly inward or outward) could alter the aforementioned relationships.
To provide some insight into this question, we revisit our three prior hypotheses through the lens of a firm's search orientation. Because of experiential learning's positive effect on innovation (Argote & Miron-Spektor, 2011) and firms' tendencies to develop and focus on core competences (Prahalad & Hamel, 1990), we expect that firms will strategically play to their idiosyncratic competences. If practice indeed makes perfect, inward-looking firms should be better at reusing internal knowledge than outward-looking firms, which are more used to relying on external knowledge. We would anticipate a higher optimum (inflection point) for inward-looking firms than for outward-looking firms:

Hypothesis 3: Ceteris paribus, teams in inward-looking firms will exhibit a higher level of optimum internal knowledge reuse than teams in outward-looking firms.
When investigating the moderation effect of internal embeddedness separately in inwardand outward-looking organizations, we also anticipate a difference. Inward-looking firms strongly support the interactions among their inventors in order to foster future collaborations and discoveries. In comparison to outward-looking firms, teams in inward-looking firms are more likely to work on inventions that build on their colleagues' prior inventions so that the same level of internal embeddedness is more useful, simply because the quality of connections between inventors who tend to rely on their own firm's knowledge base is likely to be better. These closer interactions will boost both absorptive capacity as well as exacerbate the negative effect on novelty creation, because when it comes to novelty creation, teams in inward-looking firms can be thought of as being embedded in a nonbenign environment (MacAulay, Steen, & Kastelle, 2017) that worsens the problems with rigid mental models. Therefore, we anticipate that the moderation effect of internal embeddedness will be stronger for inward-looking firms than for outward-looking firms.
Hypothesis 4a: Inward-looking firms are more sensitive to the effects of internal embeddedness than outward-looking firms.
Finally, we take a closer look at how search orientation may influence the moderation effect of network centrality. Hypothesis 2b abides by the structuralist perspective on networks, which suggests that particular network constellations have positive or negative consequences. While there may be disagreements about which constellations are more conducive to innovation-related outcomes (Burt, 2004; Coleman, 1988), the structuralist perspective leaves limited room for contingency arguments. Although Coleman (1988) recognized that any type of social capital could be at once useful for certain actions but harmful for other actions, he never said that structuralist network characteristics could be at once beneficial for certain actors while being harmful to others. The dominant belief is thus that "social capital increases the efficiency of action" (Nahapiet & Ghoshal, 1998: 245).
Yet, recent findings have started to question this noncontingent view on structural network effects. Guan, Zhang, and Yan (2015), for instance, found that the relationship between the intercity collaboration network structure and innovation is moderated by the structure of intercountry collaboration networks. Schillebeeckx et al. (2019) found that expert teams that occupy structural holes create less impactful inventions, thereby providing some counterweight to the established structuralist perspective on the benefits of structural holes (Burt, 1994; Guan & Liu, 2016; Paruchuri & Awate, 2017; C. Wang et al., 2014). Therefore, we explore if firm characteristics can alter how teams benefit from structurally similar network positions.

We theorize that for inward-looking firms, network centrality will not initiate the previously hypothesized inflection point shift but may instead flatten the concave curve. For Hypothesis 2b, we relied strictly on the novelty creation mechanism to explain the inflection point shift. When considering inward-looking firms, we anticipate effects on absorptive capacity as well as novelty creation. Regarding absorptive capacity, being central in an inward-looking firm is likely to be associated with a higher incidence of attention diffusion. High-status individuals are frequently called upon by colleagues and are thus more likely to be distracted, more likely to become complacent, and often obliged to help others, reducing the cognitive bandwidth available for their own work, which could eventually weigh on their performance (Bothner, Kim, & Smith, 2012). We anticipate that this attention diffusion effect is weaker in outward-looking firms because colleagues are less likely to make significant demands on their central colleagues in such environments. These cognitive influences are associated with a downward intercept shift in absorptive capacity's ability to drive impact.
For novelty creation, individuals in central positions are likely to be experts on the firm's internal knowledge, which may decrease their tendency to explore new ideas, simply because their organizational standing has been acquired through the exploitation of internal knowledge (C. Wang et al., 2014). This is likely to be more salient in inward-looking firms that have historically developed more exploitative routines and practices than their outward-looking counterparts. Central inventors in inward-looking firms are likely to be well connected with their colleagues. This strongly embeds them in the firm's ways of thinking and mental maps, which lowers their tendency to think differently.
Central actors in outward-looking firms are more likely to have obtained their position from being more broadly connected in the industry in general. Centrality in an inward-looking firm may also resemble an echo chamber in which the central members' ideas are not challenged. Specifically, it is likely that the central inventor is connected to people who have an incentive to filter out information they think would not appeal to the central inventor. Thus, while the network structure of a central inventor team in an inward- and in an outward-looking firm may in theory be identical, we propose that the quality of the nodes (types of information they can share) and how they process and transmit information (filtering) are likely to be distinct. These three reasons are all linked to a decrease in novelty creation, such that the novelty creation mechanism should move downward. Together, this leads us to our final hypothesis:

Hypothesis 4b: For inward-looking firms, network centrality will flatten the concave relationship between internal knowledge reuse and invention impact.
Data and Methods
We investigate our hypotheses using patent data from the U.S. semiconductor industry. Because this industry relies heavily on research and development (R&D) and is known to have high invention and patenting rates since the 1980s, this industry is an appropriate context to test our hypotheses (Alcácer & Zhao, 2012; Mathews & Cho, 1999; Stuart, 2000). We limit ourselves to a single industry because vicarious learning through embeddedness differs across industries (Srinivasan, Haunschild, & Grewal, 2007), and we focus only on U.S. firms to control for institutional variation in patenting behavior (Cohen, Goto, Nagata, Nelson, & Walsh, 2002). While using patent data has known limitations, patent documents do provide "a reasonably complete description of the invention" (Griliches, 1998: 291) and offer the following benefits: (a) independent categorization into a technology structure called the International Patent Classification (IPC) system, (b) explicit incorporation of knowledge upon which the previous invention builds (prior art citations), and (c) identification of the focal inventors and, through prior art citations, the knowledge holders upon whose ideas they relied. These three characteristics of patent data make our hypotheses testable.
Our initial data set is built by merging the list of U.S. semiconductor firms provided by Hall and Ziedonis (2001) with all U.S. firms with Standard Industrial Classification code = 3674 (i.e., semiconductor industry) in Compustat and adding all firms listed in the ranking of semiconductor firms published by iSuppli Corporation. In doing so, we developed a list of 171 semiconductor firms with a Compustat record (Alnuaimi & George, 2016). Then, we compare our 171 firms to the 247,309 assignees that were granted patents by the U.S. Patent and Trademark Office (USPTO) between 1975 and 2008. Because of the variation in the naming of patent assignees (see Kogan, Papanikolaou, Seru, & Stoffman, 2012), we improve the matching of parent firm to assignee by (a) using the numerical identifiers provided by National Bureau of Economic Research patent projects and (b) using the Directory of American Firms Operating in Foreign Countries to isolate subsidiaries (Alnuaimi & George, 2016). Once we completed this matching exercise, we developed our sample of inventions.
Our focal sample consists of inventions made by semiconductor firms between 2000 and 2004. This window was characterized by significant inventive activity in the semiconductor industry, and its relative short time span has the advantage of keeping variations in the patenting process, which could affect our invention impact measure, small (e.g., Hall & Ziedonis, 2001). For each of the 39,785 firm patents in our 5-year window, we collect the cited prior art, its IPC classifications, and the names and affiliations of the cited inventors. This allows us to create detailed knowledge component and collaboration networks. Our component knowledge network connects the focal patent's IPC classifications to the classifications of the prior art, and the inventor network allows us to connect citing inventors (the focal invention team) to the cited inventors.
Response
Invention impact. We extract a sliding 10-year window of forward citations from Google Patents. This gives every patent the same length of time to be cited, increasing comparability. Many studies have shown that forward citations are related to economic importance of inventions, patent quality, and patent value (Agarwal, Ganco, & Ziedonis, 2009;Albert, Avery, Narin, & McAllister, 1991;Harhoff, Narin, Scherer, & Vopel, 1999). For robustness, we add a measure for a 10-year window excluding self-citations.
Predictors
Internal knowledge reuse is the redeployment of technologically similar components in the process of invention. Inventing teams reuse internal knowledge, which is proxied by prior art citations. Besides firm self-citations, we also consider team member self-citations as internal knowledge, even if a team member developed those inventions when they were working for a different company. Like Gruber, Harhoff, and Hoisl (2013), we understand the technological classifications of the cited prior art as proxies for knowledge components. We believe these classifications serve as proxies for the underlying knowledge domains with which inventors are more or less familiar. Although we know that some prior art citations are added by USPTO officials (Giuri et al., 2007), citations and their technological classifications remain useful to demarcate the knowledge upon which new inventions build, and this remains true even if the focal firm did not add the prior art citations itself. 3 We look at reuse as a continuum between 0% and 100%, with 0% (100%) reflecting an invention with no (perfect) overlap between the classifications of its cited prior art and the focal patent's classifications. Reuse thus increases as the similarity between the classifications to which prior art is assigned and the classifications to which the focal patent is assigned goes up. This allows us to differentiate recombination (i.e., all the knowledge components that are inputs, i.e., cited, in the invention) from reuse (only the technological components that overlap with the components of the focal invention). To determine the similarity between the focal patent and a backward citation, we create binary vectors of length 129 (the total number of classes in our sample) for each patent and each prior art citation. We then calculate the cosine similarity between the focal patent's "class vector" and each prior art citation, after which we determine averages for internal knowledge reuse. The cosine similarity is a proximity measure in vector space and is preferred over the alternative Jaccard index from the perspective of graph theory (Leydesdorff, Kogler, & Yan, 2017). Internal knowledge reuse is then the average of the cosine similarity of the team members' self-citations and the cosine similarity of the team members' current firm citations.
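To make this construction concrete, the following minimal Python sketch computes the reuse measure as described; the three-class universe, data layout, and function names are illustrative assumptions rather than the authors' actual code.

```python
# Hypothetical sketch of the reuse measure; class universe and names are assumed.
import numpy as np

IPC_CLASSES = ["A01", "G06", "H01"]  # stand-in for the 129 classes in the sample

def class_vector(classes, universe=IPC_CLASSES):
    """Binary vector marking which IPC classes a patent is assigned to."""
    return np.array([1.0 if c in classes else 0.0 for c in universe])

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def internal_knowledge_reuse(focal_classes, internal_cited_classes):
    """Average cosine similarity between the focal patent's class vector
    and the class vector of each internal prior art citation."""
    f = class_vector(focal_classes)
    sims = [cosine(f, class_vector(c)) for c in internal_cited_classes]
    return sum(sims) / len(sims) if sims else 0.0

# Example: one cited patent shares one of the two focal classes.
print(internal_knowledge_reuse(["A01", "G06"], [["A01", "H01"]]))  # 0.5
```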
While every backward citation provides evidence of recombination and influence, our operationalization for reuse is more granular. First, by linking the focal patent's classifications to the classifications of the cited art, we create a proxy for the salience of a prior art citation in the recombination process: IPC classes are discretely attributed to a patent and it is therefore impossible to exactly know to what degree the focal team indeed relied on the knowledge captured in a specific patent document. Second, we see the technological landscape as rugged with unknown peaks and troughs (Baumann, Schmidt, & Stieglitz, 2019;Levinthal, 1997). In such a landscape, patents demarcate areas of legal exclusion, granted to the assignee. Under the assumption that the IPC categorization schema is meaningful and useful, in that patent officers categorize related inventions under the same class or subclass, a single IPC class (e.g., A01) represents a coherent area in the technological landscape. Then, prior art assigned to the same technology class(es) as a focal patent is more likely to closely relate to the focal patent (i.e., it is reused). By relying on prior art that is categorized in the same IPC classes, the focal patent is essentially more constrained by the prior art because its "area of exclusion" is closer to (and thus more strongly limited by) a patent with high technological similarity than by a patent with weak technological similarity. Our operationalization of reuse excludes those prior art citations that are sought and included but are assigned to entirely different IPC classes.
Internal embeddedness. Knowledge exchange is easier when inventors are proximate, so prior connections between focal inventors and the inventors of cited prior art serve as meaningful proxies for embeddedness. To measure our specialized form of internal embeddedness, we look at the direct and indirect ties between the focal team members and the members of all internal prior art citations, excluding team self-citations (Balland, Belso-Martínez, & Morrison, 2016). We weigh these collaborative ties so that each tie indicates how many patents two inventors collaborated on before the application date of the focal patent. This is consistent with Uzzi's (1996) finding that embedded ties develop primarily from existing personal relations. It also acknowledges the notion that "the existence of common third-party ties around a focal bridge substantially changes the nature of the bridging relationship through which knowledge flows," so that the sharing of a third-party tie is more likely to lead to successful innovation (Tortoriello & Krackhardt, 2010: 168). While much network research differentiates between direct and indirect ties, we subsume them in one measure because in our invention-specific micronetworks, their correlation is .87. Figure 2 shows an example: inventors T and D are directly connected through two prior collaborations, while R and A are indirectly connected through M and L. Let internal embeddedness be represented by Em(p_f). We define a patent pair index for each <p_f, p_i>, where p_i represents a prior art citation from within the firm (without overlapping inventors). Let T_f and T_i be sets of inventor team members of the focal and a cited patent, respectively. Em_1(p_f, p_i) captures direct prior collaborators across the teams, while Em_2(p_f, p_i) considers indirect connections among inventors. The patent pair index is defined as follows:

Em(p_f, p_i) = (2/3) Em_1(p_f, p_i) + (1/3) Em_2(p_f, p_i),

and Em(p_f) is the average of Em(p_f, p_i) over all internal prior art citations p_i. To determine the values for Em_1(p_f, p_i) and Em_2(p_f, p_i), we need to define inventor pair variables Em_1(t_a, t_b) and Em_2(t_a, t_b) for each pair <t_a, t_b>, where t_a is from the focal team T_f and t_b is from a cited team T_i. The patent pair indexes Em_1(p_f, p_i) and Em_2(p_f, p_i) are calculated as averages over all inventor pair indexes. To calculate the inventor pair indexes Em_1(t_a, t_b) and Em_2(t_a, t_b), we take into account both the number of paths connecting t_a and t_b and the strength of those paths. Em_1(t_a, t_b) is calculated as the number of patents t_a and t_b worked on together before the application date of the focal patent. Em_2(t_a, t_b) is calculated as follows:

Em_2(t_a, t_b) = Σ_{m=1}^{M} 0.5 (x_1^m + x_2^m)

In this formula, M is the number of indirect paths between inventors t_a and t_b, and x_1^m and x_2^m are the weights (i.e., number of patents) of the first edge and second edge of path m. Figure 3 shows an example inventor collaboration network for a focal patent and a single prior art citation. Em_2(R, A) is 0.5*(5 + 3) + 0.5*(10 + 2) = 10; the first part is for path "R-L-A" and the second part is for path "R-M-A." Em_2(S, D) is 0.5*4 + 0.5*3 = 3.5, based on path "S-O-D." Em_1(p_f, p_i) is 2/15 = 0.133, where 2 is the summation of weights of direct paths connecting any inventor pair of the focal and cited patent, and 15 is the number of possible inventor pairs (3 in the focal team times 5 in the cited team). Finally, Em(p_f, p_i) is then 2/3 * 2/15 + 1/3 * (10 + 3.5)/15 = (4 + 13.5)/45 ≈ 0.39. We winsorized this variable at mean plus three standard deviations to control excessive skewness.
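The worked example can be verified numerically. The sketch below is a hypothetical re-computation that reproduces the Em values from the example; the path weights and the 2/3 and 1/3 weights come from the text, while all names and data structures are our own assumptions.

```python
# Hypothetical re-computation of the worked embeddedness example above.
n_pairs = 3 * 5                      # |T_f| * |T_i| possible inventor pairs
direct_weight_sum = 2                # T and D: two prior joint patents
indirect_paths = {
    ("R", "A"): [(5, 3), (10, 2)],   # paths R-L-A and R-M-A
    ("S", "D"): [(4, 3)],            # path S-O-D
}

em1 = direct_weight_sum / n_pairs                         # 2/15 ≈ 0.133
em2 = sum(0.5 * (x1 + x2)
          for paths in indirect_paths.values()
          for x1, x2 in paths) / n_pairs                  # (10 + 3.5)/15
em = (2 / 3) * em1 + (1 / 3) * em2
print(round(em, 2))                                       # 0.39
```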
Figure 2 Example Inventor Network

Network centrality is operationalized as the team members' average degree centrality in the annually expanding 1990 to (t - 1) collaboration network G_S = <V_S, E_S>, where V_S is a node set (each node represents an inventor) and E_S is an edge set (formed when two inventors collaborated on an invention). Degree centrality is a useful measure for a situated knowledge construction process (like invention) and is defined "as the number of ties incident upon a node" or "the number of paths of length one that emanate from a node" (Borgatti, 2005: 62). Following C. Wang et al. (2014), we then operationalize degree centrality of a team as the sum of the number of collaborators each team member has, divided by team size.
Search orientation is operationalized in the following steps. First, we determine for each invention in our sample the fraction of firm self-backward citations over the total number of prior art citations. Next, we aggregate and average those fractions per firm-year to give us an idea of how heavily a firm in any given year relies on self-citations. In the following step, we compare the firm-year average to the industry average and define an inward-looking firm as one that relied more on self-citations in a specific year than the industry average, and an outward-looking firm in the opposite way. We then create a simple dummy for inward or outward looking, which facilitates the split-sample approach used in our analysis. For example, Micron Technologies and Qualcomm are well-known firms that are consistently inward looking in our sample, while Texas Instruments and Intel are consistently outward looking.
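A hedged pandas sketch of this classification procedure; all column names and the toy data are assumptions, not the study's actual variables:

```python
# Illustrative sketch of the inward/outward-looking split.
import pandas as pd

df = pd.DataFrame({
    "firm": ["A", "A", "B", "B"],
    "year": [2001, 2001, 2001, 2001],
    "self_cites": [3, 1, 0, 1],
    "total_cites": [10, 5, 8, 9],
})
df["self_frac"] = df["self_cites"] / df["total_cites"]

# Average self-citation fraction per firm-year, then the industry average per year.
firm_year = df.groupby(["firm", "year"])["self_frac"].mean().rename("firm_avg")
industry = firm_year.groupby("year").mean().rename("industry_avg")

out = firm_year.reset_index().merge(industry, on="year")
out["inward"] = (out["firm_avg"] > out["industry_avg"]).astype(int)
print(out)  # firm A above the industry mean -> inward = 1
```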
Controls
We add controls at the level of the firm, the knowledge network, the team, and the patent. First, we create variables for external knowledge reuse in the same way as we created internal knowledge reuse and control for both the main and the quadratic effect. At the firm level, we control for size (assets) and productivity (number of patents applied for per year). We also use firm fixed effects to control for differences in patenting behavior between firms. In addition, we control for network measures studied by Guan and Liu (2016) and C. Wang et al. (2014), which are constructed in a way similar to the previous description. We determine the structural holes' value and degree centrality of the patent's knowledge elements in the 1990-to-1999 knowledge component network but include only the former to reduce multicollinearity. We also add a team's average degree centrality in the knowledge component co-occurrence network.

Figure 3 Moderating Effects of Internal Embeddedness (K) and Network Centrality (Z) on Invention Impact
At the level of the team, we control for team size (Singh & Fleming, 2010), team diversity, team similarity (omitted to reduce collinearity), and team mutual knowledge. We use the following steps to determine team diversity: First, we create a binary vector of length N with N ≤ N_max = 129 (the maximum number of IPC classes) for each team member based on their own historical patent portfolio's subclass assignments. If the team member has patents assigned to one of the N classes, this vector element is marked as 1; otherwise, 0. We call N the number of subclasses in which the team has invented before, with N ≤ N_max (N = 9 in Table 1). Next, we calculate how many team members have experience in each subclass Sc and divide this number by M, which equals the sum of all team members' experience across all patent classes (M = 13 in Table 1). We then calculate a Herfindahl-Hirschman index (HHI) to capture the concentration of knowledge in specific subclasses:

HHI = Σ_{i=1}^{N} s_i²,

where s_i is the vertical sum for subclass i (the number of team members with experience in that subclass) divided by M. Our eventual measure is

Team Diversity = 1 - HHI,

so that a higher value indicates that the team's knowledge is more diversified, while a lower value indicates that the team's knowledge is more concentrated in specific subclasses.
For clarity, consider Table 1, which provides a simple example of this measure for a team of three inventors with a joint portfolio breadth of N = 9 subclasses. Inventor 1 (Inv1) has one prior patent that is assigned to Subclasses 1 and 3. Inventor 2 (Inv2) has 10 prior patents that are assigned to all subclasses except Subclass 6. Finally, the third team member (Inv3) has four prior patents, all assigned to Subclasses 1, 6, and 8. Note that we exclude the number of different patents each inventor has in a particular subclass and merely focus on the diversity. HHI is determined by 6*(1/13)² + 2*(2/13)² + 1*(3/13)² = 0.136.
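The Table 1 arithmetic can be checked directly; the subclass counts below follow from the three inventors' portfolios as described (six subclasses covered by one member, two by two members, and Subclass 1 by all three):

```python
# Verifies the team diversity example: HHI ≈ 0.136, diversity = 1 - HHI.
subclass_counts = [3, 1, 2, 1, 1, 1, 1, 2, 1]  # members with experience per subclass
M = sum(subclass_counts)                        # 13 = total experience entries
hhi = sum((c / M) ** 2 for c in subclass_counts)
print(round(hhi, 3), round(1 - hhi, 3))         # 0.136 0.864
```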
Team similarity is measured using the same inventor vectors as described earlier. Now, we average the pairwise cosine similarity value for each unique member pair in the team. Thus, for each team consisting of K members whose individual portfolio breadth is characterized by a vector V_k (k: 1 → K) of length N, the averaged cosine similarity measure is captured by

Team Similarity = [2 / (K(K - 1))] Σ_{a=1}^{K-1} Σ_{b=a+1}^{K} cos(V_a, V_b).

Team mutual knowledge is measured by the members' collaboration strength in a collaboration network. The measurement is similar to embeddedness, but here we look only at the focal team, its prior direct ties to one another, and the indirect prior ties that broker the relationships of the team. At the patent level, we control for number of claims, self and non-self prior art citations, breadth (number of subclasses to which the patent is assigned), and time lag between application and grant date (Fleming, 2001). We add dummies for application and grant year and technological category effects that could influence the incidence of forward citations (Marco, 2007).
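A brief sketch of the team similarity measure, assuming three illustrative member portfolio vectors; the cosine helper mirrors the one used for reuse above:

```python
# Hypothetical sketch: mean pairwise cosine similarity across team members.
from itertools import combinations
import numpy as np

def cosine(u, v):
    d = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / d if d else 0.0

def team_similarity(vectors):
    pairs = list(combinations(vectors, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

v1 = np.array([1, 0, 1, 0])  # assumed binary portfolio vectors
v2 = np.array([1, 1, 1, 0])
v3 = np.array([1, 0, 0, 1])
print(round(team_similarity([v1, v2, v3]), 3))  # ≈ 0.575
```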
Results
Our response is a count variable, which calls for a nonlinear regression technique. We analyze our data in Stata using Poisson regression, which is preferred because it is more robust than the negative binomial (e.g., it allows clustering of standard errors) and because the overdispersion of our dependent variable is moderate (µ = 9.84, σ = 14.49). In choosing between within-firm fixed-effects and random-effects regressions, we need to control for firm-specific aspects that could influence knowledge transfer (Levin & Cross, 2004). Because some of our explanatory variables are possibly correlated with firm effects, a random-effects regression would be inconsistent. The Hausman (1978) test confirmed this suspicion; hence we deploy fixed effects (note that running negative binomial regressions would disable the use of real fixed effects). To check for collinearity, we ran an ordinary least squares (OLS) regression without indicator variables, quadratic terms, and interactions, as they artificially inflate the variance inflation factors (Allison, 2012). We removed a few controls (knowledge stock, number of employees, team similarity) with variance inflation factors above 4 (Wooldridge, 2014). Including those variables in the regression did not affect our results. Table 2 presents descriptive statistics, including quadratic terms and interactions (Haans et al., 2016).
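As a purely illustrative sketch of this estimation strategy, the following statsmodels code fits a Poisson model with firm fixed effects (via dummies) and firm-clustered standard errors on synthetic data; the variable names, the simplified right-hand side, and the data are all assumptions, not the authors' actual specification:

```python
# Hedged sketch: Poisson regression with firm dummies and clustered errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "fwd_cites": rng.poisson(9, 400),         # forward citations (response)
    "reuse": rng.uniform(0, 1, 400),          # internal knowledge reuse
    "embed": rng.normal(0, 1, 400),           # internal embeddedness (K)
    "central": rng.normal(0, 1, 400),         # network centrality (Z)
    "firm": rng.choice(["f1", "f2", "f3"], 400),
})
m = smf.poisson(
    "fwd_cites ~ reuse + I(reuse**2) + embed + central"
    " + reuse:embed + I(reuse**2):embed"
    " + reuse:central + I(reuse**2):central + C(firm)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(m.params.filter(like="reuse"))
```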
Hypothesis 1 suggests a concave, inverted U-shaped relationship between knowledge reuse and invention impact. Model 1 in Table 3 suggests this cannot be rejected, as all coefficients are significant in the expected directions. To verify the ∩ shape, the simplest way is to rerun the models as OLS regression (with the natural logarithm of the number of forward citations + 1 as the response) and then run a U test and a Fieller test (Haans et al., 2016;Lind & Mehlum, 2010). 4 Running this test supports the hypothesized inverted U shape.
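The logic of the Lind and Mehlum (2010) test can be sketched as follows: an inverted U requires the fitted slope to be positive at the lower bound of the data and negative at the upper bound. The coefficients below are illustrative assumptions, not the estimated values:

```python
# Sketch of the Lind-Mehlum slope check behind the U test.
def slope(x, b1, b2):
    """Derivative of b1*x + b2*x**2 at x."""
    return b1 + 2 * b2 * x

b1, b2 = 2.0, -2.1           # assumed OLS coefficients on reuse and reuse**2
x_lo, x_hi = 0.0, 1.0        # reuse ranges over [0, 1]
print(slope(x_lo, b1, b2) > 0 and slope(x_hi, b1, b2) < 0)  # True -> ∩ shape
```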
Hypothesis 2a proposes that the curvilinear relationship between knowledge reuse and invention impact is moderated by internal embeddedness as depicted in Figure 2c. Model 2 depicts the results for the entire sample, and the significant interactions between internal knowledge reuse and internal embeddedness appear to be supportive of a steepening. Model 2 also shows significant interactions with network centrality, suggesting support for Hypothesis 2b.
Table 3 Internal and External Knowledge Reuse for Inward-and Outward-Looking Firms
To assess what actually happens with the concave relationship between internal knowledge reuse and invention impact, we need to determine the entire curve at different values of K and Z and look at the resulting effects. In addition, we need to determine the inflection point to verify whether or not it shifts as predicted in Hypothesis 2a. The inflection point X_tp is derived by setting the derivative of Y equal to zero (dY/dX = 0). The resulting formula clearly shows that the inflection point (and consequently the entire curve) depends on both K and Z. Keeping all controls fixed, we determine the shape of the curve for four permutations of both moderators at mean values and mean plus one standard deviation. For each permutation, we calculate the inflection point and then graph the entire figure by allowing X to move from 0% reuse to 100% reuse. Figure 3 therefore depicts four curves. Let us start by looking at the black lines. The dotted black line gives the base scenario of both internal embeddedness (K) and network centrality (Z) at mean value. The solid black line shows what happens when K increases. Consistent with Hypothesis 2a, an increase in internal embeddedness will steepen the concave relationship between internal knowledge reuse and invention impact. We can derive the same insight from the gray lines. For high network centrality, an increase in internal embeddedness (solid gray line) leads to a steepening of the curve.
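For concreteness, if the estimated specification is, schematically (ignoring controls and higher-order interaction terms, an assumption for illustration), of the form below, setting dY/dX = 0 shows how the inflection point depends on K and Z:

```latex
Y = \beta_{1} X + \beta_{2} X^{2} + \beta_{3} XK + \beta_{4} XZ + \dots
\quad\Rightarrow\quad
X_{tp} = -\frac{\beta_{1} + \beta_{3} K + \beta_{4} Z}{2\beta_{2}}
```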
For Hypothesis 2b, the picture is less clear. We need to look at either the solid or the dotted lines to investigate the effect of a change in network centrality. Looking at the solid lines means we start from high internal embeddedness. The gray solid line is below the black one, suggesting a flattening of the curve. Calculations show that the inflection points for both solid lines are virtually identical (0.50 and 0.51), suggesting a negligible shift in the inflection point. When we look at the dotted lines (for mean internal embeddedness), however, a clearer image emerges. Again, there is some flattening, but the inflection point now jumps from 0.68 to 1.47 (beyond the actual values of internal reuse), which is consistent with our hypothesis. This suggests we find partial support for Hypothesis 2b. The real effect therefore depends on the value of internal embeddedness: The inflection point is shifting to the right as predicted, but the size of that shift is less meaningful for high values of internal embeddedness.
Models 3 to 6 compare the results, with the sample split on search orientation. Hypothesis 3 suggested that inward-looking firms would be better at internal knowledge reuse (higher apex). However, we find that the inflection points for both samples are virtually identical at X_tp = 0.48, suggesting that inward-looking firms are not better at reusing internal knowledge than their counterparts. Nonetheless, it is also clear that inward-looking firms have the optimum within reach, as it is about half a standard deviation away from the mean (0.48 ≈ 0.31 + 1/2*0.35). For outward-looking firms, however, the optimum is about 1.44 standard deviations away from the mean (0.48 ≈ 0.14 + 1.44*0.25). Hypothesis 3 is not supported.
When comparing Models 4 and 6, it appears that inward-looking firms are indeed significantly more sensitive to the effects of internal embeddedness than outward-looking firms, for which the interaction effects are insignificant; moreover, the betas in Model 4 are higher in absolute value than in Model 2. This provides some support for Hypothesis 4a, but given the nonlinearity of the model, even nonsignificant interaction terms are no guarantee of nonsignificant effects. To further investigate these results, we follow Haans et al. (2016) and determine the slopes at different distances (between 0 and 0.30, in increments of 0.05) from the inflection point for each sample. For ease of comparison, we keep network centrality at its mean value and look at what happens with the slopes (derivatives of the concave relationship) as internal embeddedness moves from its mean value to a high (mean plus one standard deviation) value. Figure 4 shows that for outward-looking firms, an increase in internal embeddedness has virtually no effect on the slopes (and thus no steepening effect), while there is a clear steepening effect for inward-looking firms (the light gray line is significantly steeper than the dark gray line). This supports Hypothesis 4a.
Finally, we investigate whether inward-looking firms experience a stronger flattening of the reuse-impact relationship when their teams are highly central. In our discussion of Hypothesis 2b, we found that for all firms, some flattening was indeed visible while the inflection point shifted to the right as hypothesized. Using the same visualization approach, this time holding internal embeddedness at its mean, we check the slopes at mean and high values of network centrality in both samples. Figure 5 shows the results and reveals significantly flatter slopes for inward-looking firms (the solid lines) compared to the outward-looking firms. An increase in network centrality even leads to the slopes becoming negative,
Figure 4 Comparison of Inward-and Outward-Looking Firms at Mean and High Internal Embeddedness
from which we can infer at least significant flattening and possibly even a shape flip, something we did not expect. However, the flattening effect of a one-standard-deviation increase in centrality seems rather similar based on the graph, such that we cannot rule out that Hypothesis 4b may be rejected. We conduct a couple of additional tests to find more clarity regarding the likelihood of asymmetric effects in the two subsamples. A coarse approach can be based on the nested model comparison approach proposed by Clogg, Petkova, and Haritou (1995). These authors propose to create a Z value for different betas for nested models as follows:

Z = (β_1a - β_1b) / sqrt(SE_β1b² - SE_β1a²)

In this formula, β_1a and SE_β1a represent the coefficient and standard error of the full model, whereas β_1b and SE_β1b represent the coefficient and standard error of the subsample model. If our hypotheses are correct, we should observe differences in the Z values. Of course, we cannot directly compare the outward- and inward-looking firms with this approach because they are not nested, so we can only compare each subsample with the entire sample. Using this approach reveals significant differences in the Z values for the main effect, the interaction effects with internal embeddedness, and the independent effect of network centrality but not for the interaction effect between internal knowledge reuse and network centrality, suggesting the latter moderation may not be significantly different.
A second approach is to run the full regression but differentiate all moderation effects for inward-and outward-looking firms so that we can find separate coefficients to establish differential impact. These results are consistent with our findings and available upon request. Finally, a third approach is to run the full model and add all relevant interaction effects for
Figure 5 Comparison of Inward-and Outward-Looking Firms at Mean and High Network Centrality
either inward- or outward-looking firms so that the betas for these effects represent a difference between both types of firms. We can then run a likelihood ratio test (although this means we cannot cluster the error terms) and a Wald test on the coefficients of the added interaction effects, both of which are highly significant. These findings add support for our hypotheses, although the complexity of hypothesizing between samples makes it impossible to simply use a p value to determine statistical difference, a practice that is coming under increasing scrutiny (Amrhein, Greenland, & McShane, 2019; Wasserstein, Schirm, & Lazar, 2019).
Robustness and Limitations
To ensure that our results are not spurious or driven by how we split the sample, we conducted robustness checks. First, we ran the analysis again but determined search orientation this time based on the absolute number of self-citations rather than the fraction of firm self-citations. While this reduced the number of inward-looking observations to 10,318, the results held. We also excluded from the sample all the firms that were not consistently inward or outward looking over the focal 5-year period. The results did not change. Results also remained consistent when standard-normalizing network centrality for each subset of inward- or outward-looking firms. We also ran OLS regression on the log-transformed number of forward citations and ran two negative binomial regressions in Stata, one with quasi fixed effects (xtnbreg, fe) and one with robust standard errors (nbreg i.firm, robust), all of which gave substantively similar results (available from the authors).
We wanted to verify whether our findings would hold if we excluded self-citations from the impact variable. Given our interest in inward-looking firms, it is relevant to find out whether these inward-looking firms also achieve outside impact. With the exception of the interaction effect between internal knowledge reuse and network centrality, all results are consistent when excluding self-citations from the impact variable. As expected, for outward-looking firms, the results remain the same. Finally, there are some endogeneity concerns. In particular, it is possible that embeddedness drives reuse, as teams with strong connections to knowledgeable colleagues may be more likely to build on the prior inventions of those colleagues. To control for this, we regressed internal knowledge reuse on a variety of predictors that capture selection variables, including team similarity; team size; the number of team, firm, and external backward citations; the average number of inventors on those team, firm, and external prior art citations; time; technology class; and search orientation dummies (results available upon request). We also regress the quadratic term for internal knowledge reuse on the same predictors. This is required because the linear projection of the square is not the same as the square of the linear projection (Haans et al., 2016). The residuals of these two regressions capture variance in internal knowledge reuse that is not driven by selection. Using these residuals in our regression instead of the original variable gives us consistent results with the ones in Models 2, 4, and 6 in Table 3.
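A hedged sketch of this two-stage residual approach; the formula terms and column names are stand-ins for the full set of selection predictors described above, and df is assumed to be the patent-level frame from earlier:

```python
# Hypothetical sketch of the residual-based control for selection into reuse.
import statsmodels.formula.api as smf

stage1 = smf.ols("reuse ~ team_similarity + team_size + n_backward_cites",
                 data=df).fit()
stage1_sq = smf.ols("I(reuse**2) ~ team_similarity + team_size + n_backward_cites",
                    data=df).fit()
df["reuse_resid"] = stage1.resid        # variance in reuse not driven by selection
df["reuse_sq_resid"] = stage1_sq.resid  # same for the squared term
# reuse_resid and reuse_sq_resid then replace reuse and reuse**2 in the main model.
```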
Finally, we attempt an instrumental variable approach. We use team mutual knowledge, team diversity (both insignificant in Model 2), and the average size of cited inventor teams in which team members or firm colleagues were involved as instruments for both internal knowledge reuse terms, after which the instrumented variables are replaced. In this generalized-method-of-moments regression, the F tests are above the critical value of 12; the Hansen J test does not reject, suggesting the instruments are coherent; and the curvilinear effect of internal knowledge recombination is found with β1 = 0.73 (p ≤ .05) and β2 = -0.73 (p ≤ .10). This provides support that endogeneity may not be detrimental in our analysis (results available upon request). While these are imperfect solutions to endogeneity, they provide reasonable support for the validity of our findings.
Discussion and Implications
This study expands our understanding of how inventor teams recombine and reuse knowledge components to create impactful inventions and how this process is influenced by the team's embeddedness in collaboration networks and the firm.
Doubly Embedded: Knowledge and Inventor Networks
Like C. Wang and colleagues (2014: 508), our study highlights the multilevel nature of inventors' embeddedness in networks of both social relationships (internal embeddedness and network centrality) and knowledge components (reuse). To this, we add a third form of embeddedness by recognizing that teams are deeply embedded within their own firms and that firms with opposing search orientations (inward or outward looking) can derive divergent benefits from their position in the inventor network. By using a longitudinal, nondichotomous design and by focusing on the complementarities among the networks rather than their decoupled nature, we extend previous work by C. Wang and colleagues and posit that knowledge component reuse drives invention impact in an inverted U-shaped way.
We theorize the ∩ shape as a multiplicative combination of two latent mechanisms, thereby following best practices to ground the observed quadratic effects (Haans et al., 2016). We proffer that increasing knowledge reuse generally is associated with higher absorptive capacity, because reuse implies components have been tried and tested before, which reduces mistakes while it also enhances the usefulness of technologically related knowledge (Argote & Miron-Spektor, 2011;L. Kim, 1998). Reusing internal knowledge moreover evidences component competence and combinative capabilities (Henderson & Cockburn, 1994;Kogut & Zander, 1992). These arguments support a positive relationship between reuse and absorptive capacity, leading to higher impact.
Yet, increasing internal knowledge reuse is also associated with lower novelty creation, as there may be objectively less novelty to explore or because high reuse suggests a tendency to favor marginal over radical improvements (Dosi, 1982;March, 1991). Reusing such knowledge can also be hampered by rigid mental models and the relatively small knowledge base of the firm, which limits the team's vantage points in the technological landscape (Fleming & Sorenson, 2004;D. H. Kim, 1993). These arguments support a negative relationship between reuse and novelty creation, leading to lower impact. Combined, these mechanisms result in an inverted U-shaped effect. While theorizing in terms of latent mechanisms takes up more journal space, we nevertheless do so because it enables better predictions and facilitates falsification at the level of the mechanism.
While we did not form explicit hypotheses about external knowledge reuse, in unreported regressions we found that most of our results hold there as well. It is particularly interesting that inward- and outward-looking groups reuse on average the same amount of external knowledge and that the mean degree of actual external knowledge reuse is almost perfectly on the apex. This suggests both groups are close to optimal in their external knowledge reuse, but both fall short when it comes to internal knowledge reuse.
Next, we showed that access to colleagues who are domain experts, a specialized form of internal embeddedness, generally improves the team's capacity to create high-impact inventions, which we attribute to improved information flow and content that strengthen the team's absorptive capacity. The effect of internal embeddedness on novelty creation is more complex. While internal embeddedness may give access to recent knowledge (Katila, 2002), it also exacerbates the mental barriers that reduce explorative search and increase insularity (George et al., 2008; Knudsen & Srikanth, 2014). The net result is a steepening of the concave relationship between reuse and invention impact. We found only partial evidence for our hypothesis that network centrality will shift the inflection point of the concave reuse-impact relationship to the right and explained that this is a consequence of the nonindependence of the two moderators in a nonlinear model. By demonstrating that there are contingencies between the two moderators that become clear only through an in-depth analysis of the inflection point, we contribute to more granular testing as well as theory formation.
Triply Embedded: Inward-and Outward-Looking Firms
Perhaps our most interesting insights are rooted in our separation of inward- and outward-looking firms. A key contribution is that findings for the average firm (by investigating the entire sample) can obfuscate fundamental differences in how teams create successful inventions. First, despite long-standing beliefs that firms favor local search, our data reveal that the majority of inventions do not rely sufficiently on internal knowledge. Although we find, to our surprise, that the optimum level of reuse for inward-looking firms is identical to that of outward-looking firms, we do find evidence that the two types of firms have asymmetric benefits to knowledge reuse when considering team collaborative ties.
Inward-looking firms are very sensitive to internal embeddedness, while outward-looking firms are not. This implies that the capacity of teams in inward-looking firms to create high-impact inventions is strongly influenced by the team's connections with colleagues who are domain experts, while the same does not hold for firms with an outward-looking search orientation. For inward-looking firms, our empirical results could even imply that if internal knowledge reuse becomes high, R&D managers may benefit from reducing communication between the team and expert colleagues, to diminish the downsides of transferring mental maps and imposing search limits (i.e., the dark side of embeddedness) that may be experienced by engaging with knowledgeable colleagues. Such an effect is absent for outward-looking firms.
Regarding centrality in the collaborative network, the findings are less clear and require further study. A first surprising finding is that, overall in this sample, centrality does not seem to have strong positive effects on invention impact, which goes against much prior work. More specifically, teams in inward-looking firms actually become less successful as their centrality increases. We postulate that teams can take up structurally equivalent positions in a collaborative network but that the quality and diversity of the nodes to whom they are connected differ. This provides a counterweight to structuralist network perspectives that argue structural network characteristics in and of themselves explain invention outcomes. On the contrary, we find that the same network position in terms of degree centrality may have diverging effects on the success of knowledge reuse, depending on the firm's historical search orientation. Our explanation is that the same structural characteristics may imply either a distracting echo chamber or a rich pool of diverse and useful ideas, depending on the search orientation of the firm in which the node is embedded. Future work on the idiosyncratic properties of nodes (inventors, teams), firms (beyond search orientation), and their connections and structural network characteristics (beyond centrality) could shine further light on the results here.
Patterns of Knowledge Reuse
A puzzling implication of our study is that all teams, even those in inward-looking firms, do not reuse sufficient internal knowledge while they do use sufficient external knowledge. Empirically, we established an ∩-shaped relationship that peaks at a cosine similarity value of about 0.48 (for external knowledge reuse), while mean internal knowledge reuse is around 0.20 (0.31 for inward-looking firms and 0.14 for outward-looking firms). In our sample, 63.1% of the inventions do not reuse internal knowledge at all, and this effect is not primarily driven by small firms that lack internal knowledge to recombine. Firms like Broadcom, Texas Instruments, National Semiconductor, and Intel possess large knowledge stocks and reuse significantly less internal knowledge than the sample average, while Qualcomm, Advanced Micro Devices, and Micron Technologies reuse significantly more internal knowledge than average.
We do not find similar differences in external knowledge reuse. When comparing Micron Technologies and Intel, for instance, we find that the former searches significantly more in general (an average of 29.5 vs. 12.9 backward citations) and that over 25% of those citations are self-citations, while for Intel that percentage is only 13.5. While we explored how these differences can influence invention impact through network characteristics, future research could look deeper into how search orientation influences other determinants of the inventive process. Such research could investigate whether different firms are associated with different optimal search strategies and could consider the possibility that when it comes to recombination, reuse, and/or technologically local and distant search, teams with similar expertise may have different comfort zones in terms of exploration depending on their firm's search routines, habits, and competencies.
One possible explanation may be that inward-looking firms favor personalization over knowledge codification (Hansen, Nohria, & Tierney, 1999). Such firms rely more heavily on person-to-person knowledge transfer, and thus for them, the influence of embeddedness would be expected to be more significant than for firms preferring codification strategies (Prencipe & Tell, 2001). Future research can investigate more deliberately whether specific firm characteristics, such as the search orientation, warrant separate analyses and theorizing, as they did in this case. Not only would this open up research avenues, but it would also make our findings relevant for managerial practice. The dominant "single sample approach" inevitably makes it hard to uncover whether theoretically convincing relationships hold for all or only for a, perhaps small, majority of cases.
In an unreported regression, we created a dummy variable for firms with an ambivalent search orientation (i.e., the 21 firms, accounting for 2,784 inventions, that were not consistently above or below the industry average over the 5-year period we investigated). We found that these ambidextrous firms on average created higher-impact inventions, supporting claims that ambidexterity is important for innovation (Gupta, Smith, & Shalley, 2006; March, 1991). These ambidextrous firms did not, however, benefit more or less from knowledge reuse than their nonambidextrous competitors. It would be interesting if other researchers could explore in more detail whether inward-looking, outward-looking, and ambidextrous firms develop networks of different type and quality and how they use those networks to boost performance.
Finally, we invite other researchers to investigate whether our findings hold at the level of the individual inventor. Prior literature that looked at the interplay of the knowledge component and the collaboration network has focused on the individual or the firm (e.g., Guan & Liu, 2016;Paruchuri & Awate, 2017;C. Wang et al., 2014). In this article, we have taken the invention as unit of analysis and focused on the team that creates that invention as a driver of its impact. It is possible that when looking at an inventor's annual productivity and success, embeddedness in the firm and the collaborative network play different roles than at the invention-team level. This could be studied specifically for sole inventors or for teams from which members are randomly drawn for inclusion in the sample. We hope our study inspires researchers to explore how inventions and inventors are triply embedded in knowledge component networks, collaborative networks, and firm practices and how these jointly affect the invention process and its outcomes.
Conclusion
Our theory and findings contribute to explanations of knowledge recombination (Galunic & Rodan, 1998; Messeni Petruzzelli & Savino, 2014), tacit knowledge and knowledge transfer (Ancori, Bureth, & Cohendet, 2000; Cowan et al., 2000), and embeddedness in social and knowledge component networks (Guan & Liu, 2016; Uzzi, 1996, 1997; C. Wang et al., 2014) within the broad literature on organizational learning and innovation (Cohen & Levinthal, 1990; Levitt & March, 1988). We find that internal knowledge reuse relates curvilinearly to invention impact, regardless of the firm's search orientation. For inward-looking firms, this relationship is reinforced by a specialized form of internal embeddedness and slightly weakened by network centrality. Outward-looking firms, on the contrary, are less susceptible to these network effects, suggesting that outward-looking firms are less reliant on their network when reusing internal knowledge. Our study illuminates a previously unstudied aspect of the paradox of embeddedness by suggesting that the effects of embeddedness may depend on node attributes in such a way that the same structural network characteristic can have asymmetric effects. Finally, we show that teams are truly triply embedded in networks of knowledge components, inventor networks, and their own firm with its idiosyncratic search orientation and that these three forms of embeddedness are all important to understand how teams create high-impact inventions.
1. A latent mechanism can be understood as a theoretical explanation for why the relationship between an explanatory variable and a response is the way it is. To properly theorize a curvilinear shape, one is required to rely on two latent mechanisms that cannot be measured separately: Ang (2008), for instance, theorizes an inverted U shape between competitive intensity and collaboration that is driven by two latent mechanisms, a negative opportunity function and a positive motivation function (see also Haans, Pieters, & He, 2016).
2. We explicitly follow the advice of Haans et al. (2016) and theorize along the lines of two latent mechanisms that jointly form an ∩ shape. While this is still quite uncommon in the management literature and prolongs theorizing, it is the recommended approach to substantiate theoretical arguments for curvilinear effects. We explain which of the two latent mechanisms (novelty creation or absorptive capacity) we expect to be influenced by the moderators.
3. Our epistemological stance is that prior art citations and their classifications are an imperfect proxy for the actual knowledge that inspired an inventor. Relevant instantiations of that knowledge can be added by U.S. Patent and Trademark Office examiners (or the firm's patent attorneys) to facilitate the delineation of the current invention and to narrow down patent claims. To us, this does not imply that the focal inventor was not aware of the knowledge encapsulated in the added prior art. They may have known about it through different means not captured in the patent documentation. We do not see recombination (and reuse) of knowledge as perfectly corresponding to a "Lego-like" process in which an inventor reads through prior art, gets inspired, and decides to invent something in the domain of that prior art (although this is a possible form of search used by some organizations). We acknowledge the process is more complex and that the available measures are incomplete and imperfect proxies for the inventive process. We understand a prior art citation as a reference to (an instantiation of) one or multiple knowledge domains within the real knowledge structure that is also accessible through different means (conferences, conversations, and sometimes even luck). When we discuss recombination and reuse, it is in reference to these parts of the real knowledge structure, for which prior art citations and their classification form useful, yet imperfect, proxies. Note also that even if an inventor was ex ante not aware of a specific instantiation (e.g., a patent) of the knowledge the inventor recombines (the ideas encapsulated in the knowledge domain of the patent), and became aware of the patent only during the patent application process, the inventor thus becomes formally knowledgeable about the specific knowledge instantiation ex post. The parallel with academic publications is illustrative. It is eminently possible for an author to write a paper in the field of recombination and to be informed by a reviewer about a specific publication the author is ex ante not familiar with but that seems relevant to the domain of the paper. We would thus claim that an author can recombine (and even reuse) knowledge embedded in publication X, even if they were not aware of the existence of that publication prior to the publication process. This is simply because publication X is also an instantiation of an underlying real knowledge structure that may have been accessed by the author through different means. | 2020-03-05T11:10:32.680Z | 2020-02-27T00:00:00.000 | {
"year": 2020,
"sha1": "c8af3ab99003f84cd1993cfcc7ef70b60a7cfb48",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0149206320906865",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "0df3729d67bceca33251b345b7d5b3c8c4fa9846",
"s2fieldsofstudy": [
"Business",
"Engineering",
"Economics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
211723262 | pes2o/s2orc | v3-fos-license | A Typha Angustifolia-Like MoS2/Carbon Nanofiber Composite for High Performance Li-S Batteries
A Typha Angustifolia-like MoS2/carbon nanofiber composite serving as both a chemical trapping agent and a redox conversion catalyst for lithium polysulfides has been successfully synthesized via a simple hydrothermal method. Cycling performance and coulombic efficiency have been improved significantly by applying the Typha Angustifolia-like MoS2/carbon nanofiber as the interlayer of a pure sulfur cathode, resulting in a capacity degradation of only 0.09% per cycle and a coulombic efficiency as high as 99%.
INTRODUCTION
Lithium-sulfur (Li-S) batteries attract considerable interest due to their high energy density (2,600 Wh/kg). In addition, the cathode material, sulfur, is cost-effective, naturally abundant, and environmentally friendly (Gu and Lai, 2019). However, Li-S batteries are plagued by various challenges. Among these, the serious shuttling of lithium polysulfides (LiPSs), which induces large capacity degradation, severe polarization, sluggish reaction kinetics, and self-discharge, is one of the most significant issues (Liu et al., 2019; Xu et al., 2019).
In view of this serious situation, tremendous efforts have been made to suppress polysulfide shuttling through physical confinement and chemical absorption by constructing various kinds of nanostructures, such as non-polar porous carbon (Rehman et al., 2016; Guo et al., 2018), graphene (Yin et al., 2016), and carbon nanotubes, as well as polar metal oxides (Song et al., 2018), metal sulfides (Lin et al., 2019), metal carbides (Dong et al., 2018; Song et al., 2019), metal nitrides (Jiao et al., 2019; Wang et al., 2019), etc. Accordingly, LiPS shuttling has been alleviated to some extent. Recently, researchers have focused on the electrocatalysis of reducing sulfur to LiPSs and oxidizing Li2S2/Li2S to LiPSs or even to sulfur during the charge-discharge process, which is important for achieving high reversible capacity and coulombic efficiency. By applying this electrocatalysis concept of enhancing the redox reactions of polysulfides, an increasing number of catalysts suitable for the redox conversion of lithium polysulfides have been reported (Jeong et al., 2017; Liu et al., 2018; Hao et al., 2019; He et al., 2019; Jiao et al., 2019; Lin et al., 2019; Yuan et al., 2019).
In this work, we synthesized a new 1D nanostructure, a Typha Angustifolia-like MoS2/carbon nanofiber composite, serving as both a chemical trapping agent and a redox conversion catalyst for LiPSs to enhance sulfur cathode performance. The sulfur cathode with the MoS2/carbon nanofiber interlayer delivers an initial capacity as high as 926.1 mAh/g at a charge-discharge current of 0.5 C. Even after 300 cycles, a reversible capacity of 661.5 mAh/g could be maintained.
Materials Preparation
Bamboo carbon fiber (BCF) preparation: the bamboo stick was immersed in 8 M KOH solution and subjected to a hydrothermal reaction for 12 h. The resultant bamboo fiber was then dried and annealed at 800 °C for 2 h under an Ar atmosphere. Finally, the BCF was obtained by washing with distilled water and drying overnight.
BCF/MoS2 preparation: 114 mg of ammonium molybdate tetrahydrate [(NH4)6Mo7O24·4H2O] and an excess of thiourea (beyond the stoichiometric amount) were dissolved in 60 mL of distilled water, and 40 mg of BCF was then dispersed in the mixed solution by ultrasonication. Next, the solution was transferred into a Teflon autoclave and reacted for 12 h at 200 °C, yielding a black composite. After washing with distilled water and ethanol and then drying, the composite was annealed in an H2/N2 atmosphere (5 vol% H2) at 800 °C for 1 h to finally obtain the Typha Angustifolia-like BCF/MoS2 composite.
Electrochemical Measurements
Sulfur, carbon black, and polyvinylidene fluoride (analytical reagent, Sigma-Aldrich), in a weight ratio of 80:10:10, were mixed with 1-methyl-2-pyrrolidinone (analytical reagent, Sigma-Aldrich) as the solvent. After stirring for 12 h, the electrode slurry was obtained. The slurry was then pasted onto aluminum foil via the blade-coating method. After drying at 60 °C in a vacuum oven overnight, the electrode was cut into wafers with an area of 0.5 cm² and a weight of ∼1.5 mg. The interlayer was made from BCF/MoS2, carbon black, and polytetrafluoroethylene in a weight ratio of 80:10:10 with 1-methyl-2-pyrrolidinone as the solvent to form a flexible film. After drying at 60 °C in a vacuum oven overnight, the film was cut into wafers with a diameter of 11 mm, a thickness of 150 µm, and a weight of approximately 1.2 mg.
The batteries were then assembled in a glove box (Vigor, China), using lithium metal as the counter electrode, polypropylene (Celgard 2300) as the separator, and 1 M lithium bis(trifluoromethane)sulfonimide (LiTFSI) in 1,3-dioxolane/1,2-dimethoxyethane (DOL/DME) (1:1, v/v) containing 0.2 M LiNO3 as the electrolyte. The BCF/MoS2 wafer was placed between the separator and the electrode as the interlayer during the cell assembly process. Finally, the charge and discharge performances of the coin cells were tested with a LAND CT-2001A instrument (Wuhan, China), and the cyclic voltammetry (CV) curves were recorded on a CHI 660D electrochemical workstation (CHI Instrument, Shanghai, China); in both cases the potential range was controlled between 1.5 and 3.0 V at room temperature. The capacities were calculated based on the sulfur mass. Additionally, electrochemical impedance spectra (EIS) were recorded on a CHI 660E workstation (frequency range from 100 kHz to 10 mHz).
RESULTS AND DISCUSSIONS
First, XRD was used to examine the crystal structure of the synthesized product. As shown in Figure 1, BCF/MoS2 has been successfully synthesized using a simple hydrothermal method. In the XRD pattern of BCF, there is a broad peak at around 2θ = 23°, which is attributable to the partial graphitization of carbon, implying the good conductivity of BCF (Gu et al., 2015). In the pattern of BCF/MoS2, the peak belonging to graphitized carbon is covered by other strong peaks. All these peaks can be ascribed to MoS2, and the crystal phase matches well with the standard MoS2 PDF card (37-1492).
The morphologies of BCF and BCF/MoS2 were then investigated by SEM. As shown in Figure 2a, bamboo carbon with a unique fiber structure has been successfully synthesized. In Figures 2b,c, a core-shell structure is observed, with the BCF as the core and MoS2 grown along the direction of the nanofiber as the shell. Such a unique one-dimensional structure closely resembles Typha Angustifolia, as shown in Figure 2d.
The electrochemical performances of the sulfur cathode with and without the BCF/MoS2 interlayer were then investigated. As shown in Figure 3A, there are two obvious and stable redox peaks for the sulfur cathode with the BCF/MoS2 interlayer. In Figure 3B, the pure sulfur electrode (without the BCF/MoS2 interlayer) shows deformed and widened redox peaks in the CV curves, suggesting a sluggish kinetic process (Li et al., 2017; Liu et al., 2018). Comparing the peak potentials (Figure 3C) during the redox reactions, it is evident that the sulfur cathode with the BCF/MoS2 interlayer shows a higher reduction potential and a lower oxidation potential than that without the interlayer, indicating that the BCF/MoS2 interlayer significantly lowers electrode polarization (Gu et al., 2015; Wang et al., 2018; He et al., 2019). This can be attributed to the catalytic effect of MoS2 on the oxidation/reduction of lithium polysulfides/Li2S (He et al., 2019). In terms of the onset potentials shown in Figure 3D, the onset potential of the sulfur cathode with the BCF/MoS2 interlayer in the oxidation reaction is ≈2.23 V, compared with ≈2.21 V for the pure sulfur cathode without the interlayer. With respect to the reduction reaction, the onset potentials for the sulfur cathode with the BCF/MoS2 interlayer are ≈2.42 and ≈2.12 V, compared with ≈2.4 and ≈2.1 V for the pure sulfur cathode, whose values are thus lower by ≈20 mV. These results demonstrate that by inserting a conductive BCF/MoS2 interlayer, the redox kinetics are accelerated and the polarization losses significantly reduced for the Li-S battery (Gu et al., 2015; Li et al., 2017; He et al., 2019).
Finally, we evaluated the long-term cycling performance and rate capabilities of the sulfur cathode with and without the BCF/MoS2 interlayer. As shown in Figure 4A, the sulfur cathode with the interlayer shows a high initial specific capacity of 926.1 mAh/g. After 300 cycles, it can still maintain a high reversible capacity of 661.5 mAh/g, and the capacity degradation rate is only 0.09% per cycle. However, the pure sulfur cathode without the BCF/MoS2 interlayer only demonstrates an initial capacity of 510 mAh/g and an extremely low reversible capacity of 56.3 mAh/g after 300 cycles. By contrast, the initial average discharge capacity of the pure sulfur cathode without the interlayer is ≈400 mAh/g lower than that of the sulfur cathode with the BCF/MoS2 interlayer, indicating significant dissolution and loss of LiPSs into the electrolyte during the initial cycles. Such severe dissolution and loss continue throughout the whole charge and discharge process, because the ultimate reversible capacity is also extremely low. Additionally, from Figure 4B, it is clearly observed that the sulfur cathode with the BCF/MoS2 interlayer shows far better rate capabilities than the one without the interlayer. Even when the charge-discharge current increases to 2 C, a reversible capacity of around 456 mAh/g can still be retained, and after the current switches back to a low density of 0.2 C, a capacity of approximately 900 mAh/g is recovered. Therefore, BCF/MoS2 is highly effective as a polysulfide immobilizer for enhancing cycling life and rate capabilities (Gu et al., 2015).
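As a back-of-envelope check (assuming the quoted rate uses the common linear definition of capacity fade, which is our assumption about how it was computed), the numbers above give

```latex
\text{fade per cycle} \approx \frac{926.1 - 661.5}{926.1 \times 300} \approx 0.095\%,
```

which is in line with the quoted ≈0.09% per cycle.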
Moreover, it can be observed that the sulfur cathode with the BCF/MoS2 interlayer demonstrates an excellent coulombic efficiency (∼99%), whereas the sulfur cathode without the interlayer shows an obviously weaker coulombic efficiency, particularly over the first tens of cycles. The coulombic efficiency results indicate that BCF/MoS2 as an electrocatalyst can significantly accelerate the redox reactions in Li-S batteries and improve coulombic efficiency (Gu et al., 2015; Jeong et al., 2017; Wang et al., 2018).
CONCLUSIONS
In summary, the Typha Angustifolia-like MoS2/carbon nanofiber composite has been successfully employed as the interlayer in Li-S batteries. The BCF/MoS2 interlayer endows Li-S batteries with excellent long-term cycle stability (only 0.09% capacity fade per cycle) and high coulombic efficiency (99%), even when the sulfur content is as high as 65% in the electrode. The exceptional performance can be attributed to: (1) the resultant conductive fiber networks, providing conductive skeletons for electron transfer; (2) abundant gaps and pores to store the sulfur; and (3) the polar MoS2 shell chemically trapping the LiPSs as well as catalyzing the LiPS redox reactions. Therefore, the unique Typha Angustifolia-like MoS2/carbon nanofiber interlayer sheds light on the development of high-performance Li-S batteries.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material. | 2020-03-03T14:02:27.150Z | 2020-03-03T00:00:00.000 | {
"year": 2020,
"sha1": "c6d1f1808d85bd080be3c61e07f6957930536d5e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fchem.2020.00149/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c6d1f1808d85bd080be3c61e07f6957930536d5e",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
244492693 | pes2o/s2orc | v3-fos-license | Perfusion CT detects alterations in local cerebral flow of glioma related to IDH, MGMT and TERT status
Background: The aim of this study was to investigate the relationship between tumor biology and the values of cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), time to peak (TTP), and permeability surface (PS) of tumors in patients with glioma. Methods: Forty-six patients with glioma were enrolled in the study. Histopathologic and molecular pathology diagnoses were obtained by tumor resection, and all patients underwent perfusion computed tomography (PCT) before operation. Regions of interest were placed manually at the tumor and the contralateral normal-appearing thalamus. The parameters of the tumor were divided by those of the contralateral normal-appearing thalamus to obtain normalized tumor values (relative [r] CBV, rCBF, rMTT, rTTP, rPS). The relationships among these parameters, World Health Organization (WHO) grade, and molecular pathological findings were analysed. Results: The rCBV, rMTT and rPS of patients were positively related to the pathological classification (P < 0.05). The values of rCBV and rPS in IDH-mutated patients were lower than those in IDH wild-type patients. The values of rCBF in patients with MGMT methylation were lower than in those without MGMT methylation (P < 0.05). The MVD of the TERT wild-type group was lower than that of the TERT-mutated group (P < 0.05). The values of rCBV differed significantly among the four molecular groups defined by the combined IDH/TERT classification (P < 0.05), as did progression-free survival (PFS) and overall survival (OS) (P < 0.05). Conclusions: Our study indicates that changes in glioma flow perfusion may be closely related to its biological characteristics. Supplementary Information: The online version contains supplementary material available at 10.1186/s12883-021-02490-4.
Background
Glioma constitutes a systemic disease of the brain, with tumor cells spreading far beyond the macroscopically visible lesion and forming networks throughout the whole brain [1]. The WHO grading system classifies gliomas into grades I-IV, with the worst prognosis in grade IV. However, with the discovery and study of glioma gene targets in recent years, glioma subtypes have been restratified. On the basis of molecular pathology diagnosis, previous studies have suggested that tumor markers such as IDH mutation status [2], MGMT promoter methylation [3] and TERT promoter mutation status [4] are independently or interactively associated with disease-free survival and overall survival in glioma patients, and even affect the operation and concurrent chemo-radiotherapy of glioma patients. Studies have found that IDH mutations are considered an early event in glioma development; IDH mutations appear to produce a cell state permissive of transformation, possibly blocking cell differentiation and promoting cell proliferation [5], which is closely related to prognosis. MGMT may confer drug resistance of tumor cells to alkylating agents by enabling DNA repair in glioma cells; MGMT promoter methylation may reduce MGMT activity, thus inhibiting the repair of DNA damage after radiation and chemotherapy [6]. TERT encodes the catalytic subunit of telomerase, and telomere length in normal cells is usually shortened after each cell cycle, leading to cell senescence and apoptosis. Mutations in the TERT promoter can enhance gene transcription, leading to increased TERT mRNA levels [7], which predict a poor prognosis in glioma patients [8]. These observations provide clues to the role of each gene in the development and progression of glioma. However, the mechanism of these genes' influence on glioma is still not completely clear.
Glioma is characterized by abnormal vasculature with angiogenesis, a typical tumor hallmark participating in multiple biological behaviors such as tumor progression, invasiveness, and therapy resistance [9]. In recent years, with the development of non-invasive brain perfusion imaging technology, studies have reported that the relevant parameters can capture the hemodynamic information of glioma, summarize its microvascular environment [10], and characterize gliomas of different grades [11,12]. Early PCT studies of glioma often focused on differential diagnosis and tumor pathological grade [13-15]. However, there are few studies on the relationship between the status of IDH, MGMT, and TERT and perfusion indicators in glioma patients. Our study aims to explore the correlation between tumor grade, the status of IDH, MGMT, and TERT, and tumor perfusion indicators in glioma patients. Furthermore, our study is the first to examine the differences in perfusion parameters, PFS, and OS using a molecular classification based on IDH and TERT status in newly diagnosed WHO grade II-IV diffuse gliomas, and a molecular classification based on MGMT and TERT status in newly diagnosed glioblastoma (GBM).
Subjects
We reviewed the records of 46 consecutive patients who underwent preoperative PCT for newly diagnosed glioma from January 2018 to November 2018 in the Department of Neurosurgery, the Affiliated Hospital of Southwest Medical University. The inclusion criteria were: (1) complete clinical data, including preoperative PCT.
Grouping methods
Following the latest cIMPACT-NOW recommendations and studies that have classified glioma subtypes based on combined IDH/TERT status in patients with grade II-IV diffuse glioma [16-18], forty-four patients in our study were divided into four groups. Group A was grade II-IV diffuse glioma with IDH wild-type and TERT mutation; Group B, IDH wild-type and TERT wild-type; Group C, IDH mutation and TERT mutation; and Group D, IDH mutation and TERT wild-type. Moreover, based on the combined MGMT/TERT status in patients with GBM, nineteen patients were divided into four groups. Group A was GBM with MGMT methylation and TERT wild-type; Group B, MGMT methylation and TERT mutation; Group C, MGMT un-methylation and TERT wild-type; and Group D, MGMT un-methylation and TERT mutation.
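A minimal sketch of the four-group IDH/TERT assignment described above (the function name and boolean inputs are illustrative, not from the paper):

```python
# Sketch: assigning the four IDH/TERT molecular groups for WHO II-IV
# diffuse glioma, following the group definitions in the text.

def idh_tert_group(idh_mutated: bool, tert_mutated: bool) -> str:
    """Map IDH/TERT status to groups A-D."""
    if not idh_mutated and tert_mutated:
        return "A"  # IDH wild-type, TERT mutation
    if not idh_mutated and not tert_mutated:
        return "B"  # IDH wild-type, TERT wild-type
    if idh_mutated and tert_mutated:
        return "C"  # IDH mutation, TERT mutation
    return "D"      # IDH mutation, TERT wild-type
```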
Regions of interest (ROIs) selection
The ROIs of glioma were placed manually in multiple parenchymal areas of the tumors [11,14,15,19]: the CBV, CBF, MTT, TTP, and PS of tumor parenchyma were measured at different levels and multiple points, and the final perfusion parameters of the tumor were averaged. The perfusion parameters of the contralateral normal-appearing thalamus were used as the normal control. The averaged parameters of the tumor were divided by those of the contralateral normal-appearing thalamus to obtain normalized tumor values (relative [r] CBV, rCBF, rMTT, rTTP, rPS). ROIs close to tumor necrosis, tumor cysts, or edema, or in which the anatomical structures were difficult to distinguish, were excluded.
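A minimal sketch of this normalization step (the field names and numeric values below are illustrative, not study data):

```python
# Sketch: normalizing averaged tumor perfusion values against the
# contralateral normal-appearing thalamus, as described above.

def normalize(tumor: dict, thalamus: dict) -> dict:
    """Return relative parameters (rCBV, rCBF, rMTT, rTTP, rPS)."""
    return {f"r{k}": tumor[k] / thalamus[k]
            for k in ("CBV", "CBF", "MTT", "TTP", "PS")}

# Illustrative averaged ROI values for one patient:
rel = normalize({"CBV": 4.2, "CBF": 58.0, "MTT": 5.1, "TTP": 14.3, "PS": 3.9},
                {"CBV": 2.1, "CBF": 45.0, "MTT": 4.8, "TTP": 13.5, "PS": 0.6})
print(rel)  # e.g. {"rCBV": 2.0, ...}
```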
PCT protocol
Firstly, a noncontrast-enhanced CT scan was performed in all patients on a Philips Brilliance iCT 256-slice spiral CT scanner after a negative iodine anaphylaxis test. Secondly, the contrast agent (Iobitridol, 350 mgI/ml) was given rapidly (6 ml/s) through an elbow intravenous bolus injection with an automatic injector (2 ml/kg). Thirdly, normal saline (30 ml) was injected at the same speed. After a 5 s delay, scanning was performed with the following parameters: 80 kV, 100 mAs, 0.4 s/cycle, 4.1 s interval, 13 cycles in total, 5 mm slice thickness, 512 × 512 matrix, 54.4 s contrast agent tracking time, and 12.8 cm coverage. Finally, the reconstructed dynamic images were transmitted to the workstation and processed on a Philips Extended Brilliance Workstation using CT brain perfusion software.
PCT data processing and analysis
Two experienced radiologists, blinded to the patients' clinical results, measured the perfusion parameters on the Philips Extended Brilliance Workstation using CT brain perfusion software. If the two radiologists had conflicting opinions, a third radiologist was involved in the evaluation. The input artery was the ascending petrous segment of the internal carotid artery, and the output vein was the superior sagittal sinus. Combined with the patients' preoperative magnetic resonance imaging, a radiologist and a neurosurgeon manually drew ROIs (21 mm²), avoiding the necrotic or cystic parts of the tumor and cortical vessels, to generate the time-density curve, false-color images, and perfusion parameters of the ROIs, including CBF, CBV, MTT, TTP, and PS. The ROI parameter values were corrected by the hematocrit value.
Microvessel density (MVD) data processing and analyses
a. Pathological section preparation: Tumor paraffin-embedded tissue blocks were taken, and 4 sections were cut successively at a thickness of 4 μm. b. Reagents and immunohistochemical staining: The antibodies and detection system used in this study were all products of Beijing Zhongshan Jinqiao Biotechnology Co., Ltd.: CD34 monoclonal antibody, EnVision (Polymer) two-step PV-9000 reagent, 0.01 mol/L phosphate buffer (PBS, pH 7.2-7.4), and 0.01 mol/L citrate buffer (pH 6.0). Immunohistochemical staining of CD34 was performed using the PV6000 system. c. MVD counting: The Weidner method was used to determine positive results. First, the area with the highest vascular density of the tumor was identified at low magnification (× 100); then the number of microvessels in the five areas with the highest vascular density was counted at high magnification (× 400), and the mean value was taken to represent the MVD.
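The counting step reduces to averaging the five hotspot counts; a minimal sketch with hypothetical counts:

```python
# Sketch of the Weidner counting step: after hotspots are chosen at x100,
# microvessels are counted in the five densest fields at x400 and averaged.
# The counts below are illustrative, not study data.

hotspot_counts = [42, 38, 45, 40, 37]  # microvessels per x400 field
mvd = sum(hotspot_counts) / len(hotspot_counts)
print(f"MVD = {mvd:.1f} vessels/field")
```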
Statistical analyses
SPSS 22.0 statistical software was used for statistical analysis. Count data were expressed as number of cases (percentage), and measurement data were expressed as mean ± standard deviation (x̄ ± s). Measurement data were analyzed by the independent-sample t test, Wilcoxon test, or one-way ANOVA. Survival curves were compared using the log-rank (Mantel-Cox) test. Correlations were assessed with Spearman correlation analysis. Figures were drawn using GraphPad Prism 8.0. P ≤ 0.05 was considered statistically significant.
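For readers without SPSS, roughly equivalent tests are available in Python; a sketch using scipy and the third-party lifelines package (all group data below are illustrative, not study values):

```python
# Sketch: Python equivalents of the tests named above. Data are illustrative.
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test  # third-party survival package

mut = np.array([1.8, 2.1, 1.6, 2.0])   # e.g., rCBV in an IDH-mutated group
wt  = np.array([3.1, 2.7, 3.4, 2.9])   # e.g., rCBV in an IDH wild-type group

t_stat, p_t = stats.ttest_ind(mut, wt)        # independent-sample t test
u_stat, p_u = stats.mannwhitneyu(mut, wt)     # rank-based alternative
rho, p_rho = stats.spearmanr([1, 2, 2, 3, 4], [1.2, 1.9, 2.1, 2.6, 3.3])

# Log-rank (Mantel-Cox) comparison of survival times (months) and event flags
res = logrank_test([10, 14, 22, 30], [6, 9, 12, 16],
                   event_observed_A=[1, 1, 0, 1],
                   event_observed_B=[1, 1, 1, 1])
print(p_t, p_u, rho, res.p_value)
```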
Forty-four patients with WHO II-IV diffuse glioma were divided into four distinct subgroups based on IDH and TERT status. GBM was most common in the group with mutation in TERT but not IDH (Group A) and the group with no detectable IDH or TERT mutations (Group B), accounting for 66.7% and 62.5%, respectively. The group with mutations in both IDH and TERT (Group C) mainly consisted of oligodendroglioma (OL) or anaplastic oligodendroglioma (AO) (85.8%). The group with mutation in IDH but not TERT (Group D) mostly consisted of DA (71.4%). (Supplementary Table 2).
Nineteen GBM patients were divided into distinct subgroups based on MGMT and TERT status. The group with MGMT methylation but TERT wild-type included four patients (Group A); the group with MGMT methylation and TERT mutation, seven patients (Group B); the group with MGMT un-methylation and TERT wild-type, three patients (Group C); and the group with MGMT un-methylation and TERT mutation, five patients (Group D). (Supplementary Table 3).
Relationship between pathological grade and perfusion parameters
In this study, there was only one patient with grade I glioma, so patients with grade I and grade II glioma were combined for analysis. Glioma grade was positively correlated with rCBV, rMTT, and rPS among the perfusion CT parameters (P < 0.05) (Table 2). With increasing glioma grade, rCBV, rMTT, and rPS showed an increasing trend (Fig. 1).
Comparisons of perfusion parameters between different tumor biology markers
The IDH-mutated and IDH wild-type groups showed differences in rCBV and rPS on perfusion CT: the rCBV and rPS of the IDH-mutated group were lower than those of the IDH wild-type group (Table 3). The rCBF of the MGMT methylation group was lower than that of the un-methylation group (Table 4). The MVD of the TERT wild-type group was lower than that of the TERT-mutated group (Table 5). The differences above were statistically significant (P < 0.05) (Fig. 2).
Comparisons of perfusion parameters, PFS and OS in the four molecular groups divided by the combined IDH/TERT classification in WHO II-IV diffuse glioma
The results showed that the mean values of rCBV differed significantly among the four molecular groups divided by the combined IDH/TERT classification (P < 0.05); rPS, however, had a value close to statistical significance (P = 0.057) (Table 6, Figs. 3 and 4). Moreover, PFS and OS differed significantly among the four molecular groups (P < 0.05) (Figs. 5 and 6). The IDHwt/TERTmut group had the highest rCBV and the worst PFS and OS, and the IDHmut/TERTmut group had the best PFS and OS. However, the IDHmut/TERTwt group had lower rCBV and higher rPS compared with the IDHmut/TERTmut group.
Comparisons of perfusion parameters, PFS and OS in the four molecular groups divided by the combined MGMT/TERT classification in GBM
There were no statistically significant differences in the perfusion parameters among the four molecular groups divided by the combined MGMT/TERT classification in GBM (Supplementary Table 3), but PFS and OS differed significantly (P < 0.05) (Supplementary Figs. 1-2). The group with MGMT un-methylation and TERT mutation (Group D) had the worst PFS and OS.
Discussion
This study was, to our knowledge, the first to analyze perfusion parameters in combination with IDH mutation, MGMT methylation, and TERT mutation status from glioma gene detection results. Previous studies have found significant differences in CBV and PS between LGGs and HGGs [20-22]; PCT parameters can distinguish patients with different histopathological grades of glioma relatively well. This is consistent with glioma malignancy increasing with tumor grade. However, histopathological grading of the tumor has considerable limitations in distinguishing the malignancy of glioma. With the discovery and development of genotyping methods for glioma pathology, it has been shown that genotyping results can judge the malignant degree of tumors more accurately than histopathological grading [23,24]. In this study, PCT-related parameters were used to investigate the genotyping of pathological results based on the latest standards [16], hoping to provide more accurate biological markers for the diagnosis and treatment of glioma. According to the literature, perfusion CT has several advantages compared with MRI biomarkers [13,25,26]: a. PCT is a widely available and cost-effective neuroimaging method that is easy to perform on most new CT units. b. It requires short scanning times, so it can be conducted without sedation, which is very important for patients with severe symptoms, who are often uncooperative. c. The attenuation values and contrast concentration in PCT have a more linear relationship, delivering "superior quantitative accuracy" by providing absolute values. With only one acquisition, PCT provides access to the usual parameters (CBV, CBF, MTT) as well as permeability data. According to our research, IDH wild-type tumors had higher rCBV, which may be related to signaling pathways of a distinct transcriptome signature induced by upregulation of tumor cell hypoxia and angiogenesis [21]. The IDH gene may promote signals related to tumor cell hypoxia, blood vessels, and angiogenesis, accelerating the microangiogenesis of tumor cells. These signaling pathways are prerequisites for aggressive tumor behavior [27] and may be related to the promotion of microvascular proliferation of tumor cells. Compared with IDH-mutant gliomas, gliomas carrying the IDH wild-type gene are more aggressive.
In our study, glioma patients without MGMT methylation had higher rCBF, which means that these patients have more blood supply and faster blood flow in the tumor area. Previous findings showed that MGMT regulates angiogenesis in tumor cells by changing the levels of different vascular endothelial growth factor receptors [6]. Ahn et al. [28] found that the K trans of perfusion MRI is associated with MGMT methylation status in glioblastoma, indicating that MGMT methylation may be involved in glioma-associated angiogenesis characterized by highly permeable endothelial vasculature. Our results are consistent with previous results on the effect of MGMT on the prognosis of patients with glioma. Whether the prognosis of glioma patients with MGMT methylation is related not only to temozolomide sensitivity but also to decreased vascular endothelial permeability needs further study. Gliomas with the TERT mutation had higher MVD compared with TERT wild-type gliomas in our study. This is consistent with the latest study in which TERT promoter mutations enhanced gene transcription and resulted in increased TERT mRNA levels, which leads to a correspondingly poor prognosis in patients [8].
Because tumors are regulated by multiple genes at the same time, our study is the first to attempt to group several glioma genes across different tumor grades and explore the differences in perfusion results, PFS, and OS between groups. A previous study [20] showed that the IDHwt/TERTmut group has the worst PFS and OS, and the IDHmut/TERTmut group has the best PFS and OS; our results were consistent with this. Moreover, in our study, the IDHwt/TERTmut group had the highest rCBV, and the IDHmut/TERTwt group had lower rCBV and higher rPS compared with the IDHmut/TERTmut group, which may imply that IDH/TERT status and prognosis could be predicted by rCBV. In previous studies [20,29], CBV and PS reflected vascular density and vascular permeability, respectively, and therefore the two components of tumor neovascularity, which had an additive rather than an exclusive effect on the prognosis of glioma. We speculate that the better prognosis of the IDHmut/TERTmut group may be related to vascular permeability and other effects, which need to be further studied in a larger series of patients. There were no significant differences among the four molecular groups based on the combined MGMT/TERT status in GBM; however, PFS and OS showed statistically significant differences, with the group with MGMT un-methylation and TERT mutation having the worst PFS and OS. Because the sample was relatively small, this needs to be further studied in a larger series of patients with GBM.
Clinical implications
This study was, to our knowledge, the first to analyze perfusion parameters evaluated by combining IDH mutation status and TERT mutation status from glioma gene detection results. The findings of our study were: 1) with increasing glioma grade, rCBV, rMTT, and rPS showed an increasing trend; 2) the rCBV and rPS of the IDH-mutated group were lower than those of the IDH wild-type group; 3) the rCBF of the MGMT methylation group was lower than that of the un-methylation group; and 4) in WHO II-IV diffuse gliomas, rCBV was closely related to combined IDH/TERT status, and higher rCBV could indicate a worse prognosis.
Study limitations
There were some limitations in our study. Firstly, the sample of this study was relatively small because relatively few patients had perfusion CT scans, which could result in bias. Secondly, our study may have sampling bias because the specimen might not have corresponded to the intended area of the PCT map. Future studies should replicate this study in larger samples, combined with the use of other advanced neuroimaging techniques.
Conclusions
To summarize, perfusion parameters of glioma maybe related to the degree of tumor malignancy and the status of IDH, MGMT and TERT. The rCBV maybe an important predictive imaging marker of the combined IDH/ TERT status and prognosis in WHO II-IV diffuse glioma. | 2021-11-24T14:31:03.064Z | 2021-11-24T00:00:00.000 | {
"year": 2021,
"sha1": "caa82ca9e6fbf6cb18ba75204df57a89b86feac2",
"oa_license": "CCBY",
"oa_url": "https://bmcneurol.biomedcentral.com/track/pdf/10.1186/s12883-021-02490-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "efe60073141f8a8e793a0a119a5f5d0e327e8b98",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comprehensive Lipid Profiling Recapitulates Enhanced Lipolysis and Fatty Acid Metabolism in Intimal Foamy Macrophages From Murine Atherosclerotic Aorta
Lipid accumulation in macrophages is a prominent phenomenon observed in atherosclerosis. Previously, intimal foamy macrophages (FM) showed decreased inflammatory gene expression compared to intimal non-foamy macrophages (NFM). Since reprogramming of lipid metabolism in macrophages affects immunological functions, lipid profiling of intimal macrophages appears to be important for understanding the phenotypic changes of macrophages in atherosclerotic lesions. While lipidomic analysis has been performed in atherosclerotic aortic tissues and cultured macrophages, direct lipid profiling has not been performed in primary aortic macrophages from atherosclerotic aortas. We utilized nanoflow ultrahigh-performance liquid chromatography-tandem mass spectrometry to provide comprehensive lipid profiles of intimal non-foamy and foamy macrophages and adventitial macrophages from Ldlr−/− mouse aortas. We also analyzed the gene expression of each macrophage type related to lipid metabolism. FM showed increased levels of fatty acids, cholesterol esters, phosphatidylcholine, lysophosphatidylcholine, phosphatidylinositol, and sphingomyelin. However, phosphatidylethanolamine, phosphatidic acid, and ceramide levels were decreased in FM compared to those in NFM. Interestingly, FM showed decreased triacylglycerol (TG) levels. Expressions of lipolysis-related genes including Pnpla2 and Lpl were markedly increased but expressions of Lpin2 and Dgat1 related to TG synthesis were decreased in FM. Analysis of transcriptome and lipidome data revealed differences in the regulation of each lipid metabolic pathway in aortic macrophages. These comprehensive lipidomic data could clarify the phenotypes of macrophages in the atherosclerotic aorta.
INTRODUCTION
Atherosclerosis is a major chronic inflammatory cardiovascular disease that leads to heart attack, myocardial infarction, and ischemic stroke (1). The low-density lipoprotein (LDL) particles that accumulate on the arterial wall due to hyperlipidemia are vulnerable to oxidative modification, triggering inflammatory processes, including the accumulation of monocytes/macrophages in the subintimal space (2). These accumulating macrophages play essential roles in the progression and regression of atherosclerosis via the production of inflammatory cytokines, clearance of dead cells, antigen presentation, and lipid uptake or efflux (3). It is well-known that the development of atherosclerotic plaque is associated with the formation of macrophage foam cells with excessive cellular lipid accumulation (4).
Lipids are the main components of cellular membranes and are classified into 8 different categories: glycerophospholipids (GPs), glycerolipids, sphingolipids, sterols, fatty acids (FAs), prenols, saccharolipids, and polyketides (5). Lipids not only structure the membrane of cells, but also play important roles in energy storage, signal transduction, cell growth, apoptosis, etc. (6,7). Alterations of lipid profiles are significantly associated with the development of various metabolic diseases including atherosclerosis, obesity, diabetes, and cancer (8-11). Lipid metabolism is also known to control the phenotype of macrophages (12). A transcriptome analysis of intimal foamy macrophages (FM) and intimal non-foamy macrophages (NFM) found that foamy macrophages have enhanced lipid metabolic pathways, including genes related to cholesterol uptake, efflux, and processing, but an attenuated inflammatory pathway compared to non-foamy macrophages (13). These results strongly suggest that lipid accumulation in macrophages can reprogram lipid metabolism, leading to alteration of immunological function during atherosclerosis. Therefore, it is important to analyze the lipid composition of macrophages to understand the exact lipid metabolism of macrophages during atherosclerotic processes. Recently, lipidome analysis showed that the levels of phosphatidylinositol (PI), phosphatidylglycerol, and phosphatidylserine (PS) were increased in M1-like macrophages, whereas the levels of lysoglycerophospholipids were increased in M2-like macrophages (14). Moreover, the lipid composition of macrophages can be altered differently upon stimulation of each TLR, and these compositional changes of cellular lipids play an important role in the inflammatory response of macrophages (15). Although previous studies have analyzed lipids in the context of atherosclerosis using cultured mouse and human macrophages (14), human atherosclerotic plaques (8), and oxLDL-treated in vitro foam cells (16), comprehensive lipid profiling of intimal macrophages is important to understand macrophage lipid metabolism during atherosclerosis, since the tissue microenvironment can affect the metabolic phenotype of macrophages. However, a detailed lipidomic analysis of intimal macrophages has not been accomplished due to the difficulty of obtaining sufficient cells for lipid analysis.
In this study, 3 types of macrophages, including FM, NFM, and adventitial macrophages (AM), were isolated from atherosclerotic mouse aortas using a flow cytometric approach with fluorescence lipid staining (13), and their lipid profiles were analyzed using nanoflow ultrahigh-performance liquid chromatography-electrospray ionization-tandem mass spectrometry (nUHPLC-ESI-MS/MS). Lipid analysis of the 3 macrophage types was performed with non-targeted identification of lipid molecular structures based on data-dependent collision-induced dissociation, followed by high-speed targeted quantification using the full MS scanning method. To understand the characteristics of intimal foamy and non-foamy macrophages, alterations in their lipid profiles (in comparison to AM) were investigated at the levels of lipid classes and individual lipid species. In addition, transcriptomic analyses related to lipid metabolism pathways were performed to provide evidence for and enable a deeper understanding of the lipidomic outcomes.
MATERIALS AND METHODS
Detailed experimental procedures of the lipidomic analysis can be found in Supplementary Data 1.
Mouse and peritoneal cells
Ldlr −/− mice (C57BL6/J background) were obtained from Jackson Laboratory (Bar Harbor, ME, USA) and maintained under specific pathogen-free conditions. To induce atherosclerosis, 8-week-old male Ldlr −/− mice were fed a western diet for 16 wk. The western diet (49.9% carbohydrates, 19.8% protein, 21% fat, and 0.15% cholesterol) was purchased from a local vendor (D12079B; Research Diets, New Brunswick, NJ, USA). To achieve an appropriate single-cell number, 39 mice were sacrificed. For the in vitro test, male C57BL/6 mice (n=7) were injected intraperitoneally with 1 ml of 3% thioglycolate broth medium. Three days later, peritoneal macrophages were isolated with 5 ml of RPMI1640 containing 10% FBS. The cells were cultured in a 6-well plate for 3 h, washed with PBS, and then incubated at 37°C with 5% CO2. After overnight incubation, commercial oxidized LDL (20 µg/ml; cat. 770202; Kalen Biomedical, Montgomery Village, MD, USA) was added to the cells for 48 h. All animal procedures were approved by the local authorities and the Institutional Animal Care and Use Committee of Hanyang University, Seoul (permission number: 2020-0120A), and performed in accordance with the relevant guidelines and regulations. The study complied with the Animal Research: Reporting of In Vivo Experiments guidelines.
Aortic intimal and adventitial singlet preparation
Mice were euthanized with CO2 at a fill rate of 30%-50% of the chamber volume per minute and perfused via the left cardiac ventricle with approximately 10 ml of fresh cold PBS to eliminate blood contamination before isolating the aorta. The entire aorta (including the aortic sinus, arch, thoracic aorta, and abdominal aorta) was carefully dissected, and the perivascular fat and cardiac muscle were removed. Single-cell suspensions of the aorta were prepared as previously described (13). In brief, isolated aortas were opened longitudinally and washed again with PBS. The total aortic tissue was incubated at 37°C for 8 min with gentle rotation in a PBS solution containing calcium, magnesium, collagenase II (400 U/ml; cat. C6885; Sigma Aldrich, St. Louis, MO, USA), and hyaluronidase (90 U/ml) to separate the intima-media layer from the adventitia. After physically separating the aortic layers, the aortic intima and adventitia were digested in a PBS solution containing calcium, magnesium, collagenase I (675 U/ml), collagenase XI (187.5 U/ml), DNase I (90 U/ml), and hyaluronidase (90 U/ml) at 37°C for 60 min and 20 min, respectively, with rotation. After digestion, the intimal and adventitial cells were filtered through a 70 µm cell strainer, centrifuged at 385 × g for 7 min at 4°C, and resuspended in 2% FBS in PBS to obtain single-cell suspensions. To load an adequate number of cells into the flow cytometer, the cell number was counted using a hemocytometer, and the cell suspensions were aliquoted into tubes for FACS. To distinguish dead cells, the cells were pre-stained with propidium iodide solution just before loading into the flow cytometer, according to the manufacturer's instructions.
Isolation of aortic macrophages
Fc receptors were blocked with TruStain FcX antibody (clone 93; cat. 101319) for 15 min at 4°C to prevent non-specific binding, and the cells were incubated with the antibody mixture at 4°C for 30 min. The antibody mixture included the CD45 (clone 30
Oil Red O staining of sorted aortic macrophages
Sorted aortic macrophages were placed on positively charged slide glasses and fixed with 4% paraformaldehyde for 30 min. After washing with distilled water, 100% propylene glycol was added to the slides and incubated for 2 min. The macrophages were then stained with Oil Red O working solution for 15 min and rinsed with 60% propylene glycol and distilled water sequentially. Finally, the stained macrophages were covered with aqueous mounting medium and observed under a light microscope.
Lipid analysis
A fused silica capillary tube with an inner diameter of 100 μm and an outer diameter of 360 μm from Polymicro Technology, LLC (Phoenix, AZ, USA) was used to prepare the capillary columns. Watchers® ODS-P C-18 particles (3 μm and 100 Å) from Isu Industry Corp. (Seoul, Korea) and BEH Shield C18 particles (1.7 μm and 130 Å), unpacked from an ACQUITY UPLC BEH Shield RP18 column (2.1 mm × 100 mm) purchased from Waters (Milford, MA, USA), were used as packing materials for the capillary columns.
Lipid extraction
A mixture of internal standards (IS) was added to each group of samples prior to lipid extraction for lipid quantitation. For lipid extraction, each group of samples was tip-sonicated for 2 min, and 1,300 μl of a MeOH/MTBE/CHCl3 mixture (1.33:1:1, v/v/v) was added to each group. The mixture was vortexed for 10 min at 40°C and then centrifuged at 1,000 × g for 10 min. The resulting organic layer was transferred to a different Eppendorf tube and dried under N2 gas using an Evatros mini evaporator (Goojung Engineering, Seoul, Korea). Dried lipid extracts were dissolved in 150 μl of MeOH/CHCl3/H2O (18:1:1, v/v/v) for both AM and NFM and in 135 μl for foamy macrophages to obtain the same concentration for all groups. Each sample was stored at −80°C until nUHPLC-ESI-MS/MS analysis. Details of the lipid analysis by nUHPLC-ESI-MS/MS are described in Supplementary Data 1.
Lipid and nUHPLC-ESI-MS/MS analysis
Fifty lipid standards were purchased from Avanti Polar Lipids Inc. (Alabaster, AL, USA) and are listed in the Supplementary information. Qualitative and quantitative lipid analyses were carried out using a Dionex Ultimate 3000 RSLCnano System coupled with a Q Exactive Orbitrap MS from Thermo Fisher Scientific (San Jose, CA, USA). Details of the nUHPLC-ESI-MS/MS run conditions are also given in Supplementary Data 1.
Bulk RNA-sequencing analysis
We reanalyzed our previous bulk RNA-sequencing data (13) of intimal foamy, intimal non-foamy, and AM. The sequencing data were used to list the significant lipid metabolic pathways and related genes: FA biosynthesis and β-oxidation; triacylglycerol (TG) biosynthetic and catabolic pathways; cholesterol esterification; GP biosynthesis; and sphingomyelin (SM) and ceramide (Cer) conversion. To confirm the substantial number of genes, box plots were used to represent differentially expressed genes between intimal foamy and non-foamy macrophages (fold ratio ≥1.5, p<0.05). Data were analyzed by ROSALIND® (https://rosalind.bio/), with a HyperScale architecture developed by ROSALIND, Inc. (San Diego, CA, USA). Reads were trimmed using cutadapt (17). Quality scores were assessed using FastQC (FastQC: a quality control tool for high throughput sequence data, Andrews S., 2010). Reads were aligned to the Mus musculus genome build mm10 using STAR (18). Individual sample reads were quantified using HTseq (19) and normalized via Relative Log Expression using the DESeq2 R library (20). Read distribution percentages, violin plots, identity heatmaps, and sample MDS plots were generated as part of the QC step using RSeQC (21). DESeq2 was also used to calculate fold changes and p-values and perform optional covariate correction.
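A minimal sketch of the DEG cutoff applied to a DESeq2-style results table (column names and values are illustrative; treating fold ratios below 1/1.5 as down-regulated is an assumption):

```python
# Sketch: filtering differentially expressed genes with the thresholds
# stated above (fold ratio >= 1.5, p < 0.05). Data are illustrative.
import pandas as pd

res = pd.DataFrame({
    "gene": ["Pnpla2", "Lpl", "Lpin2", "Dgat1"],
    "fold_ratio": [2.4, 3.1, 0.5, 0.6],   # FM vs. NFM (illustrative)
    "pvalue": [0.003, 0.001, 0.02, 0.04],
})
up = res[(res.fold_ratio >= 1.5) & (res.pvalue < 0.05)]
down = res[(res.fold_ratio <= 1 / 1.5) & (res.pvalue < 0.05)]
print(up.gene.tolist(), down.gene.tolist())
```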
Data availability
Bulk RNA sequencing datasets are available in the international public repository, Gene Expression Omnibus, under accession GSE116239 (13).
Targeted quantification of lipids in aortic macrophages from atherosclerosis-induced mice
To investigate the role of lipid metabolism of aortic macrophages in atherosclerosis, we isolated 3 types of macrophages from the aortas of high-fat diet-fed Ldlr −/− mice (n=39). The AM (CD45 + CD11b + CD64 + BODIPY low SSC low, cell number=50,000), FM (CD45 + CD11b + CD64 + BODIPY high SSC high, cell number=45,000), and NFM (CD45 + CD11b + CD64 + BODIPY low SSC low, cell number=50,000) were sorted using lipid probe-based flow cytometry (Fig. 1A). The sorted FM had abundant cytoplasm with Oil Red O-positive lipid droplets, whereas the other cells did not (Fig. 1B). Lipid extracts of each macrophage type were analyzed by nUHPLC-ESI-MS/MS under run conditions optimized in both positive and negative ion modes using a mixture of lipid standards (Supplementary Fig. 1). The base peak chromatograms of the lipid extract samples (AM, NFM, and FM) are shown in Supplementary Fig. 2. A total of 557 lipids from 21 lipid classes were identified with their molecular structures from data-dependent MS/MS experiments, of which 218 species were quantified, because the targeted quantification was based on lipids with different numbers of total carbons and double bonds (Supplementary Table 1). The identified chain structures of each quantified species are presented in Supplementary Table 2. Owing to the limited number of macrophages (45,000 cells for FM), it was not possible to determine the absolute concentrations of individual lipids based on calibration, which requires a sufficient amount of lipid extract to prepare a series of standard solutions with varying concentrations of external standards. Instead, quantification of lipids was carried out with the pooled macrophages of each group by comparing the corrected peak area, i.e., the relative peak area of an individual lipid species, to the peak area of an IS inserted into each lipid class. The focus was therefore on comparing the relative profile of lipids within each lipid class between the FM and NFM groups in comparison to that of AM. Supplementary Table 3 shows the calculated fold ratio of each lipid species in the NFM group compared to the AM group (hereinafter referred to as NFM/AM), FM/AM, and FM/NFM, together with that of in vitro oxLDL-treated peritoneal macrophages (OPM) compared to control peritoneal macrophages (CPM). Lipid species that differed significantly in at least one of the 3 comparisons (NFM vs. AM, FM vs. AM, and FM vs. NFM) are shown in Fig. 1C. Levels of lipid classes such as FA, phosphatidylcholine (PC) including lysophosphatidylcholine (LPC) and etherPC, SM, and cholesteryl esters (CEs) were increased in FM compared to both AM and NFM, as sorted in the left part of the heat map; however, levels of ether phosphatidylethanolamine (PE), PS, Cer, and TG were decreased in FM, sorted on the right side. While most lipid classes in the right column of the heat map showed the highest levels in AM, TG levels showed significant differences depending on chain length, with long-chain structures (total carbon number >50) increasing in the NFM group and decreasing in the FM group. For the visual demonstration of relative changes in amounts with the heat map, the variance of each individual species level was taken from 5 repeated measurements.
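A minimal sketch of the corrected-peak-area comparison described above (all peak areas below are hypothetical):

```python
# Sketch: each species' peak area is divided by the area of the internal
# standard (IS) spiked into its lipid class, and groups are then compared
# as fold ratios of these corrected areas. Numbers are illustrative.

def corrected_area(species_area: float, is_area: float) -> float:
    """Peak area of one lipid species relative to its class IS."""
    return species_area / is_area

fm = corrected_area(8.4e6, 2.0e6)   # e.g., one PC species in FM
nfm = corrected_area(3.9e6, 1.9e6)  # the same species in NFM
print(f"FM/NFM fold ratio = {fm / nfm:.2f}")
```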
TG and FA levels of 3 aortic macrophage populations
Changes in individual TG levels are plotted in Fig. 2A as the relative fold ratios of individual TG species in FM vs. AM and NFM vs. AM. Levels of TG with shorter acyl chains (total carbon number <50; hereafter, ScTG) exhibited a decreasing pattern from AM to NFM and FM; however, those with longer acyl chains (total carbon number >50; hereafter, LcTG) showed an increased pattern in NFM (vs. AM) but a decreased pattern in FM. This suggests that ScTG species did not accumulate in either of the intimal cell types, whereas LcTG species accumulated in NFM to a greater extent than in FM, and their accumulation was largely reduced in FM.
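A minimal sketch of the ScTG/LcTG split by total carbon number (species names are illustrative; handling of the boundary value 50 is assumed):

```python
# Sketch: classifying TG species by total acyl-chain carbon number,
# with 50 as the boundary described in the text.

def tg_group(name: str) -> str:
    """Classify a TG species like 'TG 52:6' by its total carbon number."""
    carbons = int(name.split()[1].split(":")[0])
    return "LcTG" if carbons > 50 else "ScTG"

for sp in ["TG 48:2", "TG 52:6", "TG 54:2", "TG 56:4"]:
    print(sp, tg_group(sp))
```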
The overall FA level, as shown in Fig. 2B, was increased about 2-fold in FM compared to the other 2 groups. FA levels were expressed as the corrected peak area, which is the peak area of each species relative to that of the IS of each lipid class. The amounts of individual lipid species in each lipid class in Fig. 2B are visualized with stacked bar graphs, and the relative levels of each lipid class were compared between the different macrophage types. Lipid species labeled with acyl chain information on the right side of each bar in Fig. 2B belong to the highly abundant species in each class, and "low" refers to the summed peak area of the remaining lipid species. Highly abundant species were defined as lipid species whose abundance was higher than 100%/(number of lipid species in each class). The concentration of each IS is listed in Supplementary Table 4. Because FAs detected from plastics or extraction solvents could interfere with FA analysis, background correction was applied to calculate endogenous FA signals, and the concentrations of FAs in the blank sample are listed in Supplementary Table 5. The overall levels of hydroxylated fatty acids (HFAs) in the two intimal macrophage types were similar to each other but largely different from that in AM, showing a large contrast between intimal macrophages and AM. It is noted that the relative ratio of HFA 18:1_OH to HFA 22:1_OH is reversed in both the FM and NFM groups. Individual FA levels in the 3 macrophage types are plotted in Fig. 2C, together with the total levels of ScTG, LcTG, and HFA. The TG molecular types of each FA are listed in Supplementary Table 6. The FA 18:1 level was 3.8-fold higher in FM than in NFM, whereas it was not detected in AM. However, the level of TG species containing an 18:1 acyl chain was lowest in the FM group for both the ScTG and LcTG groups. A similar pattern was observed for FA 18:2; in both cases, LcTG levels were elevated in NFM. A similar observation was made for long FAs (20:3, 22:5, and 22:3), which were not detected in either the AM or NFM groups, but whose LcTG levels were higher in the NFM group than in the FM group. In the case of HFA, FA 18:1_OH and FA 18:0_2OH, both hydroxylated forms of FA 18:2, showed opposite trends in the NFM and FM groups, with the monohydroxyl form of FA 18:2 being higher in the FM group, whereas the dihydroxyl form was lower in FM than in NFM. A similar trend was observed for FA 18:3 (Supplementary Fig. 3), where the plots of the remaining FA species (FA 12:0, 18:3, 20:2, 20:…) are shown. Decreased TG levels are associated with the release of FA from TG through lipolysis and fatty acid oxidation (FAO) to produce ATP in mitochondria (22).
Arachidonic acid (AA, or FA 20:4) is a precursor of eicosanoids, which are known to have pro-inflammatory and immunoregulatory roles (23). In particular, PI 38:4 (or PI 18:0/20:4) has been reported as a highly abundant PI species that controls the inflammation process and is a major source of AA (24). While free AA was not detected in the macrophages in this study, phospholipids (PLs) containing arachidonoyl acyl chains were examined, since AA is released from PLs by phospholipase A2 and is the precursor of lipoxin A4 and PGE2, which are involved in the activation of macrophages (25,26). Fig. 2D compares the relative amounts of 6 PLs containing an arachidonoyl chain between FM and NFM, with the levels of all 6 PL species higher in FM than in NFM. Interestingly, PLs with an 18:0/20:4 acyl chain, corresponding to PC 38:4, PE 38:4, and PI 38:4, were more accumulated in the FM group, by about 2-fold, compared to those in NFM.
Differential expression of genes related to the metabolic processes of FA and TG
To validate the genes that contribute to the TG and FA metabolic processes, which are characteristic of FM, we analyzed the previously generated bulk RNA-sequencing data (13).
The pathways of FA biosynthesis, FA beta-oxidation, and TG biosynthesis and lipolysis were primarily investigated, because the products of FA biosynthesis are used as the source for FA beta-oxidation and their products are associated with TG synthesis (22). Lipolysis of TG also positively regulates FA synthesis. The pathways are represented in a schematic diagram, and the genes regulating each process are indicated in Supplementary Fig. 4. According to previous research, Elovls and Scds are involved in the FA biosynthetic process (27) and produce acyl-coenzyme A. RNA-seq data showed that Elovl1, Scd1, and Scd2 were upregulated in FM (Fig. 3A). Palmitoyl-CoA, produced by the FA synthesis process, is oxidized by the products of multiple genes, such as Echs1, Hadh, and Acads (27). The RNA levels of these genes were higher in FM than in NFM (Fig. 3B), which coincides with the lipidomic results. Agpat1 and Agpat3 were increased in FM, enhancing the transformation of lysophosphatidic acid (LPA) to phosphatidic acid (Fig. 3C). Lpin2 and Dgat1, associated with TG synthesis (28), were decreased, whereas Pnpla2 and Lpl, which are involved in the lipolysis of TG (29,30), were increased in FM (Fig. 3C). These results can explain why the TG level was lower in FM than in AM and NFM, and suggest that metabolic pathways are differentially regulated between FM and NFM, inducing their own distinct lipidomes that affect atherosclerosis development and progression.
The levels of CEs in aortic macrophages
CEs are a major component of the lipid droplets in FM (31). In our lipidomic results, there were marked differences in CE species between FM and the other groups. The species (20:4, 18:1, 22:6, 22:4, 22:5, and 16:1) were detected in FM but not in AM or NFM. The relative amounts of CEs were also augmented in FM (Fig. 3D). Genes related to the cholesterol biosynthetic process, including Hmgcr, Cyp51, Dhcr24, Srebf2, and Lbr, were downregulated in FM (Fig. 3E); however, genes related to cholesterol esterification were increased in FM (Fig. 3F).
Ldlrap1, the adaptor of the LDL:LDL receptor complex at the cell membrane (32), was increased in FM. In the process of cholesterol esterification, Npc1 and Npc2, which transport cholesterol from the lysosomal membrane to the endoplasmic reticulum (ER) (33), were increased in FM. Soat1, which esterifies excess cellular cholesterol to CE at the ER membrane (34), was also increased in FM. However, Ch25h and Cyp27a1, which convert cholesterol to hydroxycholesterol, were downregulated in FM. The cholesterol esterification pathway with its related genes is represented in Supplementary Fig. 5.
Differential PL profiles of 3 aortic macrophage populations
Fig. 4A shows that the overall levels of some lipid classes, including PC, PI, PE, and LPC, were higher in FM than in NFM; the lipid classes plotted in Fig. 4A clearly show significant increases in all classes except LPA in FM compared to NFM. The GP pathways are intertwined in a complicated manner, so the significant genes are represented in Supplementary Fig. 7A. The bulk RNA-seq data largely coincide with the lipidomic analysis. For example, the level of PC is higher in FM than in the other groups because Pemt, which enhances the transformation of PE to PC, is increased in FM, and Pla2g4a, which hydrolyzes PC to LPC, is decreased in FM (Fig. 4B). The overall SM level in FM was higher than that in NFM, while the overall Cer levels showed the opposite trend, supporting the finding that the SM/Cer ratio was reversed in the 2 intimal macrophage groups (Fig. 4C). Because SM is synthesized from Cer, and vice versa, the opposite trend is a reasonable result. Cer is known as an athero-prone lipid because it promotes FM formation by impairing the digestion of aggregated LDL (35). The conversion between SM and Cer is represented schematically with its regulatory genes (Supplementary Fig. 7B). Even though both genes (Smpd1 and Sgms2) involved in converting between SM and Cer were increased in FM, Cer may be decreased in FM because of the increased level of Samd8, which catalyzes the conversion of Cer to Cer phosphoethanolamine (Fig. 4D).
Lipid analysis of OPM
Peritoneal macrophages treated in vitro with oxLDL, together with untreated controls, were analyzed under the same conditions employed for the aortic macrophages. The quantified lipid data for the peritoneal macrophages are listed in Supplementary Table 3. The amounts of individual lipid species in each lipid class are plotted using stacked bar graphs in Fig. 5A; the levels of PC, SM, PI, PE, and CE were much higher in OPM than in CPM, as observed in the FM group. Individual TG levels are plotted as the fold ratio (OPM/CPM) in Fig. 5B. While the levels of ScTGs and most LcTGs in OPM were somewhat reduced or did not vary significantly, 2 TGs (52:5 and 56:5) were highly accumulated, by 3- to 8-fold, in the OPM group. To compare the changing patterns of each lipid class between the FM and OPM groups relative to the NFM and CPM groups, respectively, the fold ratios of each lipid class, FM/NFM and OPM/CPM, are plotted in Fig. 5C.
DISCUSSION
Macrophage polarization, such as M1-like and M2-like, is highly associated with the development and regression of atherosclerosis (36,37). The phenotypic conversion of macrophages is a continuum, and M2 activation requires lipolysis of TG and consequent FAO, which relies on long-chain FAs (22,37,38). Since mitochondrial oxidation requires TG lipolysis prior to FAO, it is known to reduce the accumulation of lipid droplets, leading to the prevention of foam cell formation (39). In our experiments, the overall level of TG was not significantly changed in NFM, while it was significantly lower in FM than in adventitial cells. The comparable levels in NFM and AM were due to the decrease in the levels of most ScTGs, whereas the levels of most LcTGs were highly increased in NFM, offsetting each other. Most of the LcTGs showed approximately 1.5-fold higher levels in the NFM group (vs. AM), whereas those in FM were mostly reduced or not significantly changed compared to the AM group.
Among the 24 LcTG species, 2 (52:6 and 54:2) were greatly (>2.5-fold) accumulated in both intimal macrophage types compared to AM, while 3 other species (TG 56:4, 56:3, and 56:0) were largely (>2-fold) accumulated in the NFM group. In the case of peritoneal macrophages, LcTG species of the same chain lengths but with a higher degree of unsaturation (TG 52:5, 56:5, and 56:4) were largely accumulated in OPM compared to the control (CPM). This suggests that LcTG species with polyunsaturated fatty acyl chains are highly accumulated in both FM and OPM. The decreased levels of LcTG in the FM group can explain why the lipolysis of TG and the subsequent FAO of long-chain FAs, in relation to activation of the M2-like phenotype, were more efficient in FM than in NFM. However, there are differences between the TG species enriched in foamy macrophages and those in peritoneal macrophages. Considering the difference between the culture medium and the in vivo plaque milieu, OPM does not appear to perfectly represent the characteristics of foamy macrophages in vivo.
The levels of FA species in general were largely increased (<2-fold) in FM, along with the detection of various FA species. In particular, long polyunsaturated fatty acid species, such as FA (20:3, 20:5, 22:3, and 22:5), were found exclusively in FM; these are preferred substrates for FAO. This evidence suggests that TG lipolysis is more favorable in FM than in NFM.
FA can be taken up by macrophages through phagocytosis and the CD36 receptor, which is a receptor for both oxLDL and FA (40). The relatively upregulated expression of the CD36 gene and activated phagocytosis might contribute to the increase in FA levels in FM (13,40,41). Moreover, an increased FA level is known to activate the oxidation mechanism in mitochondria, because FA serves as a ligand of peroxisome proliferator-activated receptors (PPAR) (42), resulting in M2 polarization of macrophages (36). The expression of genes related to PPAR signaling pathways was reported to be upregulated in FM (vs. NFM) in a recent study (13). It was reported that the levels of FA 16:1 and 18:1 significantly increased in high-fat diet Ldlr −/− mice compared to wild-type mice (43), which is similar to the present result that both FA species increased by more than 3-fold in FM (vs. NFM). In the case of OPM, FA and HFA were not detected, showing that TG lipolysis and the subsequent oxidation of FA in relation to M2 polarization may not have proceeded during the in vitro treatment.
Since LPC levels in oxLDLs are largely increased compared to unmodified normal LDLs, the LPC levels of macrophages would be expected to be highest in FM and lowest in AM (44); however, they were lowest in NFM, as shown in Fig. 4A. While LPC has been widely known as a pro-inflammatory lipid class, recent studies have revealed that its anti-inflammatory roles can be dominant (45). In particular, 2 LPC species (16:0 and 18:0) have been reported to be associated with increased cholesterol efflux in foam cells (45). These 2 LPC species, accounting for more than 80% of the total LPC amount in this study, were increased more than 3-fold in FM (vs. NFM); however, the total LPC level was even lower in NFM than in the AM group. In particular, LPC 18:0, which has been reported to decrease pro-inflammatory cytokine levels (46), was significantly accumulated in the FM group in our study. It is known that LPC not only inhibits cholesterol biosynthesis but also reduces the cellular uptake of oxLDL, resulting in attenuation of foam cell formation and atherosclerosis progression (47,48). A recent report revealed that the expression of the inflammatory gene TLR2 (whose TLR-mediated signaling, which otherwise induces M1 polarization in macrophages, is inhibited by LPC) was suppressed in FM but elevated in NFM (13), which is consistent with our LPC data. In the case of OPM, the LPC level was lower than that in the CPM group, similar to the decreasing pattern of LPC in NFM compared with AM. Based on this evidence, NFM, which has the lowest levels of LPC among the 3 groups, seems to be less anti-inflammatory than FM.
In summary, this study presents a comprehensive lipidomic analysis of 3 different types of aortic macrophages extracted from mice with hyperlipidemia using nUHPLC-ESI-MS/MS (Fig. 6). TG species with long fatty acyl chains showed different patterns in FM and NFM, while the level of their lipolysis product, FA, was highest in FM. Lipid classes including CE, SM, and PC, which are positively associated with the uptake of LDLs, were highly accumulated in intimal macrophages, with higher levels in FM. Levels of PE, LPC, and AA-containing lipid species were lower in NFM when the 2 intimal macrophage types were compared. As a result, we observed different levels of lipid species closely related to atherosclerosis or inflammation in each group of macrophages, which provides strong evidence for the transcriptomic finding that NFM, rather than FM, are pro-inflammatory in atherosclerosis (13). Our results should be helpful for understanding the phenotypic changes of macrophages under hyperlipidemic conditions and for addressing unsolved questions related to the development and regression of atherosclerosis.
However, several issues need to be elucidated by further study. Firstly, it is necessary to define the specific lipid mediators responsible for the less inflammatory phenotype of FM. Previously, it has been shown that lipid-laden macrophages exhibit downregulated expression of genes related to inflammation through activation of the liver X receptor pathway by desmosterol. Additionally, increased FAs may induce anti-inflammatory programming via activation of PPARγ. Therefore, further investigation is needed to identify and analyze the lipid species present in foam cells and evaluate their effects on the phenotypic changes of macrophages. Secondly, the lipidomic analysis in this study was based on repeated measurements of a single lipid extract, which only demonstrates technical reproducibility. Although we observed a strong correlation between the enriched lipid species in FM and gene expression related to lipid production, it is necessary to conduct targeted lipid analysis for FAs, eicosanoids, and docosanoids. This analysis would not only confirm the differential levels of FAs among macrophages but also identify more precise lipid mediators responsible for the cellular phenotype of foam cells. Lastly, it is crucial to perform lipidomic analysis on foam cells and non-foam cells sorted from human atherosclerotic plaques and compare the results with the murine data. Since the duration of atherosclerotic lesion formation differs significantly between humans and mouse models, the lipid species accumulated in foam cells may vary, and their metabolic processes may differ as well.
Figure 2. Decreased TG level but increased FAs in FM. The analysis was repeated 5 times in polarity switching mode for quantification, with the amount of lipid extract equivalent to 200 cells injected per run. (A) Fold ratio of TG species in NFM and FM macrophages with respect to those of AM. The vertical dashed line marks total carbon number 50. ScTG indicates the TG group with short chains, and LcTG the TG group with long chains. (B) The cellular levels of FA and HFA (relative to IS) compared between macrophage groups. Highly abundant lipid species in each class are labeled with acyl chain information, and "low" represents the summed amount of all low-abundance species. (C) Corrected peak areas of 6 FAs showing significant differences in FM (vs. NFM), compared with those of TG species containing each corresponding fatty acyl chain along with their oxidized products (HFA). (D) Comparison of PLs containing an arachidonoyl (20:4) acyl chain between FM and NFM. ScTG and LcTG refer to TG groups with total carbon numbers below 50 and above 50, respectively. Numbers inside the parentheses refer to the counted number of TG species in the short-chain and long-chain TG groups.
Figure 4.
Fig. 4A shows that the overall levels of several lipid classes, including PC, PI, PE, and LPC, were higher in FM than in NFM. The lipid classes plotted in Fig. 4A show significant increases in FM compared to NFM for all classes except LPA. Plots of the remaining 11 lipid classes, with the exception of diacylglycerol and TG, are shown in Supplementary Fig. 6.
Figure 5. Lipid analysis of OPM. (A) Stacked bar graphs showing the total amount of each lipid class compared between in vitro cultured peritoneal macrophage groups. (B) Fold ratio of TG species in in vitro OPM with respect to those of CPM. The vertical dashed line marks a total carbon number of 50; ScTG denotes the TG group with short chains and LcTG the group with long chains. (C) Fold ratio of the total amounts of each lipid class in foamy to non-foamy aortic macrophages (FM/NFM), compared with those of peritoneal macrophages treated with oxLDL to controls (OPM/CPM). LPG, lysophosphatidylglycerol; LPE, lysophosphatidylethanolamine; PA, phosphatidic acid; DG, diacylglycerol.
Figure 6. Schematic diagram representing the lipid species in aortic macrophages. The abundance and heterogeneity of lipid species in FM, NFM, and AM isolated from murine atherosclerotic aorta demonstrated that each macrophage population has a distinct proportion of lipid species. WD, Western diet; KO, knockout; LPE, lysophosphatidylethanolamine; PA, phosphatidic acid; PG, phosphatidylglycerol; LPG, lysophosphatidylglycerol. | 2023-09-01T15:12:30.783Z | 2023-06-15T00:00:00.000 | {
"year": 2023,
"sha1": "98b9893f6af2d6929ce505448329b39216a02312",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4110/in.2023.23.e28",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68d562190848118a2da82da15109b3958df09062",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
2649981 | pes2o/s2orc | v3-fos-license | Truth-telling to the patient, family, and the sexual partner: a rights approach to the role of healthcare providers in adult HIV disclosure in China
Patients' rights are central in today's legislation and social policies related to health care, including HIV care, not only in Western countries but around the world. However, given obvious socio-cultural differences, it is often asked how or to what extent patients' rights should be respected in non-Western societies such as China. In this paper, it is argued that the patients' rights framework is compatible with Chinese culture, and that from the perspective of contemporary patients' rights, healthcare providers have a duty to disclose the diagnosis and prognosis truthfully to their patients; that the Chinese cultural practice of involving families in care should, with consent from the patient, be promoted out of respect for patients' rights and well-being; and that healthcare providers should be prepared to address the issue of disclosing a patient's HIV status to sexual partner(s). Legally, the provider should be permitted to disclose without consent from the patient but not obliged to do so in all cases. The decision to do this should be taken with trained sensitivity to a range of ethically relevant considerations. Post-disclosure counseling or psychological support should be in place to address concerns about the potentially adverse consequences of provider-initiated disclosure and to maximize the psychosocial and medical benefits of disclosure. There is an urgent need for healthcare providers to receive training in ethics and disclosure skills. This paper concludes with some suggestions for improving the centerpiece Chinese legislation, the State Council's "Regulations on AIDS Prevention and Control" (2006), to further safeguard the rights and well-being of HIV patients.
Patients' rights, including the right to truthful information, the right to informed consent and choice, the right to privacy, the right to freedom from discrimination, and the right to refuse even life-saving treatment, are now central in health law and policy around the world. The human rights approach has been promoted in such international documents as the UNESCO Universal Declaration on Bioethics and Human Rights (2005). HIV has presented a serious challenge to health and development since it was first reported in the 1980s. Because HIV transmission may be associated with behaviors viewed as morally wrong (e.g., drug use, commercial sex, etc.), HIV-related stigma and discrimination are persistent around the world. Thus, the disclosure of HIV-seropositive status to a patient or related persons raises complex ethical issues in practice. In many situations, protecting patients' rights to privacy and freedom from discrimination may conflict with HIV secondary prevention purposes and with the rights and interests of others (e.g., the sexual partners of HIV patients). The subject of patients' rights and HIV disclosure has been widely discussed and debated globally. However, despite the increasing prominence of such rights, questions persist about whether these rights should be applicable, and how they can be implemented, in diverse socio-cultural settings, particularly in non-Western societies such as China.
In this paper, we focus upon three major practical issues facing healthcare providers: truth-telling to the patient, the role of family, and the duty to protect the sexual partner of the patient, and argue that the patient's rights framework can serve as a meaningful guide for Chinese healthcare providers in addressing HIV disclosure and reforming certain conventional but ethically problematic practices. Specific attention is given to the provider's duty to inform the patient and protect patient privacy, and on the practical skills, care and sensitivity required for HIV disclosure.
Current norms and practices: state regulation and a case
Like other parts of the world, contemporary China has been moving toward "an age of rights" (e.g., Xia, 2000), and the progress of this movement has been particularly rapid in the fields of healthcare and biomedical research including HIV disclosure and AIDS care. More and more patients and lay people in China are becoming conscious about and taking seriously their rights in health care, and increasing numbers of medical professionals are willing to respect their patients' rights. This reflects a global shift in medical ethics over the last 50 years away from paternalistic models of care, grounded on a certain understanding of beneficence and non-maleficence, toward more patient-centered models grounded on autonomy and truthfulness.
On the national and social policy level, the patients' rights approach has already begun to be embraced, even in official discourse. In relation to patients with HIV, it incorporates the patient's rights to truthful information, privacy, and freedom from stigma or discrimination, along with a duty to prevent further infection. The State Council's "Regulations on AIDS Prevention and Control" (2006) constitutes the centerpiece public policy document on the ethical, social, and medical issues related to HIV and AIDS. On the one hand, the legislation places a number of obligations on the patient, including a duty to provide truthful information to medical professionals and the concerned public health authorities, and to inform sexual partner(s) of the diagnosis in an honest and timely manner (Article 38). It stipulates that the patient has a legal duty to take necessary measures to prevent the spread of the disease and not to spread the virus to others intentionally (Article 38), and that the patient bears civil and criminal liability for intentionally spreading HIV to others (Article 62). At the same time, the legislation in general aims to protect the legitimate rights and interests of patients with HIV or AIDS, including the right to be free from various kinds of discrimination and stigmatizing treatment (Article 3). It gives legal force to the patient's rights to truthful information and confidentiality. Article 42 stipulates that medical professionals have a duty to disclose an HIV diagnosis to the patient himself or herself, or to the guardian if the patient is incompetent or has diminished competence. Article 39 states: "Without the permission of the individual or the guardian, any institution or individual should not disclose the name, address, work unit, picture, medical history and other possible identifying information of the patient with HIV or AIDS."
Meanwhile, HIV prevention and AIDS care in China face serious ethical and socio-cultural challenges, as the following case demonstrates. Mrs. Wang (not her real name), a woman in her sixties in northern inland China, was infected with HIV by her husband, who was a paid blood donor in the 1990s. Her husband died of AIDS several years ago. Mrs. Wang now lives with her daughter-in-law and a 5-year-old grandson in the village. Her son, Mr. Li, went to work in Guangzhou, a developed coastal city in the south, to increase the family income. Mrs. Wang decided not to take free HIV testing, partly because of her fear of HIV-related stigma and discrimination, including the possibility of being rejected by her son and his wife if she were diagnosed as HIV-infected, and partly because she did not feel that early diagnosis and treatment could benefit her. She remarked: "I have had a long life. I don't care about its length at all." However, her son was concerned about her HIV serostatus because, as a grandmother, Mrs. Wang was taking care of his son, and he worried that she might transmit HIV to the boy. At her son's request, Mrs. Wang was tested for HIV, and the result was positive. Following conventional Chinese practice, neither the medical professionals nor Mr. Li told Mrs. Wang about the diagnosis. Nevertheless, although semiliterate, Mrs. Wang was aware of the seriousness of her illness because she had symptoms similar to her husband's and because her son prevented her from touching her grandson. Other people in the village were gossiping about the death of her husband and about Mrs. Wang's illness. Mr. Li felt little compassion for his mother, partly because he believed that he had lost face in the village because of his parents. Meanwhile, Li was concerned about his own HIV status because he had had unprotected sex with sex workers in Guangzhou. His HIV test result turned out to be positive as well. His physician in Guangzhou urged him to tell his wife, but Mr. Li hesitated. The physician wondered whether, despite her duty to maintain patient confidentiality, she ought to inform her patient's wife, who lived far from Guangzhou. She had no specific clinical guideline or regulations to suggest how to deal with this situation, and relied instead on her own past experiences and those of her colleagues. She had previously disclosed a patient's HIV-positive status to his wife for secondary prevention purposes; as a consequence, the wife had divorced the patient, and the angry patient had threatened to kill the physician.
The case raises many issues, including state and individual responsibility for HIV prevention and the stigmatization related to HIV/AIDS. In the following, we focus upon the role of healthcare providers in HIV disclosure from a patients' rights approach.
Human and patients' rights as a Chinese value
Researchers on AIDS in China have observed that "Under the influence of Western culture, service providers and decision makers in China have gradually began to recognize patients' rights in decision making and the issue of confidentiality" (Li, Lin, Wu, Lord, & Wu, 2008, pp. 240-241, italics added). Western culture has had a positive and important role in advocating for changes in China through a number of channels. For instance, many programs and research projects relating to HIV in China are supported by Western institutions that emphasize the rights of HIV patients. However, it should be recognized that this emerging age of patients' rights in China is not solely due to such Western influences, but is rather a development motivated by Chinese patients, medical professionals, and lay people, and is endorsed by values inherent to Chinese cultural traditions. The idea that Chinese advances in human and patients' rights have been caused only by Western influence, a standpoint held by many Westerners and endorsed in the Chinese official discourse, has degraded the agency of Chinese people.
In both China and the West, Chinese cultural traditions have often been characterized as collectivistic or authoritarian in nature and thus as radically different from the traditions of the West and from such notions as human and patients' rights. For example, in their empirical studies on HIV disclosure in China, researchers have employed this Chinese-Western contrast to interpret their findings and to discuss the normative dimension of how HIV disclosure should be practiced in China (Chen et al., 2007; Li, Lin, Ji, Sun, & Rotheram-Borus, 2007, 2008). For them, the norm of patients' rights represents a Western value that is not culturally compatible with Chinese values and thus should not be ethically applicable in China. It is not possible here to present an in-depth theoretical account of human and patients' rights in the Chinese milieu (e.g., Nie, 2005, 2011). Nevertheless, a few points should be made to briefly indicate the appropriate place of patients' rights within Chinese ethical traditions. The movement for human and patients' rights has been a truly global discourse, involving people from all continents and countries, including China (e.g., Lauren, 2003). Conceptions of human rights have been widely discussed, debated, and integrated into Chinese intellectual and political life since the late Qing dynasty (e.g., Fung, 2000; Svensson, 2002). They are articulated, or at least implied, in classical Chinese moral and political philosophy, most systematically in the work of Meng Zi (Mencius), a founder of Confucianism (see Roetz, 1993, 1999; Xia, 2002). Though privacy is currently less respected in China than in Western countries, pioneering studies demonstrate that classical texts from as early as the Warring States period (481-221 BCE) clearly expressed an acute awareness of and respect for privacy in healthcare practice. Similar examples may be found throughout Chinese history, for instance in the medical case histories of the Ming and Qing dynasties of late imperial China (McDougall & Hansson, 2002).
In other words, the thesis that a patients' rights approach is culturally incompatible with China significantly oversimplifies the great richness and future potential of indigenous Chinese cultural traditions (Nie, 2011). Admittedly, human and patients' rights are conceptually complicated and may have very different theoretical justifications and practical meanings in the Chinese socio-cultural context. Our point here is that, rather than rejecting the value of patients' rights on the basis of perceived but often stereotyped Chinese-Western cultural differences, the rights approach can serve as an ethical aspiration to reform certain conventional cultural practices and to revive vital but misrepresented traditional Chinese moral ideas and ideals.
For healthcare providers the serious ethical and practical challenge is thus not whether patients' rights matter in China, but how these rights should be respected and what ought to be done when these rights appear to be in conflict with other ethical concerns, such as the public good, the rights of others, and cultural practice.
Truth-telling to patients
The cross-cultural differences and transcultural similarities of truth-telling on a global scale and in history are far more complicated than implied by the perceived dichotomy of disclosure in the West versus non-disclosure in non-Western societies such as China (Nie & Walker, 2015). Yet it is true that, as clearly shown in the case, in contrast to the practice of direct and truthful disclosure in most Western countries today, medical professionals in contemporary China often withhold information about terminal illnesses from patients, inform family members only, and sometimes even collude with relatives in lying to patients. Treating HIV as a kind of terminal illness like cancer, some physicians and family members still practice the so-called protective treatment, i.e., hiding the HIV diagnosis from patients because they presume that truth-telling would cause psychological harms and destroy hope (Qiao, Nie, Tucker, Rennie, & Li, 2015).
However, with the development of patients' rights in China, more Chinese healthcare providers are now truthfully informing patients of their condition. Numerous sociological surveys conducted throughout mainland China have shown that the great majority of Chinese patients want truthful information about their medical condition, even in terminal cases (for a review of the related literature, see Nie, 2011, pp. 120-123). Historically, while in the West truthful disclosure regarding a terminal illness did not become the ethical norm until the 1960s and 1970s (or even later), many primary historical materials, including the biographies of hundreds of ancient medical sages and famous physicians in various dynasties, show that there was a long (if now forgotten) Chinese tradition of truth-telling, dating back at least 26 centuries. The Confucian moral outlook mandates truthfulness as a basic ethical principle and a cardinal social virtue by which physicians ought to be guided. So the current shift away from the practice of avoiding truthful disclosure is not so much an imitation of Western (and thus foreign) ways as a return to a long-neglected indigenous Chinese tradition (Nie, 2011, pp. 98-133; 2012). For patients with HIV, the right to information and the right to make an informed choice in a context free from undue fear or pressure imply timely disclosure of the diagnosis and treatment options, and that the diagnosis remains confidential to the patient-physician relationship. These rights fit squarely with the therapeutic goal of medicine, as prompt HIV disclosure to an HIV-infected individual has important implications for ensuring high-quality services throughout the continuum of HIV care. By contrast, HIV testing that is not tightly linked to prompt disclosure to the infected individual (and in turn their partners) can result in further HIV transmission, delays in initiating anti-retroviral therapy, and persistent high-risk behaviors. Thus, though disclosing an HIV diagnosis is likely to be difficult for the patient and perhaps harmful, hiding such information is likely to be more so (Nie & Walker, 2015).
The general harms of physicians not truthfully disclosing a terminal diagnosis are vividly portrayed in literary masterpieces such as Tolstoy's The Death of Ivan Ilyich and carefully argued in contemporary bioethical classics (e.g., Katz, 2002). Lying about a terminal illness may leave patients feeling isolated, uncertain, and abandoned, and prevent them from participating in decision-making about medical treatment. Such harms can also be identified in the case set out earlier by imagining the depths of fear, isolation, and rejection felt by Mrs. Wang as she is left to guess the results of her test while observing the change in Mr. Li's behavior toward her. Patient-physician trust is fundamental to effective health care, and truthfulness is the foundation of trust. Indeed, the practice of systematically withholding critical information from patients may be a contributing factor to the current crisis of patient-physician trust in China.
The role of family
The active role of the family in various social networks, including the patient-physician relationship, has been widely discussed as among the most distinctive aspects of Chinese culture (e.g., Cong, 2004). This has implications for HIV disclosure and care provision (Chen et al., 2007; Li et al., 2008, 2009). HIV research in China has shown how kinship ties and family networks provide powerful support in a generally unsupportive local environment. Social network research from China has shown that HIV-infected individuals are more likely to disclose their HIV status to family members who provide social support (Zang, He, & Liu, 2014). Often, physicians first disclose the HIV diagnosis not to the patient but to a family head such as a parent or a spouse. They then assess the patient's condition and family situation to decide how to inform him or her. In doing so, they assume that the patient's family should be involved in treatment as early as possible (Qiao et al., 2015).
This practice of informing family members before patients illustrates a potential tension between what has been called "Chinese familism" and patients' rights to information, privacy, and independence, a tension which many physicians experience directly in their practice. However, this can be another "false dichotomy" (Nie, Smith, Cong, Hu, & Tucker, 2015), as the involvement of the family need not be in conflict with the patient's rights. As with health care generally, for HIV care to be fully effective it must engage with the patient's social and familial networks, and research has shown that openness about HIV and support in Chinese families have a positive impact on HIV patients (Li et al., 2009). This coheres with the right to support which is accorded to patients in most jurisdictions, along with the other rights that have been discussed. Moreover, it does not negate the importance of those other rights, and when involving families in a patient's care the healthcare provider should be alert to the possibility that the patient may need to keep some information private, or to maintain some degree of independence. While family involvement is often very beneficial to patients, it is not always so. This is shown in Mr. Li's disregard for Mrs. Wang, and in the way family relationships can break down under a sense of social shame. The provider needs to be sensitive to such dynamics and exercise careful judgment.
Disclosure to the sexual partner
While few would now directly contest the importance of the rights to truthful disclosure, privacy, and confidentiality, it is also widely recognized that such rights should be moderated by the rights of others, and by the duties that a healthcare provider has to others in the community and to public health in general (e.g., Gillett & Walker, 2013). Perhaps the most prominent ethical question around HIV disclosure is whether health providers have a duty to disclose directly to the patient's family members, especially sexual partner(s). Like HIV care in general, this is a universal moral challenge for healthcare providers. Though partner notification for sexually transmitted infections (including HIV) is an essential measure for disease control, a systematic literature review from China demonstrated that it has not been widely implemented, and that there is an urgent need in China for policies and guidelines regarding partner notification (Wang, Peng, Tucker, Chon, & Chen, 2012). While many Chinese medical professionals normally inform patients themselves first and then encourage them to tell their sexual partners for secondary prevention, it is common for healthcare providers to have difficulty persuading their patients to ensure that their sexual partners undertake HIV testing. As a result, many health professionals have to resolve an acute ethical conflict between protecting patients' confidentiality and preventing possible serious harm to another person (Qiao et al., 2015).
Ethically speaking, it is generally thought that people have a duty to inform others of serious risks and to take reasonable steps to prevent the perceived harm from occurring. However, the application of this basic rule is not always straightforward. In the context of health care, the healthcare provider must consider the impact of the information on the patient, including the effect this may have on ongoing care, the degree of benefit that an intervention could reasonably achieve, and other harms that may result as a consequence. These factors are especially salient in caring for patients with HIV, and make the ethical task of healthcare providers involved in such care particularly delicate. While it is clearly good if those at risk of infection are informed and assisted with preventive strategies, or treated if they are found to be infected, disclosure of an HIV diagnosis without the consent of the patient may lead to other serious harms (for a summary of this debate see Beauchamp & Childress, 2009, pp. 307-309). There is, for instance, a concern that people will avoid being tested for fear of being denied privacy, and that consequently those who are infected or at risk will remain outside medical help.
The healthcare professional should, whenever necessary, clearly inform patients that they (i.e., the patients) have a moral and legal duty to tell their sexual partner of their condition. If the patient refuses to do this, the healthcare professional in China (as elsewhere) should be allowed to breach confidentiality and take measures to see that those at risk are informed, so that they and others related to them can receive appropriate care. However, healthcare providers should not be required to do this in all such circumstances. The laws surrounding these actions should give weight and scope to the judgments of the responsible practitioner, because the provider is best placed to determine whether there is a clear risk to others and a definite refusal on the part of the patient to inform them of this risk, and hence a need to refer the matter to a third party without authorization from the patient. In assessing whether or not it is necessary to do this, the healthcare provider must maintain a therapeutic commitment to the patient, be highly sensitive to the nuances of the situation, and exercise refined judgment. It is appropriate that this therapeutic commitment include a concern for those associated with the patient and at risk of infection, and that this concern include an expectation that the patient will inform them of this risk. Likewise, it is appropriate that the healthcare provider offers to support the patient in delivering this information, with an awareness of the importance of social and familial networks for the patient's well-being (Gillett, 2004, pp. 151-153). It is within the context of this kind of relationship that the provider is able to make the required judgment. The law, though essential in sustaining good healthcare practice, cannot take account of these relational factors, and thus could not ensure good management of such situations apart from such judgment. A law automatically forcing disclosure would introduce a coercive element into the patient-physician relationship, and may cause as much harm as it prevents.
Recommendations for policy and healthcare education
Following the above discussion, some recommendations can be proposed to help improve policy guidelines. First, although the State Council's legislation aims to protect the rights of patients with HIV or AIDS, including the rights to truthful information and confidentiality and to freedom from various kinds of discrimination, a general framework of patients' rights is lacking, or at best only implicit. We propose adding a statement in Article 3 that "the legislation safeguards the rights of HIV patients". Second, the cultural practice of an active role for family members in HIV disclosure, at least for a significant portion of patients, should be recognized. Wherever possible, healthcare professionals have a duty to facilitate the supportive role of the family and other social networks. Third, legislation and policy guidelines should not avoid addressing the difficult issue of disclosing to the sexual partner. As we have argued above, healthcare professionals should be allowed, but not necessarily obliged, to disclose directly to the patient's sexual partner.
For healthcare providers to be adequately equipped to deal with the ethical challenges involved in HIV disclosure and HIV care in general, healthcare training and continuing education need to incorporate the ethical, psychological, and social dimensions of health care. It is critical that healthcare providers respect patients' rights and learn how to apply them in ethically complex situations. They must be taught the ethical basis of HIV disclosure policies, and how to communicate the diagnosis in a caring and supportive manner. This in turn requires appropriate training in disclosure skills for healthcare providers and in post-disclosure counseling. Equally, healthcare providers need to understand the familial and social environment of an HIV-infected individual and take this into account when disclosing information. Psychological support should be in place to address the potentially adverse consequences of provider-initiated disclosure and to maximize the benefits of disclosure. Thus, the ethical issues described are not isolated to a few aspects of clinical practice, nor are they merely concerned with high-minded ideals or abstract principles. Rather, they have to be worked out in each particular context with sensitivity to the differences each case may present, and within a system integrating the biomedical, psychological, and social elements of care.
Conclusions and limitations
This paper has discussed the importance of patients' rights in China and the practical implications for healthcare providers caring for those with HIV, focusing on the need to disclose information directly to the patient, the issues of engaging families in care, and the ethical difficulties around informing sexual partners and others at risk. We have left out some important related issues. First, we have only touched on the duty of government in protecting the rights of HIV patients. The human and patients' rights movement has served as a powerful moral and political aspiration, partly because it limits the power of governments and of nation-states over communities and individuals, but also because it imposes duties upon governments and professionals to promote the well-being of individuals and patients. Related to this, there are potential legal barriers to implementing these approaches that should be considered in advocating for a more rights-based approach to HIV disclosure practices in China. China has historically had a weak rule of law, and the current capacity of the legal system to enforce rights-based policies is relatively poor (Peerenboom, 2002). The gradual expansion of an independent legal system now underway in China may facilitate the use of rights-based approaches. Second, we have not touched upon the issue of whether and how parental HIV information should be disclosed to children (Qiao et al., 2013a, 2013b). Furthermore, the role of gender in patients' rights to truthful information and decision-making needs to be considered. Feminism in general, and a feminist human rights approach in particular, can offer insight into and practical proposals for eradicating persistent gender-related discrimination in China (e.g., Nie, 2004, 2010). Finally, as has been widely acknowledged, the social stigma associated with HIV has greatly hindered the care of HIV patients in China (e.g., Chen et al., 2011). A patients' rights approach can help to address this significant problem. All these dimensions certainly deserve separate studies.
To conclude, the ethical framework of patients' rights can serve as a meaningful guide for medical professionals in addressing the ethical challenges of HIV disclosure in the Chinese socio-cultural context. In adopting this approach, China will continue to contribute to the global struggle to ensure that HIV sufferers have not only adequate health care but also the rights and dignity of which neither disease nor socio-cultural environment should deprive them. | 2016-05-12T22:15:10.714Z | 2015-11-02T00:00:00.000 | {
"year": 2015,
"sha1": "59644ad7545b9abfddb8c3a26912f957a9b38461",
"oa_license": "CCBYNC",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4685610",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "6beeead8f3cfe13c98b838db62b8aec5631e3e81",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
39410973 | pes2o/s2orc | v3-fos-license | Inflammatory and Oxidative Responses Induced by Exposure to Commonly Used e-Cigarette Flavoring Chemicals and Flavored e-Liquids without Nicotine
Background: The respiratory health effects of inhalation exposure to e-cigarette flavoring chemicals are not well understood. We focused our study on the immunotoxicological and oxidative stress effects of these e-cigarette flavoring chemicals on two types of human monocytic cell lines, Mono Mac 6 (MM6) and U937. The potential of these flavoring chemicals to cause oxidative stress was assessed by measuring the production of reactive oxygen species (ROS). We hypothesized that the flavoring chemicals used in e-juices/e-liquids induce an inflammatory response, cellular toxicity, and ROS production. Methods: Two monocytic cell types, MM6 and U937, were exposed to commonly used e-cigarette flavoring chemicals (diacetyl, cinnamaldehyde, acetoin, pentanedione, o-vanillin, maltol, and coumarin) at doses between 10 and 1,000 μM. Cell viability and the concentrations of the secreted inflammatory cytokine interleukin 8 (IL-8) were measured in the conditioned media. Cell-free ROS produced by these commonly used flavoring chemicals were also measured using a 2′,7′-dichlorofluorescein diacetate probe, and the DCF fluorescence data were expressed as hydrogen peroxide (H2O2) equivalents. Cytotoxicity due to exposure to selected e-liquids was assessed by cell viability and by the IL-8 inflammatory cytokine response in the conditioned media. Results: Treatment of the cells with flavoring chemicals and flavored e-liquids without nicotine caused dose-dependent cytotoxicity. The exposed monocytic cells secreted the IL-8 chemokine in a dose-dependent manner compared to the unexposed cell groups, indicating a biologically significant inflammatory response. Measurement of cell-free ROS produced by the flavoring chemicals and e-liquids showed significantly increased levels of H2O2 equivalents in a dose-dependent manner compared to the control reagents. Mixing a variety of flavors resulted in greater cytotoxicity and higher cell-free ROS levels than treatments with individual flavors, suggesting that mixing multiple flavors of e-liquids is more harmful to users. Conclusions: Our data suggest that the flavorings used in e-juices can trigger an inflammatory response in monocytes, mediated by ROS production, providing insights into potential pulmonary toxicity and tissue damage in e-cigarette users.
INTRODUCTION
E-cigarettes are gaining popularity among American youth, mainly due to the availability of over 500 brands with over 7,700 uniquely flavored e-juices (Zhu et al., 2014). These flavoring chemicals often carry a generally recognized as safe (GRAS) classification when used in foods. E-cigarette consumption has increased greatly over recent years, especially among American youth, primarily because of flavors that are marketed with alluring names (Farley et al., 2014; Ambrose et al., 2015). With the decline in cigarette consumption, e-cigarettes are advertised as a healthier alternative, as the flavorings used in e-cigarettes are considered safe for ingestion (Berg et al., 2014; Klager et al., 2017). E-cigarette use has increased among adolescents, and the number of non-cigarette-smoking youth who use e-cigarettes has tripled over the past years. This has become a serious public health concern, as non-smoking youth who use e-cigarettes are twice as likely to go on to consume conventional cigarettes (Bunnell et al., 2015; White et al., 2015). Moreover, some of the flavors used in e-liquids pose a potential health risk for their users (Allen et al., 2016; Kosmider et al., 2016; Gerloff et al., 2017).
An electronic nicotine delivery system (ENDS), commonly known as an e-cigarette, is a battery-powered device that delivers aerosolized nicotine to its users in the form of vapor instead of smoke. It is assumed that e-cigarettes do not cause the lung-related diseases associated with toxic tobacco, since e-cigarettes lack the combustion of tobacco. Therefore, it is generally thought that the effects of e-cigarettes are relatively less harmful than those of conventional cigarettes. However, the use of e-cigarettes should not be taken lightly, because they have been on the United States market for only 10 years and more research needs to be done on e-cigarette constituents and their potential health effects. At present, e-liquids, cartridges, and other vape products undergo minimal regulation by the Food and Drug Administration, FDA (Hutzler et al., 2014). E-liquids contain propylene glycol, nicotine, and flavoring chemicals, including diacetyl, cinnamaldehyde, acetoin, maltol, and pentanedione, as well as other flavors and flavor-enhancing chemicals (Allen et al., 2016). E-liquids come in a myriad of flavors at nicotine concentrations ranging from 0 to 36 mg/mL (Davis et al., 2015). However, e-liquid constituents and their potential adverse effects are not well understood, and there is much scientific uncertainty about these products, posing a potentially unrecognized respiratory health hazard to users (Barrington-Trimis et al., 2014). In this study, we focused only on nicotine-free e-juices, as the effects and mechanisms of nicotine are well established. These e-liquids can be categorized based on their flavor profiles. The categories include alcohol, berry, cake, candy, coffee/tea, fruit, menthol, and tobacco (Table 1). Some of these flavors are pineapple coconut, cherry, cinnamon roll, café latte, cotton candy, melon, and tobacco.
The e-liquid manufacturers market these liquids with alluring names, such as Cotton Candy, Oatmeal Cookie, and Tutti Frutti, that are especially appealing to young adults (Allen et al., 2016). Vaping exposes the lungs to these flavoring chemicals when the e-liquids are heated and inhaled, via a mechanistic pathway similar to the inhalation of flavoring chemicals at microwave popcorn factories and coffee-roasting plants (Bailey et al., 2015).
The flavors used in e-cigarettes are known to cause inflammatory and oxidative stress responses in lung cells (Baggiolini and Clark-Lewis, 1992; Aw, 1999; Lerner et al., 2015b; Gerloff et al., 2017). In this study, we assessed the inflammatory response of monocytic cells to exposure to nicotine-free e-liquid flavors and to commonly used e-liquid flavoring chemicals, such as diacetyl, cinnamaldehyde, pentanedione, acetoin, maltol, ortho-vanillin, and coumarin. We assessed inflammation by quantifying interleukin 8 (IL-8), a major pro-inflammatory marker primarily produced by macrophages and involved in neutrophil recruitment during inflammation (Moldoveanu et al., 2009). The potential of these flavoring chemicals and e-liquids to cause oxidative stress was assessed by a cell-free reactive oxygen species (ROS) assay. We hypothesized that the inflammatory response to acute exposure to e-liquids and flavoring chemicals is mediated by oxidative stress, and that these responses are dose-dependent.
Scientific Rigor
We used a rigorous and unbiased approach during experiments and data analysis.
Classification of e-Liquid and Flavors
We classified the e-liquids based on their flavor characteristics (Table 1).
Culturing U937 and Mono Mac 6 (MM6) Cells
U937 monocytic cells, derived from human pleural tissue, were obtained from ATCC. Cells were cultured in complete RPMI 1640 medium with 5% FBS and 1% penicillin/streptomycin in T75 flasks and grown to the required density. Cells at passages below 10 were seeded at 500,000 cells per well in 24-well plates with 1 mL of complete RPMI 1640 medium with 1% FBS. After incubating the cells overnight, they were treated with flavoring chemicals or flavored e-liquids.
The human monocyte-macrophage cell line Mono Mac 6 (mature monocytes-macrophages), which was established from the peripheral blood of a patient with monoblastic leukemia, was grown in RPMI 1640 medium supplemented with 10% FBS, 2 mM L-glutamine, 100 μg/ml penicillin, 100 U/ml streptomycin, 1% nonessential amino acids, 1 mM sodium pyruvate, 1 μg/ml human holo-transferrin, and 1 mM oxaloacetic acid. The cells were cultured at 37°C in a humidified atmosphere containing 5% CO2. When sufficient density was reached, the cells were seeded in 6-well plates at a density of 1 × 10⁶ cells in 2 mL of supplemented medium with 1% FBS and incubated at 37°C with 5% CO2 overnight, prior to exposure of the cells to flavoring chemicals or e-liquids. Cells were incubated in low-serum medium (1% FBS) to reduce unwanted stimulation of the cells and background cytokine levels; this serum starvation allowed us to measure subtle changes in cytokine levels due to the treatment of interest.
Cell Treatments and Collection of Conditioned Media
Serum-deprived U937 and MM6 cells were treated with the flavoring chemicals diacetyl, cinnamaldehyde, acetoin, maltol, pentanedione, o-vanillin, and coumarin. Each flavoring chemical was added to designated wells in triplicate at varying concentrations between 10 and 1,000 μM. This wide concentration range was chosen based on our earlier publication (Gerloff et al., 2017) and on the aim of assessing the inflammatory/oxidative stress response elicited in macrophages with minimal cellular toxicity. Twenty-four hours post-treatment, the conditioned media were collected by centrifugation of the MM6 cell suspension at 1,000 rpm for 5 min and the U937 cell suspension at 125 g for 7 min. Collected supernatants were frozen at -80°C for cytokine assessment. The viability of the cells was measured after re-suspending the cells in PBS. U937 cells were also treated with a selected number of flavored e-liquids without nicotine at 0.25% and 0.5% concentrations.
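As a rough illustration of how such a dose series can be prepared, the sketch below applies the standard dilution relation C1V1 = C2V2. The 100 mM stock concentration and 1 mL well volume are assumed for illustration only and are not values stated in the paper.

```python
# Sketch of the working-solution arithmetic behind a 10-1,000 uM dose series,
# using C1*V1 = C2*V2. Stock concentration and well volume are hypothetical.

def stock_volume_ul(stock_mM: float, target_uM: float, well_ml: float) -> float:
    """Microliters of stock to add to one well to reach the target dose."""
    stock_uM = stock_mM * 1000.0                      # convert mM stock to uM
    return target_uM * (well_ml * 1000.0) / stock_uM  # V1 = C2 * V2 / C1

# Example: 100 mM stock, 1 mL of medium per well (added volume neglected).
for dose in (10, 100, 500, 1000):                     # uM targets from the study
    print(f"{dose:>5} uM -> {stock_volume_ul(100.0, dose, 1.0):.1f} uL stock")
```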
The flavored e-liquids used for treatments included Strawberry Zing, Café Latte, Pineapple Coconut, Cinnamon Roll, Fruit Swirl, Mega Melons, Mystery Mix (menthol flavor), American Tobacco, Grape Vape, Very Berry, and Mixed Flavors (an equal-proportion mixture of the e-liquids). Untreated and propylene glycol-treated cell groups served as the control and solvent control groups, respectively.
Cytotoxicity via Cell Viability Assessment
Viability was determined by acridine orange (AO) and propidium iodide (PI) staining for U937 and MM6 cells at plating and after treatment with flavoring chemicals and e-liquids. AO/PI staining and viability determination were performed by combining 20 μL of cell suspension with 20 μL of AO/PI staining solution; 20 μL of the stained cells were then added to a Cellometer counting chamber and analyzed using a fluorescent Cellometer (Nexcelom Bioscience, Lawrence, MA). At the end of the analysis, the Cellometer automatically reported the live and dead cell concentrations as a percentage.
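The percent-viability readout reported by the instrument reduces to a simple live/(live + dead) calculation, sketched below with hypothetical counts; the Cellometer's internal gating is proprietary, so this is only the arithmetic of the final step.

```python
# Minimal sketch of the percent-viability calculation behind the AO/PI
# readout: AO stains all nucleated cells, PI enters only dead cells.
# The counts below are hypothetical.

def percent_viability(live_count: int, dead_count: int) -> float:
    """Percent of live cells among all counted (AO-positive) cells."""
    total = live_count + dead_count
    return 100.0 * live_count / total if total else 0.0

# Example: 8,200 live and 1,800 dead cells counted in one chamber.
print(f"{percent_viability(8200, 1800):.1f}% viable")  # -> 82.0% viable
```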
Cell-Free ROS Assay for Flavoring Chemicals and Flavored e-Liquids
The relative levels of OX/ROS produced by flavoring chemicals or e-cigarette vapor were determined using the 2′,7′-dichlorofluorescein diacetate (H2DCF-DA) fluorogenic probe (EMD Bioscience, CA). A spectrofluorometer (Turner Quantech fluorometer, Model FM109535, Barnstead International/Thermolyne Corporation) was used to measure oxidized dichlorofluorescein (DCF) fluorescence at excitation/emission maxima of 485 nm/535 nm. Hydrogen peroxide standards between 0 and 50 μM were prepared from a 1 M stock and reacted at room temperature for 10 min with the prepared DCFH solution in a total volume of 5 mL. These standards were then used to calibrate fluorescence intensity units (FIU) so that they numerically matched the respective hydrogen peroxide (H2O2) concentrations. Flavoring chemical solutions of acetoin, diacetyl, 2,3-pentanedione, cinnamaldehyde, maltol, o-vanillin, and coumarin at concentrations between 10 and 1,000 μM were prepared in phosphate buffer. After mixing the dye with each flavoring chemical and incubating at 37°C for 15 min, the fluorescence was recorded. The DCF fluorescence data are expressed as μM H2O2 equivalents, referring to the concentration of H2O2 added to the DCFH solution.
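A minimal sketch of the calibration step described above: fit a line to the fluorescence of the H2O2 standards and invert it to express a sample's FIU as μM H2O2 equivalents. The standard readings below are hypothetical placeholders, not measurements from the study.

```python
# Sketch of the FIU -> H2O2-equivalents calibration implied by the methods.
import numpy as np

std_conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0])  # uM H2O2 standards
std_fiu  = np.array([2.0, 7.1, 12.3, 27.0, 51.8])  # measured FIU (hypothetical)

# Least-squares line: FIU = slope * [H2O2] + intercept.
slope, intercept = np.polyfit(std_conc, std_fiu, 1)

def h2o2_equivalents(sample_fiu: float) -> float:
    """Invert the standard curve to get uM H2O2 equivalents."""
    return (sample_fiu - intercept) / slope

print(f"Sample at 20.5 FIU ~ {h2o2_equivalents(20.5):.1f} uM H2O2 eq.")
```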
To assess ROS with a new atomizer, flavored e-liquids from Table 1 (Strawberry Zing, Strawberry Fields, Very Berry, Grape Vape, American Tobacco, Mystery Mix, and Mixed Flavors) were aerosolized, with a new atomizer at each use, using the Scireq inExpose (Montreal, Canada) e-cigarette system at one puff per minute for 10 min. "Mixed Flavors" was prepared by combining equal amounts of each of the selected flavored e-liquids (Strawberry Zing, Café Latte, Pineapple Coconut, Cinnamon Roll, Fruit Swirl, Mega Melons, Mystery Mix (menthol flavor), American Tobacco, Grape Vape, and Very Berry). Subsequently, the aerosol from each flavored e-liquid was bubbled through the DCFH solution at 60 L/min, and the bubbled DCF solution was then measured for ROS release.
To obtain ROS values with a used atomizer, selected e-liquids from Table 1 (Café Latte, Cinnamon Roll, Chai Tea, Pineapple Coconut, and Cotton Candy) were aerosolized with a previously used atomizer using the Scireq inExpose e-cigarette system as described above. In between switching flavors, propylene glycol was aerosolized for 10 min; this exemplifies an attempt to clean the atomizer in order to avoid residual carryover from one e-liquid flavor to the next. The e-liquid flavor aerosol was bubbled through the DCFH solution at 60 L/min, and the bubbled DCF solution was then measured for ROS release. Propylene glycol (PG) served as the control comparison group.
To obtain the cell-free ROS assay for "consecutive flavors," 10 flavored e-liquids (Strawberry Zing, Café Latte, Pineapple Coconut, Cinnamon Roll, Fruit Swirl, Mega Melons, Mystery Mix (menthol flavor), American Tobacco, Grape Vape, and Very Berry) were aerosolized at two puffs per e-liquid flavor, one flavor at a time, for a total of 10 min. The flavored e-liquid aerosols were bubbled through the DCFH solution and then measured for ROS release. Propylene glycol (PG) was used as the control when measuring ROS release.
Inflammatory Response (IL-8) Assay
Following cell treatments, conditioned media were collected 24 h after treatment with different concentrations of flavoring chemicals. Pro-inflammatory cytokine (IL-8) release was determined using the IL-8 CytoSet ELISA kit according to the manufacturer's instructions (Life Technologies).
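ELISA kits of this kind are commonly quantified by fitting a four-parameter logistic (4PL) standard curve and interpolating sample absorbances; the exact procedure here is the manufacturer's, so the sketch below, with hypothetical standards, is only a generic illustration of that step.

```python
# Generic 4PL standard-curve sketch for ELISA quantification; standards and
# optical densities below are hypothetical, not the kit's actual values.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL curve: response between asymptotes d (bottom) and a (top)."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_pg_ml = np.array([15.6, 31.25, 62.5, 125, 250, 500, 1000])   # IL-8 standards
std_od    = np.array([0.09, 0.16, 0.29, 0.55, 0.97, 1.49, 1.96])  # absorbance

params, _ = curve_fit(four_pl, std_pg_ml, std_od, p0=[2.6, -1.2, 400.0, 0.04])

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL to interpolate a sample concentration."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

print(f"OD 0.70 ~ {od_to_conc(0.70, *params):.0f} pg/mL IL-8")
```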
Statistical Analysis
Statistical significance was determined by one-way ANOVA (Tukey's multiple comparison test) when comparing multiple groups and by Student's t-test when comparing two groups, using GraphPad Prism 7 (La Jolla, CA). Data are presented as means ± SEM. P < 0.05 was considered statistically significant.
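For readers without GraphPad Prism, the same analysis pipeline can be reproduced with open tools, as in the sketch below; the group measurements are hypothetical.

```python
# Sketch of the stated pipeline: one-way ANOVA with Tukey's multiple
# comparison test across groups, and Student's t-test for two groups.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control   = np.array([48.0, 52.0, 50.0])    # e.g., IL-8 pg/mL, n = 3 (hypothetical)
dose_100  = np.array([70.0, 75.0, 68.0])
dose_1000 = np.array([120.0, 130.0, 125.0])

# Omnibus test across the three groups.
f_stat, p_value = stats.f_oneway(control, dose_100, dose_1000)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD for pairwise comparisons at alpha = 0.05.
values = np.concatenate([control, dose_100, dose_1000])
labels = ["control"] * 3 + ["100 uM"] * 3 + ["1000 uM"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Two-group comparison with Student's t-test.
t_stat, p_two = stats.ttest_ind(control, dose_1000)
print(f"t-test control vs 1000 uM: t = {t_stat:.2f}, p = {p_two:.4f}")
```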
Cytotoxicity Due to Flavoring Chemicals
To assess the cytotoxicity of exposure to flavoring chemicals, U937 and MM6 cells were stained with AO/PI dye after 24 h. In U937 cells, treatments with 2,3-pentanedione, cinnamaldehyde, and o-vanillin significantly reduced cell viability compared to the untreated control group (Figure 1). Pentanedione treatment reduced cell viability to about 62% (p < 0.001). Cinnamaldehyde treatment showed a distinctly dose-dependent cytotoxic response, decreasing cell viability to 65%, 15%, and 2% at 100, 500, and 1,000 μM concentrations, respectively (p < 0.001). Treatment with o-vanillin reduced cell viability to approximately 12-19% (p < 0.001). The other flavoring chemicals, acetoin, diacetyl, maltol, and coumarin, did not affect cell viability at the tested concentrations. To assess any effects on viability of the solvents used with the flavoring chemicals, DMSO and ethanol treatments were also performed; no considerable effects on cell viability were observed.
In MM6 cells, the tested flavoring chemicals caused no significant cell death except in the cinnamaldehyde treatment groups (Figure 2). The cell viability of the other treated groups (acetoin, diacetyl, pentanedione, maltol, vanillin, and coumarin) remained above 70%. At 100 and 1,000 μM cinnamaldehyde concentrations, MM6 cell viability was reduced to 61% and 32%, respectively (Figure 2). Only with the cinnamaldehyde treatment did we observe a dose-dependent cytotoxic response (p < 0.01) compared to the untreated control group.
Cytotoxicity Due to Flavored e-Liquid Exposure
In order to assess the cytotoxicity of the flavored e-liquids, we exposed U937 cells to 0.25% and 0.5% concentrations of selected e-liquids from Table 1. Typically, the e-liquid base includes propylene glycol (PG), so PG was used as a control; PG showed no cytotoxicity. In general, the tested e-liquids decreased cell viability at the higher dose. However, only Mystery Mix exhibited significant cytotoxicity, reducing cell viability to 71% (p < 0.05). Treating the cells with the "Mixed Flavors" e-liquid at 0.5% concentration decreased cell viability to 59% (p < 0.01) (Figure 3).
Cell-Free ROS Release by Flavoring Chemicals and Flavored e-Liquids
To measure the amount of exogenous ROS released by the flavoring chemicals in e-liquids, the DCFH-DA dye was treated with the flavoring chemicals of interest, and the fluorescence was measured. The concentration of ROS is expressed as H2O2 equivalents. For all the tested flavoring chemicals (acetoin, diacetyl, pentanedione, cinnamaldehyde, maltol, o-vanillin, and coumarin), the solvent controls (DMSO and ethanol) gave rise to extremely low H2O2 equivalents. For all the chemicals, the H2O2 equivalents at 10 μM concentration were minimal, whereas at 1,000 μM concentration they were significantly elevated (p < 0.001) compared to the DMSO and EtOH controls. Diacetyl, cinnamaldehyde, maltol, and o-vanillin significantly elevated H2O2 equivalents at 100 μM concentration. While acetoin, diacetyl, pentanedione, cinnamaldehyde, maltol, and o-vanillin exhibited moderately increased ROS levels at 10 μM concentration, only coumarin showed a significant increase in ROS levels compared to the control groups (p < 0.05) (Figures 4A-G).
FIGURE 1 | Percent viability of U937 cells 24 h post-exposure to e-cigarette flavoring chemicals, i.e., acetoin, diacetyl, pentanedione, cinnamaldehyde, maltol, o-vanillin, and coumarin at concentrations between 10 μM and 1,000 μM. U937 monocytes were treated with e-cigarette flavoring chemicals at varying concentrations and incubated at 37°C with 5% CO2 for 24 h. Cells were rinsed with PBS and stained with AO/PI dye. The viability of the cells was assessed using the Cellometer 2000. Data are expressed as mean ± SEM (n = minimum 3 per group). Statistical significance was determined by one-way ANOVA (Tukey's multiple comparison test). ***p < 0.001 vs. Control.
To measure the cell-free OX/ROS produced by flavored e-liquids with a new atomizer, the aerosols were bubbled through the DCF-DA indicator solution and the fluorescence was measured as H2O2 equivalents. As shown in Figure 5A, Strawberry Zing, Very Berry, American Tobacco, Mystery Mix, and Mixed Flavors produced higher H2O2 equivalents than PG (p < 0.001). American Tobacco, Mystery Mix, and Mixed Flavors had the highest H2O2 equivalents compared to PG (p < 0.001) (Figure 5A).
In order to quantify the ROS levels released with a used atomizer, the same atomizer was used continuously with the selected e-liquids, and PG was aerosolized in between to reduce carryover of residual ROS from one e-liquid to the next. While Chai Tea produced H2O2 equivalents comparable to PG, Café Latte, Cinnamon Roll, and Cotton Candy produced highly significant levels of H2O2 equivalents compared to the PG control group (p < 0.001) (Figure 5B).
Cell-Free ROS Release by Consecutive Mixture of Flavors
Consecutive aerosolization of 10 different e-liquids produced significantly elevated H2O2 equivalents compared to the PG control (p < 0.001) (Figure 5C). This OX/ROS amount was comparable to that of the Mixed Flavors in Figure 5A.
Inflammatory Mediator (IL-8) Response Due to Flavoring Chemicals
The inflammatory response due to the exposure to flavoring chemicals was assessed by treating MM6 and U937 monocytic cells with flavoring chemicals and measuring the IL-8 concentrations in the conditioned media.
In U937 cells, treatment with each flavoring chemical of interest was performed at least twice at various dose concentrations, and a representative treatment and its respective control data sets were chosen. Treatment with acetoin decreased IL-8 levels in a dose-dependent manner; at 1,000 μM concentration, this downregulation of the IL-8 cytokine was highly significant (p < 0.0001) (Figure 6A). Treatment with 1,000 μM diacetyl resulted in a significant elevation in IL-8 levels (p < 0.0001) (Figure 6B). 2,3-Pentanedione and o-vanillin treatments caused a significant, dose-dependent increase in the IL-8 response (Figures 6C,D). The maltol- and coumarin-treated groups (1,000 μM concentration) showed significantly increased IL-8 concentrations (p < 0.001) (Figures 6E,F). Treatment with 10 μM cinnamaldehyde increased IL-8 highly significantly (p < 0.001), whereas treatment with 1,000 μM cinnamaldehyde reduced IL-8 below the untreated control, likely due to the cytotoxicity of the treatment (Figure 6G).
FIGURE 2 | Percent viability of Mono Mac 6 (MM6) cells 24 h post-exposure to e-cigarette flavoring chemicals, i.e., acetoin, diacetyl, pentanedione, cinnamaldehyde, maltol, o-vanillin, and coumarin at concentrations of 100 and 1,000 μM. Mono Mac 6 cells were treated with e-cigarette flavoring chemicals and incubated at 37°C with 5% CO2 for 24 h. Cells were rinsed with PBS and stained with AO/PI dye. The viability of the cells was assessed using the Cellometer 2000. Data are expressed as mean ± SEM (n = 3 per group). Statistical significance was determined by one-way ANOVA (Tukey's multiple comparison test). **p < 0.01 vs. Control.
Inflammatory Response (IL-8) Due to Flavored e-Liquid Exposure
The inflammatory response to flavored e-liquid treatment was assessed by measuring IL-8 concentrations in conditioned media after 24 h of treatment. These treatments were performed twice, and representative data sets were chosen with their corresponding controls. The untreated control cells had relatively low IL-8 levels compared to the treated groups, in most cases averaging around 50 pg/mL. Cinnamon Roll and Mystery Mix showed significant, dose-dependent increases in IL-8 levels, with p < 0.01 or stronger at either dose (Figures 7A,C). Café Latte and Mixed Flavors e-liquid treatment at 0.5% caused a highly significant IL-8 response (p < 0.001) (Figures 7B,I). Interestingly, treatment with Mega Melons, Grape Vape, and Pineapple Coconut produced either a slight increase or unchanged IL-8 levels at the 0.25% dose and a significant decrease in IL-8 levels at the 0.5% dose compared to their untreated counterparts (Figures 7E,F,K). Similarly, treatment with American Tobacco and Very Berry significantly reduced the IL-8 response even at the 0.25% dose (Figures 7G,H). Treatment with Fruit Swirl and Strawberry Zing yielded IL-8 levels comparable to the untreated control (Figures 7D,J).
DISCUSSION
E-cigarettes carry the popular misconception that, in contrast to conventional combustible tobacco, they cause relatively little or no harm to the consumer's health, owing to the lack of sufficient evidence of their harmful effects. These uncertainties are primarily due to many unstandardized facets of ENDS, such as e-liquid constituents and unstandardized e-cigarette devices. Many studies have shown that the consumption of e-cigarettes potentially causes harm to the pulmonary, cardiovascular, immune, and nervous systems (Qasim et al., 2017). The adverse health effects of nicotine have been well established; however, the health effects of e-cigarettes without nicotine are still emerging. These effects are mainly due to constituents of the e-liquid vapors (Varlet et al., 2015). Studies have shown that e-liquid aerosols contain significant levels of toxic compounds, such as aldehydes and acrolein, that are detrimental to e-cigarette users (Sleiman et al., 2016; Talih et al., 2016).
FIGURE 3 | Percent viability of U937 cells 24 h post-exposure to the e-liquid base propylene glycol and selected nicotine-free e-liquids, i.e., Strawberry Zing, Café Latte, Pineapple Coconut, Cinnamon Roll, Fruit Swirl, Mega Melons, Mystery Mix, American Tobacco, Grape Vape, Very Berry, and mixed flavors at two concentrations, 0.25% and 0.5%. U937 monocytes were treated with e-liquids at two concentrations, 0.25% and 0.5% (mixed e-liquid treatment only at 0.5%), for 24 h. Cells were then rinsed with PBS and stained with AO/PI. The viability of the cells was assessed using the Cellometer 2000. Data are expressed as mean ± SEM (n = 5 per treatment group). Statistical significance was determined by one-way ANOVA (Tukey's multiple comparison test). *p < 0.05, **p < 0.01 vs. control.
The focus of this study was to investigate the oxidative stress and inflammatory effects of commonly used e-cigarette flavoring chemicals and of flavored e-liquids without nicotine. We selected cell-free ROS levels and IL-8 levels because they are well-established biomarkers for oxidative stress-mediated inflammation and tissue damage (Vlahopoulos et al., 1999; Mittal et al., 2014; Lerner et al., 2015b). Exogenous ROS levels produced by flavoring chemicals and e-liquids were quantified in this study. Oxidative stress caused by these reactive species activates inflammatory genes, such as the IL-8 chemokine. IL-8 has a profound effect on neutrophil recruitment and activation. We have previously demonstrated that exposure to e-cigarette flavoring chemicals induces a significant IL-8 response (Lerner et al., 2015b; Gerloff et al., 2017).
The flavoring chemicals acetoin, diacetyl, pentanedione, cinnamaldehyde, maltol, ortho-vanillin, and coumarin were tested in this study. According to Tierney et al. (2016), e-liquids contain 10-40 mg/mL of total flavoring chemicals. Treatment concentrations from 10 to 1,000 μM were selected to encompass and account for the variability in consumption due to low-voltage and high-voltage ENDS and to individual vaping habits.
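To relate this treatment range to the neat e-liquid content, a simple unit conversion is informative. The sketch below uses the known molecular weight of diacetyl; the dilution factors are illustrative assumptions rather than reported values.

# Convert a flavoring-chemical concentration from mg/mL to micromolar (uM).
# Molecular weight of diacetyl (2,3-butanedione): 86.09 g/mol (known constant).
MW_DIACETYL = 86.09  # g/mol

def mg_per_ml_to_uM(conc_mg_per_ml: float, mw_g_per_mol: float) -> float:
    # mg/mL -> g/L, then (g/L) / (g/mol) = mol/L, then mol/L -> uM
    return conc_mg_per_ml / mw_g_per_mol * 1e6

# Tierney et al. (2016): e-liquids contain 10-40 mg/mL total flavoring chemicals.
neat = [mg_per_ml_to_uM(c, MW_DIACETYL) for c in (10, 40)]
print(f"Neat e-liquid (expressed as diacetyl): {neat[0]:.0f}-{neat[1]:.0f} uM")
# ~116,000-465,000 uM neat; an illustrative 1:100 to 1:1000 dilution in medium
# would bring exposures into the ~100-4,650 uM range, which brackets the
# 10-1,000 uM treatments used in the study.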
Among the flavoring chemicals tested, cinnamaldehyde showed the greatest toxicity to both cell types. O-vanillin and pentanedione also showed significant cytotoxicity. These results are consistent with recently published studies showing significant cytotoxicity of flavors such as "Cinnamon Ceylon" on various other cell lines, such as epithelial cells and fibroblasts (Bahl et al., 2012; Behar et al., 2016). Treatment of cells with selected e-liquids from commonly marketed categories exhibited cytotoxicity. Mystery Mix, a selection from the "menthol" category, showed significant cytotoxicity. This is consistent with other in vitro studies in which investigators found significant cytotoxicity of menthol flavoring aerosol exposures on epithelial cell lines (Leigh et al., 2016; Singh et al., 2016). Mixing equal proportions of 10 differently flavored e-liquids gave rise to the highest cytotoxicity. This suggests that e-cigarette users who inhale a variety of flavored e-liquids at social events may be prone to greater toxic effects than those who vape a single flavor of e-liquid.
The OX/ROS analysis revealed that all the flavoring chemicals of interest produced significant levels of H2O2 equivalents. Moreover, we observed that several e-liquids (American Tobacco, Mystery Mix, Café Latte, Cinnamon Roll, Pineapple Coconut, and Cotton Candy) also produced significant amounts of H2O2 equivalents. There was no distinct trend in ROS release with a new versus a used atomizer, suggesting that continuous use of an atomizer does not enhance ROS production. Mixing various flavors of e-liquids together produced H2O2 equivalents comparable to aerosolizing the same e-liquid flavors consecutively. This simulates a social situation in which smokers exchange and vape several e-liquid flavors over a short period of time. These data suggest that acute exposure to a combination of e-liquid flavors is more harmful than exposure to a single flavor. This response is consistent with the cell viability and IL-8 data, where exposure to Mixed Flavors was more cytotoxic than individual flavors and caused significant inflammation. The presence of ROS in e-liquids can potentially cause oxidative stress-related lung injury and diseases, such as asthma, bronchiectasis/bronchiolitis obliterans, COPD, and pulmonary fibrosis (Park et al., 2009). This is consistent with the human study conducted by Carnevale et al. (2016), which showed that the use of e-cigarettes increases oxidative stress/injury biomarkers, such as 8-isoprostanes, in blood compared to non-smokers.
The pro-inflammatory cytokine IL-8 is a neutrophil chemoattractant mediating the inflammatory process. IL-8 plays a crucial role in the pathogenesis of chronic inflammation and cancer (Mukaida, 2003). In our study, we observed that diacetyl, pentanedione, o-vanillin, maltol, coumarin, and cinnamaldehyde induced significant levels of IL-8 secretion in MM6 and U937 monocytes. This upregulation was also observed with several e-liquids, such as Cinnamon Roll, Café Latte, Mystery Mix, and Mega Melons, and with Mixed Flavors. These findings are similar to other studies that showed an increased pro-inflammatory response in other cells, such as THP-1 monocytes and primary human airway epithelial cells (Wu et al., 2014; Ween et al., 2017). In contrast, with the acetoin treatment, we observed a dose-dependent reduction in IL-8 secretion. This may be due to immunosuppressive effects, as several studies have reported similar results; e.g., Clapp et al. observed immunosuppression in alveolar macrophages and NK cells caused by cinnamaldehyde treatment.
FIGURE 5 | (A) Cell-free ROS in flavored e-liquids with a new atomizer at each use, with one puff per min. E-liquid (Strawberry Zing, Strawberry Fields, Very Berry, Grape Vape, American Tobacco, Mystery Mix, and Mixed Flavors) aerosols were drawn through the DCFH solution using a SciReq inExpose. Oxidized DCF fluorescence was measured using a fluorometer. Data are shown as mean ± SEM (n = 6 per group). Statistical significance was determined by one-way ANOVA (Tukey's multiple comparison test). ***p < 0.001 vs. propylene glycol. (B) Cell-free ROS in selected e-liquids using a PG-aerosolized atomizer. Selected e-liquids (Café Latte, Cinnamon Roll, Chai Tea, Pineapple Coconut, and Cotton Candy) were aerosolized using a SciReq inExpose and drawn through DCFH, with PG aerosolization in between e-liquids. Oxidized DCF fluorescence was measured using a fluorometer. Data are shown as mean ± SEM (n = 2-6 per group). Statistical significance was determined by one-way ANOVA (Tukey's multiple comparison test). ***p < 0.0001 vs. propylene glycol. (C) Cell-free ROS in acute exposure to consecutively aerosolized flavors. Ten e-liquid flavors (Strawberry Zing, Café Latte, Pineapple Coconut, Cinnamon Roll, Fruit Swirl, Mega Melons, Mystery Mix, American Tobacco, Grape Vape, and Very Berry) were aerosolized consecutively (a consecutive mixture of flavors) using a SciReq inExpose, one flavor at a time over a cumulative 10 min period, and drawn through DCFH. Oxidized DCF fluorescence was measured using a fluorometer. Data are shown as mean ± SEM (n = 6). Statistical significance was determined by Student's t-test. ***p < 0.001 vs. PG.

Figure legend fragments (IL-8 ELISA): Conditioned media were assayed for IL-8 concentration by ELISA. Data are expressed as mean ± SEM (n = 4-6 per group). Statistical significance was determined by one-way ANOVA for multiple groups (Tukey's comparisons test; *p < 0.05, **p < 0.01, ***p < 0.001 vs. untreated control) and by Student's t-test for two-group comparisons (***p < 0.001 vs. untreated control). Flavored e-liquids, including Pineapple Coconut, induced a pro-inflammatory cytokine (IL-8) response in U937 cells; U937 monocytes were treated with e-liquids at two doses, 0.25% and 0.5%, for 24 h, and conditioned media were assayed for IL-8 by ELISA (n = 4 per group; one-way ANOVA with Tukey's comparisons test, *p < 0.05, **p < 0.01, ***p < 0.001 vs. untreated control; Student's t-test for two groups, **p < 0.01 and ***p < 0.001 vs. untreated control).
Martin et al. (2016) observed down-regulation of the CSF-1 and CCL26 inflammatory genes. Reidel et al. (2017) found increased neutrophilic activation and mucin hypersecretion in e-cigarette users. Many studies have shown that e-cigarette exposure can dampen immunity against bacteria, such as Streptococcus pneumoniae and Staphylococcus aureus, and viruses, such as influenza A, in mice (Sussan et al., 2015; Hwang et al., 2016).
Our data suggest that the presence of ROS in flavored e-liquids could play an essential role in the oxidative stress-mediated inflammatory response. This is consistent with previous studies conducted by our laboratory on lung epithelial cells and C57BL/6 mice (Lerner et al., 2015b). It is possible that ROS initiate the activation of transcription factors, such as NF-κB, STAT3, AP-1, and Nrf2, resulting in the propagation of other cellular and inflammatory responses, such as the secretion of inflammatory cytokines and the regulation of antioxidant defense systems (Kreiss et al., 2002; Reuter et al., 2010; Morgan and Liu, 2011). Thus, IL-8 modulation was observed in monocytes treated with flavored e-liquids and flavoring chemicals.
Recent studies have demonstrated that the most preferred e-liquid flavors are the sweet, fruity, creamy, and buttery flavors, and that consumers frequently mix those flavors together during vaping (Kim et al., 2016; Chen and Zeng, 2017). These commonly consumed flavors are derived from the flavoring chemicals tested in our study. The most prevalent class of compounds in e-liquids is the aldehydes, which include acetaldehyde and formaldehyde (e.g., vanilla flavor). The most prevalent non-aldehydes include acetoin and diacetyl (Klager et al., 2017; Ogunwale et al., 2017). The most prevalent alcohols include maltol and menthol (Tierney et al., 2016). Other common flavoring chemicals include acetoin, diacetyl, and 2,3-pentanedione (Allen et al., 2016). Obliterative bronchiolitis (bronchiolitis obliterans) is a disease caused by exposure to butter flavoring chemicals (diacetyl and 2,3-pentanedione). Chronic inhalation of these chemicals causes airway epithelium injury, ultimately resulting in the formation of pro-fibrotic lesions (Morgan et al., 2012; Flake and Morgan, 2017; Wallace, 2017). The chocolate flavoring chemical 2,5-dimethylpyrazine has been shown to alter cystic fibrosis transmembrane conductance regulator (CFTR) expression, which could adversely affect immune mechanisms such as mucociliary clearance, dampening the epithelial defense against inhaled particulates and pathogens (Sherwood and Boitano, 2016). Mucus hypersecretion can hinder respiratory pathogen clearance and exacerbate respiratory dysfunction in pulmonary diseases, such as COPD and asthma (Vareille et al., 2011). ROS present in flavoring chemicals and flavored e-liquids can also bind to biomolecules, such as DNA, causing adducts along with histone modifications (Sundar et al., 2016). Prior studies have shown that e-cigarettes release nanoparticles in amounts comparable to combustible cigarettes, which can deposit deep in the alveolar region and in the smaller airways/peripheral areas. Inhaling these nanoparticles provides a route of exposure of toxic chemicals to the bloodstream (Lee et al., 2017). These nanoparticles include copper, tin, chromium, and nickel, which can pose detrimental health risks (Williams et al., 2013; Lerner et al., 2015a). Findings from our study, as well as from others, imply that much remains to be scientifically investigated and that ENDS must be standardized. E-liquid flavoring chemicals and other constituents must be tightly regulated to minimize the risk of lung disease, especially among teens.
There are several limitations to this study. Exposure of U937 monocytes directly to the e-liquids provided meaningful toxicological data; however, it would ideally be preferable to expose the cells to e-liquid aerosols at lower concentrations to understand the cellular toxicity of flavored e-liquid aerosols. As a future direction, we intend to perform in vitro and in vivo flavored e-liquid aerosol exposures and assess the inflammatory cytokine profile. Lastly, only one crucial chemokine/cytokine was measured in this study. We plan to quantify other inflammatory mediators induced by acute and chronic flavored e-liquid exposures in the future.
In conclusion, cinnamaldehyde, o-vanillin, and pentanedione were the flavoring chemicals most toxic to monocytes. The majority of the tested flavoring chemicals and e-liquids caused monocytes to secrete significantly elevated levels of the pro-inflammatory cytokine IL-8. Mixing multiple flavors of e-liquids caused the greatest cytotoxicity, implying a health risk of acute exposure to a variety of e-liquids as opposed to a single flavor. Some flavors, and the key flavoring chemicals that impart them, were more toxic than others. Flavors could therefore be regulated on the basis of the toxicity of their individual flavoring chemicals. Further, our data indicate that tighter regulations are necessary to reduce the risk of inhalation toxicity from exposure to nicotine-free e-liquids and their flavoring chemicals.
AUTHOR CONTRIBUTIONS
TM, MP, KA, JG, IS, and IR: Conceived and designed the experiments; TM, MP, and KA: Performed the experiments and analyzed the data; TM, KA, and IR: wrote the manuscript.
"year": 2017,
"sha1": "84124ceccb0e88beeab8cd8e0c8c3b99107daccc",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2017.01130/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "84124ceccb0e88beeab8cd8e0c8c3b99107daccc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
SLG controls grain size and leaf angle by modulating brassinosteroid homeostasis in rice
Highlight

The rice SLG gene, functioning as homomers, plays essential roles in regulating grain size and leaf angle via modulation of brassinosteroid homeostasis.
Introduction
Rice (Oryza sativa), one of the world's most important cereal crops, feeds more than half of the world's population. Given the rapid increase in the world's population and the decrease in cultivated land area, improving rice production remains a great challenge for rice breeding programs. Grain size and leaf angle are two important traits determining rice grain yield and have long been a consideration in breeding programs (Sinclair and Sheehy, 1999; Ikeda et al., 2013).
Grain size, determined by grain length, grain width, and grain thickness, not only contributes to grain yield, but also influences the appearance, processing, cooking, and eating quality of rice. For example, people in Japan, Korea, and Northern China favor medium-length round grains, whereas people in the USA, Southeast Asian countries, and Southern China prefer long and slender grains (Unnevehr et al., 1992). Organ size is largely determined by cell number and cell size during organogenesis (Potter and Xu, 2001; Sugimoto-Shirasu and Roberts, 2003). In recent years, several genes and quantitative trait loci (QTLs) that affect grain size by influencing cell number have been identified in rice, including GS3, GW2, GW5, GS5, GW8, qGL3, TGW6, GW6a, and BG1 (Song et al., 2007, 2015; Weng et al., 2008; Mao et al., 2010; Li et al., 2011; Wang et al., 2012; Zhang et al., 2012; Ishimaru et al., 2013; Liu et al., 2015). Some other genes and QTLs that control grain size by influencing cell size have also been isolated in rice, including PGL1, GL7, and GS2/GL2 (Heang and Sassa, 2012; Che et al., 2015; Duan et al., 2015; Hu et al., 2015; Wang et al., 2015). It has been documented that at least some of these genes participate in seed size control by regulating the biosynthesis and signaling of plant hormones, including brassinosteroids (BRs), cytokinins, gibberellins, and auxin (Ashikari et al., 1999; Hong et al., 2003; Tanabe et al., 2005; Ishimaru et al., 2013).
Leaf angle, the inclination between the leaf blade and the vertical culm, is a key factor determining plant architecture (Hoshikawa, 1989; Sinclair and Sheehy, 1999). A compact plant type with erect leaves is preferred, since it increases photosynthetic efficiency and nitrogen storage for grain filling, and facilitates dense planting (Sinclair and Sheehy, 1999; Sakamoto et al., 2006). A number of genes or QTLs have been reported to have a role in controlling leaf angle, including Ta1, OsDWARF4, D2, OsBRI1, OsBU1, ILI1, LC2, and ILA1 (Li et al., 1998, 1999; Sakamoto et al., 2006; Tanaka et al., 2009; Zhang et al., 2009; Zhao et al., 2010; Ning et al., 2011). The leaf lamina joint that connects the leaf blade and sheath is considered the most important tissue governing the leaf angle. The degree of the leaf angle largely depends on cell division and expansion, as well as on cell wall composition, at the joint (Nakamura et al., 2009; Zhang et al., 2009; Zhao et al., 2010; Ning et al., 2011). Moreover, it is well known that BR treatment stimulates leaf inclination in rice (Wada et al., 1981).
BRs are a group of steroidal phytohormones that regulate diverse plant growth and developmental processes, including cell expansion and division, vasculature differentiation, root and leaf development, stem elongation, skotomorphogenesis, and grain filling (Clouse and Sasse, 1998; Fujioka and Yokota, 2003; Wu et al., 2008). In recent decades, researchers have clarified many genes and the main pathway of BR biosynthesis using genetic studies, chemical feeding, and enzymatic analysis. Most of the enzymes known to catalyze BR biosynthesis belong to the cytochrome P450 protein family (Choe, 2006). The BR biosynthesis pathway mainly consists of the early and late C-22 oxidation pathways, and the early and late C-6 oxidation pathways (Choe, 2006). Similarly, research on BR signaling has also developed rapidly, and most of the main participants in BR signaling have been identified in Arabidopsis (Belkhadir and Chory, 2006). BRs are perceived by the receptor kinase BRI1, which transmits the signal (Li and Chory, 1997). In rice, BR has been reported to play important roles in the regulation of grain size, leaf angle, and yield potential. For example, most loss-of-function mutants in BR biosynthesis or signaling pathways, such as d2, d11, and d61, display short grains, erect leaves, and dwarf phenotypes (Yamamuro et al., 2000; Hong et al., 2003; Tanabe et al., 2005), while some other mutants or transgenic plants with enhanced BR signaling or increased BR levels, such as GSK2 knockdown lines, the DLT overexpresser, and the D11 activation mutant m107, show longer grains and larger leaf angles (Tanabe et al., 2005; Wan et al., 2009; Tong et al., 2012). More importantly, modulating the expression level of BR-related genes such as OsDWARF4 and OsBRI1 has been proven to improve rice grain yield at higher planting densities (Sakamoto et al., 2006). BR homeostasis is vital for the normal growth and development of plants. BRs are synthesized in most plant tissues, and their level is highest in young developing organs but low in mature organs (Shimada et al., 2003). Unlike other plant hormones such as auxin, which can be transported from the site of synthesis to a distant target site (Berleth and Sachs, 2001), BRs do not undergo long-distance transport and have the same site of synthesis and action (Symons and Reid, 2004). Therefore, mechanisms must exist by which cells or tissues precisely modulate endogenous BR levels to keep cell expansion in balance and ensure normal plant growth and development (Symons and Reid, 2004). Negative feedback is a common mechanism that also regulates BR homeostasis. Expression of many BR biosynthesis and signaling genes is reportedly inhibited by BR treatment, including D2, D11, OsDWARF4, BRD1, OsBRI1, and DLT in rice (Yamamuro et al., 2000; Hong et al., 2002, 2003; Tanabe et al., 2005; Sakamoto et al., 2006; Tong et al., 2009). However, BR homeostasis is still poorly understood.
In this study, we characterized a rice semi-dominant mutant, slender grain Dominant (slg-D), with slender grains and enlarged leaf angles, which are caused by enhanced expression of SLG, a BAHD acyltransferase-like protein gene. We provide genetic evidence that BR contents are associated with the expression level of SLG. In addition, plants expressing an SLG RNAi construct or a truncated version of SLG showed a semi-dwarf architecture with smaller leaf angles, which may be useful for rice yield improvement.
Plant materials
The slg-D mutant (3A-10513) was isolated from a collection of activation-tagging T-DNA insertion rice lines (Jeon et al., 2000;Jeong et al., 2006), and kindly provided by Professor Gynheung An. The wild type (WT) of slg-D was Dongjin, a japonica cultivar. d61-1, d11-2, and m107 were kindly provided by Professor Chencai Chu (Tong et al., 2012). The WT of d61-1 and d11-2 was a japonica cultivar, Zhonghua11, and the WT of m107 was a japonica cultivar, Nipponbare. Rice plants were cultivated in an experimental field under natural long-day conditions in Nanjing, China.
Scanning electron microscopy (SEM) and light microscopy

For SEM, lemmas were harvested from florets after flowering and fixed in 2.5% (v/v) glutaraldehyde. Fixed samples were soaked in 2% (w/v) OsO4 for 2 h, dehydrated in a graded ethanol series, infiltrated and embedded in butyl methyl methacrylate, treated with critical point drying, and then sputter-coated with platinum. The outer and inner epidermal cells of lemmas were observed using a HITACHI S-3400N scanning electron microscope. For light microscopy, lamina joints of the second leaves were harvested 10 d after flowering and fixed with FAA solution, followed by a graded series of dehydration and infiltration steps. Fixed tissues were embedded in paraplast. After sectioning, 10 μm thick sections were dewaxed with xylene, rehydrated, stained with 1% toluidine blue, and observed with a Leica DM5000B microscope. Cell lengths and widths of each organ were measured with IMAGEJ software.
Isolation, cloning, and RNAi suppression of the SLG gene
To identify the T-DNA insertion locus in slg-D, we searched the flanking sequence database (Jeong et al., 2006; http://orygenesdb.cirad.fr/). The T-DNA loci were confirmed by PCR genotyping using the primers P1, P2, and P3 (see Supplementary Table S1 at JXB online). To recapitulate the phenotype of slg-D, full-length cDNAs of Loc_Os08g44830 and Loc_Os08g44840 were amplified by PCR and cloned into the binary vector pCUbi1390 under the control of the maize Ubi promoter to create the p1390-Ubi-830 and p1390-Ubi-840 constructs, respectively. These constructs were then transformed into the rice variety Dongjin according to a published Agrobacterium-mediated method (Hiei et al., 1994).
To obtain SLG RNAi plants, the construct pCUbi1390-ΔFAD2 (an FAD2 intron and ubiquitin promoter inserted into pCUbi1390) was used as an RNAi vector (Wu et al., 2007). Both sense and antisense versions of a specific 305 bp fragment from the cDNA of SLG were amplified with primer pairs SLG-RNAiL and SLG-RNAiR (Supplementary Table S1), and cloned into pCUbi1390-ΔFAD2 to create the pUbi-dsRNAiSLG construct, which was then transformed into the rice variety Dongjin by the Agrobacterium-mediated method described above.
RNA extraction and quantitative RT-PCR
Total RNA from roots, leaves, leaf sheaths, lamina joints, shoot apices, culms, and panicles at different stages was isolated using the RNAprep Pure Plant Kit (TIANGEN, Beijing, China). First-strand cDNA was reverse transcribed from 1 μg of total RNA using the PrimeScript 1st Strand cDNA Synthesis Kit (TaKaRa). Quantitative RT-PCR was performed using a SYBR Premix Ex Taq kit (TaKaRa) on an ABI Prism 7500 Real-Time PCR System according to the manufacturer's instructions, and the ACTIN1 gene was used as an internal control. The primers for quantitative RT-PCR analysis are listed in Supplementary Table S1.
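Relative quantification against an internal control such as ACTIN1 is conventionally computed with the 2^-ΔΔCt (Livak) method. The sketch below illustrates the calculation; all Ct values are hypothetical, and the helper is illustrative rather than the authors' analysis script.

# Minimal sketch of relative expression by the 2^(-ddCt) (Livak) method,
# normalizing a target gene to the ACTIN1 internal control.
# All Ct values below are hypothetical, for illustration only.

def relative_expression(ct_target_sample, ct_actin_sample,
                        ct_target_control, ct_actin_control):
    d_ct_sample = ct_target_sample - ct_actin_sample      # normalize the sample
    d_ct_control = ct_target_control - ct_actin_control   # normalize the calibrator
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)  # fold change vs. control, assuming ~100% PCR efficiency

# Example: target gene in a mutant vs. wild type (hypothetical Ct values)
fold = relative_expression(ct_target_sample=22.1, ct_actin_sample=18.0,
                           ct_target_control=26.3, ct_actin_control=18.2)
print(f"Relative expression (mutant / WT): {fold:.1f}-fold")  # ~16-fold here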
GUS staining
To analyze the expression pattern of SLG, an ~2.5 kb promoter fragment was cloned into the pCAMBIA1381Z vector to create the PRO SLG :GUS (β-glucuronidase) reporter gene construct, which was then transformed into the rice variety Dongjin by the Agrobacterium-mediated method. GUS staining was performed on PRO SLG :GUS T1 generation transgenic plants according to a method described previously (Jefferson et al., 1987). Images were taken using a Nikon CD5Ri1P camera. Primers used to clone the promoter fragment are listed in Supplementary Table S1.
In situ hybridization

RNA in situ hybridization was performed as described previously (Bradley et al., 1993). A 305 bp gene-specific region of SLG, amplified with primers SLG-PF and SLG-PR (see Supplementary Table S1), was cloned into the pGEM-T Easy vector (Promega). Linearized templates were amplified from the pGEM-T plasmid containing the gene-specific region of SLG using primers Yt7 and Ysp6. Digoxigenin-labeled RNA probes were transcribed in vitro using T7 and SP6 RNA polymerases, respectively, with a DIG Northern Starter Kit (Cat. No. 2039672, Roche) following the manufacturer's instructions. Images were taken using a Leica DM5000B microscope.
Subcellular localization of SLG
To determine the subcellular localization of SLG, green fluorescent protein (GFP) was fused to the C-terminus of SLG under the control of the 35S promoter in the PAN580 vector. In addition, the nuclear marker D53-mCherry was constructed. The SLG-GFP fusion construct was transiently co-transformed into rice protoplasts with the D53-mCherry construct according to the method described previously (Chen et al., 2006). Next, GFP was fused to the C-terminus of SLG under the control of the Cauliflower mosaic virus (CaMV) 35S promoter in the pCAMBIA1305.1 vector. The pCAMBIA1305-SLG-GFP construct was transformed into the rice variety Dongjin by the Agrobacterium-mediated method. GFP fluorescence was examined in the young roots of 2-week-old T1 transgenic plants. Fluorescence images were observed using a Zeiss LSM510 confocal laser microscope. Primers used to make these constructs are listed in Supplementary Table S1.
BR and BRZ treatment
The lamina joint bending assay using excised leaf segments was performed as described by Wada et al. (1981). Seeds were germinated for 2 d and then grown in the dark for 8 d at 30 °C. Segments of 2 cm comprising the second leaf blade, lamina joint, and leaf sheath were floated on distilled water for 24 h and then incubated in 2.5 mM maleic acid potassium solution containing various concentrations of brassinolide (BL; Sigma, http://www.sigmaaldrich.com/) for 48 h in the dark. Lamina joint angles were measured using IMAGEJ software. The coleoptile and root elongation tests were performed using a previously described method (Yamamuro et al., 2000). Seeds were germinated on agar plates containing various concentrations of BL, and then the coleoptile and root lengths were measured 1 d after germination.
To measure the effect of brassinazole (BRZ; TCI) treatment on lamina joint bending, the leaf tips of 8-day-old seedlings of slg-D and the WT were spotted with 1 μl of DMSO containing 0 or 10 μM BRZ daily for 3 d, followed by 7 d growth in a controlled growth chamber under long-day conditions (16 h light at 28 °C/8 h darkness at 24 °C). The angles of the third lamina joints were measured using IMAGEJ software.
Yeast two-hybrid assay
The full-length cDNA of SLG was cloned into pGBKT7 (Clontech, http://www.clontech.com). Full-length SLG, as well as its N- and C-terminal truncated deletions, was then subcloned into pGADT7 (Clontech), and all vectors were transformed into yeast strain AH109. A yeast two-hybrid library was constructed from the mRNA of young rice panicles 0.1-5 cm long. Yeast transformation and screening procedures were performed according to the manufacturer's instructions (Clontech). Primers used to make these constructs are listed in Supplementary Table S1.
Bimolecular fluorescence complementation (BiFC) assay
The full-length SLG cDNA was cloned into the vector pSPYCE(M), and the SLG cDNA and its truncated deletions were then subcloned into the vector pSPYNE173. The plasmids were transiently expressed in Nicotiana benthamiana leaves as described previously (Waadt and Kudla, 2008). Yellow fluorescent protein (YFP) fluorescent signals were observed under a Zeiss LSM510 confocal laser microscope between 48 h and 72 h post-transfection. Primers used to make these constructs are listed in Supplementary Table S1.
Pull-down assay
The SLG cDNA was cloned into the vectors pMAL-c2x and pGEX4T-2 to generate fusions with maltose-binding protein (MBP) and glutathione S-transferase (GST), respectively. Expression of MBP-SLG, GST-SLG, and GST in BL21 Rosetta cells was induced with 0.5 mM isopropyl-β-d-thiogalactoside at 16 °C for 20 h. The total protein concentration was quantified using the Bio-Rad protein assay reagent. The pull-down assay was performed as reported previously (Miernyk and Thelen, 2008). The proteins were separated on a 10% SDS-PAGE gel and immunoblotted with anti-GST or anti-MBP antibodies (Abmart, http://www.ab-mart.com). The primers used to make these constructs are listed in Supplementary Table S1.
Phenotypic characterization of the semi-dominant mutant slg-D
To identify new components involved in regulating rice grain size, we screened a collection of activation-tagging T-DNA insertion rice lines (Jeon et al., 2000; Jeong et al., 2006). As a result, we isolated a mutant (3A-10513) with a slender-grain phenotype, and named it slender grain Dominant (slg-D). slg-D showed a less compact plant architecture than the WT at both the vegetative and mature stages (Fig. 1A, B). In slg-D, the grain length was significantly increased while the grain width decreased, and the 1000-grain weight was slightly decreased (Fig. 1C-F). The lamina joint bending angles of slg-D were larger than those of the WT, especially for the flag leaves (Fig. 1G, H). Together, these results indicate that slg-D displays slender grains and an enlarged leaf angle.
The F1 plants from the cross slg-D × WT exhibited an intermediate phenotype in grain shape and leaf angle, indicating a semi-dominant nature of the mutation (Supplementary Fig. S1A-F). Genetic analysis of an F2 population derived from the same cross showed a segregation ratio of 1:2:1 (64 normal : 120 intermediate : 56 mutant; χ2 = 0.09, P > 0.05), suggesting that slg-D is a single-locus mutation (Supplementary Fig. S1G). This observation provides a hint that the semi-dominant nature of slg-D might be associated with an insertion of the activation-tagging T-DNA.
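The goodness-of-fit test underlying this segregation analysis is a Pearson chi-square against the expected 1:2:1 counts, with 2 degrees of freedom and a 5% critical value of 5.99. A minimal sketch is shown below; depending on rounding or any continuity correction applied, the computed statistic may differ slightly from the reported χ2, but the conclusion that the data fit a 1:2:1 ratio is unchanged.

# Pearson chi-square goodness-of-fit for a 1:2:1 segregation ratio.
observed = [64, 120, 56]          # normal : intermediate : mutant F2 plants
ratio = [1, 2, 1]
total = sum(observed)
expected = [r * total / sum(ratio) for r in ratio]  # [60, 120, 60]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(f"chi-square = {chi2:.2f} (df = 2)")
# The statistic is far below 5.99 (the 5% critical value for 2 degrees of
# freedom), so the single-locus 1:2:1 hypothesis is not rejected (P > 0.05).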
Cell length change in slg-D determines the mutant phenotypes
To investigate the mutant phenotypes of slg-D at the cellular level, we performed SEM and light microscopy on slg-D plants along with the WT. Observations of the outer and inner epidermal cells of lemmas, which determine the shape and size of grains, showed that those cells were stretched longitudinally in slg-D, giving rise to the slender-grain phenotype (Fig. 2A-G). Histological analysis of the second leaf lamina joints indicated no significant alteration in cell size on the abaxial sides between slg-D and the WT (Fig. 2H-K). In contrast, the cell length on the adaxial sides was increased in slg-D (Fig. 2H, L-N). Therefore, it is the asymmetric cell expansion at the opposite sides of the lamina joint that causes the larger leaf bending in slg-D. Overall, these results indicate that changes in cell length are responsible for the phenotypes of slg-D.
Enhanced expression of Loc_Os08g44840 in slg-D leads to the mutant phenotypes
To isolate the gene responsible for slg-D, we searched the T-DNA insertion database and obtained a genomic flanking sequence (Jeong et al., 2006; http://orygenesdb.cirad.fr/). Based on this information, we designed three PCR primers (P1, P2, and P3) and confirmed the site of the T-DNA insertion in slg-D (Fig. 3A, B). To test whether the mutation in slg-D is related to the insertion of the activation-tagging T-DNA, we PCR-genotyped the mutant F2 plants derived from the cross slg-D × WT and found that the mutant phenotypes were always associated with the presence of the T-DNA (Supplementary Fig. S2). A BLAST search (http://www.ncbi.nlm.nih.gov/) showed that the T-DNA was inserted in an intergenic region, with Loc_Os08g44830 2000 bp upstream and Loc_Os08g44840 4500 bp downstream (Fig. 3A). Another nearby gene is Loc_Os08g44820, upstream of Loc_Os08g44830 (Fig. 3A). Next, we examined whether the four enhancer repeats in the T-DNA influenced the expression levels of the three nearby genes, and found that two of them (Loc_Os08g44830 and Loc_Os08g44840) had elevated expression (Fig. 3C). This result suggests that the changed expression levels of these two genes might be responsible for the mutant phenotypes of slg-D.
We reasoned that overexpression of Loc_Os08g44830, Loc_Os08g44840, or both in the WT might recapitulate the phenotypes observed in slg-D. To test this, Loc_Os08g44840 and Loc_Os08g44830, under the control of the maize Ubi promoter, were individually overexpressed in WT plants. Interestingly, only plants overexpressing Loc_Os08g44840, not Loc_Os08g44830, showed varying degrees of enlarged leaf angle and slender grain, thus phenocopying slg-D (Fig. 3D-I; Supplementary Fig. S3). The phenotypic variation in the transgenic plants was well correlated with the expression level of Loc_Os08g44840 (Fig. 3D-I). We concluded that the enhanced expression of Loc_Os08g44840 causes the phenotypic changes in slg-D, and designated Loc_Os08g44840 as the SLENDER GRAIN (SLG) gene.
SLG encodes a putative BAHD acyltransferase-like protein
SLG encodes a protein of 445 amino acids. SLG belongs to the putative BAHD family of acyltransferases, which catalyze the formation of a diverse group of plant metabolites using CoA thioesters as substrates (D'Auria, 2006). A phylogenetic analysis revealed that SLG-like proteins largely fall into two groups, dicot and monocot, and SLG is a member of the monocot group (Supplementary Fig. S4). However, to date, none of the genes in this group has been functionally characterized.
Expression pattern and subcellular localization of SLG
Our quantitative RT-PCR analysis showed that expression of SLG is strong in young panicles, relatively high in lamina joints, low in shoot apices, culms, and leaves, and barely detectable in roots and leaf sheaths (Fig. 4A). Furthermore, SLG has its highest expression level during early panicle development, but expression drops dramatically as the spikelets reach their final size (Fig. 4B). To investigate the expression pattern of SLG further, a genomic sequence ~2.5 kb upstream of the translation start site was cloned and introduced into the pCAMBIA1381Z vector, resulting in the PRO SLG :GUS reporter construct. Analysis of GUS activity in transgenic lines showed strong GUS staining in young panicles, lamina joints, and young stem nodes, with faint staining in leaf veins and no visible staining in roots or leaf sheaths (Fig. 4C, parts 1-8). Cross-sectioning of the GUS-stained leaf lamina joint and the bottom portion of the stained internode further showed that the GUS signals were mainly restricted to vasculature regions (Fig. 4C, parts 9-12). We also performed RNA in situ hybridization to localize SLG expression during early panicle development more precisely. Strong SLG expression was detected in spikelet meristem primordia, floral meristem primordia, lemma and palea primordia, and vasculature regions (Fig. 4D). The predominant expression of SLG in young panicles and lamina joints implies its role in controlling grain shape and leaf angle.
To determine the subcellular localization of SLG, we fused the green fluorescent protein (GFP) to the C-terminus of SLG. Transient expression of this fusion protein in rice protoplasts revealed that the GFP signals were found in both the cytoplasm and the nucleus (Fig. 4E). Transgenic rice plants harboring the same fusion construct also showed the cytoplasmic and nuclear localization pattern of SLG (Fig. 4F).
SLG positively regulates endogenous BR levels
The slg-D phenotype resembles that of an activation mutant or transgenic plants with elevated BR accumulation (Wu et al., 2008;Wan et al., 2009), and that of transgenic plants with enhanced BR signaling (Tanaka et al., 2009;Tong et al., 2012), leading us to hypothesize that SLG may be involved in regulating the BR pathway. To determine whether slg-D responds differently to BR treatment, we first performed lamina joint bending assays using excised leaf segments (Wada et al., 1981). We measured the effects of a range of 24-epibrassinolide (BL; a type of active BR) concentrations on the angle of the lamina joints, and found that lamina joint bending was increased in a dose-dependent manner and the sensitivity to BL treatment was similar in slg-D and the WT (Fig. 5A, B). Next we performed another BR response assay involving coleoptile and root elongation using a previously described method (Yamamuro et al., 2000). Comparison of coleoptile and root lengths also showed that slg-D had a similar response to BL as the WT (see Supplementary Fig. S6). These results indicate that BR signaling is not altered in slg-D.
To investigate whether SLG functions in regulating endogenous BR levels, we first tested the effect of brassinazole (BRZ; a specific BR biosynthesis inhibitor; Asami et al., 2000) on slg-D in a lamina joint bending experiment. The leaf tips of 8-day-old seedlings of slg-D and the WT were spotted with 10 μM BRZ daily for 3 d, followed by 7 d of growth in a chamber. We observed that the leaf angle of slg-D was restored to the WT level by BRZ treatment, whereas the WT seedlings had a milder response to the same treatment (Fig. 5C, D), indicating that slg-D is more sensitive to BRZ. A similar result was also seen when the D11 activation line m107, a BR-overproduction mutant, was treated with BRZ (see Supplementary Fig. S7). Next, we introduced the SLG-overexpressing construct into d61-1 (a loss-of-function mutant of the BR receptor gene OsBRI1; Yamamuro et al., 2000) and d11-2 (a mutant deficient in BR biosynthesis; Tanabe et al., 2005), and found that the d61-1 SLG:OE and d11-2 SLG:OE plants still retained the dwarfism, smaller and rounder grains, and erect leaves (Fig. 5E-G). These results suggest a role for SLG in regulating BR levels. Consistent with this, our chemical analysis indeed showed higher contents of 6-deoxocastasterone (6-deoxoCS), typhasterol (TY), and castasterone (CS) in slg-D than in the WT (Supplementary Fig. S8).
It is known that excessive BRs down-regulate the BR-related genes D2, D11, OsDWARF4, BRD1, OsBRI1, and DLT, but up-regulate OsBZR1, as a feedback mechanism (Yamamuro et al., 2000; Hong et al., 2002, 2003; Wang et al., 2002; He et al., 2005; Tanabe et al., 2005; Sakamoto et al., 2006; Tong et al., 2009). We analyzed the expression levels of those genes in slg-D and found that all except BRD1 showed the transcription level changes expected in response to elevated BR levels (Fig. 5H).
As an alternative control, we also measured the expression of those BR genes in the mutant m107, in which D11 expression is dramatically enhanced and BR levels are increased, and found similar expression changes (see Supplementary Fig. S9). These results further confirm the higher BR contents in slg-D. However, SLG itself did not respond to exogenous BL treatment (Supplementary Fig. S10). Taken together, these results suggest that SLG positively regulates endogenous BR levels and is a new regulator of BR homeostasis in rice.
Suppression of SLG leads to BR-deficient phenotypes
To explore further the function of SLG, an SLG RNAi vector was constructed and introduced into WT plants. The SLG RNAi plants displayed a more compact architecture, reduced plant height, smaller leaf angles, and shorter and rounder grains (Fig. 6A-G). These phenotypes are similar to those of BR-deficient mutants, such as d61 and d11 (Hong et al., 2003; Tanabe et al., 2005). In addition, we investigated expression changes of genes involved in BR synthesis or signaling in R7, a typical SLG RNAi line with greatly reduced leaf angle and SLG expression (Fig. 6F, G). Six of the genes examined, D2, D11, OsDWARF4, BRD1, OsBRI1, and DLT, were up-regulated, whereas OsBZR1 was down-regulated in R7 compared with the WT (Fig. 6H). This feedback regulation of BR-related genes caused by knock-down of SLG further supports the involvement of SLG in regulating BR homeostasis. It also suggests that an optimized expression level of SLG may help create a compact, semi-dwarf ideal plant type.
SLG functions as homomers
It has been reported that enzymes often function as homomers or heteromers (Ali and Imperiali, 2005). To investigate the functional form of SLG, the full-length SLG protein was used as bait to screen a yeast two-hybrid library prepared from young rice panicles. From ~1 million yeast transformants, we identified four positive clones containing different SLG cDNA fragments. To confirm the self-interaction of SLG, different truncated SLG proteins were used for interaction analysis. As shown in Fig. 7A, a 30 amino acid region in the N-terminus of SLG (SLG∆C4), rather than the two conserved motifs, was required for the self-interaction of SLG. An in vitro GST pull-down assay also confirmed the self-interaction (Fig. 7B). In addition, BiFC analysis showed that SLG physically interacted with itself and that this interaction required the N-terminal 30 amino acid region (Fig. 7C).
To study the importance of SLG self-interaction, two truncated SLG coding sequences, SLG∆C1 (containing only the 190 N-terminal amino acids) and SLG∆N3 (lacking the 30 N-terminal amino acids), were individually overexpressed in slg-D and the WT. We found that overexpression of SLG∆C1, but not SLG∆N3, in both the WT and slg-D resulted in shorter and rounder grains, smaller leaf angles, and dwarf phenotypes, similar to those of SLG RNAi plants (Fig. 7; Supplementary Fig. S11). The truncated SLG∆C1 protein might interfere with the formation of functional homomers between intact SLG proteins, thus leading to a dominant-negative mutant phenotype. When the interaction region was removed, in SLG∆N3, the truncated protein did not exert any effect on SLG, thus providing genetic evidence that SLG indeed functions as homomers in vivo.
Discussion
In this study, we have provided evidence that SLG is involved in BR homeostasis, positively regulating endogenous BR levels to control grain size and leaf angle in rice. First, the activation-tagging mutant slg-D and transgenic plants overexpressing SLG displayed longer and narrower grains and larger leaf angles, similar to mutants or transgenic plants with enhanced BR signaling or increased BR levels. Second, slg-D and the WT had similar sensitivity to BL treatment in lamina joint bending, coleoptile elongation, and root elongation assays. Third, BRZ treatment restored slg-D to the WT phenotype. Fourth, overexpression of SLG in the BR-related mutants d61-1 and d11-2 did not lead to slender grains and enlarged leaf angles. Fifth, the major BRs were increased in slg-D. Sixth, feedback regulation of the expression of known BR genes was observed in slg-D. Lastly, knockdown of SLG produced phenotypes resembling those of mild BR-deficient mutants.
The size of an organ is determined by cell proliferation and cell expansion (Potter and Xu, 2001; Sugimoto-Shirasu and Roberts, 2003). Our results showed that SLG is required for cell expansion in grains. To investigate the possible regulatory relationship between SLG and previously identified genes that control grain size by influencing cell expansion, such as PGL1, GL7, and GS2/GL2 (Heang and Sassa, 2012; Che et al., 2015; Duan et al., 2015; Hu et al., 2015; Wang et al., 2015), we examined the transcript levels of these genes and found no obvious difference between slg-D and the WT (see Supplementary Fig. S12). This result suggests that SLG may regulate grain size in a pathway independent of PGL1, GL7, and GS2/GL2. SLG is predicted to encode a BAHD acyltransferase-like protein. Previous studies of BAHD acyltransferase family members have shown that this family is capable of using CoA thioesters and catalyzing the formation of a wide variety of plant metabolites by generating ester or amide bonds (D'Auria, 2006). In Arabidopsis, two BAHD acyltransferases, BIA1 and BAT1, are involved in BR homeostasis, probably by converting active BR intermediates into inactive acylated BR conjugates (Roh et al., 2012; Choi et al., 2013). Overexpression of BIA1 or BAT1 results in decreased levels of active BRs and typical BR-deficient phenotypes (Roh et al., 2012; Choi et al., 2013). In our study, SLG, as a BAHD acyltransferase, probably converts an as yet unidentified substrate to the corresponding acyl conjugate, affecting endogenous BR levels in the opposite direction: overexpression of SLG induced increased levels of active BRs and BR-overproduction phenotypes. This difference between Arabidopsis and rice implies that the function of BAHD acyltransferases in BR homeostasis has diverged. On the other hand, most of the enzymes known to catalyze BR biosynthesis belong to the cytochrome P450 protein family (Choe, 2006), implying that SLG, as a BAHD acyltransferase, may not act directly on the known BR intermediates, or that it may represent a different class of enzymes mediating BR synthesis. Further studies are needed to clarify how SLG participates in BR homeostasis.
In many cases, homomer formation is an essential biochemical process, as it creates the complex quaternary structures of proteins that regulate selectivity against different substrates, enzyme activity, or stability (Dayhoff et al., 2010). Here, we showed that SLG interacts with itself and that its N-terminal 30 amino acid region is required for the interaction. It is likely that the self-interaction of SLG forms a functional enzyme complex with a particular quaternary structure that binds the target substrates effectively. The truncated protein SLG∆C1, which lacks the C-terminal 255 amino acids, is still able to interact with the intact version but may form a complex unable to function properly due to a change in the quaternary structure, thus leading to a dominant-negative phenotype. This finding also suggests that the substrate recognition and/or catalytic domain may be located in the C-terminus. The version without the N-terminal interaction region failed to create a dominant-negative phenotype, further confirming that homomer formation of SLG indeed occurs in vivo. It will be interesting to investigate further the number of SLG proteins required to form a functional enzyme complex and its structural organization.
Plant architecture determines planting density, and thus yield. In rice, BR-deficient or -insensitive mutants, such as d2, d11, and d61, show the erect-leaf phenotype (Yamamuro et al., 2000; Hong et al., 2003; Tanabe et al., 2005). More erect leaves, which increase light capture and thus enhance photosynthetic efficiency and nitrogen storage for grain filling, can be combined with high planting densities to improve grain yield and biomass in rice (Sinclair and Sheehy, 1999; Morinaka et al., 2006; Sakamoto et al., 2006). For example, modulating the expression levels of OsDWARF4 and OsBRI1 led to the erect-leaf phenotype and efficiently improved rice grain yield and biomass under dense planting conditions (Sakamoto et al., 2006). SLG, when knocked down by RNAi or interfered with by a truncated version, can create a compact semi-dwarf plant type with smaller leaf angles. Therefore, SLG can be used as an alternative means to manipulate plant height for lodging resistance and leaf angle for planting density by optimizing its expression level, offering potential for improving rice production.
Supplementary data
Supplementary data are available at JXB online.

Figure S1. The slg-D mutation behaves in a semi-dominant manner.
Figure S2. Co-segregation analysis of phenotypes and genotypes in F2 progeny.
Figure S3. Overexpression of Loc_Os08g44830 does not phenocopy the phenotypes of slg-D.
Figure S4. Phylogenetic tree of SLG homologs.
Figure S5. Alignment of the monocot group of SLG homologs.
Figure S6. Sensitivities of roots and coleoptiles to BL are not altered in slg-D.
Figure S7. Responses of WT and m107 leaf lamina joint angles to BRZ.
Figure S8. Measurements of endogenous BR intermediates.
Figure S9. Quantitative RT-PCR analysis of BR-related genes in young m107 and WT panicles.
Figure S10. Quantitative RT-PCR analysis of SLG expression in WT seedlings treated with BL.
Figure S11. Overexpression of SLG∆N3 does not change the phenotypes of the WT and slg-D.
Figure S12. Quantitative RT-PCR analysis of several genes that control grain size by influencing cell expansion in slg-D and the WT.
Table S1. Primers used in this study.
"year": 2016,
"sha1": "73e8eda74be607a5241db3263d447a6d815fa577",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jxb/article-pdf/67/14/4241/18073676/erw204.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "73e8eda74be607a5241db3263d447a6d815fa577",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Trypanosoma cruzi trans-Sialidase in Complex with a Neutralizing Antibody: Structure/Function Studies towards the Rational Design of Inhibitors
Trans-sialidase (TS), a virulence factor from Trypanosoma cruzi, is an enzyme playing key roles in the biology of this protozoan parasite. Absent from the mammalian host, it constitutes a potential target for the development of novel chemotherapeutic drugs, an urgent need to combat Chagas' disease. TS is involved in host cell invasion and parasite survival in the bloodstream. However, TS is also actively shed by the parasite into the bloodstream, inducing systemic effects readily detected during the acute phase of the disease, in particular hematological alterations and triggering of immune cell apoptosis, until specific neutralizing antibodies are elicited. These antibodies constitute the only known submicromolar inhibitor of TS's catalytic activity. We now report the identification and detailed characterization of a neutralizing mouse monoclonal antibody (mAb 13G9) recognizing T. cruzi TS with high specificity and subnanomolar affinity. This mAb displays undetectable association with the T. cruzi superfamily of TS-like proteins or with the TS-related enzymes from Trypanosoma brucei or Trypanosoma rangeli. In immunofluorescence assays, mAb 13G9 labeled 100% of the parasites from the infective trypomastigote stage. This mAb also reduces parasite invasion of cultured cells and strongly inhibits parasite surface sialylation. The crystal structure of the mAb 13G9 antigen-binding fragment in complex with the globular region of T. cruzi TS was determined, revealing detailed molecular insights into the inhibition mechanism. Without occluding the enzyme's catalytic site, the antibody performs a subtle action by inhibiting the movement of an assisting tyrosine (Y119), whose mobility is known to play a key role in the trans-glycosidase mechanism. As an example of enzymatic inhibition involving non-catalytic residues that occupy sites distal from the substrate-binding pocket, this first near-atomic characterization of a high-affinity inhibitory molecule for TS provides a rational framework for novel strategies in the design of chemotherapeutic compounds.
Introduction
Chagas' disease, the American trypanosomiasis, is a chronic disabling parasitic disease caused by the flagellate protozoan Trypanosoma cruzi. With an estimated global burden of 100 million people at risk, 8 million already infected, and approximately 40,000 new cases/year, Chagas' disease represents a major health and economic problem in Latin America [1]. The infection is naturally transmitted by triatomine vectors ("kissing bugs") from the south of the USA to the southern region of South America, although chagasic patients are in fact dispersed worldwide due to migration. Patients can also transmit the disease, either by in utero infection leading to congenitally acquired disease or by accidental transmission through contaminated blood. The acute infection is characterized by a patent parasite burden. During this initial stage, T. cruzi induces several alterations in the infected mammal, including intense polyclonal activation of lymphocytes [2], transient thymic aplasia [3,4], and other clinical hematological findings [5,6]. The majority of patients control the parasitemia, survive the acute phase, and enter an indeterminate form of the disease that may last for many years or even indefinitely [1]. Up to 20 years after the infection, ~35% of patients develop different pathologies, such as cardiomyopathy, peripheral nervous system damage, and/or dysfunction of the digestive tract [1].
Sialic acids have proven to be crucial during the parasite's life cycle and survival in the mammalian host [7-10]. However, T. cruzi is unable to perform de novo synthesis of sialic acids [11]. This family of nine-carbon carbohydrates is actually scavenged from the host's glycoconjugates through a glycosyl-transfer reaction mediated by trans-sialidase (TS), a modified sialidase expressed by the parasite. In this way, the surface of the parasite becomes rapidly sialylated, with mucins being the main sialyl acceptors, in a process that allows the parasite to evade destruction by serum factors [9,10]. TS activity is also involved in host cell attachment and invasion [7,8], as well as in parasite escape from the parasitophorous vacuole into the cytoplasm, where the parasite replicates [12].
In the trypomastigote stage, TS is a glycosylphosphatidylinositol-anchored non-integral membrane protein [13], actively released to the extracellular milieu, leading to systemic distribution of the enzyme through the bloodstream. Its half-life in blood is significantly extended by the presence of a C-terminal repetitive domain named SAPA [14]. TS activity is detectable in the bloodstream of infected humans and mice until antibodies able to neutralize its catalytic activity are elicited [15]. The systemic distribution of TS is associated with several pathologies observed during the early steps of infection, including depletion of thymocytes [16], absence of germinal centers in secondary organs [17], and thrombocytopenia and erythropenia [5,6], all alterations that can be prevented by the passive transfer of TS-neutralizing antibodies [17,18]. In fact, administration of the enzyme to mice before T. cruzi challenge leads to a more severe evolution of the infection [19]. These findings are also consistent with the fact that increased shedding of the enzyme correlates with increased virulence of the corresponding parasite strains [20].
TS has thus been identified as a potential target for drug discovery and design. In addition to its key roles in host response evasion, cell invasion, and pathogenesis, TS is not present in the mammalian host. The development of suitable drugs to treat/prevent Chagas' disease is urgently needed [21]. Only two compounds, benznidazole and nifurtimox, are currently available for treating both acute and chronic infections. These drugs are far from optimal: fairly toxic, they trigger serious side effects, while also showing suboptimal efficacy in a high proportion of patients. The emergence of resistant parasite strains adds a further concern [22]. Several attempts to obtain suitable TS inhibitors have been made, especially once its 3D structure became available [23,24]. However, only low-affinity molecules have been obtained so far [25,26], some of them toxic in in vivo assays [27], ultimately indicating that further and more active efforts must be pursued.
We have obtained a TS-neutralizing mouse monoclonal antibody (mAb 13G9) that displays very high affinity and specificity towards the T. cruzi enzyme. This mAb is able to prevent immune system and hematological abnormalities, even when assaying highly virulent parasites under lethal infection conditions [5,17]. We now report an extensive functional characterization of mAb 13G9, as well as the crystal structure of the 13G9-TS binary complex. The molecular features of the inhibitory mechanism are unveiled, providing novel insight for the development of TS inhibitors, which might also be relevant for related neuraminidases in other pathogens.
Biochemical Characterization of the TS-neutralizing Monoclonal Antibody
Mice were immunized with a recombinant TS protein (D1443TS), identical to the wild type except for the deletion of a non-neutralizing epitope. D1443TS retains full enzymatic activity, while avoiding the otherwise typical delay in eliciting TS-neutralizing antibodies [28,29]. Hybridomas were screened by a TS-inhibition assay [30], and the 13G9 clone secreting a TS-neutralizing mAb (IgG2a, κ) was obtained. The specificity of this mAb was confirmed by the absence of reactivity against the closely related sialidase from Trypanosoma rangeli and the TS from Trypanosoma brucei (data not shown). As depicted in Figure 1A, this mAb showed high affinity for the T. cruzi TS (KD ~7.2 × 10⁻¹⁰ M), as calculated from the kinetic constants determined by surface plasmon resonance. In agreement, isothermal titration calorimetry assays indicated an equilibrium dissociation constant lower than 10⁻⁹ M (raw data not shown).
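For readers less familiar with SPR-derived affinities, the equilibrium dissociation constant follows directly from the kinetic rate constants as K_D = k_off/k_on. The minimal sketch below illustrates this arithmetic; the rate constants shown are hypothetical values chosen only to reproduce a K_D of the reported magnitude, not the measured ones.

# Equilibrium dissociation constant from SPR kinetic constants: K_D = k_off / k_on.
# The rate constants below are hypothetical, chosen only to illustrate how a
# sub-nanomolar K_D like the one reported (~7.2e-10 M) can arise.
k_on = 2.5e5    # association rate constant, 1/(M*s) (hypothetical)
k_off = 1.8e-4  # dissociation rate constant, 1/s (hypothetical)

K_D = k_off / k_on
print(f"K_D = {K_D:.1e} M")  # 7.2e-10 M for these illustrative rates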
The mAb was purified by Protein A affinity chromatography from filtered hybridoma supernatants. This purified material was further subjected to anionic chromatography (Figure S1). The mAb eluted as a single peak, as evaluated both by TS-neutralizing activity (not shown) and by TS recognition in dot-blot assays (Figure S1). The same sequence was found in several mRNAs encoding the antibody (not shown), in support of the clonal nature of the hybridoma. Purified mAb was proteolyzed with papain to generate the Fab fragment. The inhibitory activity of the fragment was determined and compared with that of the whole IgG protein (Figure 1, panels B and C). Although the full-length mAb appears to have a higher inhibitory activity (half maximal inhibitory concentration IC₅₀ ~5.6×10⁻¹¹ M), its Fab fragment still retains a nanomolar IC₅₀ (~1.6×10⁻⁹ M), clearly conserving its antigen-binding mechanism. These high inhibitory potencies are consistent with the apparent dissociation constant determined by surface plasmon resonance (see above), even though IC₅₀ figures cannot be compared with affinity constants in absolute terms at this point (allosteric effects, or mixed inhibition mechanisms, may confound a linear relationship). The purified Fab proved to be fairly unstable when not complexed to TS, requiring immediate use for biochemical characterization. This may be one of the main reasons for the observed decrease in inhibitory potency compared to the entire immunoglobulin molecule. The Fab's instability precluded its use for further in vivo and in vitro biologic assays.
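For context, IC₅₀ values like those above are typically obtained by fitting a sigmoidal dose-response model to the residual enzymatic activity. The following minimal sketch illustrates such a fit with a four-parameter logistic function on synthetic data; it is not necessarily the exact fitting procedure used for Figure 1B/C.

```python
# Minimal sketch (synthetic data): estimate an IC50 by fitting a
# four-parameter logistic model to residual TS activity. This is the
# usual model for such dose-response curves, not necessarily the exact
# procedure used for Figure 1B/C.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, bottom, top, ic50, hill):
    """Residual activity (%) as a function of inhibitor concentration (M)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50)**hill)

conc = np.array([1e-12, 1e-11, 1e-10, 1e-9, 1e-8])     # M (assumed)
activity = np.array([98.0, 65.0, 15.0, 3.0, 1.0])      # % of control (assumed)
params, _ = curve_fit(logistic4, conc, activity, p0=[0.0, 100.0, 1e-10, 1.0])
print(f"IC50 = {params[2]:.1e} M")
```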
Author Summary
Chagas' disease, or American trypanosomiasis, is an endemic illness that affects approximately 8 million people in Latin America. The etiologic agent is the protozoan parasite Trypanosoma cruzi. To survive in the mammalian host and invade its cells, leading to the chronic infection, the parasite incorporates a charged carbohydrate (sialic acid). However, the parasite is unable to synthesize sialic acid, having to scavenge it from the host's sialoglycoconjugates through a transglycosylation reaction catalyzed by the enzyme trans-sialidase, which is unique to these organisms. We have obtained a monoclonal antibody that fully inhibits T. cruzi trans-sialidase, being, to the best of our knowledge, the most potent inhibitor available. We now report a complete characterization of this neutralizing monoclonal antibody at the functional and molecular levels. The antibody displays very high affinity and specificity for the T. cruzi enzyme, labels the parasite surface, and effectively blocks its sialylation and host cell invasion capacities. The determination of the 3D structure of the enzyme-antibody immunocomplex by X-ray diffraction allowed us to unveil the inhibition mechanism, providing clues for rational drug design. Given that sialidases are virulence factors in several pathogenic microorganisms, the reported data should help to expand knowledge in this area.
T. cruzi TS belongs in fact to a large superfamily of genes, among which at least four families can be discriminated [31]. TSs are included in only one of these families, which encodes a number of enzymatically active and inactive members [32]. These two forms of TS can be distinguished by the single Tyr342His mutation [33]: only the active TSs have the Tyr342 residue acting as the enzyme's nucleophile during the ping-pong reaction [34]. TS-mAb competition assays performed with the inactive TS showed that both proteins reacted similarly with the mAb. An equimolar mixture of inactive and active TSs displayed a ~50% reduction of the neutralizing reactivity (Figure 1D). In a separate set of assays, heat-inactivated TS was not recognized by the mAb 13G9 (Figure S1), consistent with the hypothesis that the neutralizing epitope is conformational [35]. In the infective trypomastigote stage, all TSs include the SAPA C-terminal extension [31], which is absent in all the other TS-related families, allowing for clear-cut discrimination. To address whether the mAb 13G9 was specific only for TS proteins, extracts from biotinylated trypomastigotes were reacted with the antibody (Figure 1E). Pulled-down material was subjected to Western blot and developed in parallel with anti-SAPA (for TS) and streptavidin (for all biotinylated parasite surface components). Strong signals were readily observed in both lanes, matching the expected TS protein sizes. No differential pattern was detected whatsoever, confirming the very high specificity of the 13G9 antibody towards proteins belonging to the TS family.
mAb 13G9 Reduces Cell Invasion and Inhibits the Sialylation of the Parasite
The reactivity of mAb 13G9 with whole parasites was assayed by immunofluorescence, showing surface labeling consistent with the expected cell membrane localization of TS (Figure 2A). The ability of the mAb to inhibit TS-mediated transfer of sialic acid from the surrounding environment to the parasite's surface molecules was then tested. To reduce the basal sialylation of parasites, sialyl residue donors were largely depleted by replacing fetal bovine serum (FBS) with bovine serum albumin (BSA) in the infected tissue cultures; only host cells remained as the unique source of the sugar. Trypomastigotes were then collected and incubated with α(2,3)sialyllactose as sialic acid donor and TS, in the presence of mAb 13G9. The amount of transferred sialic acid was determined by the thiobarbituric acid method [36]. As shown in Figure 2B, mAb 13G9 very efficiently inhibited the parasites' sialylation, demonstrating its biologic relevance as a TS-inhibitory molecule. The sialylation observed in the treated parasites corresponds to the sugar acquired before the addition of the mAb. These quantitative results are in agreement with the Western blot assays we recently reported for sialyl-transfer inhibition by mAb 13G9 using azido-modified sialic acids [37].
TS is involved in cell invasion [8,12], given that sialic acid is required for competent interplay with the host cells. The ability of mAb 13G9 to interfere with the invasion process was therefore studied. The addition of the mAb (Figure 2C) strongly reduced the number of infected cells, highlighting its biologic activity and contributing direct evidence that TS is a valid target for drug discovery.
3D Structure of the Immunocomplex Fab/TS
To gain atomic insight into the antigen-antibody interactions that allow mAb 13G9 to neutralize the TS catalytic activity with extremely high efficiency, we solved the structure of the immunocomplex by X-ray crystallography.
Crystallogenesis screenings were performed under a sitting-drop vapor diffusion setup with a Honeybee963 robotic station, using standard 96-well plates. Several initial hits were obtained. Further manual optimization eventually allowed us to grow crystals (0.7×0.05×0.05 mm) in polyethylene glycol (PEG) 20,000 plus dioxane, suitable for X-ray diffraction data collection (Table 1). Limiting resolution was 3.4 Å on a Cu rotating anode generator, and indexing was straightforward, indicating a primitive cell in the trigonal/hexagonal system. Cell parameters (a = b = 178.1 Å, c = 140.7 Å) suggested the presence of as many as 3 binary complexes per asymmetric unit, raising the hypothesis that the weak diffraction could be due to limiting X-ray beam intensity in the context of a fairly large unit cell (low number of scattering cells per crystal unit volume). To rule out this possibility, several crystals were tested at the ALS (Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA) beamline 5.0.2 (8×10¹¹ photons/s with 1.5 mrad divergence at 12.4 keV), with no detectable improvement in resolution as judged by standard quantitative statistics, strongly suggesting that crystal disorder linked to the high solvent content (66% as determined after full refinement) is the major cause of the limited maximum resolution.
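The stated crystal content can be cross-checked with a standard Matthews-coefficient calculation. The sketch below uses the reported cell parameters and space group; the molecular mass assumed for one Fab-TS complex (~118 kDa) is an illustrative estimate, not a value given in the text.

```python
# Minimal sketch: Matthews-coefficient cross-check of the crystal
# content. Cell parameters and space group are from the text; the mass
# of one Fab-TS complex (~118 kDa) is an illustrative assumption.
import math

a, c = 178.1, 140.7                       # trigonal cell edges, Angstrom
V_cell = (math.sqrt(3) / 2) * a * a * c   # hexagonal-setting cell volume
Z = 3 * 3                                 # 3 symmetry operators (P3_1) x 3 complexes/ASU
M = 118_000                               # Da per Fab-TS complex (assumed)
V_M = V_cell / (Z * M)                    # Matthews coefficient, A^3/Da
solvent = 1.0 - 1.23 / V_M                # standard solvent-fraction estimate
print(f"V_M = {V_M:.2f} A^3/Da, solvent fraction ~ {100 * solvent:.0f}%")  # ~66%
```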
No 6-fold peaks were found in self-rotation function maps, and the κ = 180° section revealed significantly weaker signals than the 3-fold axis (data not shown), consistent with point group 3. Systematic extinctions were observed along the reciprocal 00l axis, strongly suggesting space groups P3₁ or P3₂. The structure was solved by molecular replacement, confirming space group P3₁. Two search probes were used to calculate rotation and translation functions: Protein Data Bank (PDB) entry 3CLF (mouse IgG Fab fragment, chosen according to sequence similarity to mAb 13G9) and 2AH2 (high resolution T. cruzi TS model). Iterative cycles of maximum likelihood refinement [38] were interspersed with manual rebuilding [39]. The high resolution of the molecular replacement search models resulted in excellent maps and straightforward rebuilding, mostly adding missing side chains on the immunoglobulin heavy and light chains. Tight non-crystallographic symmetry (NCS) restraints were kept only in the first refinement cycles, thereafter allowing for automatic local NCS detection, with variable weights according to evolving rms deviations, as implemented in the program Buster/TNT [40]. Model refinement statistics are summarized in Table 1. Interestingly, the PISA server (European Bioinformatics Institute, Hinxton) predicts that the TS-Fab 13G9 complex would not be stable in solution, contradicting our experimental results. This discrepancy reveals the still challenging task of predicting energetic and thermodynamic properties of protein/protein associations based on the analysis of crystal structures of the partners and derived complexes, despite the fact that prediction algorithms are complex and attempt to integrate enthalpic and entropic effects, as well as solvent accessible surface burial and geometric complementarity [41].
Indeed, three binary Fab-TS complexes are located in the asymmetric unit, all very similar at the level of precision of our data. Refined models of immunocomplex 2 (IC2, composed of TS chain B and chains I and M of the Fab molecule) and IC3 (TS chain C, complexed to Fab chains J and N) were superposed sequentially onto complex IC1 (TS chain A with the H "heavy" and L "light" chains of the Fab molecule), minimizing root mean squared deviations (rmsd) of atomic coordinates. These structural alignments resulted in 0.84 Å rmsd between IC1 and IC2, and 0.82 Å between IC1 and IC3. Regions of highest variation correspond to intrinsically mobile segments, as reflected by detailed analysis of atomic displacement parameters (isotropic B factors). The mean B factor for all atoms is relatively high (59.9 Ų), consistent with the low resolution to which these crystals diffract X-rays. Crystal packing is indeed loose, leading to high bulk solvent content and corresponding protein flexibility. TS molecules display lower B factors than the Fab dimers to which they are bound. A global tendency is also maintained among the independent complexes, IC3 showing greater mobility than IC2, which in turn is more flexible than complex IC1, probably due to the different packing environments. In the case of the immunoglobulin heterodimers, the chains also display a clear difference between the variable domains, which are more rigid, and the constant domains, which show a reproducible flexibility on the distal half, away from the interdomain hinge.
Given the overall structural similarity among the three complexes, and the fact that complex IC1 resulted in a model with lower B factors, subsequent analyses refer only to this complex. Figure 3 shows the immunocomplex IC1, highlighting that the variable regions of the Fab light chain interact with TS loops located close to the entrance of the enzyme's catalytic pocket, while the heavy chain associates with an adjacent, more distal patch.
The solvent accessible surface buried by the enzyme-antibody interaction corresponds to 1810.2 Ų (916.5 Ų on the TS and 893.7 Ų on the Fab, comprising 506 Ų from the heavy chain and 387.7 Ų from the light chain), within the typical range of antibodies reacting with protein antigens. On this interface, 15 hydrogen bonds and one salt bridge can be distinguished, as well as a number of residues that establish contact interactions (van der Waals forces), as listed in Table 2. The resolution limit of the diffraction data allowed for the identification of very few water molecules, none of which is directly involved in the interaction interface. The shape complementarity statistics [42] correspond to 0.673 and 0.645 after analysis of the interface areas with the light and the heavy chains, respectively. These figures are within the typical range (0.64-0.74) of specific protein:protein interfaces. The epitope (Figure 4) consists of residues H171, Y248, R311-W312, and loops 199-201 (KKK) and 116-128 (SRSYWTSHGDARD; W120 and A126 do not interact directly).
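Interface contacts such as those summarized in Table 2 can in principle be recomputed from the deposited coordinates (PDB 3OPZ, see Accession Numbers) with a simple distance search. The following sketch uses Biopython and assumes the coordinate file has been downloaded locally; chain identifiers follow the naming used in the text (TS chain A, Fab heavy chain H, light chain L), and the 4 Å cutoff is a common convention rather than the exact criterion used for Table 2.

```python
# Minimal sketch: recompute enzyme-antibody contacts like those in
# Table 2 from the deposited coordinates (PDB 3OPZ), assuming the file
# has been downloaded as "3opz.pdb". Chain IDs follow the text (TS chain
# A, Fab heavy chain H, light chain L); the 4 Angstrom cutoff is a
# common convention, not necessarily the criterion used for Table 2.
from Bio.PDB import PDBParser, NeighborSearch

model = PDBParser(QUIET=True).get_structure("ic1", "3opz.pdb")[0]
ts_atoms = list(model["A"].get_atoms())
fab_atoms = [a for ch in ("H", "L") for a in model[ch].get_atoms()]

search = NeighborSearch(fab_atoms)
contacts = set()
for atom in ts_atoms:
    for partner in search.search(atom.coord, 4.0):   # atoms within 4 A
        ts_res = atom.get_parent().get_id()[1]       # TS residue number
        fab_res = partner.get_parent()
        contacts.add((ts_res, fab_res.get_parent().id, fab_res.get_id()[1]))
for ts_res, chain, res in sorted(contacts):
    print(f"TS {ts_res} -- Fab {chain}{res}")
```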
The structural basis of the catalytic inhibitory effect elicited by this mAb can start to be elucidated by modeling the entrance of the sialylated substrate into the TS reaction center in the context of the TS-Fab complex (Figure 5). Superimposing TS PDB models 1S0I and 1S0J onto our structure allowed us to define the positions of the substrates N-acetyl-neuraminyl-lactose (α(2,3)sialyllactose) and 4-methylumbelliferyl-N-acetyl-neuraminic acid (MU-NANA), respectively (Figure 5). The most readily observable feature is the steric hindrance imposed by TS residue Y119, blocking the entrance of the sialyl residue into the reaction pocket.
The free mobility of the phenolic side chain of Y119 is limited by the juxtaposed residue S30 from the Fab's light chain (Figure 5). This restraint seems to play a central role in precluding the entrance of sialylated substrates into the catalytic pocket, an entrance that absolutely requires the movement of Y119 [23]. A second effect cannot be excluded, namely the spatial constraint exerted by the overall architecture of the associated complex. Residues S26-S28 (within the light chain complementarity-determining region CDR L1) and S66-G67 on the same Fab chain establish direct contact with TS residues R311 and W312. This interaction is located just on top of the catalytic pocket entrance, functioning as a 'roof' (SG/RW roof), where the catalytic center itself would be the floor. As shown in Figure 5B, when sialyllactose is placed in position, the substrate pocket appears to be too small, predicting direct clashes of the glucosyl residue with the SG/RW roof (particularly residues Ser66-Gly67 of the Fab light chain). This scenario of course implies that Y119 could eventually be forced to move out of the sialic acid binding site, an unlikely event. The light chain loop 29-31 is also prone to interfere with the saccharide if rearrangements are considered during its accommodation (data not shown). To obtain further experimental data evaluating the relative effects of Y119-mobility hindrance and/or the spatial constraints exerted by the SG/RW roof on the catalytic pocket volume, MU-NANA was assayed in TS-catalyzed sialidase reactions. MU-NANA is an artificial substrate that allows for TS-catalyzed hydrolytic and trans-glycosidase activities [43] and, given its smaller volume, could be better accommodated, avoiding steric clashes with the SG/RW roof structure (Figure 5B). TS-mediated MU-NANA hydrolysis was efficiently inhibited by mAb 13G9 (Figure 6), suggesting that the immobilization of Y119 does play a central role. The spatial confinement in the pocket, partly due to the SG/RW roof structure, might impose secondary constraints precluding torsional accommodation, even in the case of smaller compounds.
Discussion
This report describes an extensive biochemical and structural characterization of the mouse mAb 13G9, which is herein demonstrated to act as a powerful inhibitor of the T. cruzi TS catalytic activity, displaying high specificity and affinity for the enzyme. T. cruzi TS is a virulence factor required for the survival of the parasite in the mammalian host. Several different biologic activities of the enzyme can be discriminated. The parasite uses TS activity to sialylate its own surface molecules, allowing it to evade lysis by serum factors [9,10]. In this context, it should be noted that the addition of mAb 13G9 inhibited this sialylation process (Figure 2), in agreement with our previous findings with azido-modified sugars [37]. As well, TS is not only directly involved in the parasite/host cell interaction through the generation of a required sialylated epitope [7,8], but also in the escape from the parasitophorous vacuole to the cytoplasm [12]. In concert with these findings, we report here that mAb 13G9 significantly reduces parasite infection of cell cultures (Figure 2). Passive transfer of neutralizing mAb 13G9 to heavily infected mice protects them against TS-induced deleterious effects on the immune system and platelets [5,17]. In this sense, it is well known that antibodies against neuraminidases are also effective in preventing other diseases such as influenza [44]. These protective effects are promising for the delineation of a therapeutic tool. The high molecular weight of antibodies constitutes a main drawback to their use, due to possible hindrance of effective diffusion into infected tissues, where high concentrations of locally produced TS are expected to be found. On the other hand, Fab fragments, small recombinant antibody-derived molecules (e.g., scFv), or antibody-mimetic engineered molecules [45] can be cleared exceedingly fast from the bloodstream [46], resulting in poor pharmacokinetic figures. PEGylation and other modifications to improve the bioavailability of these smaller protein scaffolds constitute interesting approaches to be tested using mAb 13G9 as a starting lead [47]. As a second interesting avenue to explore for therapeutic derivatives, the high affinity and specificity of this mAb prompted us to elucidate its neutralizing mechanism, in an attempt to thereafter conceive low molecular weight inhibitors suitable as chemotherapy leads. Some information can be gathered in this respect from previous studies of the neuraminidase from Influenza virus, a protein orthologous to TS.

Figure 3. Overall structure of one binary immunocomplex. Immunocomplex 1 (IC1, as defined in the text) is depicted as a background cartoon representation with a superposed transparent solvent-accessible surface rendered in colors. trans-Sialidase is colored green, the Fab light chain magenta, and the heavy chain cyan. The antibody light chain slightly occludes the entrance of the enzyme's catalytic pocket, while the heavy chain is more eccentric, establishing a large interaction surface on one side of the reaction center. doi:10.1371/journal.ppat.1002474.g003
The overall geometry of the antibody/TS association that we now report is reminiscent of the one described for a Fab/Influenza-N2 neuraminidase complex (PDB 2AEP; [48]), which shows interaction with enzyme loops on the same side of the reaction pocket, opposite to the patch where most other anti-neuraminidase antibodies have been reported to bind (such as those involving avian N9 neuraminidase with antibodies NC41 and NC10, PDBs 1NCA and 1NMB, respectively, among others) [49][50][51]. The interaction surfaces of TS-13G9 mAb (this report) and N2NA-Mem5Fab (2AEP) largely overlap, although the antibodies are bound in inverted configurations with respect to the location of the heavy and light chains. Well-defined escape mutations in Influenza (loops including positions 198-199 and 220-221, following the N2 Influenza numbering scheme) identify epidemiologically important antigenic sites of neuraminidase, revealing antigenic drift in human viruses seemingly under natural antibody selection of enzyme variants [52]. These loops, connecting β2-β3 within the second blade of the six-bladed β-propeller domain, and β4 of this blade with β1 of the next one, are not structurally conserved between the T. cruzi and Influenza enzymes, being longer in the former. Nevertheless, it is clear that the equivalent loops in T. cruzi TS do play a critical role in the 13G9 Fab association that we report here.
One of the specific mAb loops that interacts in a position proximal to the catalytic pocket of the enzyme was observed to preclude the displacement of Y119, a critical residue that has already been shown to be flexible in TS [24,53]. Indeed, the mobility of Y119 plays a key role in the trans-glycosidase mechanism of TS. The determination of the three-dimensional coordinates of the paratope, including the features that lead to spatial constraints, uncovers relevant information. This can be used as a precise guide, not only to undertake peptidomimetic syntheses but, most importantly, as a working template for the synthesis of non-peptidic molecules including critical pharmacophores [54].
Ethics Statement
The protocol of this study was approved by the Committee on the Ethics of Animal Experiments of the Universidad Nacional de San Martín; the protocol was developed following the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health.
Recombinant Enzymes
Recombinant T. cruzi TSs (constructs 1N1, 2Vo, Δ1443TS and 3.2) [24,28,33], T. rangeli sialidase [23] and T. brucei TS [55] were used. The 1N1 and 2Vo clones correspond to the full-length (including the SAPA repeats [33]) wild type genes that encode enzymatically active and inactive molecules, respectively. The Δ1443TS recombinant TS was used for immunization procedures. Δ1443TS is an engineered variant carrying the deletion of a non-neutralizing epitope in the globular domain [28]. The TS 3.2 construct [24] is engineered to express only the enzymatically competent globular domain, containing seven mutations of surface-located residues that allow for protein crystallization. All TSs were expressed in Escherichia coli BL21 and used immediately after purification, avoiding >3 weeks of storage at 4 °C. Recombinant proteins were purified to homogeneity as described elsewhere [56]; briefly, TS was subjected to immobilized metal affinity chromatography (Ni²⁺-charged Hi-Trap Chelating HP) followed by MonoQ anion exchange chromatography (both from GE-Healthcare).
Hybridoma Screening and mAb Production
Splenocyte suspensions were mixed with Sp2/0-Ag14 cells (ATCC), and fusions were performed with polyethylene glycol (GIBCO) following standard procedures [58]. Cells were seeded on 96-well flat-bottom plates at a density of 1×10⁵ cells/well in RPMI 1640 with 2 mM sodium pyruvate, 10% FBS, 1X hypoxanthine-aminopterin-thymidine (HAT) solution (all from Invitrogen), supplemented with 2% supernatant of Sp2/0-Ag14 cultures. One week later, plates were observed under the microscope, and the supernatants of those wells containing hybridomas were taken and replaced with fresh medium. ELISA was performed on these samples in search of TS-specific antibody production. To preserve discontinuous epitopes, the recombinant TS 1N1 containing the C-terminal repetitive extension (SAPA) was linked to the plate (MaxiSorb, NUNC) by Protein A-Sepharose (HiTrap, GE-Healthcare)-purified rabbit IgG anti-SAPA, a procedure that safely retained the enzymatic activity (not shown). Those culture wells where anti-TS antibodies were detected were further assayed by TS-inhibition assay [30]. Hybridomas secreting neutralizing antibodies were cloned twice by cell dilution. Of four inhibitory antibody-secreting hybridomas detected, only one (named 13G9) was successfully recloned twice by the dilution method and then expanded. The mAb 13G9 was typed as IgG2a,κ using the Mouse Antibody Isotyping Kit (GIBCO).
mAb Production and Purification
The 13G9 hybridoma was cultured in RPMI 1640 plus 2 mM sodium pyruvate and 10% FBS. Supernatants were clarified and subjected to Protein A-Sepharose (GE-Healthcare) affinity chromatography. The mAb was eluted with 150 mM NaCl, 0.1 M glycine-HCl pH 3.5, and aliquots were collected into 0.1 M Tris-HCl pH 7.6 and dialyzed against 50 mM NaCl, 20 mM Tris-HCl, pH 7.6. Fractions were then loaded onto an ion-exchange column (MonoQ, GE-Healthcare) and eluted with a 50-500 mM NaCl gradient in the same buffer (Figure S1). Purified 13G9 mAb was tested by TS-inhibition assay [30] and by reactivity to native and denatured TS-SAPA molecules spotted on nitrocellulose (Figure S1).
Determination of Kinetic Parameters of mAb 13G9 Reactivity
The association/dissociation kinetic constants (k_on/k_off) were determined with a BIAcore 2000 (BIAcore AB, Uppsala, Sweden). Purified mAb was dialyzed against 20 mM sodium acetate pH 5.6 and immobilized on CM5 sensor chips using the amine-coupling kit (BIAcore AB). Chips were quenched with 1 M ethanolamine/HCl. After equilibration with 150 mM NaCl, 0.05% P20 surfactant, 10 mM HEPES pH 7.4 (HBS-EP), different concentrations of TS (from 1 nM to 10 µM) were injected at 50 µl/min. After each recording cycle, chips were regenerated with an injection of 2 mM HCl for 30 s. A free surface of the chip was used as a control throughout the experiments. Kinetic constants were evaluated using the program BIAevaluation 3.01 (BIAcore AB). Isothermal titration calorimetry assays were performed in the laboratory of Dr. Alan Cooper (Department of Chemistry, Joseph Black Building, University of Glasgow, UK).
Inhibition constants of TS activity were determined for mAb 13G9 and its derived Fab fragment (see below for digestion details) by testing increasing amounts of inhibiting antibody with 2 ng of TS in 30 µl of 150 mM NaCl, Tris-HCl pH 7.6. After 5 min at room temperature (RT), 1 mM sialyllactose and 0.4 nmol (about 40,000 cpm) of [D-glucose-1-¹⁴C]-lactose (54.3 mCi/mmol, GE-Healthcare) were added. Remaining TS activity was evaluated [30] after 30 min of incubation at RT.
Specificity of mAb 13G9 Reactivity
Trypomastigotes (120×10⁶) were purified from supernatants of infected Vero cell cultures, biotinylated (Sulfo-NHS-LC-Biotin kit from Pierce, Rockford, IL), washed, lysed in the presence of protease inhibitors, and centrifuged at 16,000×g. The supernatant was precleared with Protein A-Sepharose (GE-Healthcare) and then reacted with 50 µl of mAb 13G9 hybridoma supernatant for 30 min. Protein A-Sepharose was then added, and the beads were extensively washed before addition of SDS-PAGE sample buffer and boiling. SDS-PAGE was performed with two parallel aliquots that were then transferred to a polyvinylidene fluoride (PVDF) membrane (GE-Healthcare) and developed with either rabbit IgG anti-SAPA followed by a horseradish peroxidase (HRP)-labeled secondary antibody, or with HRP-streptavidin and Super Signal West Pico chemiluminescent substrate (Pierce).
Inhibition of Parasite Cell Invasion
T. cruzi trypomastigotes (CL Brener strain) obtained from Vero cell cultures (Minimum Essential Medium (Invitrogen) supplemented with 0.2% BSA instead of FBS to reduce sialic acid donors) were exhaustively washed with PBS. Parasites were tested by infection of Vero and HeLa cell cultures in the same medium at a multiplicity of infection of 30, in the presence of 0.1 mg/ml of mAb 13G9. After 3 h, cells were washed, and medium plus 10% FBS was added. Cells were fixed and stained 24 h later for counting infected cells under the microscope. IgG purified from a naïve mouse was used as a control.
Inhibition of Parasite Sialylation
Parasites obtained under low sialic acid conditions as above were incubated with 1 mM sialyllactose (Sigma) as sialyl residue donor substrate and TS (2 µg/ml), with or without mAb 13G9 (0.1 mg/ml). After washes with PBS, the sialyl residue content was determined by the thiobarbituric acid HPLC assay after hydrolysis in 0.1 M HCl for 1 h at 80 °C [36]. IgG purified from a naïve mouse was used as a control.
Immunofluorescence
Cell culture-derived trypomastigotes were washed with PBS and incubated with mAb 13G9 (0.05 mg/ml) for 15 min, washed, fixed with 1% paraformaldehyde for 10 min on ice, washed again and blocked for 1 h with 2% BSA plus 5% swine serum in PBS.
After that, the parasites were adhered to glass slides via Poly-L-Lysine (Sigma), blocked again, developed with a FITC-conjugated secondary antibody (DAKO, Denmark) and observed by epifluorescence microscopy.
Inhibition of Sialidase Activity
The sialidase activity of TS was determined by measuring the fluorescence of 4-methylumbelliferone released by the hydrolysis of 0.2 mM MU-NANA (Sigma). To 50 ng of TS, different amounts of hybridoma culture supernatant (0-10 µl) or RPMI plus 10% FBS (control) were added. The assay was performed in 50 µl of 150 mM NaCl, 20 mM Tris-HCl pH 6.8. After 10 min at RT, MU-NANA was added to 200 µM, and incubation continued for 30 min. The reaction was stopped by dilution in 0.2 M NaHCO₃ pH 10, and fluorescence was measured with a DyNA Quant 200 fluorometer (GE-Healthcare). Fluorescence values were referred to each RPMI control.
Generation of Antibody Fragments and Immunocomplex
Purified mAb was dialyzed against 2 mM EDTA, 0.1 M Tris-HCl pH 7.6. Before papain digestion, 1 mM dithiothreitol (DTT) was added. Papain-agarose beads (Sigma) were washed with the same buffer and activated by the addition of 1 mM DTT for 15 min at 37 °C. The Fab fragment was generated by digestion for 5 h at 37 °C with papain-agarose beads (3 U papain/mg mAb; 30 mg of beads for 14 mg of mAb) with gentle end-over-end agitation [58]. After centrifugation at 3,000 rpm, 10 µM trans-epoxysuccinyl-L-leucylamido(4-guanidino)butane (E-64) was added. Undigested antibody and the Fc fragment were depleted by Protein A-Sepharose (GE-Healthcare) chromatography, and Fab digestion and purity were assayed by SDS-PAGE.
To generate the immunocomplex, pure TS (3.2 clone) was added immediately after the depletion of papain beads and the E-64 addition step, before subjecting the mixture to Protein A-Sepharose chromatography as above (Figure S1). The immunocomplex was brought to 25 mM NaCl, concentrated on a BIOMAX 30K (Millipore) to 14 mg/ml, and the buffer changed to 25 mM NaCl, 20 mM Tris-HCl pH 7.6. The purified immunocomplex was essentially free of contaminating proteins, and only traces of TS activity remained (see Figure S1). Before crystallization trials, the immunocomplex was repurified by size exclusion chromatography (Superdex200 10/300, GE Healthcare) on an AKTA Purifier (GE Healthcare) with isocratic elution in 100 mM NaCl, 20 mM Tris-HCl pH 7.6. The resulting single symmetric peak was pooled and concentrated to 7.5 mg/ml by ultrafiltration (Vivaspin, Sartorius-Stedim Biotech; 30 kDa-cutoff membrane) in 25 mM NaCl, 20 mM Tris-HCl pH 7.6.
Immunocomplex Crystallization
Crystallogenesis conditions were screened with a HoneyBee 963 robot (Digilab) using the vapor diffusion method in sitting drops, with reservoirs filled with 150 µl of mother liquor (kits JCSG Core Suites I, II, III and IV, Qiagen), rendering 396 different conditions in 96-well plates (3-drop round bottom, Greiner). Protein drops were dispensed by mixing equal parts of protein and reservoir solutions (300 nl + 300 nl). Plates were immediately sealed and incubated at 20 °C. Hits were obtained in several conditions, one of which was chosen for manual optimization in 24-well plates (VDX, Hampton Research). Final optimized conditions consisted of 2+2 µl hanging drops with 0.1 M bicine pH 8.5, 10% PEG 20,000, 4% 1,4-dioxane as mother liquor. To obtain larger crystals suitable for single-crystal X-ray diffraction experiments, repeated macroseeding cycles proved to be essential. Each cycle included selection of the best crystal seeds, which were transferred to protein-free drops of mother liquor and etched for 30 s (this washing procedure was repeated three times). Finally, the seed was added to a fresh hanging drop containing 2 µl protein + 2 µl mother liquor, over 1 ml of pure mother liquor. Single needles grew in 5-10 days, were cryoprotected with mother liquor containing 12% PEG 20,000 and 30% glycerol, and were flash frozen in liquid nitrogen until data collection.
Crystal Structure Determination
Single-crystal X-ray diffraction experiments were performed with a rotating copper anode (Micromax007-HF, Rigaku), multilayer mirrors (Varimax HF, Rigaku) and an image plate detector (Mar345 dtb, Mar Research). Crystals were mounted for data collection at cryogenic temperature (108 K, Cryostream Series 700, Oxford Cryosystems). To attempt to improve the diffraction resolution, similar crystals were subjected to X-ray diffraction using synchrotron radiation at ALS beamline 5.0.2, equipped with a wiggler insertion device. All data sets were processed with MOSFLM [59], SCALA and TRUNCATE [60].
The structure was solved by molecular replacement with the program Phaser [61], using the models 3CLF (mouse IgG Fab) and 2AH2 (T. cruzi TS in complex with 3-fluorosialic acid) as search probes. The Fab probe was previously modified using Chainsaw [60], keeping only the conserved side chains, with the rest pruned to alanine or glycine.
The model was refined to the highest collected resolution (3.4 Å) with the program Buster/TNT [38], using a maximum likelihood target function and non-crystallographic symmetry restraints throughout the entire process. A TLS model was used to refine correlated anisotropic atomic displacement parameters in large rigid-body domains. Reciprocal space refinement cycles were iterated with manual model rebuilding [39]. The validation tools within Coot were inspected regularly during the refinement process. The last validation steps were done with MolProbity [62].
Accession Numbers
The atomic coordinates and structure factors of the Fab-TS immunocomplex that we have solved in this report are accessible in the PDB under accession code 3OPZ. The models used to solve the phase problem have PDB accession codes 3CLF (mouse IgG Fab fragment) and 2AH2 (T. cruzi TS). A number of sialidase and trans-sialidase structures solved previously by us or by other groups are mentioned in the Discussion section and can be accessed in the PDB with the codes: 2AEP (Fab/Influenza-N2 neuraminidase complex, N2NA-Mem5Fab); 1NCA (avian N9 neuraminidase complexed with antibody NC41); 1NMB (avian N9 neuraminidase complexed with antibody NC10); 1S0I (T. cruzi TS in complex with sialyllactose) and 1S0J (T. cruzi TS in complex with MU-NANA). The sequence of T. cruzi trans-sialidase can be accessed from GenBank with the code L26499.

Figure S1 Production of the mAb 13G9-TS immunocomplex. A) MonoQ chromatogram of Protein A-purified hybridoma 13G9 supernatant. The mAb eluted as a single peak. B) TS reactivity of eluted and flow-through proteins. Nitrocellulose membranes were spotted with native (1) or heat-denatured (2) TS-SAPA. The upper panel was tested with flow-through proteins, the middle panel with the eluted peak, and the lower panel with an anti-SAPA mAb. Filters were developed with an HRP-labeled secondary antibody against mouse immunoglobulins. Note the absence of reactivity to the denatured protein by the 13G9 mAb (middle panel, spot 2), in contrast with the anti-SAPA mAb, which recognizes a continuous epitope (lower panel). C) Purification of the Fab-TS complex through a Protein A affinity column. The retained protein corresponds to the Fc fraction. D) SDS-PAGE of the purified TS-Fab complex. E) Almost no residual TS activity was found in the TS-Fab complex. (EPS)
"year": 2012,
"sha1": "99769228bebc111d24e435efb9e8da4a55b930cd",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1002474&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51f964bbe2b157763a0c19e807435302be2c1beb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Impact of Doxorubicin on Cell-Substrate Topology
Variable-Angle Total Internal Reflection Fluorescence Microscopy (VA-TIRFM) is applied with a view to the early detection of cellular responses to the cytostatic drug doxorubicin. To this end, we determined the cell-substrate topology of cultivated CHO cells transfected with a membrane-associated Green Fluorescent Protein (GFP) in the nanometer range prior to and subsequent to the application of doxorubicin. Cell-substrate distances increased by up to a factor of 2 after 24 h of application. A reduction of these distances by again a factor of 2 was observed upon cell aging, and an influence of the cultivation time is discussed. The applicability of VA-TIRFM was supported by measurements of MCF-7 breast cancer cells after membrane staining and incubation with doxorubicin, when cell-substrate distances again increased by a factor ≥ 2. So far, our method requires well-defined cell ages and staining of cell membranes or transfection with GFP or related molecules. The use of intrinsic fluorescence or even light-scattering methods with various cancer cell lines could make this method more universal in the future, e.g., in the context of early detection of apoptosis.
Introduction
Doxorubicin, an anthracycline antibiotic, is used as a cytostatic drug in cancer chemotherapy, e.g., for breast cancer, bronchial carcinoma, and lymphoma, and has been studied and applied for several decades [1,2]. The drug is taken up by cells either by passive diffusion through their membrane [3] or by endocytosis after encapsulation [4,5] and finally intercalates into DNA strands, where it causes chromatin condensation and initiates apoptosis [6,7]. In cardiomyocytes, which are densely packed with mitochondria, oxidative stress is a key factor during treatment with anthracycline drugs [8] and can result in heart failure [9]. Due to its fluorescence properties [10], doxorubicin can be localized within cells, e.g., by widefield microscopy, fluorescence lifetime measurements [11][12][13], or hyperspectral imaging [14].
While in previous papers we focused on the uptake and intracellular distribution of doxorubicin in 2-dimensional [15] and 3-dimensional [16] cell cultures, we now draw our attention to the plasma membrane, in particular to cell-substrate topology, in order to obtain new information on cell morphology upon application of doxorubicin. Chinese Hamster Ovary (CHO) cells were transfected with a membrane-associated Green Fluorescent Protein (GFP), and the distance between the fluorescent cell membrane and a glass slide, upon which cells were growing as monolayers, was calculated for all pixels from individual images acquired by Variable-Angle Total Internal Reflection Fluorescence Microscopy (VA-TIRFM) [17]. This technique was used in order to measure early cellular responses caused by doxorubicin within 2-24 h, since conventional tests proved cytotoxicity only after an incubation time t ≥ 24 h for 2D cell cultures [18][19][20] and after t = 48-96 h for 3D cultures [16]. In addition, established cytotoxicity tests require considerably more time for evaluation (up to about 7 days for a colony formation assay [21]).
Theory
Our experiments are based on Variable-Angle Total Internal Reflection Microscopy (VA-TIRFM) with light incident on a cell monolayer at an angle Θ larger than the critical angle Θc = arcsin(n₂/n₁) for total internal reflection (TIR), with n₁ corresponding to the refractive index of the medium of light incidence and n₂ (≤ n₁) to that of the cells. Using a two-layer model with the refractive indices of a glass prism (n₁) and the cytoplasm (n₂), an evanescent electromagnetic field arises at the cell-substrate interface with an exponential decay and a penetration depth [22]

d(Θ) = (λ/4π) (n₁² sin²Θ − n₂²)^(−1/2)  (1)

The refractive index of the plasma membrane is negligible in this equation due to its thickness of around 5 nm. A previous calculation [17] showed that the fluorescence intensity of a fluorophore located in a thin layer of thickness t (e.g., the cell membrane) and excited by the evanescent field can be approximated as

I_F(Θ) = A c T(Θ) t exp(−Δ/d(Θ))  (2)

with an experimental constant A, the concentration c of the fluorophore, the transmission factor T between media of different refractive indices, and the distance Δ between the cell membrane and the substrate (see also Supplementary Information, Section S2). With

T(Θ) = 4 cos²Θ / (1 − n₂²/n₁²)

for polarized light perpendicular to the plane of incidence [23], a plot of ln[I_F/T(Θ)] over 1/d(Θ) results in a linear function with the slope −Δ according to Equation (2). Δ corresponds to the cell-substrate distance of each pixel, as further visualized in [24]. In the present paper, Δ was evaluated from fluorescence images recorded under VA-TIRFM.
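To make the evaluation procedure concrete, the following minimal sketch simulates the intensity of a single pixel with Equations (1) and (2) and recovers Δ from the slope of ln[I_F/T(Θ)] versus 1/d(Θ). The refractive indices, excitation wavelength, and the "true" distance are illustrative assumptions, not the calibration values of the actual setup.

```python
# Minimal sketch (illustrative values, not the calibration of the actual
# setup): simulate the VA-TIRFM intensity of one pixel with Eqs. (1) and
# (2) and recover the cell-substrate distance Delta from the slope of
# ln(I_F/T) versus 1/d.
import numpy as np

n1, n2 = 1.52, 1.37                   # glass prism / cytoplasm (assumed)
wavelength = 470e-9                   # excitation wavelength in m (assumed)
theta = np.deg2rad(np.arange(66, 74))  # angles of incidence, 66-73 deg

# Eq. (1): penetration depth of the evanescent field
d = (wavelength / (4 * np.pi)) / np.sqrt((n1 * np.sin(theta))**2 - n2**2)

# Transmission factor for light polarized perpendicular to the plane of
# incidence (s-polarization)
T = 4 * np.cos(theta)**2 / (1 - (n2 / n1)**2)

# Eq. (2): I_F = A*c*T*t*exp(-Delta/d); lump A*c*t into one prefactor
delta_true = 80e-9                    # assumed "true" distance: 80 nm
I_F = 1.0 * T * np.exp(-delta_true / d)

# ln(I_F/T) = const - Delta*(1/d): the slope of a linear fit yields -Delta
slope, _ = np.polyfit(1.0 / d, np.log(I_F / T), 1)
print(f"recovered Delta = {-slope * 1e9:.1f} nm")   # ~80.0 nm
```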
Results
Representative fluorescence spectra from subcultures with 56-64 cell splittings are depicted in Figure 1 for Total Internal Reflection (TIR) angles of 66°, 69°, and 73° after 72 h cell growth and (subsequently) 0 h (control), 2 h, or 24 h incubation with doxorubicin in cultivation medium. The spectra are characteristic of Green Fluorescent Protein (GFP), with emission maxima around 530 nm, although some overlap by the fluorescence of intracellular doxorubicin or its degradation products, e.g., 7,8-dehydro-9,10-desacetyldoxorubicinone [25], cannot be excluded. At increased incubation times, the fluorescence signal shows a stronger decrease with an increasing angle of incidence, i.e., with a decreasing penetration depth of the evanescent field (see also Supplementary Material, Figure S1). This result is confirmed by Figure 2, which shows representative fluorescence images at 66° excitation as well as the cell-substrate topology calculated for 0 h, 2 h, and 24 h
In an additional experiment, cells were grown for 96 h in culture medium, and most frequent cell-substrate distances were evaluated in a similar way, resulting in (50 ± 10) nm without and (82 ± 5) nm after 2 h incubation with doxorubicin. Values without doxorubicin were thus shown to be independent from growth time in culture, and values after 2 h incubation with doxorubicin differed only slightly in a statistically nonsignificant way. Figure 4 shows a comparison of cell-substrate distances (most frequent distances determined from the histograms as mean value ± standard deviation) for the subcultures with 28-35, 48-50, and 56-64 cell splittings for 0 h and 2 h incubation with doxorubicin. In comparison with subcultures 56-64 (previously presented in detail), the younger cell cultures generally showed higher cell-substrate distances, which increased after incubation with doxorubicin (subcultures 28-35 from 116 ± 58 nm to 147 ± 38 nm and subcultures 48-50 from 103 ± 14 nm to 126 ± 20 nm). The relative increase was similar to that of subcultures 56-64, but only the increase for subcultures with 28-35 and 56-64 cell splittings was statistically significant (levels of significance: p ≤ 0.05 between 0 h and 2 h in both cases; p ≤ 0.001 between 0 h and 24 h for subcultures 56-64). The increase for subcultures 48-50 may be regarded as a relevant trend, which could not be tested for statistical significance due to the low number of individual measurements (n = 6 each). The value at 0 nm (resulting from outside the cells) has been omitted. Inset: most frequent distance evaluated from all histograms as mean value ± standard deviation and p-values for statistical sig nificance obtained from a t-test for 2 samples assuming unequal variances (p 0.05: statistical significant). Subcultures with 56-64 cell splittings. Figure 4 shows a comparison of cell-substrate distances (most frequent distances de termined from the histograms as mean value ± standard deviation) for the subculture with 28-35, 48-50, and 56-64 cell splittings for 0 h and 2 h incubation with doxorubicin In comparison with subcultures 56-64 (previously presented in detail), the younger ce cultures generally showed higher cell-substrate distances, which increased after incuba tion with doxorubicin (subcultures 28-35 from 116 ± 58 nm to 147 ± 38 nm and subculture 48-50 from 103 ± 14 nm to 126 ± 20 nm). The relative increase was similar to that of sub cultures 56-64, but only the increase for subcultures with 28-35 and 56-64 cell splitting was statistically significant (levels of significance: p 0.05 between 0 h and 2 h in bot cases; p 0.001 between 0 h and 24 h for subcultures 56-64). The increase for subculture 48-50 may be regarded as a relevant trend, which could not be tested for statistical signi icance due to the low number of individual measurements (n = 6 each).
Discussion
The measurements described above show increasing cell-substrate distances upon incubation with doxorubicin for t ≥ 2 h. Since this time is shorter than incubation times generally used in cytotoxicity tests, our method appears applicable for early detection of cellular responses to doxorubicin. However, with a view to validate our method, some open questions remain to be resolved: are there any additional effects due to the retention time of the cells in culture medium (without any drug), e.g., due to stiffening of microtubules [26] or cellular traction forces [27]? Previous findings of CHO-K1 cells expressing a human insulin receptor (hIR) and glucose transporter 4-myc-GFP [28] proved that growth time of the cells in a culture medium may have an impact on cell-substrate topology. In the present case, prolongation of the cultivation time from 72 h to 96 h did not change the cell-substrate distances, and after incubation of the cells for 2 h with doxorubicin, these distances were only slightly, but not significantly, larger if a growth time of 96 h instead of 72 h was chosen. Therefore, only further experiments with a larger amount of data can finally settle this question. Furthermore, the age of the cell cultures seems to play a major role. In the present case, we primarily evaluated subcultures with 56-64 splittings, i.e., rather aging cells. When, in additional experiments, we evaluated subcultures with 28-35 or 48-50 cell splittings, the cell-substrate distances were higher by about a factor of 2, while the relative increase after 2 h incubation with doxorubicin was similar to that of the subculture with 56-64 splittings, as shown in Figure 4. This implies that changes in cell-substrate distances may still be regarded as an early response to doxorubicin, but for further evaluation, cell ages should be well-defined. Cell aging may have an impact on the mechanical properties of the cell, e.g., due to mechano-transduction and modified tension of the cytoskeleton (for a review, see [29]) or due to an altered stiffness of cell membranes in connection with a changing cholesterol level [30].
It should be mentioned that some fluorescence of doxorubicin or its degradation product [25] may overlap with GFP fluorescence, as proven in whole-cell experiments (upon illumination at Θ = 62°, i.e., below the critical angle of incidence, Θc). As shown in Figure 5, this overlap becomes obvious in the cell nucleus (at Θ = 62°) and is very low in the TIRFM experiments (Θ ≥ 66°). Therefore, fluorescence of doxorubicin or its degradation product probably does not, or only very slightly, falsify our experimental results on cell-substrate distances based on TIRFM.

Figure 5. Fluorescence images excited at Θ = 62° show some additional red fluorescence due to doxorubicin, which almost disappears at Θ = 66°. Fluorescence spectra excited at Θ = 62° exhibit two emission maxima around 530 nm (GFP) and 543 nm with a long-wave tail, possibly related to doxorubicin or its degradation product. The long-wave part of the spectrum is less pronounced in the TIRFM experiments.
Although variable-angle TIRFM of cell-substrate contacts was reported almost 30 years ago [23,31], since then there have been very few further reports in the literature, e.g., the authors' previous publications on the nanotopology of cell adhesion to distinguish between tumor cells and less malignant cells [24] and on cell-substrate topology in photodynamic therapy (PDT) [32]. An application of this technique to test the efficacy of doxorubicin (or any other cytostatic drug) is hitherto unknown. Therefore, we suggest considering this method for cell monolayers upon fluorescence staining of cell membranes or transfection with membrane-associated fluorescent proteins. The present CHO-pAcGFP1-Mem cell line appeared appropriate for this purpose. Cell lines from tumors, e.g., breast cancer, bronchial carcinoma, or lymphoma, which have been treated with doxorubicin for many years, are candidates for further studies. Therefore, in a preliminary study (described in the Supplementary Information, Section S3), we tested MCF-7 breast cancer cells prior to and after 2 h incubation with doxorubicin using our Variable-Angle (VA)-TIRFM method. Since the cell membranes were nonfluorescent, we stained them with the well-known marker 6-dodecanoyl-2-dimethylaminonaphthalene (laurdan) [33] and evaluated the angular dependence of fluorescence intensity in the spectral band of 500-520 nm. Images served as a control but were not evaluated quantitatively due to their low signal-to-noise ratio. Figure 6 shows that, according to our algorithm, cell-substrate distances increased from (22 ± 9) nm to (54 ± 28) nm after 2 h incubation with doxorubicin. This increase was even more pronounced than for CHO-pAcGFP1-Mem cells and again suggests the applicability of our method. However, the distances were generally lower, possibly because MCF-7 cells grow within smaller colonies (islets) with rather strong adhesion to the substrate (see Figure 6). VA-TIRFM is a sequential method, i.e., images are recorded at about 10 successively increasing angles of light incidence Θ, which are easily adjusted by a stepping motor directly coupled to a light-deflecting mirror [17]. Furthermore, the evaluation of cell-substrate distances for all pixels of an image, as well as the calculation of histograms, has to be performed offline based on Equation (2) using, e.g., the MATLAB script described in Section 5.2. This script also permits correction of slight shifts in the x or y directions, which occasionally occur in the course of our experiments. Errors from artifacts, e.g., interference patterns due to grains of dust, which may have an impact on images recorded at certain angles, are corrected manually by eliminating the corresponding image of a series, as reported in [24]. Altogether, each image requires about 2 min for acquisition and 40 s for calculation by the algorithm. Further automation and rapid adjustment using machine learning programs may reduce these time constants, but online topography does not appear to be possible, since various angles of incidence Θ are needed for the acquisition of each image series.
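The offline evaluation step can be summarized as a per-pixel linear regression over the recorded image stack. The authors used a MATLAB script (Section 5.2); the sketch below is an assumed Python equivalent of that step, with background masking reduced to a simple intensity threshold.

```python
# Assumed Python equivalent of the offline evaluation step (the authors
# used a MATLAB script, Section 5.2): per-pixel linear regression of
# ln(I/T) versus 1/d over a stack of TIRFM images recorded at several
# angles, returning a map of cell-substrate distances.
import numpy as np

def topology_map(stack, d_nm, T, background=1e-3):
    """stack: (n_angles, ny, nx) intensities; d_nm, T: per-angle 1D arrays."""
    x = 1.0 / np.asarray(d_nm, dtype=float)                  # (n_angles,)
    T = np.asarray(T, dtype=float)
    y = np.log(np.maximum(stack, background) / T[:, None, None])
    # least-squares slope per pixel: slope = cov(x, y) / var(x)
    xm, ym = x.mean(), y.mean(axis=0)
    num = ((x[:, None, None] - xm) * (y - ym)).sum(axis=0)
    slope = num / ((x - xm)**2).sum()
    delta = -slope                                # in nm, since d_nm is in nm
    delta[stack.min(axis=0) <= background] = 0.0  # crude background mask
    return delta
```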
In the future, it may be possible to use intrinsic fluorescence or even light-scattering methods instead of fluorescence markers or fluorescent proteins. This would make the method more universal, but further information concerning the cellular location of fluorophores (or scatterers) as well as highly sensitive (e.g., electron multiplying (EM)-CCD) cameras would be needed.
It remains to be proven whether changes of cell-substrate topology may be considered an early indicator of apoptosis in general. Shrinking and changes of cell morphology upon apoptosis are well-documented in the literature (for reviews, see [34,35]), and light-scattering methods using wavelength dependence [36] or angular resolution [37] have been reported in view of sensing and quantitation. Since these techniques, however, do not probe early cellular responses, molecular sensors for specific proteins or nucleic acids of cells undergoing apoptosis have been suggested, including sensors for the enzyme caspase-3 frequently activated in tumor cells [38][39][40]. Luo et al. [41] developed a sensor with a cyan fluorescent protein (CFP) fused to a yellow fluorescent protein (YFP) via a caspase-sensitive amino acid peptide (DEVD). This peptide linker is short enough to bring the two fluorescent proteins in close proximity to each other (≤10 nm) to enable nonradiative ("Förster") Resonance Energy Transfer (FRET [42]) from CFP to YFP. FRET is interrupted due to cleavage of DEVD by caspase-3 during the onset of apoptosis, resulting in pronounced changes of the fluorescence spectra and lifetimes. Using enhanced fluorescent proteins (ECFP, EYFP) and anchoring the sensor in the plasma membrane (pMem-ECFP-DEVD-EYFP) improved its sensitivity and allowed more specific microscopy techniques to be used, e.g., TIRFM for 2-dimensional cell monolayers [43] or Light Sheet Fluorescence Microscopy (LSFM) for 3-dimensional cell cultures [44]. In both cases, the decrease in acceptor (EYFP) fluorescence and the increase in donor (ECFP) lifetime upon apoptosis have been well-documented, even before changes of cell morphology became apparent. Imaging of cell-substrate topology may possibly bridge a gap between early detection of apoptosis by a molecular sensor, as reported above, and rather late detection by measuring changes in cell morphology. The topology method reported in this manuscript probably characterizes early changes of cell membranes (see, e.g., [45]) and appears promising, since it is biochemically less complicated than the molecular sensor system, and since it can possibly be applied at an earlier stage of apoptosis than measurements of overall changes of cell morphology, e.g., cell shrinking. Some limitations of the technique have been discussed: dependence on the individual cell line, on cell age, and possibly on cultivation time. A further important parameter would be temperature. So far, all measurements were performed at a room temperature of 22-24 °C. However, it is well-known that membrane stiffness and fluidity depend on temperature [30,46], and concomitant changes of cell-substrate distances should be considered. Therefore, maintaining well-defined temperatures in the course of our experiments is an important prerequisite.
Cells
Chinese hamster ovary cells transfected with a membrane-associated green fluorescent protein (CHO-pAcGFP1-Mem) were supplied by the Institute of Laser Technology in Medicine and Metrology (ILM), University of Ulm. Cells were seeded at a density of 200 cells/mm 2 on glass slides and grown for 72 h in quadriPERM cell culture vessels (Guder Labortechnik GmbH, Bad Oeynhausen, Germany) containing RPMI 1640 medium supplemented with 10% fetal calf serum, 1% Penicillin/Streptomycin, and 500 µg/mL Geneticin at 37 • C and 5% CO 2 . Prior to the experiments, cells were incubated for 2 h or 24 h in medium containing 2 µM doxorubicin before rinsing the glass slides with Earle's Balanced Salt Solution (EBSS) (all purchased from Sigma-Aldrich GmbH, München, Germany). Nonincubated cells ("0 h") were used as a reference. It should be mentioned that with 24 h incubation, the total growth period was 96 h compared to 72 h at a lower incubation time. For experiments reported in this paper, we preferentially used subcultures with 56-64 cell splittings, when up to 19 measurements (n = 13 for 0 h, n = 16 for 2 h, and n = 19 for 24 h) were performed with 4 measurements of each object slide in different positions. For control experiments, cells were grown for 96 h in culture medium, and cell-substrate distances were determined without incubation and after 2 h incubation with doxorubicin (n = 5 measurements each). "Younger" subcultures with 28-35 or 48-50 cell splittings served for a comparison at 0 h and 2 h incubation with n = 20 measurements (0 h) or n = 18 measurements (2 h) for subcultures 28-35 and n = 6 measurements each for subcultures 48-50.
VA-TIRFM
For Variable-Angle Total Internal Reflection Microscopy (VA-TIRFM), an upright microscope (Axioplan 1, Carl Zeiss, Jena, Germany) was equipped with a condenser unit permitting excitation of the samples under total internal reflection at variable angles Θ and, thus, variable depths d(Θ) of the evanescent electromagnetic field [17]. Polarized light from an argon ion laser (λ = 476 nm; Innova 90, Coherent, Palo Alto, CA, USA) was incident via a single-mode fiber on a hemicylindrical glass prism, which was optically coupled to the object slide containing the cells. The polarization was always perpendicular to the plane of incidence, and angles between Θ = 66 • and Θ = 75 • were adjusted by a stepping motor (precision: ± 0.15 • ; for calibration, see [17]). Fluorescence was detected by a 63×/0.90 water immersion objective lens (dipping into the buffer solution that surrounded the cells) and a long pass filter for λ ≥ 510 nm. Fluorescence spectra were recorded by an optical multichannel analyzer (IMD4562, Hamamatsu Photonics, Ichino-Cho, Japan) combined with a purpose-made polychromator, which permitted a spectral resolution of about 10 nm. Corresponding images were recorded by a CCD camera (AxioCam HRm, Carl Zeiss, Jena, Germany) and integrated for up to 2 s. In addition, some color images were recorded with a Canon 500D reflex camera.
The condition for total internal reflection (TIR) was fulfilled for all angles of incidence above the critical angle Θ_c = arcsin(n₂/n₁) = 64.3°, with n₁ = 1.52 corresponding to the refractive index of the glass prism and n₂ = 1.37 to that of the cytoplasm. This resulted in a penetration depth of the evanescent wave between 72 nm and 167 nm in an angular range 66° ≤ Θ ≤ 75° for a wavelength λ = 476 nm according to Equation (1). While fluorescence spectra served as controls, cell-substrate distances ∆ were calculated for all pixels of fluorescence images according to Equation (2) using the automated MATLAB script described in Figure 7. These distances were displayed in a color-coded topology map, from which a relative frequency histogram was calculated. Histograms were determined for all experiments performed under the same conditions (subcultures with 28-35, 48-50, or 56-64 cell splittings as well as incubation times of 0 h, 2 h, or 24 h), and the most frequent distance was evaluated as mean ± standard deviation. A t-test for 2 samples assuming unequal variances [47] was applied to check the significance of changes of this distance between 0 h and 2 h as well as between 0 h and 24 h (for subcultures 56-64) upon application of doxorubicin. Changes were regarded as significant at p ≤ 0.05.

Figure 7. Flow chart of the MATLAB script used for automated cell-substrate calculations (GUI = Graphical User Interface; T(Θ) = transmission factor; d(Θ) = penetration depth of the evanescent field). Once the parameters, e.g., the corresponding metric pixel size and the refractive indices, are set and the recorded images are imported into the MATLAB environment, the code gradually starts its calculations. Images are implemented as a virtual stack with Ni corresponding to the maximum number of images for each set. Beginning with a loop covering the image stack, an algorithm is used to correct any possible shifts in the x and y directions, which may occur in individual experiments. Thus, it is ensured that every pixel addresses the same field of view of the recorded cell at various angles.
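The quoted penetration-depth range follows from the standard evanescent-field expression, which we assume is what Equation (1) refers to (a sketch reproducing the numbers above, not the authors' code):

```python
import numpy as np

def penetration_depth(theta_deg, wavelength_nm=476.0, n1=1.52, n2=1.37):
    """Evanescent-field penetration depth d(theta) for total internal
    reflection at a glass/cytoplasm interface:
        d = lambda / (4*pi*sqrt(n1^2 sin^2(theta) - n2^2)),  theta > theta_c.
    """
    theta = np.radians(theta_deg)
    return wavelength_nm / (4 * np.pi * np.sqrt((n1 * np.sin(theta)) ** 2 - n2 ** 2))

print(np.degrees(np.arcsin(1.37 / 1.52)))  # critical angle, ~64.3 deg
print(penetration_depth(66.0))             # ~167 nm
print(penetration_depth(75.0))             # ~72 nm
```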
Next, the angle of the recorded image is extracted from the image file name in order to calculate the transmission factor and the penetration depth, as reported above. Data Availability Statement: Relevant data sets can be found at https://www.hs-aalen.de/users/226 (accessed on 12 February 2022) ("Veröffentlichungen"). | 2022-06-05T15:11:31.178Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "3b55443ee3ea619df26df68b7a1aa7fd74d0edb6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/23/11/6277/pdf?version=1654257849",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "23ce3649859e366dbe9a0abf6088488ea000e0dc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
113398078 | pes2o/s2orc | v3-fos-license | Comments on Supercurrent Multiplets, Supersymmetric Field Theories and Supergravity
We analyze various supersymmetry multiplets containing the supercurrent and the energy-momentum tensor. The most widely known such multiplet, the Ferrara-Zumino (FZ) multiplet, is not always well-defined. This can happen once Fayet-Iliopoulos (FI) terms are present or when the Kahler form of the target space is not exact. We present a new multiplet S which always exists. This understanding of the supersymmetry current allows us to obtain new results about the possible IR behavior of supersymmetric theories. Next, we discuss the coupling of rigid supersymmetric theories to supergravity. When the theory has an FZ-multiplet or it has a global R-symmetry the standard formalism can be used. But when this is not the case such simple gauging is impossible. Then, we must gauge the current S. The resulting theory has, in addition to the graviton and the gravitino, another massless chiral superfield Phi which is essential for the consistency of the theory. Some of the moduli of various string models play the role of Phi. Our general considerations, which are based on the consistency of supergravity, show that such moduli cannot be easily lifted thus leading to constraints on gravity/string models.
Introduction
Supersymmetric theories¹ have a conserved supersymmetry current S_µα,

$$\partial^\mu S_{\mu\alpha} = 0\,. \qquad (1.1)$$

It is unique up to an improvement term of the form

$$S'_{\mu\alpha} = S_{\mu\alpha} + \partial^\nu (\sigma_{\mu\nu})_\alpha{}^{\beta}\, \eta_\beta\,. \qquad (1.2)$$

Clearly, S′_µα is conserved and yields the same supercharge Q_α upon integrating over a space-like hypersurface. The supersymmetry current S_µα can be embedded in a supermultiplet. This multiplet should include the conserved energy-momentum tensor T_µν, which is also ambiguous due to a possible improvement of the form

$$T'_{\mu\nu} = T_{\mu\nu} + \big(\eta_{\mu\nu}\,\partial^2 - \partial_\mu \partial_\nu\big)\, u\,. \qquad (1.3)$$

The most widely known such multiplet is the Ferrara-Zumino (FZ) multiplet [1], J_αα̇. It is a real superfield² satisfying

$$\bar D^{\dot\alpha} J_{\alpha\dot\alpha} = D_\alpha X\,, \qquad \bar D_{\dot\alpha} X = 0\,. \qquad (1.6)$$

(The component expressions of J_αα̇ and X appear below.) This multiplet includes six bosonic operators from the conserved T_µν, four bosonic operators in a (non-conserved) current J_µ| = j_µ, and two bosonic operators in the complex scalar X| = x. Similarly, it has twelve fermionic operators in the conserved S_µα and its complex conjugate. As expected in a supersymmetric theory, the number of bosonic operators is the same as the number of fermionic operators.

1 Throughout this note we will focus on four-dimensional theories. We will be using N = 1 superspace, but its existence is not essential to our discussion. We simply use it to package supersymmetry multiplets in a convenient way.

2 We follow the Wess and Bagger conventions [2]. A vector ℓ_µ is often expressed in bi-spinor notation as ℓ_αα̇ = −2σ^µ_αα̇ ℓ_µ (1.4). We sometimes use [D², D̄_α̇] = 2i D^α ∂_αα̇ (1.5).
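As a quick machine check of the statement around (1.3), namely that the scalar improvement leaves the energy-momentum tensor symmetric and identically conserved for any u, the following sketch verifies it symbolically (the mostly-plus flat metric and the symbol names are choices made here, not the paper's):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]
eta = [-1, 1, 1, 1]                   # flat metric, mostly-plus convention
u = sp.Function('u')(t, x, y, z)      # arbitrary real scalar

box_u = sum(eta[m] * sp.diff(u, X[m], 2) for m in range(4))

# Improvement term: Delta T_{mu nu} = (eta_{mu nu} box - d_mu d_nu) u
DT = [[(eta[m] if m == n else 0) * box_u - sp.diff(u, X[m], X[n])
       for n in range(4)] for m in range(4)]

# Conservation: partial^mu Delta T_{mu nu} = 0 for every nu
for n in range(4):
    div = sum(eta[m] * sp.diff(DT[m][n], X[m]) for m in range(4))
    assert sp.simplify(div) == 0

# Symmetry: Delta T_{mu nu} = Delta T_{nu mu}
assert all(sp.simplify(DT[m][n] - DT[n][m]) == 0
           for m in range(4) for n in range(4))
print("(eta box - d d) u is symmetric and identically conserved for any u")
```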
The pair (J_αα̇, X) can be transformed as

$$J_{\alpha\dot\alpha} \to J_{\alpha\dot\alpha} + i\,\partial_{\alpha\dot\alpha}\,\big(\bar\Xi - \Xi\big)\,, \qquad X \to X + \tfrac{1}{2}\,\bar D^2\, \bar\Xi\,, \qquad (1.7)$$

with chiral Ξ. Another multiplet, which is somewhat less known, exists whenever the theory has a continuous R-symmetry (see e.g. section 7 of [3]). We will refer to it as the R-multiplet.
It is a real superfield R_αα̇ satisfying

$$\bar D^{\dot\alpha} R_{\alpha\dot\alpha} = \chi_\alpha\,, \qquad \bar D_{\dot\alpha}\chi_\alpha = 0\,, \qquad D^\alpha \chi_\alpha - \bar D_{\dot\alpha}\bar\chi^{\dot\alpha} = 0\,. \qquad (1.8)$$

Therefore, its lowest component j^(R)_µ is conserved. Like the FZ-multiplet, this multiplet also includes twelve bosonic operators and twelve fermionic operators.
It is often the case that a theory has several continuous R-symmetries. They differ by a continuous conserved non-R-symmetry. The latter is characterized by a real linear superfield J (D²J = 0). This ambiguity in the R-multiplet is

$$R_{\alpha\dot\alpha} \to R_{\alpha\dot\alpha} + [D_\alpha, \bar D_{\dot\alpha}]\, \mathcal J\,. \qquad (1.9)$$

It affects the supercurrent and energy-momentum tensor through improvement terms (1.2), (1.3).
If a theory has an FZ-multiplet (1.6), it is easy to show that it has an exact U(1)_R symmetry if and only if there exists a real and well-defined³ U such that D̄²U = −2X (this normalization is for later convenience). Intuitively, U includes a non-conserved ordinary (non-R) current. The equation D̄²U = −2X means that the violation of its conservation is similar to that of the R-current at the bottom of the FZ-multiplet. Therefore, the shift

$$R_{\alpha\dot\alpha} = J_{\alpha\dot\alpha} + [D_\alpha, \bar D_{\dot\alpha}]\, U \qquad (1.10)$$

leads to a conserved R-current. Indeed, it is easy to check that this current satisfies (1.8) with χ_α = (3/2) D̄²D_α U. However, not every theory has such supersymmetry multiplets. First, it is clear that if the theory does not have a continuous R-symmetry, R_αα̇ does not exist. It is less obvious that the FZ-multiplet J_αα̇ is not always well-defined. It was pointed out in [4] that when the theory has Fayet-Iliopoulos terms the FZ-multiplet is not gauge invariant. We will show in section 2 that when the Kähler form of the target space is not exact the FZ-multiplet is not globally well-defined and hence does not correspond to a good operator in the theory.
This motivates us to look for another multiplet for the supersymmetry current and the energy-momentum tensor which exists in all theories. We propose to consider the multiplet⁴ S_αα̇ which "interpolates" between (1.6) and (1.8):

$$\bar D^{\dot\alpha} S_{\alpha\dot\alpha} = D_\alpha X + \chi_\alpha\,, \qquad \bar D_{\dot\alpha} X = 0\,, \qquad \bar D_{\dot\alpha}\chi_\alpha = 0\,, \quad D^\alpha \chi_\alpha - \bar D_{\dot\alpha}\bar\chi^{\dot\alpha} = 0\,. \qquad (1.11)$$

We will see that this multiplet exists for every supersymmetric theory. In some cases, if we can solve X = −(1/2) D̄²U or χ_α = −(3/2) D̄²D_α U with a well-defined real U, it can be explicitly improved and reduced to either (1.8) or (1.6).
In section 2 we study the multiplet (1.11) in detail and clarify its relation to (1.6) and (1.8).
In section 3 we discuss the three multiplets (1.11), (1.6), and (1.8) in simple cases and clarify when each of them exists.
In section 4 we present field-theoretic applications of our multiplets. We review the discussion in [4] about FI-terms. We then present a similar argument for theories with nontrivial target spaces. In both cases we find that if the UV theory has neither FI-terms nor non-trivial target space topology, then it possesses an FZ-multiplet (1.6). Therefore, the low-energy theory must also have the same multiplet (since (1.6) is an operator equation). This immediately shows that FI-terms cannot be generated (even for emergent gauge groups in the IR), and also that the topology of the quantum moduli space of the theory is constrained. More explicitly, we show that starting with a renormalizable field theory without FI-terms and flowing to the IR the Kähler form of the quantum moduli space must be exact, and in particular, it cannot be compact (except of course isolated points).
In section 5 we couple supersymmetric field theories to supergravity. Here we study rigid theories whose parameters are independent of M_P and couple them to linearized supergravity at the leading order in 1/M_P. This setup excludes theories with parameters of order the Planck scale such as FI-terms ξ ∼ M_P² and nonlinear sigma models with f_π ∼ M_P.
If the FZ-multiplet exists, it can be gauged; i.e. coupled to supergravity. This naturally gives rise to the "old minimal supergravity" [6][7][8] formalism. Theories without FZ-multiplets (e.g. theories with FI-terms or non-trivial target spaces) can still be coupled to supergravity using the old minimal formalism provided certain conditions are satisfied. For example, this is possible when the theory has a continuous R-symmetry. In this case the R-multiplet exists and we can gauge it. The resulting theory is related to the "new minimal supergravity" [9,10]. This way of constructing such theories leads to a new perspective on the construction of [11] and the results of [12][13][14] which were based on the "old minimal formalism" (see also the recent papers [15,16]). We review the supergravity that we obtain by gauging the R-multiplet in an appendix. It should be emphasized that as explained in [17], the resulting supergravity is the same as the one obtained using the old formalism.
However, gravity theories with continuous global symmetries are expected to be inconsistent. Therefore, we cannot base the consistency of the theory on the existence of an exact continuous R-symmetry. This leads us to the study of theories without an R-symmetry and without an FZ-multiplet. We emphasize that such theories cannot be coupled to minimal supergravity. The simplest possibility then is to couple the S-multiplet to supergravity.
This turns out to be related to "16/16 supergravity" [18][19][20]. We will limit ourselves to the linearized theory (leading order in 1/M_p) and will derive the fact that in addition to the graviton and the gravitino the theory includes a propagating chiral "matter" superfield Φ (or equivalently a linear multiplet). We will study the constraints on the coupling of this superfield. In particular, the obstruction to the existence of the FZ-multiplet is that the equation

$$\bar D^2 D_\alpha U = -\tfrac{2}{3}\,\chi_\alpha \qquad (1.12)$$

cannot be solved with a well-defined U. The new superfield Φ couples through the combination Φ + Φ† + U, which is well-defined.
Special cases include the relation to the absence of supergravity theories with FI-terms [4] and a connection with the results of [21] about the quantization of Newton's constant.
Section 6 summarizes our results. Here we discuss aspects of moduli stabilization and use our conclusions to constrain gravity/string models, including various string constructions like D-inflation, sequestered models, flux vacua, etc.
The S-Multiplet
The multiplet defined below is a new option for embedding the supercurrent and energy-momentum tensor in a superfield. The advantage of this multiplet is that it exists in many examples where the others do not. Its defining properties are

$$\bar D^{\dot\alpha} S_{\alpha\dot\alpha} = D_\alpha X + \chi_\alpha\,, \qquad \bar D_{\dot\alpha} X = 0\,, \qquad \bar D_{\dot\alpha}\chi_\alpha = 0\,, \quad D^\alpha \chi_\alpha - \bar D_{\dot\alpha}\bar\chi^{\dot\alpha} = 0\,. \qquad (2.1)$$

It is straightforward to work out the component expression for these superfields. The result after a little bit of algebra is (2.2) and (2.3) (x is the lowest component of the superfield X rather than a spacetime coordinate), satisfying the additional relations (2.4). In addition, the supercurrent S_µα is conserved and the energy-momentum tensor T_µν is symmetric and conserved.
We see that the multiplet includes the 12 + 12 operators in the FZ-multiplet, as well as one Weyl fermion ψ, a closed two-form F (S) µν and a real scalar Z. Hence it has 16 + 16 physical operators. These additional 4 + 4 operators circumvent the no-go theorem of [5].
From the superfield (2.2) we can find the anticommutators of the supercharges with the currents (2.5). Note that these anticommutators are consistent with the conservation equation ∂^µ S_µα = 0.
The standard supersymmetry algebra follows provided the fields approach zero fast enough at spatial infinity and that ∫d³x F^(S)_ij vanishes for all spatial indices i, j.
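For orientation, the "standard supersymmetry algebra" referred to here is the textbook N = 1 algebra (quoted in Wess-Bagger-type conventions; this is a standard fact rather than a result of the multiplet analysis):

$$\{Q_\alpha, \bar Q_{\dot\beta}\} = 2\,\sigma^\mu_{\alpha\dot\beta}\, P_\mu\,, \qquad \{Q_\alpha, Q_\beta\} = \{\bar Q_{\dot\alpha}, \bar Q_{\dot\beta}\} = 0\,.$$

The point of the caveat is that a non-vanishing ∫d³x F^(S)_ij would show up as an extra central-charge-like contribution on the right-hand sides.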
Given the operators (S_αα̇, X, χ_α) we can transform with any real superfield U and preserve the defining relations (2.1):

$$S_{\alpha\dot\alpha} \to S_{\alpha\dot\alpha} + [D_\alpha, \bar D_{\dot\alpha}]\, U\,, \qquad X \to X + \tfrac{1}{2}\,\bar D^2 U\,, \qquad \chi_\alpha \to \chi_\alpha + \tfrac{3}{2}\,\bar D^2 D_\alpha U\,. \qquad (2.6)$$

This transformation shifts the energy-momentum tensor and the supersymmetry current by improvement terms (2.7). We interpret the bottom component of S_αα̇ as an R-current which is not conserved.
The θθ component of U is an ordinary (non-R) current which is also not conserved.
Hence, the transformation (2.6) shifts the non-conserved R-current and yields another non-conserved R-current.
We consider certain special cases: 1. If we can solve X = −(1/2) D̄²U with a well-defined (i.e. local and gauge invariant) real operator U, we can transform X away and find the R-multiplet (1.8). Now, the bottom component of S_αα̇ is a conserved R-current. Conversely, if the theory has an exact U(1)_R symmetry, the R-multiplet (1.8) exists and therefore we can solve X = −(1/2) D̄²U in terms of a well-defined U. Therefore, we interpret an X which cannot be written as D̄²U with a real operator U as the obstruction to having an R-symmetry. Note that the remaining freedom in (2.6) which preserves X = 0 restricts U to satisfy D̄²U = 0, i.e. U is a conserved current multiplet. This has the effect of shifting the conserved R-current by a conserved non-R-current, as we explained around (1.9).
2. If we can solve χ_α = −(3/2) D̄²D_α U with a well-defined real operator U, we can transform χ_α away and find the FZ-multiplet (1.6). The remaining freedom which preserves χ_α = 0 restricts U to the form Ξ + Ξ̄ with a chiral Ξ. This is the ambiguity in the FZ-multiplet we explained around (1.7). Hence we interpret a χ_α which cannot be written as −(3/2) D̄²D_α U as the obstruction to the existence of the FZ-multiplet.
3. If we can write both X = −(1/2) D̄²U and χ_α = −(3/2) D̄²D_α U′ with (possibly different) well-defined real operators U and U′, both the R-multiplet and the FZ-multiplet exist. This is equivalent to the discussion around (1.10).

4. If we can simultaneously solve X = −(1/2) D̄²U and χ_α = −(3/2) D̄²D_α U with the same U, we can set both X and χ_α to zero. Then the theory is superconformal.
Wess-Zumino Models
As an example, let us first discuss the general sigma model, with Kähler potential K(Φ^i, Φ̄^ī). The expressions for J_αα̇ and X are given in (3.2). The current j_µ appearing there is not invariant under Kähler transformations. This has the following geometric interpretation: in terms of a one-form A on the target space with ω ∼ dA, we identify j^bosonic_µ as the pullback of A to spacetime.
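For reference, the standard local formulas behind this statement, in one common normalization (the normalization is our choice here, not necessarily the paper's), are

$$g_{i\bar j} = \partial_i \partial_{\bar j} K\,, \qquad \omega = i\, g_{i\bar j}\, d\phi^i \wedge d\bar\phi^{\bar j} = dA\,, \qquad A = \tfrac{i}{2}\big(\partial_{\bar j} K\, d\bar\phi^{\bar j} - \partial_i K\, d\phi^i\big)\,.$$

Under a Kähler transformation K → K + f(φ) + f̄(φ̄) the connection A shifts by an exact one-form, so ω is unchanged while A (and hence j_µ) is not invariant; when ω is not exact, no globally defined A exists at all.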
We learn that when ω is not exact, A is not globally well-defined and hence the current j µ is not a good operator. In this case, the whole FZ-multiplet is not well-defined. For example, if the target space has 2-cycles with non-vanishing integral of the Kähler form ω, the FZ-multiplet does not exist.
A point of clarification is in order here. If we can find a globally well-defined A there is still freedom in performing Kähler transformations which affect the FZ-multiplet by improvement terms. The global obstruction we discuss here arises only when we must cover the target space with patches with nontrivial Kähler transformations between them.
We conclude that theories with a Kähler form that is not exact do not have an FZmultiplet.
If the theory has a U(1)_R symmetry (either spontaneously broken or not), we expect to find a globally well-defined R_αα̇-multiplet. Let us see how this comes out. We can use a basis where our chiral superfields Φ^i have well-defined R-charges, R_i. The condition that there is an R-symmetry implies the following two constraints:

$$\sum_i R_i\, \Phi^i\, \partial_i W = 2\, W\,, \qquad \sum_i \Big( R_i\, \Phi^i\, \partial_i K - R_i\, \bar\Phi^{\bar i}\, \partial_{\bar i} K \Big) = 0\,. \qquad (3.3)$$

Using these we can express U in terms of K; it is a real superfield because of the second constraint in (3.3). Now we can perform the shift (2.8) and obtain the R-multiplet. This leads to (3.6). We conclude that whenever the theory has an R-symmetry, the multiplet R_αα̇ is well-defined. Hence, the supersymmetry current and the energy-momentum tensor in this R-multiplet are good operators.
Finally, let us discuss the most general case in which the target space has a nontrivial Kähler form and the theory does not have an R-symmetry. Our motivation is that we would like to eventually discuss supergravity, where exact continuous global symmetries are expected to be forbidden.
In this case neither the FZ-multiplet nor the R-multiplet exists, but our S_αα̇ does. Indeed, the operators appearing in the S-multiplet are globally well-defined and gauge invariant in this case.
Gauge Fields with FI Terms
We now consider a theory with a U(1) gauge field with an FI-term, L_FI = ξ ∫d⁴θ V. This case is easily handled by the substitution K → K + ξV in the expressions above. From (3.6), (3.7) we see that R_αα̇ and S_αα̇ do not have explicit ξ dependence. They depend on ξ through the equations of motion.
Let us emphasize the analogy between an FI-term and nontrivial geometry. When ξ is nonzero the multiplet J_αα̇ is not gauge invariant [4]. If the theory has nonzero ξ but it has an R-symmetry, R_αα̇ is a good gauge invariant operator [22,15,16]. However, if ξ ≠ 0 and the theory does not have an R-symmetry, we must use the multiplet S_αα̇. It includes gauge invariant and conserved S_µα and T_µν.
The similarities between the situation with a nontrivial target space and when there is a nonzero ξ are easily understood by considering a simple example. A U (1) gauge theory with n chiral superfields with charge one and negative ξ has as its classical moduli space of vacua CP n−1 . (In four dimensions this theory is quantum mechanically anomalous, but this is irrelevant for this reasoning). The parameter ξ controls the size of the space.
The peculiarities of the FI-term in the microscopic description which includes the gauge field translate to nontrivial transition functions in the macroscopic theory. Hence J αα is not gauge invariant in the short distance theory and it is not globally well-defined in the low-energy theory.
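To make the CP^{n−1} example concrete, one can check numerically that the Fubini-Study Kähler form on CP¹ (the round sphere, written in the affine coordinate z = x + iy) integrates to π rather than zero, so by Stokes' theorem it cannot be exact. This is a sketch; the unit-radius normalization is an assumption made here:

```python
import numpy as np
from scipy import integrate

# Fubini-Study form on CP^1 in the affine patch, up to normalization:
# omega = dx dy / (1 + x^2 + y^2)^2. An exact form would integrate to
# zero over the compact target space; instead the integral is pi.
density = lambda y, x: 1.0 / (1.0 + x**2 + y**2) ** 2
total, err = integrate.dblquad(density, -np.inf, np.inf, -np.inf, np.inf)
print(total, np.pi)  # ~3.14159 vs pi: nonzero, hence omega is not exact
```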
Applications to Field Theory
In the previous section we explained that theories with non-exact Kähler form or with an FI-term do not have a well-defined FZ-multiplet. This fact can be used to prove some non-renormalization theorems. Let us first review the argument in [4] for the FI-term.
A theory that has no FI-term gives rise to a well-defined FZ-multiplet satisfying the operator equation (1.6). Since this operator is well-defined, it behaves regularly along the renormalization group flow. This immediately implies that no FI-term can be generated for the original gauge group and even for gauge groups that emerge from the dynamics.
This explains why models of SUSY breaking predominantly break SUSY through F -terms.
We can repeat the same idea for the moduli space. In the UV, we usually start from weakly interacting particles with canonical kinetic terms. Therefore, the Kähler metric is trivial and the FZ-multiplet exists. Since this multiplet must remain well-defined throughout the flow, it follows that the quantum moduli space is constrained. It has to be such that the Kähler form ω ∼ dA is exact; i.e. A is globally well-defined. Hence the integral of ω ∧ ω ∧ ω · · · over any compact cycle must vanish. In particular, this means that the whole target space cannot be compact (it can, of course, be a set of points).⁶ Let us see how this works in the case of SQCD with N_f = N_c.⁷ The short distance theory is characterized by the classical moduli space of mesons and baryons subject to the constraint det M − B B̃ = 0, which is deformed quantum mechanically to det M − B B̃ = Λ^{2N_c}. It is instructive to compare these nonrenormalization theorems to those about the FI-term. Three approaches to these nonrenormalization theorems are possible.
1. Both nonrenormalization theorems follow from the fact that the FZ-multiplet is not well-defined. This constrains the radiative corrections and the renormalization group flow in such theories. In both cases it prevents us from finding a macroscopic theory with a nonzero FI-term or non-exact Kähler form if they are absent in the short distance theory. This is the approach we have taken in this section.
2. The authors of [25,26] followed [27] and promoted all coupling constants to background fields. The inability to do this for the FI-term leads to its non-renormalization.⁸ We can follow this approach also for the Kähler potential K. We introduce a coupling constant by replacing K → (1/g²) K. If K is globally well-defined, we do not need to use Kähler transformations as we move from patch to patch. In this case we can trivially extend 1/g² to a real superfield (or to a chiral plus an antichiral superfield) and find complicated higher order radiative corrections. However, if we need to cover the target space by patches which are related to each other by Kähler transformations, then 1/g² cannot be promoted to a background superfield; this would ruin the invariance of the Lagrangian under Kähler transformations.⁹ Therefore, radiative corrections can arise only at one loop.¹⁰

8 For an earlier related approach see [28].

9 The situation in N = 2 supersymmetry in two dimensions is a bit different. Here both the coefficient of the FI-term and 1/g² in the case with nontrivial geometry can be promoted to the real part of a twisted chiral superfield. This allows us to write a supersymmetric effective action for these coupling constants. Such an analysis leads to a simple derivation [29] of the nonrenormalization theorems of [30,31] about radiative corrections to the Kähler metric in sigma models.

10 In fact, in four dimensions these corrections are quadratically divergent and therefore ambiguous.

3. Similar nonrenormalization theorems can be derived by weakly coupling the theory to supergravity and by using the non-existence of certain supergravity theories. We will discuss such supergravity theories in section 5 and in the appendix.
Coupling to Supergravity
In this section we study the coupling to supergravity of the various supercurrent multiplets we presented above. We are only interested in linearized supergravity, namely the leading order in 1/M_p. This approach to supergravity is taken, for example, in [32]. We begin with a review of the coupling of the FZ-multiplet to supergravity. We then explain the coupling of the S-multiplet to supergravity. The case of the R-multiplet is reviewed in the appendix.
Gauging the FZ-Multiplet
We start by reviewing the coupling of the FZ-multiplet to linearized gravity. The FZ-multiplet (1.6) contains a conserved energy-momentum tensor and supercurrent and can therefore be coupled to supergravity. The supergravity multiplet is embedded in a real vector superfield H_αα̇. The θσ^νθ̄ component of H_αα̇ contains the metric field h_µν, a two-form field B_µν, and a real scalar. The coupling of gravity to matter is dictated at leading order by

$$L \sim \frac{1}{M_P} \int d^4\theta\, J^{\alpha\dot\alpha}\, H_{\alpha\dot\alpha}\,. \qquad (5.1)$$

We should impose gauge invariance, namely, the invariance under coordinate transformations and local supersymmetry transformations. The gauge parameters are embedded in a complex superfield L_α, which so far obeys no constraints. We assign a transformation law to the supergravity fields of the form

$$\delta H_{\alpha\dot\alpha} = \bar D_{\dot\alpha} L_\alpha - D_\alpha \bar L_{\dot\alpha}\,, \qquad (5.2)$$

where L̄_α̇ is the complex conjugate of L_α, and thus this maintains the reality condition.
Requiring that (5.1) be invariant under these coordinate transformations, we get a constraint on the superfield L_α. Indeed, invariance requires that 0 = ∫d⁴θ (D̄^α̇ J_αα̇) L^α = ∫d⁴θ X D^α L_α. Since X is an unconstrained chiral superfield we get the complex equation¹¹

$$\bar D^2 D^\alpha L_\alpha = 0\,. \qquad (5.3)$$

The analog of the Wess-Zumino gauge is that the lowest components of H_µ vanish, as well as the fact that H_µ|_{θσ^νθ̄} is symmetric in µ and ν.
There is also some residual gauge freedom: 1. H_µ|_{θ²} can be shifted by any complex divergenceless vector. This leaves only one complex degree of freedom, ∂^µ H_µ|_{θ²}.
2. The metric field h_µν transforms as δh_µν = ∂_µ ξ_ν + ∂_ν ξ_µ, where ξ_µ is a real vector.
3. The gravitino transforms as δΨ_µα = ∂_µ ε_α, with a local spinor parameter ε_α.
In this Wess-Zumino gauge the components containing the gravitino and metric take the form H_µ|_{θσ^νθ̄} ∼ h_µν and H_µ|_{θ²θ̄} = Ψ_µα + (σ_µ σ̄^ρ Ψ_ρ)_α. These are the fields of the old minimal supergravity multiplet [6,7,8]. This is in accordance with the 12 degrees of freedom in the FZ-multiplet.
A simple consistency check is to use (5.1) to check the leading couplings of the graviton and gravitino to matter. Recalling the formula for J_αα̇ (use (2.2) with χ_α = 0) we find the coupling ∼ (1/M_P) h^µν T_µν, as expected. Similarly, for the coupling of the gravitino to matter we get ∼ (1/M_P)(Ψ^µα S_µα + c.c.). We would also like to mention that in analogy with the situation in ordinary curved space, improvements of J_αα̇ as in (1.7) shift the coupling to gravity (5.1) by a term proportional to the chiral superfield Ξ. The last ingredient is the kinetic term for the graviton and gravitino. We begin by constructing a real superfield E^FZ_αα̇ by covariantly differentiating H_αα̇ (5.11). This real expression¹² is equivalent to a different-looking expression in [33]. The gauge transformations (5.2) act on it as (5.12), where the superfield in parenthesis is chiral. Note the similarity of (5.13) to the defining property of the FZ-multiplet itself (1.6). The fact that E^FZ is invariant and satisfies an equation identical to the supercurrent superfield guarantees that the Lagrangian ∼ ∫d⁴θ H^αα̇ E^FZ_αα̇ is invariant. This contains in components the linearized Einstein and Rarita-Schwinger terms. The six additional supergauge-invariant bosons, ∂^µ H_µ|_{θ²} and H_µ|_{θ⁴}, are auxiliary fields which are easily integrated out, yielding ∂^µ H_µ|_{θ²} ∼ ix and H_µ|_{θ⁴} ∼ j_µ, where x and j_µ are the matter operators in the supercurrent multiplet.
We conclude that theories which have a well-defined FZ-multiplet can be coupled to supergravity in this fashion. The coupling to supergravity adds to the original theory a propagating graviton and gravitino.
If there is no FZ-multiplet but there is an R-symmetry, one can still use the ill-defined FZ-multiplet by slightly modifying the gauging procedure to construct a consistent supergravity theory. Alternatively, in this case we can construct the R-multiplet and couple it to supergravity.¹³ For example, a free supersymmetric U(1) theory with an FI-term can be coupled to supergravity in this fashion, thus reproducing the component Lagrangian of [11]. This gives rise to a supergravity theory with a continuous global R-symmetry (unless there are no charged fields in the spectrum). This explains in a simple fashion the results about FI-terms [11,[12][13][14],4]. We expect that consistent theories of quantum gravity do not have such continuous symmetries. Hence, we will not pursue theories with an exact U(1)_R symmetry here, but will describe them in the appendix.
Supergravity from the S-Multiplet
We emphasized above that various supersymmetric field theories do not have an FZmultiplet and the energy-momentum tensor and the supersymmetry current must be embedded in a larger multiplet S αα . In such a case the only possible supergravity theory is the one in which this (or a larger) multiplet is gauged. In this section we analyze this theory and as in the previous subsection, we limit ourselves to the analysis of the linearized theory. We will see that this supergravity theory is not merely a different set of auxiliary fields, there are new on-shell modes. 13 For some comments on this case see also [15,16].
We begin from the coupling to matter

$$L \sim \int d^4\theta\, H^{\alpha\dot\alpha}\, S_{\alpha\dot\alpha}\,. \qquad (5.15)$$

For this to be invariant under (5.2), we need to impose the constraints (5.16). The first of them already appeared in the gauging of the FZ-multiplet (5.3) and the second one is shown in the appendix to arise in the gauging of the R-multiplet. Since L_α is more constrained here than in the previous subsection, we will find more gauge invariant degrees of freedom.
We also note that the top component of H_µ is invariant. Thus, we see that we have 16 off-shell bosonic degrees of freedom. The fermion is in the θ²θ̄ component (and its complex conjugate). It has a residual gauge symmetry with a spinor parameter ω_α. Since ω_α satisfies the Dirac equation it cannot be used to set any further components to zero. This is analogous to the discussion about the metric (5.18). Therefore, our theory includes a gravitino as well as an additional Weyl fermion. Thus, we have 16 off-shell fermionic degrees of freedom.
We conclude that the theory has 16 + 16 fields. This is in accord with the 16 + 16 operators in the multiplet S αα . This (16,16) supergravity multiplet has been recognized in the supergravity literature [18,19]. 14 We will explain some of its important features below and then turn to derive some consequences.
It is easy to construct a kinetic term; in fact, E^FZ_αα̇ defined in (5.11) is still invariant because the set of transformations here is smaller than when the FZ-multiplet is gauged.
However, this theory has another invariant as well. It is easy to see that the additional kinetic term (5.22) is invariant under the restricted transformations (5.16). To summarize, we find that this theory admits two independent kinetic terms. Thus there is one free real parameter, r, and the most general kinetic term is (5.23).¹⁵ Our goal now is to identify the on-shell degrees of freedom in this theory and study their couplings to matter fields. One possibility is to substitute the most general H_αα̇ in (5.15) and (5.23). Then we can identify the auxiliary fields and integrate them out. This is the approach we took in the previous subsection. Alternatively, we can enlarge the gauge symmetry, relaxing either one of the two constraints (5.16) or both, and add compensator fields. This makes the results more transparent and hence we will follow this approach here.
This is the approach we took in the previous subsection. Alternatively, we can enlarge the gauge symmetry, relaxing either one of the two constraints (5.16) or both, and add compensator fields. This makes the results more transparent and hence we will follow this approach here.
In order to contrast the situation with that in the previous subsection we choose to keep the constraint (5.3) (the first one in (5.16)) and relax the second one by adding a chiral compensator field λ_α, which transforms as (5.24).

14 For an early discussion see also [34].

15 In order not to clutter the equations we set M_p = 1 and we suppress an overall constant in front of the Lagrangian.
First, the non-invariance of the coupling to matter ∫d⁴θ H^αα̇ S_αα̇ can be corrected by adding to the Lagrangian the term −(1/6)∫d²θ λ^α χ_α + c.c.. Next, we move to the kinetic terms (5.23). The first term is invariant, but the second term is not. This is easily fixed by adding more terms to the Lagrangian. We end up with the invariant Lagrangian (5.25). The first term in the second line corrects the non-invariance of the coupling to matter and the other two terms fix the transformation of the kinetic term (5.22).
In order to display the spectrum of (5.25) we introduce G = D^γ λ_γ + D̄_γ̇ λ̄^γ̇, which is a real linear superfield (i.e. it satisfies D²G = D̄²G = 0). We also express χ_α = −(3/2) D̄²D_α U with a real U. We should remember that this U might not be well-defined; e.g. it might not be globally well-defined or might not be gauge invariant. In fact, the need of gauging the S-multiplet arises precisely when this U is not well-defined. The Lagrangian (5.25) becomes (5.26). Dualizing the linear superfield G into a chiral superfield Φ, we find the Lagrangian (5.28). In this presentation the theory looks like a standard supergravity theory based on the FZ-multiplet which is coupled to a matter system which includes the original matter as well as the chiral superfield Φ. This is consistent with the counting of degrees of freedom (4 + 4 degrees of freedom in addition to ordinary supergravity) and with the identification [20] of the 16/16 supergravity as an ordinary supergravity coupled to a chiral superfield. Note that even though the new superfield Φ originated from the gravity multiplet, its couplings are not completely determined. At the linear order we have freedom in the dimensionless parameter r and we expect additional freedom at higher orders.
The linear multiplet G in (5.26) or equivalently the chiral superfield Φ in (5.28) are easily recognized as the dilaton multiplet in string theory. There the graviton and the gravitino are accompanied by a dilaton, a two-form field and a fermion (dilatino). These are the degrees of freedom in G. After a duality transformation this multiplet turns into a chiral superfield Φ. Furthermore, as in string models, the second term in (5.28) mixes the dilaton and the trace of the linearized graviton h µ µ . Both this term and the term quadratic in Φ lead to the dilaton kinetic term.
As we mentioned above, the need for the multiplet S_αα̇ arises when the operator U is not a good operator in the theory. In this case the current J_αα̇ does not exist. The couplings in (5.28) explain how the chiral field Φ fixes this problem. Even though U is not a good operator, the combination Φ + Φ† + U is a good operator. If U is not gauge invariant, Φ transforms under gauge transformations such that this combination is gauge invariant. And if U is not globally well-defined because it undergoes Kähler transformations, Φ has similar Kähler transformations such that the combination is well-defined.
The result of this discussion can be presented in two different ways. First, as we did here, we started with a rigid theory without an FZ-multiplet and we had to gauge the S-multiplet. This has led us to the Lagrangian (5.28). Alternatively, we could add the chiral superfield Φ to the original rigid theory such that the combined theory does have an FZ-multiplet. Then, this new rigid theory can be coupled to standard supergravity by gauging the FZ-multiplet.
Our discussion makes it clear that if we want to couple the theory to supergravity, the additional chiral superfield Φ is not an option; it must be added.
It is amusing to compare these conclusions with the discussion in section 4. There we used the fact that it is impossible to promote the FI-term or the coupling constant characterizing the geometry to background fields. The coupling of such theories to gravity forces us to turn these coupling constants to fields. However, these are not background classical fields but fluctuating dynamical fields.
Summary and applications
In most theories the supersymmetry current and the energy-momentum tensor can be embedded in the familiar FZ-multiplet (1.6). But in a number of situations this multiplet is not a good operator in the theory. It is either non-gauge invariant or not globally well-defined. In this case we must use the larger multiplet S αα which we analyzed in this paper.
These observations about the FZ-multiplet and the S-multiplet allowed us to prove some non-renormalization theorems. For example, we have shown that starting with a renormalizable gauge theory, the moduli space of supersymmetric vacua cannot be compact. Similarly, the known non-renormalization theorems of theories with FI-terms trivially follow.
Of particular interest to us was the coupling of theories without an FZ-multiplet to supergravity. Here we have limited ourselves to supersymmetric field theories in which all dimensionful parameters are fixed and studied the limit M_p → ∞. We did not study theories in which the matter couplings depend on M_p. Since the FZ-multiplet does not exist, we have to gauge the S-multiplet. The upshot of the analysis of this gauging is the following. We add to the rigid theory a chiral superfield Φ whose couplings are such that the combined system including Φ has an FZ-multiplet. This determines some but not all of the couplings of Φ to the matter fields. In the case of the FI-term, Φ Higgses the symmetry, and in the case of nontrivial target space geometry of the rigid theory it creates a larger total space in which the topology is simpler. Now that we have an FZ-multiplet we can simply gauge it using standard supergravity techniques. In particular, at the linearized level the couplings of Φ depend on only one free parameter: the normalization of its kinetic term.
Our results fit nicely with the many known examples of string vacua. We see that the ubiquity of moduli in string theory is a result of low energy consistency conditions in supergravity. As we emphasized above, the chiral superfield Φ is similar to the dilaton superfield in four dimensional supersymmetric string vacua. We often have field theory limits without an FZ-multiplet. For example, we can have a theory on a brane with an FIterm. The field theory limit does not have an FZ-multiplet and correspondingly, U ∼ ξV is not gauge invariant. This problem is fixed, as in (5.28), by coupling the matter theory to Φ which is not gauge invariant as in [35]. Similarly, we often consider field theory limits with a target space whose Kähler form is not exact. This happens, for instance, on D3-branes at a point in a Calabi-Yau manifold. If the latter is non-compact we find a supersymmetric field theory on the brane which typically does not have an FZ-multiplet because U is not globally well-defined. Coupling this system to supergravity corresponds to making M p finite. In this case this is achieved by making the Calabi-Yau compact. Then in addition to the graviton, various moduli of the Calabi-Yau space become dynamical.
They include fields like our Φ which couple as in (5.28), thus avoiding the problems with the FZ-multiplet and making the supergravity theory consistent. This discussion has direct implications for moduli stabilization. It is often desirable to stabilize some moduli at energies above the supersymmetry breaking scale. In this case we have to make sure that the resulting supergravity theory is still consistent. In particular, it is impossible to stabilize Φ in a supersymmetric way and be left with a low energy theory without an FZ-multiplet.
For example, if the low energy theory includes a U (1) gauge field with an FI-term, this term must be Φ dependent. Furthermore, if the mass of Φ is above the scale of supersymmetry breaking, it must be the same as the mass of the gauge field it Higgses.
Consequently, there is no regime in which it is meaningful to say that there is an FI-term. Similar comments hold for theories with a compact target space. It is impossible to stabilize the Kähler moduli while allowing moduli for the positions of branes to remain massless without supersymmetry breaking.¹⁶ The comments above have applications to many popular string constructions including D-inflation, flux compactifications, and sequestering. Some of these constructions might need to be revisited.
It would be nice to explore these ideas further, and to study in more detail specific examples in string theory. The question of moduli stabilization is crucial for understanding low energy aspects of string theory and may lead us to a better understanding of the space of vacua and SUSY breaking. It would also be nice to find additional results using our new tools. In particular, it is conceivable that sharp statements can be made about the masses of moduli by studying the full nonlinear supergravity theory. 16 This conclusion can also be obtained using the result of [21]. Such a putative stabilization leads at energies below the mass of Φ to an effective supergravity theory which violates the consistency conditions in [21]. This argument can be made in spite of the modifications to [21] we found in appendix A (and the general analysis in [36]).
Appendix A: Gauging the R-Multiplet

Note that the transformation of the vector b_µ is consistent with the coupling (A.1) to the conserved current in R_µ.
The two-form B µν has three off-shell degrees of freedom and the gauge field b µ has three degrees of freedom as well. Together with the metric we find 12 bosonic degrees of freedom. The gravitino provides the 12 fermionic degrees of freedom. This is equivalent to the "new minimal multiplet" of supergravity [9,10].
Our goal now is to construct the kinetic term for this theory. We again define a real superfield E^R_αα̇ by covariantly differentiating H_αα̇, in analogy with (5.11). Thus, E^R_αα̇ is invariant under the restricted gauge transformations (A.2). Note that the object in parentheses, D^γ D̄² L_γ + D̄_γ̇ D² L̄^γ̇, is a linear multiplet as in (1.9).
Another important relation E^R satisfies is the analog of (5.13). These relations guarantee that the kinetic term ∼ ∫d⁴θ H^αα̇ E^R_αα̇ is invariant.
We can now summarize the Lagrangian. We will not be careful about the coefficients since our goal is to explain the qualitative behavior. The bosonic couplings of supergravity to matter fields follow from (A.1):

$$L_{\text{matter-gravity}} = h^{\mu\nu} T_{\mu\nu} + \epsilon^{\mu\nu\rho\sigma}\, \partial_\nu B_{\rho\sigma}\, A^{(S)}_\mu + j^{(R)}_\mu\, b^\mu\,. \qquad (A.10)$$

We have denoted by A^(S)_µ the vector whose field strength F_µν appears in the R-multiplet.¹⁷ Note that this Lagrangian is invariant under all the residual gauge transformations. The (quadratic) kinetic terms for the bosonic gravitational degrees of freedom are of the form (1/M²)(h∂²h + ···). Here h∂²h is just a shorthand for the linearized Einstein theory. Note the absence of b²_µ due to gauge invariance.
We see that b_µ is an auxiliary field that can be easily integrated out to yield (A.12). Hence, B_µν is an auxiliary field that is solved in terms of the R-current. We conclude that both the B-field and the vector field b_µ are auxiliary non-propagating degrees of freedom.
Thus, the coupling to supergravity via the R-multiplet does not introduce new propagating degrees of freedom beyond the graviton and gravitino.
Using (A.12) in the action we get

$$L_{\text{total}} = h\,\partial^2 h + h^{\mu\nu} T_{\mu\nu} + A^{(S)\mu}\, j^{(R)}_\mu + \cdots\,. \qquad (A.13)$$

As an example we can consider a pure U(1) gauge theory. It has an R-current given by R_αα̇ = −(4/g²) W_α W̄_α̇. We denote by A_µ the elementary gauge field in the problem. For this theory it is easy to see that the operator A^(S)_µ in the R-multiplet is just the FI-term times the fundamental gauge field, A^(S)_µ ∼ ξA_µ. Hence, the effect of (A.13) is to shift the gauge charges of all the fields in the problem by their R-charge (proportional to the FI-term and suppressed by the Planck scale). This reproduces the results of [12][13][14], which were derived in the old minimal formalism, about the coupling of R-symmetric theories with an FI-term to supergravity. The necessity of an exact R-symmetry is the root of the incompatibility of these models with a complete quantum gravity theory.
Another interesting case is the CP 1 sigma model. Since this theory has no superpotential, there is an R-symmetry such that all the fields carry R-charge zero. It is therefore guaranteed that this theory has a well-defined R-multiplet (3.6). As we have explained in section 3, this multiplet is globally well-defined. It can therefore be (classically) coupled to supergravity for any value of the radius of the sphere. 18 17 The expression in components for the R-multiplet is obtained from (2.2) by setting X = 0. 18 This conclusion differs from [21], which claims that the radius of the sphere has to be quantized in units of the Planck scale. The difference arises because of the existence of the R-symmetry. For a detailed discussion of this point see [36]. | 2010-06-23T20:37:04.000Z | 2010-02-11T00:00:00.000 | {
"year": 2010,
"sha1": "d94737603f7b9b6c77b668ea8fbac4f7ddf60191",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1002.2228",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d94737603f7b9b6c77b668ea8fbac4f7ddf60191",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
267546069 | pes2o/s2orc | v3-fos-license | MUC1-EGFR crosstalk with IL-6 by activating NF-κB and MAPK pathways to regulate the stemness and paclitaxel-resistance of lung adenocarcinoma
Abstract Background Chemotherapy resistance often leads to treatment failure. This study aims to explore the molecular mechanism by which MUC1 regulates paclitaxel resistance in lung adenocarcinoma (LUAD), providing a scientific basis for future target selection. Methods Bioinformatics methods were used to analyse the mRNA and protein expression characteristics of MUC1 in LUAD. RT-qPCR and ELISA were used to detect mRNA and protein expression, flow cytometry was used to detect CD133+ cells, and cell viability was detected by CCK-8 assay. mRNA-seq was performed to analyse the changes in expression profile, and GO and KEGG analyses were used to explore the potential biological functions. Results MUC1 is highly expressed in LUAD patients and is associated with higher tumour infiltration. In paclitaxel-resistant LUAD cells (A549/TAX cells), the expression of MUC1, EGFR/p-EGFR and IL-6 was higher than in A549 cells, the proportion of CD133+ cells was significantly increased, and the expression of cancer stem cell (CSC) transcription factors (NANOG, OCT4 and SOX2) was significantly up-regulated. After knocking down MUC1 in A549/TAX cells, the viability of A549/TAX cells was significantly decreased. Correspondingly, the expression of EGFR, IL-6, OCT4, NANOG, and SOX2 was significantly down-regulated. mRNA-seq showed that knocking down MUC1 affected gene expression, and the differentially expressed genes (DEGs) were mainly enriched in the NF-κB and MAPK signalling pathways. Conclusion MUC1 was highly expressed in A549/TAX cells, and MUC1-EGFR crosstalk with IL-6 may be due to the activation of the NF-κB and MAPK pathways, which promote the enrichment of CSCs and lead to paclitaxel resistance.
Introduction
Lung cancer remains the most common cause of cancer death worldwide. Most patients are in an advanced stage at the time of diagnosis, with a high rate of recurrence and metastasis [1,2]. Lung adenocarcinoma (LUAD) is the most common lung cancer, accounting for approximately 40% of all lung cancer cases [3]. The treatment of lung cancer mainly includes surgery, chemotherapy, radiotherapy and immunotherapy [4]. Paclitaxel is one of the commonly used chemotherapy drugs, but resistance to it often leads to chemotherapy failure [5]. Recently, targeted therapy and its combination with other treatments have shown promising results. However, similar to traditional chemotherapy, these new targeted drugs also tend to fail due to the development of drug resistance [6]. Therefore, overcoming drug resistance is a challenging issue in the treatment of LUAD. Some mechanisms have been proposed to explain the emergence of paclitaxel resistance, one of which is the therapeutic tolerance mediated by cancer stem cells (CSCs) [7]. Current chemotherapy and radiotherapy kill most cancer cells, but due to the specific drug resistance mechanisms of CSCs, they cannot be eliminated [8][9][10]. CSCs are a subpopulation of cancer cells with the ability to self-renew and differentiate. Studies have reported that CSCs are involved in cancer resistance, metastasis, and recurrence, and significantly affect tumour treatment [9]. During the treatment of cancer, CSCs exert their drug resistance by maintaining a dormant state, increasing DNA repair capacity, turning off apoptotic pathways, managing reactive oxygen species and reactive nitrogen species in a highly efficient manner, and by manipulating the tumour microenvironment (TME) [7,9]. It has been proposed that targeting CSC subpopulations can lead to tumour regression and reduce the possibility of tumour recurrence after treatment [10]. Erlotinib, a tyrosine kinase inhibitor (TKI) of the epidermal growth factor receptor (EGFR), can inhibit the enrichment of cervical CSCs induced by the MUC1-EGFR-CREB/GRβ-IL-6 axis in cervical cancer [11], but there are no relevant reports on this pathway in LUAD.
Mucin 1 (MUC1) is a type I transmembrane protein with a highly glycosylated extracellular domain [12]. In healthy tissues, MUC1 exists on the surface of all epithelial tissues and serves as a protective barrier for cells through its extracellular domain [13,14]. Nevertheless, in cancer cells, MUC1 has intracellular signalling functions and is closely related to the treatment and progression of different epithelial cancers [15,16]. MUC1 can also up-regulate the expression of ATP-binding cassette transporter B1 (ABCB1) in an EGFR-dependent manner to induce chemotherapy resistance [17]. Previous studies have shown that MUC1-C plays a crucial role in inducing stemness and paclitaxel resistance in human non-small cell lung cancer (NSCLC) A549 cells, but the molecular mechanism by which MUC1 induces paclitaxel resistance in LUAD is still unclear [18].
This study analysed the expression characteristics of MUC1 in LUAD patients through bioinformatics, compared the expression of MUC1, EGFR and IL-6 in human lung cancer A549 cells and paclitaxel-resistant LUAD A549/TAX cells, and assessed cancer cell stemness in the two cell types. Furthermore, we knocked down MUC1 in A549/TAX cells and performed mRNA sequencing (mRNA-seq) to further analyse the molecular mechanism by which MUC1 regulates the stemness of paclitaxel-resistant LUAD cells, providing a scientific basis for future target selection.
Materials and methods

Expression analysis of MUC1 in LUAD patients
The gene expression data and clinical data for MUC1 in LUAD were downloaded from The Cancer Genome Atlas (TCGA) database, and the expression of MUC1 in LUAD samples and matched normal tissues was compared. Moreover, we also analysed the expression of MUC1 across different clinical phenotypes. The protein expression of MUC1 in LUAD tissues and normal tissues was retrieved from the Human Protein Atlas (HPA) database (https://www.proteinatlas.org/). The expression of MUC1 in different tumour tissues and matched normal tissues was compared using the Tumor Immune Estimation Resource (TIMER) database (http://timer.cistrome.org/). The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the 363 Hospital.
Immune infiltration analysis of MUC1
CIBERSORT was used to evaluate the infiltration status of 22 immune cell types in the high- and low-MUC1 groups of the TCGA-LUAD dataset. The Pearson method was used to calculate the correlation between MUC1 expression and differentially infiltrating immune cells; an R-value of −1 indicates complete negative correlation, +1 indicates complete positive correlation, and 0 indicates no correlation. p < 0.05 was considered significant. Through the TIMER database (https://cistrome.shinyapps.io/timer/), the influence of MUC1 copy number variation on immune cell infiltration was evaluated. The ESTIMATE algorithm was used to calculate the immune score, stromal score, tumour purity, and ESTIMATE score for each LUAD patient.
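To make the correlation step concrete, the short R sketch below computes a Pearson correlation between MUC1 expression and one CIBERSORT cell fraction. This is an illustration only, not the authors' code; the vector names are assumptions.

```r
# Illustrative sketch: 'muc1' is a numeric vector of normalized MUC1
# expression across TCGA-LUAD samples, and 'tfh' is the matching CIBERSORT
# fraction of follicular helper T cells; both names are assumed.
res <- cor.test(muc1, tfh, method = "pearson")
res$estimate  # Pearson r: -1 = complete negative, +1 = complete positive
res$p.value   # p < 0.05 was the significance threshold used here
```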
Cell culture
Human lung cancer A549 cells were purchased from the Cell Bank of the Chinese Academy of Sciences (Shanghai, China), and paclitaxel-resistant LUAD A549/TAX cells were purchased from Shanghai MeiXuan Biological Science and Technology Ltd (Shanghai, China). A549 cells were cultured in DMEM medium supplemented with 10% foetal bovine serum (FBS) and penicillin/streptomycin (100 U/mL); A549/TAX cells were additionally maintained with 200 ng/mL paclitaxel in the same medium to preserve their drug resistance. Culture flasks were kept in a 5% CO₂ incubator at 37 °C under static conditions.
RNA extraction and real-time qPCR (RT-qPCR)
TRIzol (TIANGEN, Beijing, China) was used to extract total RNA from cells, and the FastKing cDNA first-strand synthesis kit (TIANGEN, Beijing, China) was used for reverse transcription to synthesize cDNA. SuperReal PreMix Plus (TIANGEN, Beijing, China) was used to amplify the target genes on an ABI StepOne Plus fluorescence quantitative PCR instrument, and the 2^(−ΔΔCt) method was used for relative quantitative analysis. The primers are listed in Supplementary Table 1.
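As an illustration of the relative quantification described above, the following R snippet applies the 2^(−ΔΔCt) formula to placeholder Ct values; the reference gene (GAPDH here) is an assumption, since the housekeeping gene is not stated in the text.

```r
# Placeholder Ct values; the reference gene (e.g., GAPDH) is assumed.
ct <- data.frame(
  group  = c("A549", "A549/TAX"),
  target = c(24.5, 22.1),   # Ct of the gene of interest, e.g., MUC1
  ref    = c(18.2, 18.0)    # Ct of the housekeeping gene
)
dct  <- ct$target - ct$ref   # ΔCt: normalize target to the reference gene
ddct <- dct[2] - dct[1]      # ΔΔCt: relative to the control (A549) group
fold_change <- 2^(-ddct)     # relative expression by the 2^(-ΔΔCt) method
```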
ELISA
According to the manufacturer's instructions, the protein expression of MUC1, EGFR, and IL-6 was measured using ELISA kits (Kexing, Shanghai, China). The absorbance (optical density) of each well was measured at 450 nm, each measurement was repeated 3 times, and protein concentrations were determined by comparing the relative absorbance of samples with that of the standards.
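A minimal sketch of the standard-curve step follows, assuming a simple linear fit (many ELISA kits instead recommend a four-parameter logistic curve); the standard concentrations and OD values are placeholders, not the kit's actual standards.

```r
# Hypothetical standards; a linear fit is shown for brevity only.
standards <- data.frame(conc  = c(0, 25, 50, 100, 200, 400),          # pg/mL
                        od450 = c(0.05, 0.12, 0.21, 0.40, 0.78, 1.52))
fit <- lm(od450 ~ conc, data = standards)   # OD450 as a function of conc.

# Back-calculate a sample concentration from its triplicate OD450 readings.
sample_od   <- mean(c(0.55, 0.57, 0.54))
sample_conc <- (sample_od - coef(fit)[1]) / coef(fit)[2]
```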
Flow cytometry assay
Flow cytometry was used to detect the proportion of CD133+ cells. First, cells were cultured in 6-well plates; cell suspensions were then collected and rinsed with phosphate-buffered saline (PBS). Subsequently, 1 × 10⁶ cells were counted, resuspended in 100 µL staining buffer with 5 µL antibody (APC anti-human CD133), and incubated at room temperature in the dark for 30 min. After incubation, 1 mL PBS was added to each sample, which was resuspended, centrifuged, and the supernatant discarded. Finally, 500 µL PBS was added to each sample, the proportion of CD133+ cells was measured by flow cytometry (NovoCyte, Agilent, USA), and the data were analysed using FlowJo 7.6 software (Tree Star, Inc., CA, USA).

siRNA transfection

All siRNAs were purchased from Hippo Bio (Huzhou, China). The transfection reagent (Lipo8000) was mixed with siRNA to transfect A549 and A549/TAX cells. Culture plates were placed at 37 °C in a 5% CO₂ incubator for 24 h, and fresh culture medium was exchanged 4-6 h after transfection. The siRNA sequences used for transfection are listed in Supplementary Table 2.
Cell Counting Kit-8 (CCK-8) assay
The Cell Counting Kit-8 was used to detect the viability of negative-control A549/TAX-NC cells and MUC1-knockdown A549/TAX siRNA-MUC1 cells. Cells were cultured normally to 90% confluence, digested with trypsin, counted, and adjusted to a concentration of 1 × 10⁴ cells/mL. 100 µL of the adjusted cell suspension was added to each well of 96-well plates, with 3 wells per group, and plates were incubated in a 5% CO₂ incubator at 37 °C until testing. At the end of the culture period, 10 µL of CCK-8 solution from the reagent kit was added to each well of the 96-well plate, followed by a further 4 h incubation at 37 °C. The absorbance at 450 nm was measured using a microplate reader (Molecular Devices, LLC).
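For clarity, a common way to express CCK-8 readings as percent viability is sketched below; the blank correction assumes wells containing medium plus CCK-8 only, which the protocol above does not state explicitly, and all OD values are placeholders.

```r
# A common CCK-8 normalization; blank wells (medium + CCK-8, no cells) are
# an assumption here, and all readings are illustrative placeholders.
od_blank   <- mean(c(0.08, 0.09, 0.08))   # placeholder triplicates
od_control <- mean(c(1.10, 1.05, 1.08))   # A549/TAX-NC wells
od_treated <- mean(c(0.62, 0.66, 0.64))   # A549/TAX siRNA-MUC1 wells

viability_pct <- (od_treated - od_blank) / (od_control - od_blank) * 100
```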
mRNA-sequencing and differential expression analysis
The Illumina high-throughput sequencing platform was used to perform mRNA sequencing of A549/TAX-NC cells and A549/TAX siRNA-MUC1 cells, generating a large amount of raw data. The fastp software was used to filter the raw sequencing data and obtain high-quality clean data, and Salmon v0.13.0 was used for gene expression quantification. Differentially expressed genes (DEGs) were obtained with DESeq2, using screening parameters of Padj < 0.05 and |log2 fold change (FC)| > 1. GO functional enrichment and KEGG pathway analysis of the DEGs were performed using the DAVID database (https://david.ncifcrf.gov/tools.jsp).
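The DEG screening step can be summarized with the DESeq2 calls below. This is a sketch under assumed object names ('counts', 'coldata'), not the authors' script, but it applies the same thresholds (Padj < 0.05, |log2FC| > 1).

```r
library(DESeq2)
# 'counts' (gene-by-sample count matrix) and 'coldata' (sample-to-group
# table with a 'group' column: NC vs. siRNA-MUC1) are assumed objects.
dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ group)
dds <- DESeq(dds)
res <- as.data.frame(results(dds))

# Apply the screening thresholds stated above: Padj < 0.05 and |log2FC| > 1.
degs <- subset(res, padj < 0.05 & abs(log2FoldChange) > 1)
up   <- subset(degs, log2FoldChange > 0)   # 174 up-regulated in this study
down <- subset(degs, log2FoldChange < 0)   # 108 down-regulated
```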
Statistical analysis
GraphPad Prism 5.0 was used for statistical analysis. Data are represented as mean ± standard deviation (SD), with at least 3 replicates per group. Differences between variables were evaluated using Student's t-test; p < 0.05 was considered statistically significant.
Results

The expression characteristics of MUC1 in LUAD
To explore the expression characteristics of MUC1 in LUAD, the mRNA expression of MUC1 in LUAD samples in the TCGA datasets was first mined. The mRNA level of MUC1 was significantly higher in tumour tissues than in normal tissues (Figure 1(A)), and MUC1 exhibited the same expression pattern across different clinical features (Figure 1(B-G)). Consistently, the protein expression of MUC1 in LUAD tissues was higher than that in normal tissues (Figure 1(H)). Furthermore, we investigated the expression of MUC1 in different types of tumour tissues and matched normal tissues and found that MUC1 was significantly up-regulated in LUAD but significantly down-regulated in lung squamous cell carcinoma (LUSC), indicating that MUC1 has different effects in different types of cancer (Supplementary Figure 1).
The immune infiltration analysis of MUC1 expression
According to the median expression of MUC1, LUAD patients in TCGA were divided into a high-MUC1 group and a low-MUC1 group, and the DEGs between the two groups were analysed. A total of 585 DEGs were obtained with thresholds of p < 0.05 and |log2 FC| > 0.5, of which 363 were up-regulated and 222 down-regulated in the high-MUC1 group (Supplementary Figure 2). Studies have shown that MUC1 is closely related to the tumour immune microenvironment of NSCLC [19]. To explore the relationship between MUC1 and the tumour immune microenvironment, the immune cell abundance of the high- and low-MUC1 groups was analysed. As depicted in Figure 2(A), the two groups showed statistical differences in the abundance of 12 immune cell types. Among them, follicular helper T cells, regulatory T cells, dendritic cells, resting mast cells, monocytes, and B cells were positively correlated with MUC1 expression, while activated CD4+ memory T cells, gamma delta T cells, eosinophils, M1 macrophages, activated mast cells, and neutrophils were negatively correlated with MUC1 expression (p < 0.05) (Figure 2(B)).
Further, the TIMER database was used to investigate the effect of MUC1 copy number variation on immune cell infiltration, and we found that MUC1 copy number variation was related to the infiltration degree of B cells, CD8+ T cells, CD4+ T cells, macrophages, neutrophils, and dendritic cells (Figure 3(A)). Meanwhile, the ESTIMATE algorithm was used to calculate the immune score, stromal score, tumour purity, and ESTIMATE score. The stromal score and ESTIMATE score in the high-MUC1 group were lower than in the low-MUC1 group, while tumour purity was higher (Figure 3(B)).
MUC1 was up-regulated in paclitaxel-resistant lung cancer cells, and increased cancer cell stemness
To explore the correlation between MUC1 and paclitaxel-resistant LUAD, the expression of MUC1 in A549 cells and A549/TAX cells was detected by RT-qPCR; the expression of MUC1 in A549/TAX cells was significantly higher than that in A549 cells (Figure 4(A)). Previous studies have shown that MUC1 may stimulate IL-6 expression through the MUC1-EGFR-CREB/GRβ axis to induce cervical CSC enrichment, and we therefore detected the expression of EGFR and IL-6 in both cell types. The mRNA and protein expression levels of EGFR/p-EGFR and IL-6 in A549/TAX cells were significantly increased compared with A549 cells (Figure 4(A,B)). Meanwhile, the proportion of CD133+ cells was also significantly increased in A549/TAX cells (Figure 5(A,B)). For cancer cells, the acquisition of stemness means the cells are more aggressive, leading to a poorer clinical prognosis. We therefore measured the expression of stem cell pluripotency factors: the mRNA expression of NANOG, OCT4, and SOX2 in A549/TAX cells was significantly increased, while the expression of MYC was significantly decreased (Figure 5(C-F)). These results indicated that cancer cell stemness was increased in paclitaxel-resistant LUAD A549/TAX cells.
MUC1 knockdown decreased the stemness of A549/TAX cells
To further investigate the effect of MUC1 on paclitaxel-resistant LUAD, we knocked down the MUC1 gene in A549/TAX cells and selected siRNA3 as the effective interference construct for subsequent experiments based on RT-qPCR results (Figure 6(A)). The CCK-8 data showed that knocking down MUC1 significantly reduced A549/TAX cell viability (Figure 6(B)). Compared with the negative control, knocking down MUC1 significantly decreased the expression of EGFR, IL-6, OCT4, NANOG, and SOX2. We therefore speculated that knocking down MUC1 affects the stemness of paclitaxel-resistant LUAD A549/TAX cells (Figure 6(C,D)).
Identification of DEGs and functional enrichment between A549/TAX-NC and A549/TAX-siRNA-MUC1
To explore the mechanism by which MUC1 affects the stemness of paclitaxel-resistant LUAD A549/TAX cells, mRNA sequencing of the negative-control A549/TAX-NC cells and MUC1-knockdown A549/TAX-siRNA-MUC1 cells was performed. A total of 282 DEGs were screened out with thresholds of p < 0.05 and |log2 FC| > 1, of which 174 genes were up-regulated and 108 down-regulated. The volcano plot and heat map of the DEGs are shown in Figure 7(A,B), and the top 20 DEGs are listed in Table 1. As is well known, ABCB1, which encodes P-glycoprotein, and ABCC2 and ABCG2, which encode the transport proteins MRP2 and BCRP, respectively, play roles in the development of paclitaxel resistance. To further explore the mechanism of paclitaxel resistance, we examined the expression of ABCB1, ABCC2, and ABCG2 in the mRNA-seq dataset. There was no significant difference in their expression between A549/TAX-NC and A549/TAX siRNA-MUC1 cells, indicating that MUC1 may regulate paclitaxel resistance through a mechanism other than ABCB1-encoded P-glycoprotein (Figure 7(C-E)).
GO functional enrichment and KEGG pathway analyses of the up-regulated and down-regulated genes were performed separately using the DAVID database. For the up-regulated genes, the top GO terms were 'cellular response to lipopolysaccharide' in the biological process (BP) category, 'postsynaptic membrane' in the cellular component (CC) category, and 'CXCR chemokine receptor binding' in the molecular function (MF) category (Figure 8(A)). KEGG pathway analysis showed that the up-regulated DEGs were mainly enriched in the IL-17, TNF, and NF-κB signalling pathways (Figure 8(B)). For the down-regulated genes, the top GO terms were 'signal transduction' in BP and 'plasma membrane' in CC (Figure 8(C)). KEGG pathway analysis showed that the down-regulated DEGs were enriched in the MAPK signalling pathway (Figure 9). MUC1 may regulate the stemness of paclitaxel-resistant LUAD cells through these key pathways.
Discussion
To date, chemotherapy is still the main treatment for patients with LUAD [4]. However, its application is limited by intrinsic or acquired drug resistance [20]. Chemotherapy resistance involves multiple genes and related pathways; the role of MUC1 in resistance has been reported, but the molecular mechanism in LUAD is still unclear [9,21]. This study is the first to explore the molecular mechanism by which MUC1 regulates paclitaxel resistance in LUAD cells. MUC1 is highly expressed in LUAD and is associated with extensive tumour immune infiltration. In vitro experiments confirmed that MUC1 affects cancer cell stemness through the MUC1-EGFR-IL-6 pathway, characterized by a significant decrease in the expression of IL-6 and stem cell transcription factors after knocking down MUC1 in A549/TAX cells. Furthermore, we explored the impact of knocking down MUC1 on the expression profile of A549/TAX cells and speculated that the crosstalk between MUC1-EGFR and IL-6 may act through the NF-κB and MAPK pathways.
MUC1, a well-known oncogene, is usually overexpressed in various epithelial adenocarcinomas [22-25]. MUC1 regulates the growth, proliferation, metastasis, apoptosis, and developmental processes of cancer cells by participating in different signalling pathways [26,27]. Overexpression of MUC1 in NSCLC is associated with poor disease-free survival and overall survival [28]. By mining the TCGA database, this study found that the expression of MUC1 in tumour tissues was significantly higher than in normal tissues of LUAD patients. In addition, the pan-cancer analysis showed that MUC1 expression was significantly up-regulated in LUAD and significantly down-regulated in LUSC, indicating that MUC1 has different effects in different cancer types. A bioinformatics analysis of NSCLC showed that MUC1 has a higher expression frequency in adenocarcinoma and other non-small cell lung cancers than in squamous cell carcinoma, which is consistent with our study [28]. MUC1-C promotes a suppressive immune microenvironment in non-small cell lung cancer and is a potential target for reprogramming the tumour microenvironment. This study likewise showed extensive infiltration of tumour immune cells in the high-MUC1 group, confirming previous studies [19]. Moreover, MUC1 has been confirmed to be related to the formation of drug resistance [21]. A recent study showed that the extracellular domain of MUC1 acts as a hydrophilic barrier and regulates cellular permeability to paclitaxel, thereby inhibiting its uptake [29]. It was therefore necessary to study the correlation between MUC1 and paclitaxel resistance in LUAD specifically.
MUC1 is over-expressed in NSCLC tissues and in paclitaxel-resistant NSCLC A549/PR cells [18]. Consistently, in the present study, the expression of MUC1 was significantly increased in A549/TAX cells compared with A549 cells. A study in NSCLC has shown that MUC1 plays a central role in the integration of PD-L1 signalling, and down-regulation of MUC1 inhibits NSCLC disease progression [30]. Besides, co-administration of MUC1-EGFR-ABCB1 inhibitors with paclitaxel significantly blocked tumour growth and relapse in a xenograft mouse model [17]. Similarly, in vitro and in vivo analyses have consistently shown that the MUC1 cytoplasmic domain (MUC1-C) is involved in the stemness and paclitaxel resistance of cancer cells, and that inhibition of MUC1-C expression combined with paclitaxel treatment was sufficient to reduce the sphere-forming ability and survival of paclitaxel-resistant LUAD A549/PTX cells [31]. These results consistently indicate that MUC1 is a dominant factor in paclitaxel resistance, and that regulating the expression of MUC1 may help overcome chemotherapy-induced paclitaxel resistance.
In cervical cancer, MUC1 may stimulate IL-6 expression through the MUC1-EGFR-CREB/GRβ axis to induce CSC enrichment, leading to drug resistance [11]. Hence, we compared the expression of the related genes and proteins in A549 and A549/TAX cells and found that the expression of EGFR/p-EGFR and IL-6 in A549/TAX cells was significantly increased. It has been reported that CD133 and MUC1 expression are associated with an aggressive tumour phenotype, and that MUC1 is highly expressed in pancreatic CD133+ cells [32]. CD133 serves as a stemness biomarker for CD133+ CSCs, which have been found in lung cancer tissues [33,34]. Given that, the proportion of CD133+ cells was detected by flow cytometry. Compared with A549 cells, the proportion of CD133+ cells and the expression of MUC1 were significantly increased in A549/TAX cells, indicating that high MUC1 expression and cancer cell stemness may be involved in paclitaxel resistance. NANOG, OCT4, SOX2, and MYC are stem cell transcription factors that are believed to promote the acquisition of stemness in cancer cells and are overexpressed in CSCs [35-37]. Here, the expression of NANOG, OCT4, and SOX2 was significantly increased in A549/TAX cells, indicating that cancer cell stemness was stronger in A549/TAX cells and that the development of paclitaxel resistance is related to cancer cell stemness.
To further investigate the correlation between MUC1 and paclitaxel resistance and stemness in LUAD, MUC1 was knocked down in A549/TAX cells; the expression of EGFR, IL-6, OCT4, NANOG, and SOX2 was significantly down-regulated, suggesting that knocking down MUC1 affected the stemness of paclitaxel-resistant LUAD cells. The CCK-8 assay showed that cell viability in MUC1-knockdown cells was significantly lower than in the negative control, indicating that knocking down MUC1 reduced the activity of paclitaxel-resistant LUAD cells. To further investigate the molecular mechanism, mRNA-seq was conducted to analyse the changes in the RNA expression profile after MUC1 knockdown in A549/TAX cells. The first reported mechanism of paclitaxel resistance is that the ABCB1 gene encodes P-glycoprotein, which can actively pump paclitaxel out of cells. Paclitaxel is also transported by MRP2 and BCRP, encoded by ABCC2 and ABCG2, respectively [38]. However, in our mRNA-sequencing data, the expression of ABCB1, ABCC2, and ABCG2 showed no significant difference between A549/TAX-NC and A549/TAX siRNA-MUC1 cells. Similarly, a study in human lung cancer cell lines found no correlation between ABCB1 mRNA expression and paclitaxel sensitivity or resistance, suggesting there may be more than one mechanism for the development of paclitaxel resistance [39]. Notably, some DEGs were enriched in the NF-κB and MAPK pathways, which are involved in the regulation of cancer stem cells [40]. Therefore, MUC1 may regulate paclitaxel resistance not through ABCB1-encoded P-glycoprotein but through NF-κB and MAPK pathway-mediated cancer stem cells. Activation of the NF-κB signalling pathway is related to the initiation and progression of many human cancers; inhibiting NF-κB can prevent EMT in lung cancer cells and induce apoptosis in CSCs [41,42]. Previous studies have shown that MUC1 may help cancer cells self-renew through the NF-κB pathway, while silencing MUC1 reduces NF-κB activity [27,43,44]. In inflammatory breast cancer, syndecan-1/Notch/EGFR crosstalk activates an Akt-mediated NF-κB signalling pathway that regulates the expression of interleukin 6 (IL-6) and other inflammatory factors to promote CSC colony formation [45]. Consistently, the KEGG pathway analysis in this study indicates that some DEGs were enriched in the NF-κB pathway, suggesting that MUC1-EGFR crosstalk with IL-6 may occur through activation of the NF-κB pathway. Besides, overexpression of MUC1 enhances activation of the MAPK pathway and reduces cell adhesion through its cytoplasmic domain [46]. The p38 MAPK signalling pathway has been shown to regulate the NF-κB pathway [47,48], and the DEGs in this study were also enriched in the MAPK pathway. Therefore, based on previous studies and the mRNA-seq data of this study, we speculate that MUC1-EGFR may lead to activation of the NF-κB and MAPK pathways, and that their crosstalk with IL-6 promotes the enrichment of CSCs, thereby inducing the development of paclitaxel resistance. Targeting the NF-κB and MAPK pathways may therefore provide multiple benefits against paclitaxel resistance, and finding suitable drugs to modulate paclitaxel resistance in combination with chemotherapy is a promising treatment strategy for LUAD. In the future, we will conduct further experiments to identify key targets of chemotherapy-induced paclitaxel resistance in LUAD.
In conclusion, our study confirms the key role of MUC1 in the regulation of paclitaxel resistance. MUC1 is highly expressed in paclitaxel-resistant LUAD cells, and MUC1-EGFR crosstalk with IL-6 may act through activation of the NF-κB and MAPK pathways, which promotes the enrichment of CSCs and leads to paclitaxel resistance. In the treatment of LUAD, regulating the expression of MUC1 and the molecules or pathways involved, in combination with chemotherapy, may help overcome chemotherapy-induced paclitaxel resistance.
Figure 1. The expression characteristics of MUC1 in lung adenocarcinoma (LUAD). (A) The expression of MUC1 in normal and LUAD tissues; (B) the expression of MUC1 in different stages; (C) the expression of MUC1 in different T stages; (D) the expression of MUC1 in different N stages; (E) the expression of MUC1 in different M stages; (F) the expression of MUC1 in different genders; (G) the expression of MUC1 by smoking history; (H) the protein expression of MUC1 in normal and LUAD tissues.
Figure 2. The immune infiltration analysis of MUC1 expression. (A) The immune cell infiltration in the high-MUC1 and low-MUC1 groups; (B) the correlation between MUC1 expression and differential immune cells; the length of the line represents the strength of the correlation, the direction of the line represents positive or negative correlation, and the colour and size of the end point represent the p-value.
Figure 8. GO and KEGG analysis of DEGs between A549/TAX-NC and A549/TAX-siRNA-MUC1. (A) The GO enrichment of up-regulated DEGs; (B) the KEGG pathway of up-regulated DEGs; (C) the GO enrichment of down-regulated DEGs. | 2024-02-09T06:17:34.590Z | 2024-02-07T00:00:00.000 | {
"year": 2024,
"sha1": "9b787ea4bcdf69a1bfc7d1f176f5a1dfa663ddd3",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/07853890.2024.2313671?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "75af342654cb61bf0d49b176cfbd558eef9b87ed",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249834001 | pes2o/s2orc | v3-fos-license | Microplastics do not affect bleaching of Acropora cervicornis at ambient or elevated temperatures
Microplastic pollution can harm organisms and ecosystems such as coral reefs. Corals are important habitat-forming organisms that are sensitive to environmental conditions and have been declining due to stressors associated with climate change. Despite their ecological importance, it is unclear how corals may be affected by microplastics or if there are synergistic effects with rising ocean temperatures. To address this research gap, we experimentally examined the combined effects of environmentally relevant microplastic concentrations (i.e., the global average) and elevated temperatures on bleaching of the threatened Caribbean coral, Acropora cervicornis. In a controlled laboratory setting, we exposed coral fragments to orthogonally crossed treatment levels of low-density polyethylene microplastic beads (0 and 11.8 particles L−1) and water temperatures (ambient at 28 °C and elevated at 32 °C). Zooxanthellae densities were quantified after the 17-day experiment to measure the bleaching response. Regardless of microplastic treatment level, corals in the elevated temperature treatment were visibly bleached and necrotic (i.e., significant negative effect on zooxanthellae density) while those exposed to ambient temperature remained healthy. Thus, our study successfully elicited the expected bleaching response to a high-water temperature. However, we did not observe significant effects of microplastics at either individual (ambient temperature) or combined levels (elevated temperature). Although elevated temperatures remain a larger threat to corals, responses to microplastics are complex and may vary based on focal organisms or on plastic conditions (e.g., concentration, size, shape). Our findings add to a small but growing body of research on the effects of microplastics on corals, but further work is warranted in this emerging field to fully understand how sensitive ecosystems are affected by this pollutant.
INTRODUCTION
Coral reefs provide recreational, commercial, and ecological services, which makes them a valuable marine habitat (Woodhead et al., 2019). Despite their importance, coral reefs are threatened by a suite of global and local stressors. Globally, climate change is affecting ocean temperatures, which are expected to increase by 2.6-4.8 °C at the surface by 2100 (Pachauri et al., 2014; Rogelj, Meinshausen & Knutti, 2012), and can result in coral bleaching. The global effects of ocean warming on coral reefs are evidenced by the significant degradation and collapse of reef ecosystems, since bleaching can lead to coral mortality (Pratchett et al., 2018). Due to the continual rise in ocean temperatures, there has been an increase in the frequency and intensity of coral bleaching events (Hughes et al., 2018; Riegl et al., 2009). The Florida Keys and Caribbean are among the most degraded reefs, with a 63% continuous decline in coral cover between 2007 and 2016 (Jones, Figueiredo & Gilliam, 2020); however, reef degradation began decades before the recent changes (Schutte, Selig & Bruno, 2010). In addition to rising water temperatures, there is a growing concern about the effects of microplastics on coral-reef systems. Although some early studies have demonstrated that microplastics can harm corals (Hankins, Duffy & Drisco, 2018; Reichert et al., 2018; Tang et al., 2018; Tang et al., 2021), the responses have been equivocal among the species examined (Reichert et al., 2019; Reichert et al., 2018). We therefore lack an understanding of how microplastics may interact with elevated ocean temperatures, and how this emerging stressor may affect bleaching in most coral species. Addressing this research gap will help us to broaden our understanding of the generalities of the individual and combined effects of these two anthropogenic stressors on sensitive coral-reef ecosystems.
Exposure to microplastics in corals has been demonstrated to cause a variety of negative effects. Adhesion of microplastics to a coral's surface can cause localized bleaching and tissue necrosis (Reichert et al., 2018), but further harm can occur when corals ingest them. There is some evidence to suggest corals accidentally ingest microplastics when they try to capture food (Axworthy & Padilla-Gamino, 2019). This prevents corals from obtaining real food due to time spent handling the plastic (Savinelli et al., 2020) and imparts satiation by filling their gastrovascular cavity (Rotjan et al., 2019). These responses can have important implications on their energy budgets because the movements involved with capturing, ingesting, and egesting microplastics are energetically costly (Reichert et al., 2019). In addition, a reduction in food consumption to replenish energy lost when handling the microplastics could ultimately cause an energy deficit (Savinelli et al., 2020). This may have profound repercussions when corals are stressed, such as in ocean warming conditions, since they need energy to cope with these stressors. However, only three studies to date have examined how microplastics and ocean warming interact in corals. Reichert et al. (2021) found equivocal effects of microplastics on five species of coral. Although microplastics exacerbated the effects of temperature on bleaching in one species, it did not affect bleaching in three species, and even reduced it in one (Reichert et al., 2021). Increased photosynthetic efficiency, upregulation of heat shock proteins, or increased heterotrophic feeding were potential explanations for why Montipora digitata bleached less when thermally stressed (Reichert et al., 2021). However, Axworthy & Padilla-Gamino (2019) found corals reduced feeding on Artemia but not on microplastics following thermal stress and suggested this could cause an energy deficit. Additionally, Mendrik et al. (2021) observed reduced photosynthetic activity in Acropora spp. exposed to microplastic fibers at ambient temperature likely due to an increase in reactive oxygen signaling species, an indicator of stress, but this effect was not found at high temperatures. The authors suggested the corals acclimated to thermal stress by producing oxidative enzymes which also protected them from the microplastic stress (Mendrik et al., 2021). Ultimately, the stress and energy deficits caused by microplastics combined with stress from elevated temperatures could interact to produce either an additive or synergistic effect on coral bleaching, but further work is needed to examine this.
Acropora cervicornis is an important reef-building species in the tropical western Atlantic region that provides ecosystem services such as habitat for organisms and storm protection of shorelines (Moberg & Folke, 1999;Woodhead et al., 2019). This species is particularly susceptible to bleaching and other stressors and has been declining in abundance over time (Langdon et al., 2018). In fact, A. cervicornis has been listed as critically endangered by the International Union for Conservation of Nature (Aronson et al., 2008), and some estimates have suggested it may not survive past 2035 due to its susceptibility to bleaching (Langdon et al., 2018). Its recent decline in abundance, combined with fast growth rates, reliance on asexual propagation, and ecological importance have made it a focal species for restoration efforts in the Caribbean (Johnson et al., 2011;Young, Schopmeyer & Lirman, 2012). A. cervicornis has been shown to ingest microplastics (Hankins, Moso & Lasseigne, 2021) but the effects of doing so remain unclear. Given the prevalence of microplastics in Caribbean waters (Garces-Ordonez et al., 2021;Rose & Webber, 2019), the warming trend in the region (Chollett et al., 2012;Kuffner et al., 2014), and the drastic declines of A. cervicornis (Aronson et al., 2008), it is imperative to assess the effects of the combined stressors (microplastics and elevated temperatures) on this sensitive coral. To address this knowledge gap, we asked: Does microplastic exposure interact with elevated water temperatures to exacerbate bleaching in A. cervicornis? To test this study question, we performed controlled laboratory experiments where we manipulated temperature and microplastic concentrations and quantified the amount of bleaching or tissue loss.
MATERIALS & METHODS
We conducted experiments in the University of South Florida's College of Marine Science (CMS) aquarium facility. Mote Marine Laboratory (Summerland Key, Florida, USA) donated A. cervicornis fragments comprising two genotypes from several colonies each; the genotypes had moderate to high tolerance to heat stress (Muller et al., 2021). Corals were obtained from Mote Marine Laboratory under National Marine Sanctuary Permit FKNMS-2015-163-A3. We performed the experiments on coral fragments of the moderately heat-tolerant genotype in November 2020 and of the highly heat-tolerant genotype in May-June 2021. We glued coral fragments to ceramic tiles upon arrival at the CMS and fed the corals 2.5 g per 100 gallons of a dried zooplankton mix per manufacturer recommendations (Reef-roids, PolypLab). We stored the fragments in a 190 L acclimation tank at the CMS for two weeks at 28 °C prior to the experiments. Lighting consisted of T5 High Output fluorescent lights (two 440 nm wavelength and two 15,000 K bulbs in each fixture in each tank) with an 8:16 h (light:dark) photoperiod. We used this photoperiod due to mortality associated with longer light periods in preliminary experiments, but it is consistent with previous studies on corals maintained in the laboratory (Schutter et al., 2011). We used two submersible pumps (Model 3, Danner, Islandia, NY) to maintain circulation throughout the tank, with a titanium heater (Titanium 800+, Finnex) and controller (Apex Lite, Neptune Apex Systems) to maintain temperature. We made seawater with Reef Crystals Reef Salt (Instant Ocean, Blacksburg, VA) mixed with deionized water to a salinity of 35.
We used a fully orthogonal design to test the effects of temperature and microplastic exposure on coral bleaching. Specifically, we crossed two temperatures, 28 °C and 32 °C, with two microplastic concentrations, 0 microplastics L−1 and 11.8 microplastics L−1. We chose 28 °C to match the ambient water temperature at the time of collection, since Mote raised the corals in an offshore nursery. The higher temperature (32 °C) was within the predicted range for the tropical western Atlantic region by the year 2100 (Pachauri et al., 2014; Rogelj, Meinshausen & Knutti, 2012). The microplastic concentration reflected the global average of 11.8 microplastics L−1 (Barrows, Cathey & Petersen, 2018). We placed two to three coral fragments in each of the eight 8.26 L experimental tanks per treatment combination. We kept experimental tanks within a water bath to keep their temperature stable; Freytes-Ortiz & Stallings (2018) developed this system to examine the effects of ocean warming on marine organisms. We placed the heater in the water bath with pumps on opposite ends to circulate the water. Each experimental tank contained a wave maker (JVP-110, 528 gallons hr−1, Sunsun, Zhoushan City, China) to generate flow and an airstone. We performed water changes of approximately one-third of the tank volume every other day and measured water quality for eight parameters: temperature, calcium, alkalinity, nitrite, salinity, pH, nitrate, and ammonia. We randomly selected two tanks from each treatment four times throughout the experiment to test the water, and all tanks were ultimately examined. Water quality throughout each experiment was within an acceptable range except on the last day of the experiment for the moderately heat-tolerant genotype (Fig. 1). Two tanks had high levels of ammonia, nitrite, and nitrate caused by the tissue necrosis and mortality of the coral fragments in those tanks due to the elevated water temperature.
For the high temperature treatment, we increased the water temperature by 0.5 °C each day until it reached 30 °C. We held the temperature at 30 °C for four days, then increased it by 0.5 °C per day until it reached 32 °C, where it remained constant for six days. This rate of temperature increase mitigated any effects of thermal shock. During the holding period, the ambient and elevated tanks were maintained at 28 ± 0.02 °C and 32 ± 0.02 °C (mean ± SE), respectively. We added fluorescent green low-density polyethylene microbeads with diameter ranges of 212-250 µm (1.025 g cc−1) and 300-355 µm (1.010 g cc−1) directly to the tanks at a concentration of 11.8 microplastics L−1 (5.9 particles L−1 of each size) (Barrows, Cathey & Petersen, 2018). We chose these microplastic sizes based on what the small-polyp A. cervicornis (1.26-2.03 mm) can ingest. Prior to the experiments, we kept the microplastics in saltwater for at least one week to accumulate a biofilm. We added the microplastics to both the elevated and ambient temperature treatments after the first temperature increase, along with food to initiate a feeding response. After the microplastics were added, they mostly floated on the surface on the first day and then were suspended in the water column for the remainder of the experiment. During water changes, we separated out the microplastics and added them back to the tank to ensure consistent microplastic concentrations throughout the study. Ingestion was assumed to occur during microplastic exposure, based on evidence from Hankins, Moso & Lasseigne (2021) and video we collected (Fig. 2, Video S1). We used a protocol to minimize contamination (Brander et al., 2020; Cowger et al., 2020) that we modified for corals. We separated the tanks from the rest of the room with a heavy-duty tarp to limit airborne contamination. We wore 100% cotton clothing to limit fiber shedding, thoroughly rinsed hardware (e.g., containers, glassware) with deionized water before use, and covered it in aluminum foil if not used immediately. We also rinsed our arms thoroughly with deionized water up to the elbows and wiped down all other surfaces with paper towels and deionized water.
To visually compare treatments throughout the experiments, we measured the response to thermal stress daily based on the severity of coral bleaching and a visual estimate of the percent surface area affected by tissue loss (i.e., necrosis). Coral bleaching occurs when the tissue loses its color due to the expulsion of zooxanthellae, which makes the coral appear white, whereas tissue necrosis is the loss of tissue (Hoegh-Guldberg & Smith, 1989; Rodolfo-Metalpa et al., 2005). The ordinal bleaching scale we used was none (0), low (>0-25%), partial (25-50%), high (50-75%), and total (75-100%). Immediately following the conclusion of the experimental trials, we placed all corals in a −20 °C freezer for at least one hour, then removed them one at a time and sprayed them with artificial seawater to remove the tissue (Johannes & Wiebe, 1970). We preserved the collected tissue in 2% formalin. Next, we recorded the total homogenate volume (i.e., the volume of the zooxanthellae, seawater, and formalin), homogenized it, and counted zooxanthellae on 10 grids of a Neubauer-improved hemocytometer under a light microscope. To obtain the total zooxanthellae count for each fragment, we divided the average cell count per grid by the volume of the hemocytometer chamber, then multiplied by the total homogenate volume. We used the aluminum foil method from Marsh Jr (1970) to calculate the surface area of each fragment. To do this, we completely and snugly covered each coral skeleton in aluminum foil with no overlap, and then weighed the foil. We then weighed five 100 cm² foil sheets and calculated their mean mass as a reference. Next, we calculated the coral surface area by multiplying the reference foil surface area by the coral foil weight and dividing by the reference foil weight.

Figure 2. (A-B) Images of coral capturing a microplastic. The black arrow in A points to the microplastic that was captured in B. Time stamps for each picture are at the bottom (note: these images were taken from Video S1, which is sped up 20×). A second microplastic is visible in A but was not captured by the coral during this recording; both microplastics are circled in black in A.
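To make the arithmetic explicit, the R sketch below walks through both calculations with placeholder numbers; the volume represented by the counted hemocytometer grids is an assumption, since it depends on the grid type used.

```r
# Placeholder numbers throughout; the per-grid volume is an assumption.
grid_counts       <- c(42, 38, 45, 40, 39, 44, 41, 37, 43, 40)  # 10 grids
chamber_vol_ml    <- 1e-4   # assumed volume per counted grid (mL)
homogenate_vol_ml <- 50     # zooxanthellae + seawater + formalin

total_cells <- mean(grid_counts) / chamber_vol_ml * homogenate_vol_ml

# Foil method (Marsh Jr, 1970): reference sheets of known 100 cm^2 area.
ref_area_cm2      <- 100
ref_mass_g        <- 0.45   # mean mass of the five reference sheets
coral_foil_mass_g <- 0.62   # foil covering one coral skeleton
surface_area_cm2  <- ref_area_cm2 * coral_foil_mass_g / ref_mass_g

density <- total_cells / surface_area_cm2  # zooxanthellae per cm^2
```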
Finally, we quantified zooxanthellae density by dividing the zooxanthellae count of each coral fragment by its surface area. To examine the additive and synergistic effects of temperature (fixed effect) and microplastics (fixed effect) on zooxanthellae density (response), we performed a generalized linear mixed model (GLMM) with tank included as a random effect. We determined the zooxanthellae response data were zero-inflated, and therefore examined several models that are capable of handling a large number of zeros (Zuur et al., 2009). We performed all analyses in R (R Development Core Team, 2021) using glmmTMB (Brooks et al., 2017) for the GLMM and DHARMa (Hartig & Hartig, 2021) for residual diagnostics. We used Akaike information criterion (AIC) to determine the best model then tested for diagnostics. We also determined that genotype did not affect zooxanthellae density (p = 0.55), and because we were not interested in its effects, per se, we pooled the data across genotypes. Our final model, that was deemed the best, was a zero-inflated, negative binomial model that examined the main effects of temperature and microplastic as well as an interaction between the two (AIC = 3,893.1).
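A minimal glmmTMB call matching the final model described above might look like the following; the data frame and variable names are assumptions, and the response specification is simplified relative to the authors' analysis.

```r
library(glmmTMB)
# 'dat' is an assumed data frame with one row per coral fragment: 'zoox'
# (zooxanthellae counts), 'temp' and 'mp' as two-level factors, and 'tank'
# as the random grouping factor. This sketches the model, not the authors'
# script.
m <- glmmTMB(zoox ~ temp * mp + (1 | tank),  # fixed effects + interaction
             ziformula = ~ temp + mp,        # zero-inflation component
             family = nbinom2,               # negative binomial
             data = dat)
summary(m)   # z- and p-values for main effects and the interaction
AIC(m)       # candidate models were compared by AIC
```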
RESULTS
Bleaching did not occur in the ambient temperature (28 °C) treatment but was extensive in the elevated one (32 °C). Indeed, 97.5% of corals in the high temperature treatment were visibly bleached and 75.3% experienced tissue necrosis (Fig. 3). These observations held regardless of microplastic presence. Further, zooxanthellae density was strongly affected by elevated temperature (z = −8.15, p < 0.001, Table 1). However, zooxanthellae density was not affected by microplastics either alone (z = 1.07, p = 0.29) or in combination with elevated temperature (z = 1.04, p = 0.30). Neither elevated temperature (z = 0.01, p = 0.99) nor microplastic presence (z = 0.17, p = 0.87) contributed to excess zeros in the zero-inflated model.
DISCUSSION
Using a short-term laboratory experiment, we have shown that the presence of microplastics, when combined with thermal stress, did not alter the bleaching response of A. cervicornis. Importantly, these experiments were conducted using environmentally relevant microplastic concentrations. Research focused on the potential effects of microplastics on corals is an emerging field, and this study was one of the first to examine the orthogonal effects of microplastics with thermal stress (Axworthy & Padilla-Gamino, 2019;Reichert et al., 2021). As expected, elevated temperature reduced the zooxanthellae densities of the coral, but we found no individual or interactive effects of the microplastics.
The literature to date has been equivocal regarding the effects of microplastics on coral bleaching. The results from our study are consistent with previous research on Porites lutea and Heliopora coerulea at ambient temperature (Reichert et al., 2019; Reichert et al., 2018), but in contrast with studies that have found microplastic exposure can cause bleaching and tissue necrosis in A. muricata and Pocillopora verrucosa (Reichert et al., 2019; Reichert et al., 2018; Syakti et al., 2019). Similar to our study design, Reichert et al. (2021) examined the combined effects of microplastics and climate-change induced ocean warming, and found more severe bleaching in microplastic-treated fragments of Pocillopora verrucosa at elevated temperature. However, consistent with our results, Reichert et al. (2021) did not find an additive or synergistic effect of microplastics at elevated temperatures in A. muricata, Porites cylindrica, and Stylophora pistillata. The contrasting results among species highlight the species-specific responses corals have to microplastics. Previous studies have attributed the different responses to microplastics among coral species to variation in their reliance on heterotrophic feeding (Reichert et al., 2019; Tang et al., 2021). Corals typically rely on photosynthesis to meet their energy demands but can supplement this with heterotrophic feeding (Grottoli, Rodrigues & Palardy, 2006), which makes them vulnerable to microplastics through ingestion. Microplastics have been shown to be stressful to corals (Tang et al., 2018), which can deplete their energy (Hankins, Moso & Lasseigne, 2021). In response to reduced energy, corals may increase heterotrophic feeding, which leads to increased interactions with microplastics, additional stress, and energy depletion, subsequently causing bleaching (Reichert et al., 2019). Some coral species rely more on heterotrophic feeding than others; thus, they are more vulnerable to microplastics, while species that rely less on heterotrophic feeding limit their interactions with microplastics and suffer less bleaching (Reichert et al., 2019; Reichert et al., 2018). This is especially concerning at elevated temperatures, where corals can display heterotrophic plasticity in response to thermal stress (Grottoli, Rodrigues & Palardy, 2006); however, we did not see an effect at either ambient or elevated temperatures. Microplastics were not stressful to A. cervicornis, possibly because they have small polyps that ingest fewer microplastics than large-polyp corals (Hankins, Duffy & Drisco, 2018; Hankins, Moso & Lasseigne, 2021). Despite a reliance on heterotrophic feeding (Towle, Enochs & Langdon, 2015), the smaller polyp size could have led to lower rates of microplastic ingestion, which limited the interactions A. cervicornis had with the microplastics. Therefore, the stress and energy consumption associated with microplastic exposure was limited, which prevented bleaching. However, it is unclear how many microplastics these corals ingested, since the goal of this study was to assess the effects of microplastic exposure on coral bleaching rather than to specifically measure ingestion. Microplastic ingestion has been observed in this coral species, so we assumed it occurred throughout the experiments. Experimental conditions may have also played a role in the lack of a microplastic effect in our study. For example, the response of corals to this pollutant has been shown to be dependent on microplastic concentration (Reichert et al., 2021; Syakti et al., 2019). The choice of concentration(s) to use in experimental studies can be complicated since they are dynamic both spatially (Barrows, Cathey & Petersen, 2018) and temporally (Courtene-Jones et al., 2020). Microplastic concentrations range from 0 to 220 particles L−1 in the global ocean (Barrows, Cathey & Petersen, 2018), 3 × 10−5 to 14 particles L−1 in the tropical western Atlantic Ocean, and approximately six particles L−1 in the Caribbean (Barrows, Cathey & Petersen, 2018; Ivardo Sul, Costa & Fillmann, 2014). Due to the large range of microplastic concentrations found in the global ocean, we used the global oceanic average to make our results applicable to a broader range of locations. Our results align with previous work that did not find an effect of microplastics at concentrations reflective of current oceanic conditions (Bucci, Tulio & Rochman, 2020; Reichert et al., 2021; Syakti et al., 2019), whereas studies that have found stronger effects on zooxanthellae densities used concentrations 17 times higher, and more, than the one we used (Reichert et al., 2019; Reichert et al., 2018). For example, Reichert et al. (2021) found lower photosynthetic efficiency, mortality, and bleaching in two coral species when exposed to 2,500 microplastics L−1 at ambient and elevated temperatures but not at lower concentrations (2.5, 25, and 250 microplastics L−1). Our finding is important because it indicates that bleaching in A. cervicornis is not exacerbated by realistic microplastic concentrations observed on average in the global ocean, and ocean warming remains the larger threat. It is important to consider that our experiments took place in a controlled laboratory setting and used a single, static microplastic concentration.

Table 1. Output of GLMM to evaluate the effects of temperature and microplastic exposure on zooxanthellae density. Ambient temperature (28 °C) and MP (absent) were used as the model reference (α = 0.05; bold values indicate p < 0.05). (Only the column headers 'Coefficient' and 'Std…' survive from the source; the table values are not recoverable.)
However, corals can be exposed to temporally variable microplastic levels due to ocean dynamics which could result in a different response locally compared to a controlled laboratory setting. Microplastic size can also play an important role in the effects on organisms. For example, Syakti et al. (2019) found smaller microplastics had a stronger effect on bleaching compared to larger ones. Indeed, studies that assessed the effects of microplastics on corals have used a range of microplastic sizes from 1-500 µm, which could lend to the varying results. In this study, we used a mixture of two different microplastic sizes (212-250 and 300-355 µm) to simultaneously expose the corals to different sizes of plastic which is more representative of actual ocean conditions. In addition to microplastic concentration and size, the particle shape could have played a role in the lack of a response to the microplastics (Bucci, Tulio & Rochman, 2020;Mendrik et al., 2021). Photosynthesis in two coral species were altered in different directions (increase and decrease) by different microplastic shapes (fibers and spheres; Mendrik et al., 2021). Additionally, it remains unclear whether polymer type could affect responses to microplastics (Bucci, Tulio & Rochman, 2020). Indeed, most studies on corals, including ours, have used polyethylene microplastics (Axworthy & Padilla-Gamino, 2019;Hankins, Duffy & Drisco, 2018;Hankins, Moso & Lasseigne, 2021;Lanctôt et al., 2020). In contrast, few have used other polymer types (e.g., polystyrene, polypropylene) (Corona et al., 2020;Mendrik et al., 2021;Tang et al., 2018), so it is difficult to determine the role it may have on how corals respond to microplastics.
CONCLUSIONS
In our study, we orthogonally crossed temperature and microplastics to assess the effects of these combined stressors on bleaching in A. cervicornis. We found that microplastics had no effect on the bleaching response of A. cervicornis at ambient and elevated temperatures. Based on the minimal effect of microplastics observed in this study, A. cervicornis could be more tolerant to microplastics; however, further research will need to be conducted on this species to discern this. Also, our experiment assessed the short-term effects of microplastics combined with thermal stress on corals. Long-term experiments are needed to determine how organisms may respond to prolonged exposure to microplastics. While rising ocean temperatures remain a known major threat to corals, microplastic research on corals is still in its infancy. Future work should continue to test for the combined effects of microplastics and other stressors (e.g., ocean acidification, disease) in other coral species to understand how microplastics interact with previously identified stressors in coral-reef ecosystems. Additionally, studies should focus on using realistic microplastic concentrations to make their studies relevant to current and near future conditions but could also use a range of concentrations to identify whether response thresholds exist. Indeed, such efforts could be important since microplastic concentrations will likely continue to increase in the ocean as plastic production continues to grow. Such an effort would also add to the well-studied and often modeled effects of two other major anthropogenic stressors, global warming and ocean acidification. | 2022-06-19T15:10:09.873Z | 2022-06-17T00:00:00.000 | {
"year": 2022,
"sha1": "4034524107ee0e5663d0c71800e54c7311b7de25",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.13578",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e56d29d3178cbd43cda2664d7996792a3f58b69e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
230574207 | pes2o/s2orc | v3-fos-license | Changing paradigm for treatment of heavily calcified coronary artery disease. A complementary role of rotational atherectomy and intravascular lithotripsy with shockwave balloon: a case report
Abstract

Background: Management of heavily calcified coronary arteries is still a major challenge in interventional cardiology. Inadequate stent expansion in calcific lesions is the single most important predictor of stent thrombosis and in-stent restenosis. Rotational atherectomy (RA) is an important tool to modify the calcium burden but is associated with limitations and requires specific skills. Intravascular lithotripsy (IVL) is a novel technique to treat calcified stenotic lesions and has been proposed as an alternative to RA, with promising results.

Case summary: We report a case of a patient with severely calcified right coronary artery stenosis successfully treated with a combination of RA and IVL.

Discussion: In this case, we demonstrate that RA and IVL are complementary strategies, not sufficient on their own and not alternatives to each other.
Introduction
Calcified coronary lesions increase the complexity of percutaneous coronary intervention (PCI). They are associated with stent underexpansion and subsequent stent thrombosis, restenosis and hence adverse outcomes. 1 Therefore, optimal preparation of calcific lesions prior to stent deployment is crucial. Among the available techniques to disrupt the underlying calcium, rotational atherectomy (RA) is an effective tool but with limitations and potential complications. Intravascular lithotripsy (IVL) (Shockwave Medical Inc.) is a new technique for calcium modification with promising results that has been proposed as an alternative to RA. [2][3][4][5][6] We hereby present a case of a severely calcified right coronary artery (RCA) successfully treated with a combination of RA and IVL, which demonstrates that RA and IVL are complementary strategies and not alternatives to each other.

Learning points

• Rotational atherectomy and intravascular lithotripsy are not mutually exclusive but can be complementary strategies.

• Shockwave intravascular lithotripsy is a major development but not a substitute for rotational atherectomy in the treatment of calcified coronary lesions.
Case presentation
A 62-year-old man with a background of type 2 diabetes mellitus, peripheral vascular disease, and hypertension developed inferior ST-segment elevation during induction for femoral-popliteal bypass surgery (Figure 1). The operation was abandoned and he was urgently transferred to the cardiac catheterization laboratory. Coronary angiography was performed via right radial access using 5 Fr JR4 and JL3.5 diagnostic catheters. Coronary angiography showed a critical, heavily calcified stenosis in the RCA (Figure 2, Video 1) and a long segment of moderate disease in the left anterior descending artery (LAD). The left main and circumflex arteries showed non-obstructive disease. Percutaneous coronary intervention to the RCA was performed using a 6 Fr AL1 guide catheter and a PT2 moderate support guidewire. It was not possible to fully inflate 2.5 mm semi-compliant and non-compliant balloons (NCB) (Figure 3). Therefore, the decision was made to proceed to IVL with the Shockwave balloon. However, passing the optical coherence tomography catheter or the 2.5 mm IVL balloon was not possible. Subsequently, RA with 1.5 and 1.75 mm burrs was carried out, followed by NCB inflation (Figure 4), which facilitated passage of the 2.5 × 12 mm IVL balloon. Eight sequential cycles of 10 lithotripsy pulses each were delivered. Optical coherence tomography post-IVL confirmed micro-fractures in the calcium (Figure 5). Following this, 3.0 × 12 mm and 3.5 × 20 mm NCBs were used to further prepare the lesion. A Resolute Onyx 4.0 × 38 mm stent was positioned with the support of a Medtronic Telescope guide extension catheter. The stent was post-dilated to high pressure using a 4.0 × 20 mm NCB. Final angiography demonstrated excellent stent expansion and apposition (Figure 6, Video 2). The patient remained haemodynamically stable throughout the procedure and did not require any support. Echocardiography post-PCI showed normal left ventricular systolic function without regional wall motion abnormalities. Staged pressure wire assessment of the LAD showed an FFR of 0.8; therefore, PCI was not performed. The patient has been followed up in the outpatient clinic and has made good progress.
Discussion
Treatment of severely calcified coronary arteries is a challenge in interventional cardiology. Sub-optimal preparation of calcific vessels results in stent under-expansion, strut malapposition, damage to the polymer, and subsequent stent thrombosis and restenosis.7,8 Although use of RA has significantly improved success rates in tackling heavily calcified lesions, this procedure requires specific skills and is sometimes associated with potentially serious complications including slow flow, no-reflow, vessel perforation, and dissection.2 Furthermore, the effectiveness of RA depends on the luminal area and burr size, with predictable luminal gain in the presence of circumferential calcification and when the luminal area is smaller than the burr size.7 Calcium modification is less predictable when the burr used is smaller than the luminal area.2,7 Intravascular lithotripsy (Shockwave Medical Inc.) is a recently introduced balloon-based technique, established on the principles of renal stone treatment. Intravascular lithotripsy uses circumferential sonic pressure waves to disrupt intimal and medial calcium, in other words to fracture both superficial and deep wall calcification, prior to low-pressure balloon expansion. It is safe, user friendly, and is not associated with significant risks.4,5,9,10 The balloons are largely deliverable and compatible with 6 Fr guide catheters. Due to the absence of interaction with surrounding soft tissue, there is minimal vessel wall injury.7 Intravascular lithotripsy delivers immediate plaque modification by creating micro-fractures in the calcium deposits.6 The feasibility of IVL to modify vascular compliance in calcific coronary arteries was first demonstrated in the Disrupt CAD I study,4 and its safety has been demonstrated in the Disrupt CAD II study.10 The introduction of IVL has indeed transformed calcium modification. Unlike RA, it does not require specific training and is not associated with complications. It is not biased by the guidewire and therefore fractures the calcified plaques circumferentially.10 Prospective studies and real-world experience have shown high effectiveness of IVL in crossing lesions and facilitating stent delivery.5,10 In this case, however, we demonstrated that this single technology is not always feasible in the presence of large calcifications.
In our case, the severely calcific lesion was not dilatable with an NCB and the IVL balloon could not cross the lesion. Rotational atherectomy allowed adequate lesion preparation and delivery of the IVL balloon. Rotational atherectomy is more efficient in the modification of superficial calcium, while the IVL Shockwave balloon treats both superficial and deeper calcium. Therefore, 'Rota-Shock' or 'Rota-tripsy' might occasionally be required to modify the calcium burden sufficiently to facilitate stent delivery and expansion.
Conclusion
This case has highlighted the challenges associated with treating heavily calcified coronary arteries and the steps taken to overcome these as recommended in the calcium algorithm. Rotational atherectomy and IVL are not mutually exclusive but can be complementary strategies. Shockwave IVL is a major development but not a substitute for rotational atherectomy in the treatment of calcified coronary lesions.
Lead author biography
Shana Tehrani is a fellow in Interventional Cardiology, UK.
Knowledge flows in global renewable energy innovation systems: the role of technological and geographical distance
ABSTRACT Understanding the global knowledge dynamics of renewable energy technologies requires consideration of both technological and geographical dimensions. This paper assesses the relative importance of technological and geographical distant knowledge in the future knowledge development of technological innovation systems (TIS) of renewables. Using global renewable energy patents, we quantify the absorptive capacity of countries as the degree of knowledge accumulation in the knowledge diffusion between domestic actors in a TIS. Our results show that international knowledge flows within a renewable energy TIS are more important for countries with smaller absorptive capacity, whereas countries with larger absorptive capacity benefit more from domestic knowledge originating in other TISs. Consequently, each country faces unique opportunities and constraints with respect to global technological developments when developing renewable energy technologies. These findings lead to policy implications that are specific to developing renewable energy technologies in different countries.
Introduction
The development and deployment of renewable energy technologies play a key role in the global energy transitions (Gallagher et al. 2012). Technological change is considered a cumulative process in which new technologies result from the recombination of existing technologies in novel ways (Arthur 2007). This process requires interactions between actors with different backgrounds for knowledge development and diffusion, which is a key mechanism highlighted in the innovation system approaches (Carlsson et al. 2002).
Among the different innovation system approaches, the technological innovation system (TIS) concept contributed significantly to the understanding of the emergence of renewable energy technologies (Bergek et al. 2015;Markard, Hekkert, and Jacobsson 2015). A renewable energy TIS evolves in interactions between actors embedded in various national innovation systems (NIS) and in various existing TISs (Bergek et al. 2008;Carlsson and Stankiewicz 1991;Hekkert et al. 2007). More specifically, knowledge from both actors in other TISs (Andersen and Markard 2020;Malhotra, Schmidt, and Huenteler 2019) and actors in other NISs (Binz, Truffer, and Coenen 2014) are found to be important for the knowledge development of renewable energy TISs.
However, these exogenous factors have been under-conceptualised in current TIS literature (Bergek et al. 2015). First, knowledge from other TISs may originate in different countries resulting from their distinct knowledge development trajectories (Boschma 2017;Hidalgo et al. 2018;Li, Heimeriks, and Alkemade 2020;Petralia, Balland, and Morrison 2017;Sbardella et al. 2018). Second, most of the early empirical TIS applications were carried out within national boundaries (Coenen, Benneworth, and Truffer 2012). Although the recent global innovation systems concept acknowledges these multi-scalar knowledge dynamics between different TISs and NISs (Binz and Truffer 2017), there is a lack of systematic evidence on how different knowledge flows influence future knowledge development in global renewable energy innovation systems.
In order to address this gap, we investigate the relative importance of different knowledge flows along both technological and geographical dimensions in countries of different levels of absorptive capacity. We base our analysis on patents, and their backward and forward citations. We proxy the absorptive capacity of a country with the degree of knowledge accumulation in the knowledge diffusion between domestic actors in a focal renewable energy TIS. Insights of our analyses help to identify the opportunities and constraints for the development of successful renewable energy technologies at different locations.
The paper is structured as follows. In Section 2, we review the literature and establish our conceptual framework. In Section 3, we describe the data, variables and specifications of econometric models. In Section 4, we present the results of econometric analyses. We conclude with our findings in Section 5.
Theoretical background
The production of economically useful new knowledge is considered to result from the collective actions of different actors embedded in an innovation system connected by linkages ranging from informal to formalised network relationships (Lundvall 1992). Based on different delineations of system boundaries, several innovation system approaches have emerged over the years. National innovation systems (Lundvall 1992;Nelson 1993) set system boundaries along the geographical borders of countries. In other cases, system boundaries are set along a technology (Carlsson and Stankiewicz 1991) or a sector (Malerba 2002).
In recent years, the technological innovation system (TIS) concept has helped to explain how renewable energy technologies emerge from interactions between firms and knowledge institutions (Bergek et al. 2008; Hekkert et al. 2007; Hekkert and Negro 2009). Initially, emerging technologies have to rely on the available knowledge and institutions of existing technologies (Arthur 2007). Gradually, they develop their own technological trajectories and supporting institutions, thereby reducing their reliance on other technologies over time (Cohen and Levinthal 1990; Dosi 1982). However, the interaction between a focal TIS and other TISs is less studied but equally important (Bergek et al. 2015).
Knowledge flows between TISs are therefore considered important, as they often underlie successful new knowledge recombination (Mowery and Rosenberg 1998; Scherer 1982). Several studies have aimed at identifying their impact on subsequent technology development in renewable energy TISs. Both Nemet (2012) and Battke et al. (2016) found that knowledge from actors in other TISs is likely to increase the impact of renewable energy innovations.
The geographical dimension of TIS
The TIS approach was originally formulated as a critique of national innovation system approach by explicitly claiming that new technologies may emerge in fluid global networks with actors simultaneously operating at multiple geographical scales (Carlsson 2006). However, most earlier empirical applications of the TIS concept were carried out within national boundaries (Coenen, Benneworth, and Truffer 2012). Recent systematic empirical evidence shows that countries contribute markedly different new knowledge to the global knowledge base of renewable energy technologies (Li, Heimeriks, and Alkemade 2020;Perruchas, Consoli, and Barbieri 2020;Sbardella et al. 2018), suggesting that countries differ in their capabilities for identifying and exploiting global technological opportunities.
Rooted in evolutionary economics, the recent evolutionary economic geography literature highlights the path- and place-dependence of knowledge production. The unique knowledge base of countries and regions constrains, as well as opens up, opportunities for the development of new technologies (Boschma, Heimeriks, and Balland 2014; Heimeriks and Boschma 2014). Countries and regions are more likely to develop new technologies that are related to their existing knowledge bases (Boschma 2017; Hidalgo et al. 2007; Petralia, Balland, and Morrison 2017). This related diversification process is also observed in the development of renewable energy technologies (Li, Heimeriks, and Alkemade 2020; Perruchas, Consoli, and Barbieri 2020).
The place-dependence matters for the knowledge development of renewable energy TISs in two ways. First, although the renewable energy technologies are often considered radical and disruptive (Markard and Truffer 2006), the skills and competences required may still emerge from the existing technologies in the country (Geels 2018;Hansen and Coenen 2015). Van den Berge, Weterings, and Alkemade (2020) found that some clean energy technologies may even partly have developed out of fossil fuel knowledge. The recent case study of the Norwegian oil and gas industries and their roles in the development of offshore wind technology also supports this argument (Mäkitie 2020;van der Loos, Negro, and Hekkert 2020).
Second, the uneven distribution of knowledge across countries also points to the importance of global knowledge networks in tapping into knowledge developed elsewhere (Coenen, Benneworth, and Truffer 2012;Hansen and Coenen 2015;Meckling and Hughes 2018). For example, Binz, Truffer, and Coenen (2014) analysed how actors in the TIS of membrane bioreactor technology are connected through knowledge networks at the global scale. Furthermore, the international linkages are particularly important for the formation of renewable energy TISs in emerging economies (Bento and Fontes 2015;Binz and Anadon 2018;Gosens, Lu, and Coenen 2015;Quitzow 2015).
Multi-scalar knowledge dynamics and absorptive capacity
The recent theoretical and empirical attempts to bring a geographical dimension into the TIS literature culminated in the formation of the global innovation systems concept (Binz and Truffer 2017). Global innovation systems can be understood as resulting from two dynamics: the generation of resources in different locational subsystems, and the structural coupling among them (Bergek et al. 2015; Binz and Truffer 2017).
The multi-scalar dynamics are important for analysing the knowledge development processes of emerging TISs. Along the technological dimension, innovations vary in the extent to which they build on knowledge along a technology's own trajectory (within a TIS) or on other technologies (between TISs). Along the geographical dimension, innovations vary in the extent to which they build on domestic sources of knowledge (within a NIS) or international sources of knowledge (between NISs).
Knowledge development processes in renewable energy TISs often need to bridge technological and/or geographical distance to bring together knowledge developed by actors embedded in different TISs and NISs (Andersen and Markard 2020;Binz, Truffer, and Coenen 2014). First, renewable energy technologies are considered as complex technologies which require knowledge input from various technologies (Barbieri, Marzucchi, and Rizzo 2020;Malhotra, Schmidt, and Huenteler 2019;Nemet 2012); Second, knowledge in renewable energy technologies is unevenly distributed across countries (Sbardella et al. 2018), suggesting the importance of international knowledge flows (Li, Heimeriks, and Alkemade 2020;Verdolini and Galeotti 2011). Thus, a proper analysis of the knowledge development of global renewable energy innovation systems has to take into account both technological and geographical dimensions.
Although the novelty associated with external knowledge tends to increase with technological or geographical distance (Boschma 2005), the difficulty of absorbing and integrating external knowledge increases as well (Cohen and Levinthal 1990; Nooteboom 2000). The impact of technologically or geographically distant knowledge thus depends on the absorptive capacity of countries (Guan and Yan 2016; Mancusi 2008; Phene, Fladmoe-Lindquist, and Marsh 2006).
The original concept of absorptive capacity introduced by Cohen and Levinthal (1990) focused on the learning and innovation of firms. Mancusi (2008) aggregated the knowledge accumulation of firms to proxy the absorptive capacity of countries in different industries. However, Criscuolo and Narula (2008) argued that the aggregation of absorptive capacity from the firm level to the national level should also consider institutional factors. The innovation system approaches are therefore helpful for delineating the absorptive capacity of countries (Carlsson et al. 2002). The knowledge interactions between actors in an innovation system can create positive feedback loops that are important for knowledge accumulation (Hillman et al. 2008). In other words, countries will have larger absorptive capacity if the knowledge diffusion between domestic actors is more prominent. Consequently, we expect the impact of technologically or geographically distant knowledge on future technology development to differ for countries with different levels of absorptive capacity.
Patent data
The data used in this paper are patent applications filed at the European Patent Office (EPO), the United States Patent and Trademark Office (USPTO) and through the Patent Cooperation Treaty (PCT route) from 1980 to 2015. Patent applications are extracted from the European Patent Office Worldwide Patent Statistics Database, PATSTAT 2018 Autumn Version. Since multiple equivalent patent applications can be filed at different patent offices to protect the intellectual property rights of the same invention, we use the simple patent family in PATSTAT as the unit of analysis in this paper. A simple patent family is a collection of patent documents that share identical technical content and are considered to cover a single invention (Martínez 2011). Using patents from different patent offices and the patent family as the unit of analysis also reduces the home country bias in the citation practice of individual patent offices (Bacchiocchi and Montobbio 2010).
The year of a simple patent family is based on the application year of the earliest patent in the family. In the following, one 'patent' represents one 'simple patent family', and citations between patents represent citations between patent families. Moreover, we only focus on patents assigned to companies and institutions, following the argument of Mancusi (2008) that individual applicants often file patents for low-quality inventions. The type of applicant is identified using the PATSTAT Standardized Name. Patents relating to different types of renewable energy technology are identified using the Y02 class in the newly launched Cooperative Patent Classification (CPC).1 The Y02 class was developed by EPO experts by combining existing International Patent Classifications (IPC) and European Patent Classifications with a lexical analysis of abstracts or claims (Veefkind et al. 2012), and has been widely adopted by researchers to study climate change mitigation and adaptation technologies (Haščič and Migotto 2015; Sbardella et al. 2018).
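To make the data construction concrete, the following is a minimal pandas sketch of the two steps just described: collapsing patent applications into simple patent families (taking the earliest application year) and tagging each family with the renewable technologies implied by its CPC Y02 codes. The table layout, column names and example records are hypothetical stand-ins, not PATSTAT's actual schema; the Y02 codes are the six listed in Note 1.

```python
import pandas as pd

# Hypothetical input: one row per patent application, with its simple patent
# family id, application year, and list of CPC codes (not PATSTAT's schema).
apps = pd.DataFrame({
    "appln_id":   [1, 2, 3, 4],
    "family_id":  [10, 10, 11, 12],
    "appln_year": [1995, 1996, 2001, 2005],
    "cpc_codes":  [["Y02E10/5"], ["Y02E10/5", "H01L31"], ["Y02E10/7"], ["F03D1"]],
})

# The six non-hydro renewable technologies and their CPC Y02 classes (Note 1).
Y02_MAP = {
    "Y02E10/5": "solar_pv", "Y02E10/4": "solar_thermal", "Y02E10/7": "wind",
    "Y02E10/3": "ocean", "Y02E50/1": "biofuel", "Y02E10/1": "geothermal",
}

def technologies(codes):
    """Set of renewable technologies implied by a patent's CPC codes."""
    return {tech for code in codes
            for prefix, tech in Y02_MAP.items() if code.startswith(prefix)}

apps["techs"] = apps["cpc_codes"].apply(technologies)

# Collapse applications into simple patent families: the family year is the
# earliest application year; technologies are pooled over family members.
families = apps.groupby("family_id").agg(
    year=("appln_year", "min"),
    techs=("techs", lambda s: set().union(*s)),
).reset_index()
print(families)
```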
Dependent variable: technological impact within TIS
We aim to assess the impacts of technologically or geographically distant knowledge on subsequent knowledge development in renewable energy TISs. Counts of forward citations received by patents have been widely used as a proxy for the technological impact of inventions (for a review, see Jaffe and de Rassenfosse (2017)). Following Nemet (2012) and Battke et al. (2016), we count the number of forward citations a patent received from patents in the same type of renewable energy technology as its impact on future knowledge development of the focal TIS. The Y02 class helps identify the boundary of each type of renewable energy technology to evaluate the technological impacts within each renewable energy TIS. We count the number of forward citations within the 5-year citation buffer window. As a result, we include patents applied until 2010 in the analyses. In the robustness check, we also use a 10-year citation buffer window.
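As a rough illustration of how the dependent variable could be computed, the sketch below counts, for one focal patent family, the forward citations received from families in the same renewable energy technology within the 5-year buffer window. The dictionary-based structures and toy records are hypothetical, not the actual citation tables.

```python
def forward_citations(focal, citations, families, window=5):
    """Count forward citations that `focal` receives from patent families in
    the same renewable technology within `window` years of its application
    year. `citations` maps a citing family id to the set of family ids it
    cites; `families` maps family id -> (year, technology set). Both are
    hypothetical structures standing in for the citation tables.
    """
    f_year, f_techs = families[focal]
    count = 0
    for citing, cited in citations.items():
        if focal not in cited:
            continue
        c_year, c_techs = families[citing]
        # a same-TIS citation arriving within the buffer window
        if (f_techs & c_techs) and f_year <= c_year <= f_year + window:
            count += 1
    return count

families = {10: (1995, {"solar_pv"}), 11: (1998, {"solar_pv"}),
            12: (2003, {"wind"})}
citations = {11: {10}, 12: {10}}
print(forward_citations(10, citations, families))  # -> 1 (family 12 is wind)
```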
Technological and geographical distance
Backward citations of patents are frequently used as an indicator of the extent to which an invention relies on previous technology (Jaffe and de Rassenfosse 2017). We identify a backward citation as a knowledge flow between TISs (i.e. an innovation building on technologically distant knowledge) when the cited patent is not labelled as the same type of renewable energy technology as the citing patent following Battke et al. (2016). Since our sample started from 1980, we only consider patents applied after 1990 to ensure that each patent has a minimum of ten years of patent history from which it can cite prior art following Nemet (2012).
Backward citations are also frequently used to trace knowledge flows across geographical boundaries (Jaffe, Trajtenberg, and Henderson 1993). The inventor's address better identifies where the R&D was performed, given the significant presence of multinational corporations (Alkemade et al. 2015; de Rassenfosse and Seliger 2020). Following Mancusi (2008), we assign each patent to the country of residence of the first named inventor in the patent document, as the best proxy for the location where the innovation activities took place. A backward citation is considered an international knowledge flow (i.e. an innovation building on geographically distant knowledge) when the focal patent cites a foreign patent.
We then classify each of the backward citations into four mutually exclusive categories along both geographical and technological dimensions: Domestic Proximate (domestic knowledge flows within the focal TIS), Domestic Distant (domestic knowledge flows between TISs), International Proximate (international knowledge flows within the focal TIS), and International Distant (international knowledge flows between TISs). We count the numbers of backward citations of a focal patent in all four categories and include them in the regression as independent variables. To avoid strategic citations to prior art, self-citations are excluded by removing backward citations to patents assigned to the same applicant as the citing patent (Hall, Jaffe, and Trajtenberg 2005).
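A minimal sketch of this four-way classification is given below, assuming a hypothetical lookup from family id to the first inventor's country and the family's technology set; self-citations are assumed to have been removed beforehand, as described above.

```python
from collections import Counter

def classify_backward_citations(focal, cited_ids, families):
    """Split a focal family's backward citations into the paper's four
    categories along the geographical (first inventor's country) and
    technological (same renewable TIS or not) dimensions. `families` maps
    family id -> (country, technology set); it is a hypothetical structure,
    and self-citations are assumed to be removed upstream.
    """
    country, techs = families[focal]
    counts = Counter()
    for cited in cited_ids:
        c_country, c_techs = families[cited]
        geo = "Domestic" if c_country == country else "International"
        tech = "Proximate" if techs & c_techs else "Distant"
        counts[f"{geo}_{tech}"] += 1
    return counts

families = {1: ("DE", {"wind"}), 2: ("DE", {"wind"}), 3: ("DE", {"solar_pv"}),
            4: ("US", {"wind"}), 5: ("JP", set())}
print(classify_backward_citations(1, [2, 3, 4, 5], families))
# -> one citation in each of the four categories
```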
The descriptive statistics of the independent variables representing the different types of knowledge show that renewable energy innovations typically rely extensively on technologically distant and/or geographically distant knowledge. On average, one renewable energy patent cites 1.93 Domestic Proximate patents, 3.17 Domestic Distant patents, 3.10 International Proximate patents, and 4.70 International Distant patents. This is in line with the existing literature suggesting that renewable energy technologies are more reliant on technologically distant knowledge than on technologically proximate knowledge (Barbieri, Marzucchi, and Rizzo 2020).
Absorptive capacity of countries
We proxy the Absorptive Capacity of a country in a specific type of renewable energy technology with the average number of backward citations to domestic patents in this technology per patent. We calculate this variable using patents applied for in the five years prior to the application year of the focal renewable energy patent. This variable is adapted from the absorptive capacity variable used in Mancusi (2008), who counted the average number of self-citations per patent in a country in an industry and argued that self-citations indicate knowledge accumulation internal to the firm, and are thus a good proxy for absorptive capacity resulting from internal R&D.

Similarly, we use the domestic knowledge flows within the focal TIS per patent to capture the domestic knowledge accumulation within the focal TIS. Lee and Yoon (2010) argued that the degree of domestic knowledge flows represents the degree of internalisation of the innovation capability of countries, in other words, absorptive capacity. Wu and Mathews (2012) applied domestic knowledge flows to capture the absorptive capacity of countries in solar photovoltaic technology. Since self-citations are excluded from the calculation, this variable captures the positive feedback loops resulting from the knowledge diffusion between domestic actors in an innovation system (Hillman et al. 2008). It thus captures both the size of the knowledge stock of a focal renewable energy TIS in a country and how strongly the actors in a country build upon knowledge created by other domestic actors in the focal TIS. Thus, countries will have a larger value of Absorptive Capacity if the knowledge diffusion between domestic actors is more prominent.
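The following is a minimal sketch of how the Absorptive Capacity variable could be computed under these definitions; the record layout and the 'domestic_proximate' field are hypothetical, standing in for the per-patent counts of non-self backward citations to domestic patents within the focal TIS.

```python
def absorptive_capacity(country, tech, year, patents, lookback=5):
    """Average number of non-self backward citations to domestic patents in
    the same technology, per patent, over the `lookback` years before `year`.
    `patents` is a hypothetical list of per-patent records, where
    'domestic_proximate' is the patent's count of backward citations to
    domestic patents within its own TIS (self-citations already excluded).
    """
    window = [p for p in patents
              if p["country"] == country and p["tech"] == tech
              and year - lookback <= p["year"] < year]
    if not window:
        return 0.0
    return sum(p["domestic_proximate"] for p in window) / len(window)

patents = [
    {"country": "DE", "tech": "wind", "year": 2003, "domestic_proximate": 2},
    {"country": "DE", "tech": "wind", "year": 2005, "domestic_proximate": 4},
    {"country": "DE", "tech": "wind", "year": 2007, "domestic_proximate": 1},
]
print(absorptive_capacity("DE", "wind", 2008, patents))  # (2+4+1)/3 ≈ 2.33
```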
Control variables
Following earlier work on knowledge flows and forward citations (Battke et al. 2016; Nemet 2012; Stephan et al. 2019), we included five control variables. First, we control for the size of the patent family (Family size). Since filing patent applications at different patent offices is costly, companies will only do so for their important innovations. A positive effect of Family size is therefore expected. Second, we control for the size of the team by including the number of inventors (Team size). A positive effect of Team size is expected, since larger teams tend to have a more diverse knowledge pool to tap from previous inventions, and larger teams also tend to have larger collaboration networks, which increases the likelihood of the invention being used by other inventors in the future (Singh and Fleming 2010). Third, we include the dummy variable Public to indicate whether a patent is assigned to universities or public research institutes. Nemet (2012) found that patents assigned to companies are more likely to receive more citations. Fourth, the existing literature shows that patents incorporating scientific knowledge are more likely to receive more forward citations (Sorenson and Fleming 2004). We control for this influence by including the number of citations to the non-patent literature by the focal patent (Non-Patent Literature). Finally, following Nemet (2012), we control for the average backward citation lag. A negative effect of Citation lag is expected, since the value of the cited patent decreases with its age (Criscuolo and Verspagen 2008). For patents without any backward citation, we use zero for citation lag, following Battke et al. (2016). In the robustness check, we exclude the patents without backward citations; the results are consistent.
Empirical strategy
Since our dependent variable, the number of forward citations, is a count variable, we use a negative binomial regression model to test our hypotheses. We included country, technology, and time dummies to control for unobserved heterogeneity. Since the distributions of our four main independent variables capturing knowledge flows and of the number of non-patent literature citations are highly skewed, we include their log-transformed values in the model. Summary statistics for the variables used in the regression are presented in Table 1. The independent variables are not highly correlated.
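A minimal sketch of this specification using statsmodels is shown below. The variable names are hypothetical shorthands for the paper's variables, and the synthetic data exist only to make the snippet runnable; the sketch does not reproduce the paper's estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patent-level data; the synthetic draws only make the snippet
# runnable and do not reproduce the paper's estimates.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "fwd_cites": rng.poisson(3, n),
    "dom_prox": rng.poisson(2, n), "dom_dist": rng.poisson(3, n),
    "int_prox": rng.poisson(3, n), "int_dist": rng.poisson(5, n),
    "abs_cap": rng.gamma(2.0, 1.0, n),
    "family_size": rng.integers(1, 6, n), "team_size": rng.integers(1, 8, n),
    "public": rng.integers(0, 2, n), "npl": rng.poisson(1, n),
    "cit_lag": rng.uniform(0, 15, n),
    "country": rng.choice(["DE", "US", "JP"], n),
    "tech": rng.choice(["wind", "solar_pv"], n),
    "year": rng.integers(1990, 2011, n),
})

# Log-transform the skewed citation-count variables (log1p handles zeros).
for col in ["dom_prox", "dom_dist", "int_prox", "int_dist", "npl"]:
    df[f"ln_{col}"] = np.log1p(df[col])

# Negative binomial model with the interaction terms discussed in the text
# and country, technology and year dummies for unobserved heterogeneity.
model = smf.negativebinomial(
    "fwd_cites ~ ln_dom_prox + ln_dom_dist + ln_int_prox + ln_int_dist"
    " + abs_cap + ln_dom_dist:abs_cap + ln_int_prox:abs_cap"
    " + ln_int_dist:abs_cap + family_size + team_size + public + ln_npl"
    " + cit_lag + C(country) + C(tech) + C(year)",
    data=df,
).fit(disp=0)
print(model.summary())
```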
Econometric results

Table 2 presents the results of the econometric analyses. Domestic_Proximate and International_Proximate are positively associated with the technological impact of renewable energy inventions in both columns, suggesting cumulative knowledge development within a TIS (Hillman et al. 2008). The coefficient of International_Proximate is larger than the coefficient of Domestic_Proximate in column (1), suggesting a more important role of international knowledge. Domestic_Distant and International_Distant are negatively correlated with the technological impact of renewable energy inventions. Absorptive_Capacity is positively correlated with the technological impact of renewable energy inventions in both columns, indicating that countries with larger absorptive capacity are more likely to introduce high-impact renewable energy inventions. This is in line with the existing literature showing that most high-impact renewable energy inventions are still introduced by countries with a well-functioning TIS (Dechezleprêtre et al. 2011). The interaction term International_Proximate*Absorptive_Capacity is significantly negative, suggesting a more important role of international proximate knowledge in latecomer countries. In contrast to previous findings that latecomer countries are mostly recipients of international knowledge (Mancusi 2008), our results suggest that the knowledge development of a focal TIS in latecomer countries can generate significantly new insights for global technology development. Latecomer countries like China and India have unique socio-technical systems which can help to further improve the existing technological trajectories of renewable energy technologies (Hansen and Coenen 2015).
The interaction term Domestic_Distant*Absorptive_Capacity is significantly positive. This result first points out the importance of the geographical origin of knowledge flows from other TISs, which is understudied in previous work focusing on technologically distant knowledge in TIS research (Battke et al. 2016; Malhotra, Schmidt, and Huenteler 2019). Given the disruptive role of renewable energy technologies in the energy sector, bringing together technologically distant technologies within an economy faces less pressure from existing institutions (Frenken 2017; Janssen and Frenken 2019). Furthermore, the result suggests that sufficient absorptive capacity of a country is required for identifying and utilising technological opportunities outside the focal TIS (Carlsson et al. 2002; Cohen and Levinthal 1990). Since actors may be active in multiple TISs at the same time, the knowledge diffusion between domestic actors in a TIS can facilitate learning between different TISs and generate the positive feedback loops that are important for system functioning and growth (Hillman et al. 2008; Malhotra, Schmidt, and Huenteler 2019). The interaction term International_Distant*Absorptive_Capacity is significantly negative, indicating that integrating knowledge that is both geographically and technologically distant is less likely to be successful.

Figure 1 (Interaction plots) shows the interaction effects between the absorptive capacity of countries and Domestic Distant knowledge and International Proximate knowledge. We plotted the predicted counts of forward citations for the absorptive capacity of countries at one standard deviation below the mean, at the mean, and at one standard deviation above the mean. The results show that the marginal effects of Absorptive_Capacity are substantial as the amount of Domestic_Distant knowledge or International_Proximate knowledge increases. Moreover, the contribution of international proximate knowledge is much larger than that of domestic distant knowledge.
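Under the same hypothetical model, the interaction plot described here could be generated by predicting citation counts over a grid of International_Proximate values while fixing absorptive capacity at the mean and at one standard deviation below and above it; the fixed values chosen for the dummy variables are arbitrary illustrative choices.

```python
# Interaction plot data: predicted forward-citation counts as the amount of
# International Proximate knowledge varies, evaluated with absorptive
# capacity one SD below the mean, at the mean, and one SD above it
# (continuing the hypothetical `df` and `model` from the previous sketch).
base = df.mean(numeric_only=True)
grid = pd.DataFrame([base] * 50)
grid["ln_int_prox"] = np.linspace(0, df["ln_int_prox"].max(), 50)
grid["country"], grid["tech"], grid["year"] = "DE", "wind", 2005

mu, sd = df["abs_cap"].mean(), df["abs_cap"].std()
for label, value in [("-1 SD", mu - sd), ("mean", mu), ("+1 SD", mu + sd)]:
    grid["abs_cap"] = value
    preds = np.asarray(model.predict(grid))
    print(label, preds[[0, 24, 49]].round(2))
```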
Of the controls, Family size and Team size are positively correlated with the technological impact of renewable energy innovations, as expected. Citation Lag is negatively correlated with the technological impact, also as expected. Non-Patent Literature is positively correlated with the technological impact, probably indicating the role of science as an especially relevant source for invention. The coefficient of Public is, however, significantly negative. This indicates that universities and public research institutes have a relatively minor role in generating high-impact inventions compared to businesses.
Robustness check
Several complementary analyses were performed in order to check the robustness of our findings. Table 3 presents the results. First, we use a ten-year citation buffer window; consequently, we limit the period to 1990-2005. Second, we exclude patents without any backward citation, following Battke et al. (2016), to test the robustness of using zero for the citation lag of patents without any backward citation. Third, we change the estimation strategy and employ a logit regression model to explore the correlation between the different types of knowledge flows of a patent and its likelihood of being highly cited. Following Arts and Veugelers (2015), we consider a patent as highly cited if the number of its forward citations is larger than the mean plus two standard deviations of the number of forward citations in the cohort of patents of the same type of renewable energy technology applied for in the same year. The results of these robustness checks in Table 3 show that our findings concerning the heterogeneous impacts of different types of knowledge flows on the technological impact of renewable energy innovations are highly robust.
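A sketch of the highly-cited flag and the logit re-estimation, continuing the hypothetical DataFrame from the earlier snippets, might look as follows; the mean-plus-two-standard-deviations cutoff is computed within cohorts of the same technology and application year.

```python
# Robustness-check flag: a patent is "highly cited" when its forward
# citations exceed the mean plus two standard deviations of its cohort
# (same technology, same application year), following Arts and Veugelers
# (2015). Continues the hypothetical `df` and imports used above.
grp = df.groupby(["tech", "year"])["fwd_cites"]
threshold = grp.transform("mean") + 2 * grp.transform("std").fillna(0)
df["highly_cited"] = (df["fwd_cites"] > threshold).astype(int)

# A logit on the same covariates then replaces the negative binomial;
# only a few regressors are shown here to keep the sketch short.
logit = smf.logit("highly_cited ~ ln_int_prox + ln_dom_dist + abs_cap",
                  data=df).fit(disp=0)
print(logit.params.round(3))
```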
Conclusion
Both the geographical and the technological dimension are considered important in understanding the impacts of external knowledge on future knowledge development in renewable energy TISs (Andersen and Markard 2020; Bergek et al. 2015; Binz, Truffer, and Coenen 2014; Köhler et al. 2019). This paper provides a systematic empirical analysis of the multi-scalar knowledge dynamics in the global renewable energy innovation systems proposed by Binz and Truffer (2017). Most importantly, our results show that the relative importance of different external knowledge critically depends on the absorptive capacity of countries, which represents the knowledge accumulation resulting from the knowledge diffusion between domestic actors in a TIS. Countries with larger absorptive capacity benefit more from domestic knowledge from other TISs, whereas international knowledge flows within a TIS are more important for countries with smaller absorptive capacity.
Since many countries intend to build innovation systems to better deploy renewable energy technologies at home, understanding this place-specificity in the global renewable energy innovation systems is crucial for formulating country-specific transition pathways and facilitating future technology development. The focus of latecomer countries should be on facilitating the knowledge diffusion between both domestic and international actors within the renewable energy TISs, since they can contribute to global technology development by utilising international knowledge. For countries with well-functioning renewable energy TISs (i.e. advanced countries such as the United States, Germany and Japan), however, the focus should shift to bringing in knowledge, skills and experiences from domestic actors in other TISs to facilitate further knowledge development.
Our study is not without limitations. Several studies have found that the knowledge dynamics of renewable energy technologies are also affected by technology-specific characteristics (Binz and Truffer 2017; Schmidt and Huenteler 2016). It might be more difficult for latecomer countries to develop more complex renewable energy technologies by utilising international knowledge, since the spatial diffusion of complex knowledge is difficult (Balland and Rigby 2017; Sorenson, Rivkin, and Fleming 2006). The increasing globalisation of the supply chains of renewable energy technologies offers opportunities for latecomer countries to move from less complex components to more complex components of renewable energy technologies through learning-by-doing (Malhotra, Schmidt, and Huenteler 2019; Meckling and Hughes 2018). Future research should therefore take into account knowledge flows between different components of renewable energy TISs.

Note
1. We focus on six types of non-hydro renewable energy technology: solar photovoltaic (Y02E10/5), solar thermal (Y02E10/4), wind (Y02E10/7), ocean (Y02E10/3), biofuel (Y02E50/1) and geothermal (Y02E10/1).
"year": 2022,
"sha1": "ab912fce5c584d1d019b69b32c4a72e000aada41",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/09537325.2021.1903416?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "6a1d73aaef9397900229bcd0a3eb0d46ebbe787c",
"s2fieldsofstudy": [
"Environmental Science",
"Economics",
"Engineering",
"Geography"
],
"extfieldsofstudy": [
"Computer Science",
"Business"
]
} |
The Comparison of the Skill of Understanding Complex Syntax at Children Attending to a Preschool Education Institution (TRNC-TR Sample)
The current study was carried out to compare the complex syntax comprehension skills of children attending preschool education institutions. The relational screening model, a survey model that aims to describe an existing situation as it is, was used. The working group of the study was made up of 224 children aged 4-5 attending preschool education institutions in Turkey and Cyprus in the 2015-2016 academic year. A "General Information Form" was used to gather general information regarding the children and their families, and "The Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills (ECDCSUS)", developed by Akoğlu and Acarlar (2012), was used as the data collection instrument to compare the children's complex syntax comprehension skills. To describe the level of the children's complex syntax comprehension skills, the means and standard deviations of their scores are reported. The Mann-Whitney U-test was applied to the ECDCSUS items to determine whether the complex syntax comprehension skills of the children attending preschool education institutions in Turkey and Cyprus differed, and whether they differed between children aged 4 and 5. The study found that the children's complex syntax comprehension scores differed significantly in terms of both the country and the age variables.
Introduction
The skill of understanding syntax is defined as understanding new sentences bearing messages by drawing on information regarding grammar or syntax [20]. At the same time, it is a complex skill that requires analysing written or spoken input at the level of words and using words within their prior context. For that reason, more than one mechanism is engaged at this level of processing during comprehension. These processes are influenced by the broader linguistic content and the immediate context, in terms of both the amount and the currency of available knowledge [23].
From the start of modern psycholinguistics onwards, researchers have focused on syntactic processes in language comprehension [13]. Understanding the complex structures of a sentence is supported by syntactic and semantic knowledge [12]. For that reason, the interaction between syntactic differentiation and semantic integration is important during the process of understanding sentence structure [17]. Psycholinguistics deals with the phenomenon of understanding sentence structures in different ways. Some theorists focus on the existence of a system for understanding sentence structure that sets up new representations. Others point out the importance of grammar, words and contextual information in understanding sentence structure. Understanding a sentence presented in written or verbal form requires analysis. The syntax of a language is the psycholinguistic field that determines the rules of both the synthesis and the analysis of meaningful units [20; 4]. In this sense, understanding sentence structure correctly requires complex psycholinguistic processing skills. These skills have various definitions with regard to phonetic, syntactic and semantic processing; however, the chronological age of the individual is said to have an impact on how such a skill is understood and defined [14]. Some research has indicated that different sentence structures can be developmentally distinctive at different ages, that active and reversible sentences are perceived earlier than passive ones, and that from the age of five the importance of syntactic knowledge increases relative to contextual clues during sentence comprehension, so that chronological age can be a significant determinant [1]. The task of constructing a sentence is the process of transferring serial elements into the syntagmatic dimension, in other words, synthesising them. For that reason, the task of constructing a sentence involves more than storing a series of words and sounds. The brain estimates the following word in the sentence depending on syntactic design, grammar and pragmatic clues. Meaning is found in words one by one or in combination [15].
In the early childhood period, the skill of analysing the phonetic, syntactic and semantic structure of a sentence and perceiving its meaning is limited to the assessment of observable behaviours. It has been pointed out that the features affecting the understanding of sentence structure are the complex relations arising from the message itself, such as the difficulty level of the sentence, its length, the content of the information it carries and semantic frequency, rather than from the listener himself. The placement of pauses, the speed of the speaker and the features determining the child's skill also have an impact on the quality of information regarding syntactic structure. It is stated that psychological, linguistic and acoustic features affect the understanding of sentence structure and that it is necessary to support comprehension strategies from early ages onwards in language acquisition [14].
Language acquisition is essential for comprehension and is a process that continues throughout the early childhood period [10]. Language development is important in language acquisition. The fields of language development (phonology, lexicology and syntax) are established on the basis of understanding spoken language, which precedes the active production of phonemes, words and grammatical structures. Children's early linguistic skills are of great importance for the development of their verbal language and their understanding of spoken language [11; 18]. Children acquire a great many words and grammatical structures, in both language production and comprehension, by about six years of age. Some complex structures (such as passive, transitive and relative sentences) are not used correctly before the age of eight or nine. For that reason, it has been pointed out that neurobiological support for ongoing language development has become widespread in recent years. The brain areas involved in understanding and producing language were identified in the 19th century. Many functional imaging studies indicate that language production and comprehension involve more structures than Broca's area in the left inferior frontal region and Wernicke's area near the left angular gyrus [11].
Some recent studies of the brain have found that some regions of the individual's brain, particularly the prefrontal cortex, do not mature until early adulthood. Complex cognitive functions likewise continue developing into adolescence or early adulthood [22]. Language development is, at the same time, a basis for social and academic success. Language acquisition in children can differ depending on social and cognitive variables [10].
Some studies have found that the skill of syntactic comprehension is closely related to academic success. The operation of the body of rules comprising syntactic structure, which enables meaningful units to be combined into larger meaningful units, points to the existence of the mental grammar thought to reside in the child's brain during the acquisition process [1]. In order to use and understand language effectively, it is necessary to know the rules comprising lexical structure [21]. Syntactic structure represents the system of rules governing the order of words in the sentence, sentence arrangement and the relations between words [15].
In every language system, the body of rules arranging the combination of words in that language in order to set up meaningful sentences and clauses is called "syntactic knowledge" [1]. Vocabulary is important in syntactic knowledge and is regarded as one of the variables affecting sentence comprehension. Even though there is a relation between vocabulary and reading skills, syntax is of particular importance for reading comprehension competence. Syntax is related to decoding in both verbal language and reading comprehension. The relation between the two skills is explained by the fact that syntax supports the understanding of verbal language and that a developed language supports reading comprehension. Some studies have found that sentence comprehension at the age of 7 is related to sentence comprehension at the age of 11 [18]. As a matter of fact, some types of words can be used in different forms, such as nouns and verbs. When a new word is taught, children place it in a syntactic category temporarily. They either confirm its placement in that syntactic category or make suitable changes according to how the word is used by others. As an example, a 20-month-old baby is shown human and non-human figures; by presenting the figures in order while saying "show me x" or "show x to me", the syntactic structures that guide word learning are identified. Making the necessary explanations when children are read books in which different words are used helps them enrich their vocabulary, learn the meanings of words and acquire syntactic categories. In this way, both semantic and syntactic factors play an important role in the emergence of language form [15].
Defining pictures containing syntactic elements, and tasks used in object manipulation and sentence imitation, all draw on auditory, visual, motor and intermodal performance variables. All of them show the competence that an individual must have in the analysis of a sentence [14]. The syntactic rules determine the structure or form of a sentence. These rules concern words, word order, sentence organisation, word groups and sentence components. Sentences are arranged according to general functions. The basic components of a sentence include expressions such as nouns, verbs and adjectives, made up of various word classes. A group of words can be deleted from, or added to, a certain expression. As long as there are a noun and a verb, it is possible to set up a sentence; hence, every sentence must have a noun and a verb group. In this way, syntactic structure is formed by combining the words in a sentence in a way that gives them a certain meaning [15]. Syntactic structure overlaps with receptive and expressive language skills. The complex aspects of syntactic skills have an important effect on reading comprehension. In particular, it is stressed that receptive language is important for understanding and using verbal language, as well as for reading and writing skills. It is indicated that methods supporting the development of syntactic skills also support children's comprehension skills [18], since the skill of understanding syntax is expressed as sensitivity to the structure of language [20].
For that reason, it is of great importance to examine syntax comprehension skills, which have a great impact on the quality of communication, in different groups. In this sense, the current study was carried out to compare the complex syntax comprehension skills of children attending preschool education institutions. The following questions were addressed:
a. At which level are the complex syntax comprehension skills of the children attending a preschool institution in Turkey and in Cyprus?
b. Is there any difference between the complex syntax comprehension skills of the children attending a preschool education institution in Turkey and in Cyprus?
c. Do the children's complex syntax comprehension skills differ in terms of the age variable?
Materials and Methods
The model, the working population and sampling, the data collection instrument, and the collection and analysis of the data are presented in this section.
The Model of the Study
In the current study, the relational screening model was used. This survey method is a research approach aiming at describing a past or present situation or event as it is. The relational screening model is a model aiming at determining the existence and level of covariance between two or more variables [9].
Working Group
The working group of the study was made up of 224 children aged 4-5 attending preschool education institutions in Turkey and Cyprus in the 2015-2016 academic year. Of the children randomly selected for the study, 43.3% were four years old and 56.7% were five years old.
Data Collection Instrument
In order to gather general information regarding the children and their families, a "General Information Form" was used, and "The Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills", developed by Akoğlu and Acarlar (2012), was used as the data collection instrument to compare the children's complex syntax comprehension skills.
In forming the sentences given in The Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills, the nouns, verbs and suffixes widely used by children aged 4-6 in the database of the SALT language sample analysis program were used. Two different sentences were formed for each chosen suffix. The list included sentences comprising infinitives, participles, adverbs, conjunctions, prepositions, negative suffixes and conditional suffixes, as well as reversible sentences. For each sentence in the list, comprising simple and complex structures, a total of four pictures, one depicting the sentence uttered and three distractor pictures, were drawn by a professional painter. In preparing the distractor pictures, the action in the sentence, the people carrying out this action and the way of carrying out the action were changed. In ordering the sentences and the distractor pictures, care was taken that sentences with the same suffixes did not follow each other and that the pictures representing the sentence uttered did not appear in the same position. The evaluation instrument contained a total of 34 evaluation sentences differing in syntactic terms and two practice sentences. During the application, the child was expected to point to the picture representing the sentence uttered by the administrator, and the answers, whether true or false, were marked on the registration form [1]. The Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills was applied in three sets so as to provide an equal distribution. The results of a one-way analysis of variance (ANOVA) showed that there was no statistically significant difference between the results obtained with the instrument applied to the working group in the different sets (F(2, 45) = 0.038, p > .05) [2].
Data Collection and Data Analysis
Consent allowing the children to participate in the study was obtained from the families of the children in the working group in order to investigate the complex syntax comprehension skills of children attending a preschool education institution. The evaluations of the children whose families gave permission were conducted by the researcher individually with each child, in the institutions where the children received their education. During the application of the Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills, the children were asked to show, among four pictures, the picture best matching the sentence uttered, and the number of the picture the child showed was marked on the registration form. The total number of pictures shown correctly was calculated for each child. The evaluations were completed in one session per child, and each session lasted about 35 minutes.
The data of the research were analysed with SPSS 20. In order to make a descriptive analysis of the data obtained through the Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills, arithmetic means and standard deviations were used, and the normality of the distribution of the data was analysed with the Kolmogorov-Smirnov test. When the group size was smaller than 50, the Shapiro-Wilk test was used; when it was larger, the Kolmogorov-Smirnov test was used; in the case of equality, both tests were used [6]. The Kolmogorov-Smirnov normality test showed that the data were not normally distributed (p < 0.05). As given in Table 1, the data did not have a normal distribution either for the instrument overall [df(224) = 0.110; p = 0.00] or for any of the items. For that reason, non-parametric tests were used in the analysis of the data.
Following the normality tests, the Mann-Whitney U-test was applied to the items of the Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills in order to determine whether the complex syntax comprehension skills of the children attending preschool education institutions in Turkey and Cyprus differed, and whether they differed between children aged 4 and 5. In investigating the difference between categorical variables, 0.05 was used as the significance level: in the case of p < 0.05, there was a significant difference between the groups, and in the case of p > 0.05, there was no significant difference between the groups [7].
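As a rough sketch of this decision rule, the scipy.stats calls below run the Kolmogorov-Smirnov normality check and, when normality is rejected at the 0.05 level, the Mann-Whitney U comparison between the two country groups. The scores are made up for illustration; the study's actual data are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical total scores on the 34-item instrument for the two groups;
# the real study data are not reproduced here.
turkey = rng.integers(15, 35, 120)
cyprus = rng.integers(14, 34, 104)
scores = np.concatenate([turkey, cyprus])

# Kolmogorov-Smirnov normality check against a fitted normal distribution
# (scipy's shapiro would be used instead for groups smaller than 50).
ks_stat, ks_p = stats.kstest(scores, "norm",
                             args=(scores.mean(), scores.std(ddof=1)))
print(f"KS = {ks_stat:.3f}, p = {ks_p:.3f}")

# When normality is rejected (p < 0.05), the groups are compared with the
# non-parametric Mann-Whitney U test at the 0.05 significance level.
if ks_p < 0.05:
    u_stat, u_p = stats.mannwhitneyu(turkey, cyprus, alternative="two-sided")
    print(f"U = {u_stat:.1f}, p = {u_p:.3f}")
```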
Findings and Discussion
Based on the analysis of the data obtained in the research, this section presents the descriptive statistics (arithmetic means and standard deviations) of the scores obtained with the Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills, followed by the findings of the analyses carried out to determine whether there was a significant difference between the complex syntax comprehension skills of the children attending preschool education institutions in Turkey and Cyprus, and between the skills of children aged 4 and 5.
The arithmetic means and standard deviations of the scores for the complex syntax comprehension skills of the children attending a preschool education institution are given in Table 2. As given in Table 2, the mean score for the children's total complex syntax comprehension performance was 24.22 (SD = 4.35).
Akoğlu (2014) found, in a study of the syntax comprehension skills of Turkish children aged 4-7, that the mean total syntax comprehension performance was 25.01 (SD = 4.17) [1].
It is likely that the findings of the research match those obtained with the Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills in terms of the means and standard deviations of the children's total complex syntax comprehension performance, as given in Table 2. In line with these results, the complex syntax comprehension skills of the children attending preschool education institutions in Turkey and Cyprus appear to be very close to each other in terms of mean scores.
The Mann-Whitney U-test results for the complex syntax comprehension skills of the children attending a preschool education institution with regard to the country variable are given in Table 3. As shown in Table 3, the general syntax comprehension scores of the children attending a preschool education institution showed a statistically significant difference in terms of the country variable (U = 4650.0; p < .05). It is striking that the mean rank of the complex syntax comprehension scores on the items was higher for the children attending a preschool education institution in Turkey.
Upon reviewing the correct and incorrect answers given to the evaluation instrument by the children attending preschool education institutions in Turkey and Cyprus in the working group, it was found that the syntax comprehension performance for the sentences "The man isn't wearing his watch cap even though it is snowing / The mother makes the child wear an apron in order not to dirty her dress / If the girl had had a pencil and paper, she would have been able to draw a picture / The child wants to buy an ice-cream / If the child had worn his coat, he would not have been cold / As the child has given a flower, his mother kisses him" was higher for the children in Turkey. As for the children attending preschool education in Cyprus, the number of correct answers to the complex syntax sentences "The balloon with which the dog was playing burst / The child isn't sleeping even though it is night / The girl took the ball after wearing her shoes / The girl took her coat and bag from the wardrobe / The child took the toothbrush to brush his teeth" was higher than for those in Turkey. In addition, the children in both Turkey and Cyprus gave similar answers to the sentences "While the mother is cooking, the father is reading a book / The man looks at the bird flying / The dog caught the ball and sat / The child opened the door of the cage so that the bird can go out / The cat and its kitten are drinking milk / The child is kissing his mother / The child is asking for food by crying".
In their study, Arciuli & Simpson (2011) found individual differences in the learning of visual sequences of non-linguistic stimuli in children aged 5-12. These differences could be related to language competence and to the understanding of sentence structures [3].
In their study, Kidd & Arciuli (2016) investigated individual differences in the statistical learning involved in language acquisition, applying a test of understanding four syntax structures to children aged 6-8. The study concluded that individual differences in children's statistical learning are related to the acquisition of natural-language syntax [10].
A review of the related literature emphasizes that the skill of understanding complex syntax is one of the cognitive functions required for goal-directed behaviour, and that it continues to develop with the maturation of the prefrontal cortex. Considering the efficiency of information transfer, children between the ages of 5 and 15 are thought to be able to perceive complex sentence structures [22].
The literature also indicates that language acquisition varies according to social and cognitive variables and that it unfolds in coordination with them. The social environment and current communication and technological means, such as family, surroundings, school, TV, community, and the Internet, have a great impact on children's healthy language acquisition. For that reason, children's close circles, the family first of all, and educators should be sensitive to this [8; 10].
The results of the current study and the related literature are consistent with the findings on the country variable for the complex-syntax understanding skill of the children attending a preschool education institution, given in Table 3. In line with these results, it is plausible that individual differences between children in Turkey and Cyprus, arising from the environmental conditions in which they live and the opportunities offered to them, bring about the differences in their syntax understanding skills.
Mann-Whitney U-test results for the complex-syntax understanding skill of the children attending a preschool education institution in terms of the age variable are given in Table 4. As shown in Table 4, the scores for general syntax understanding differed significantly by age (U = 2369.5; p < .05), and the complex-syntax understanding scores were higher in children at the age of 5.
A review of the correct and incorrect answers given to the evaluation instrument by the children in the working group showed that children at the age of 5 answered the sentence "Mouse and cat are playing together" correctly at a rate of 96%, and 4-year-old children answered the sentence "The dog caught the ball and sat" correctly at a rate of 88.6%; notably, these sentences are among the ones posing no difficulty. As for the sentences answered incorrectly (where distractor pictures were chosen instead of the one representing the sentence uttered), 5-year-old children had difficulty with the sentence "The child will pick up the apple if he climbs up the ladder" and 4-year-old children with the sentence "The child will wash his hands if he turns the tap on". As for the sentences answered similarly by the children, 4-year-old children answered the sentences "If the child had worn his coat, he would not have been cold / The child had washed his hands before eating his meal" correctly at rates of 64.9% and 67.0%, while 5-year-old children answered them at 66.1% and 68.5%, respectively.
In their study of event and probability effects, non-verbal content, and syntactic forms and strategies in children's sentence understanding, Strohner & Nelson (1974) investigated sentence comprehension in children in the 2-5 age group. The study found that three-year-old children made persistent mistakes in applying syntactic strategies, whereas five-year-olds had reliable knowledge of syntax and interpreted the sentences correctly [19].
In their study, Booth, MacWhinney & Harasaki (2000) found that older children showed greater accuracy than younger ones in the skills of understanding complex language [5].
In their study of the development of complex-syntax understanding by age, Wassenberg, Hurks, Hendriksen, Feron, Meijs, Vles & Jolles (2008) investigated the accuracy of complex-syntax understanding in children attending kindergarten and the second, fourth, sixth, seventh, and eighth grades. The study found that the accuracy of language understanding continued to develop with age [22].
In a study of syntax understanding skills in Turkish children aged 4-7, Akoğlu (2014) found that children's syntax-understanding performance differed by age and that the means increased as the age interval increased [1].
The related literature indicates that the skill of understanding complex language structures keeps growing until the start of adolescence, that chronological age is an important factor in understanding skill, and that sentence type and chronological age interact during the process of understanding syntax [22; 1].
The results of the current study and the related literature are consistent with the findings on the age variable for the complex-syntax understanding skill of the children attending a preschool education institution, given in Table 4. Based on these results, it is plausible that the syntax understanding skills tied to children's developmental features are realized along with chronological age, and that the means increase as age increases.
Conclusions and Recommendations
At the end of the research investigating the complex-syntax understanding skills of children attending preschool education, the mean score for the children's total performance in understanding complex syntax was found to be 24.22 (SD = 4.35). In addition, the complex-syntax understanding scores differed significantly by country and were higher in children attending a preschool education institution in Turkey. When the skill was examined in terms of the age variable, there was a statistically significant difference between the scores obtained with "The Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills", and the complex-syntax understanding scores were higher in 5-year-old children.
In the light of the data obtained in the study, the following recommendations are offered: In order to increase children's competency levels in language acquisition, the attainments and indicators in the language development field of the preschool education program could be prepared in a way that helps children acquire syntax understanding skills in the learning process.
Because many variables affect children's complex-syntax understanding skills, these skills could be studied in terms of different variables and with larger and more diverse groups.
The current study, developed for children attending a preschool education institution, could be carried out comparatively with different age groups.
Table 1. Kolmogorov-Smirnov Test Results Regarding the Normality of
Table 2. The Arithmetic Mean and Standard Deviation Distribution of the Scores Regarding the Understanding Skill of Complex Syntax of the Children
Table 3. Mann Whitney U-Test Results of the Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills in terms of
Table 4. "The Instrument of Evaluating the Criteria Dependent Complex Syntax Understanding Skills" Mann Whitney U-Test Results in terms of | 2018-12-10T03:20:26.618Z | 2016-11-01T00:00:00.000 | {
"year": 2016,
"sha1": "c4213356e377980813dd39a0a57e94be4d608553",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/20161030/UJER17-19507830.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c4213356e377980813dd39a0a57e94be4d608553",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
235239810 | pes2o/s2orc | v3-fos-license | Causal-Comparative Macroeconomic Behavioral Study: International Corporate Financial Transfer Pricing in the United States
This research paper draws on the ideas of maximization of corporate welfare and basic firm theory; transfer prices among corporate subsidiaries have been found to complicate performance evaluations of subsidiaries and the parent company. The research problem addressed the lack of understanding of transfer price policy and its application to firm profits within three specific measures: investor return, earnings per share, and effective tax rate. The main purpose of this study was to ascertain an empirical relationship between transfer pricing policies and these financial performance measures in a study of two multinational firms. The empirical results indicated statistically significant differences between the measures for each firm and allowed further comparative analysis based on other collected data. Overall, the results indicated that each measure of performance affected transfer pricing tax liabilities and that transfer pricing may be a vehicle to improve company profitability. The results of this study may contribute to positive social change by bringing a focus to efficiency in transfer pricing, which could yield positive impacts on the economy through the reduction of international transaction costs stemming from the minimization of tariffs, income tax liabilities at home and abroad, foreign exchange risk, and conflicts with foreign governments' policies. Positive social change may also be effected by providing investors a new perspective on corporate financial data based on transfer price policies and corporate performance
transfer pricing issues in the company to maximize profit, with an eye toward determining optimal national tax policy. Göx and Schiller (2007) provide an overview of how management accounting researchers subsequently focused on frictions, such as information asymmetry, relevant to this research.
Background and Hypotheses
A central question in global economics is how intercompany transactions policy affects real activity over the business cycle. This study describes two projects aimed at investigating both the importance of transfer pricing practice and the implications of theoretical models of transfer pricing for the conduct of transfer pricing policy (Adams & Drtina, 2008). The first part of the research is an ex post facto study that analyzes the business-cycle characteristics of two multinational companies' basic financial information, such as corporate missions, strategies, financial goals, transfer pricing practices, income taxes, and tax planning strategies (Baldenius, Melumad, & Reichelstein, 2004). The first part of the research also contains analysis of other measures of profit performance, such as investor return, earnings per share, current tax rate, and effective tax rate. The focus of the study is to establish better reconciliation of intercompany transactions. The primary characteristics of the multinational firms are constructed using financial data from publicly available sources (Adams & Coombes, 2003).
The second part of the research builds a model consistent with the findings of the first part. In the model, the relationship between profit performance and investor return, or earnings per share, affects the transfer pricing tax liabilities. The broader impact is that the research will help improve methods of global transfer pricing policy over the business cycle (Chan & Hung-Chow, 1997). The research could also help in the design of policy aimed at regulating the leading manufacturers of highly engineered products in the United States and abroad. The research described here will be disseminated broadly in academic journals as well as in publications intended for the new global accounting regulatory policymakers (Alm, Martinez-Vazquez, & Rider, 2006). The following research questions guided this study: Research Question 1: What is the relationship between profit performance and investors' return in the two multinational companies, Eaton and Whirlpool, in the electronics, electrical, and equipment industry in the U.S.?
Null Hypothesis 1: There is no relationship between profit performance and investors' return in the two multinational companies, Eaton and Whirlpool, in the electronics, electrical, and equipment industry.
Test Hypothesis 1: There is a statistically significant relationship between profit performance and investors' return in the two multinational companies, Eaton and Whirlpool, in the electronics, electrical, and equipment industry. Research Question 2: What is the relationship between profit performance and earnings per share in the two multinational companies, Eaton and Whirlpool, in the electronics, electrical, and equipment industry in the U.S.?
Null Hypothesis 2: There is no relationship between profit performance and earnings per share in the two multinational companies, Eaton and Whirlpool, in the electronics, electrical, and equipment industry.
Test Hypothesis 2: There is a statistically significant relationship between profit performance and earnings per share in the two multinational companies, Eaton and Whirlpool, in the electronics, electrical, and equipment industry.
Research Question 3: What is the relationship between current tax rates and effective tax rates in the two multinational companies, Eaton and Whirlpool, in the electronics, electrical, and equipment industry in the U.S.?
Null Hypothesis 3: There is no relationship between current tax rates and effective tax rates in the two multinational companies, Eaton and Whirlpool, in the electronics, electrical, and equipment industry.
Test Hypothesis 3: There is a statistically significant relationship between current tax rates and effective tax rates in the two multinational companies, Eaton and Whirlpool, in the electronics, electrical, and equipment industry.
Research Design
This section explains the statistical methodology adopted to analyze the research questions. For the purposes of this study, t-test statistical analysis is used as part of the research methodology. The overall research method is an ex post facto design in which financial data were collected from two multinational companies for the two years prior to December 31, 2010. External financial data were collected from the official database financial statements from January 1, 2008 to December 31, 2009.
The first set of comparisons examined profit performance and investor returns separately for both multinational companies. Three years of financial statement data were collected for both groups. The following accounting-based measures served as independent variables for the purposes of this study and were used as points of comparison: profit performance (i.e., profits net after taxes, after extraordinary credits or charges, if any, appearing on the income statement, and after cumulative effects of accounting changes) and investor returns (i.e., total return to investors, including price appreciation and dividend yield, to an investor in the company's stock). By comparing the means of two independent samples on these measurements through a series of t tests, it was possible to determine in which areas, if any, the first group outperformed the second group for each separate business entity. By evaluating the group results on these measures as a whole, it was possible to determine the overall impact of transfer pricing policy on profit performance and investor return in multinational companies in the electronics, electrical, and equipment industry (Avi-Yonach, 2007).
The second set of comparisons examined profit performance and earnings per share separately for both tested multinational companies. Three years of financial statement data were collected for both experimental groups. The following accounting-based measures served as independent variables for the purposes of this study and were used as points of comparison: profit performance and earnings per share (EPS is the diluted EPS appearing on the income statement of each company). By comparing the means of two independent samples on these measurements through a series of t tests, it was determined in which areas, if any, the first group outperformed the second group for each separate business industry (Elliott & Emmanuel, 2000). Both entities operate in the same type of business industry; however, their global transfer pricing approaches differ. By evaluating the group results on these measures as a whole, it was possible to determine the overall impact of transfer pricing policy on profit performance and earnings per share in multinational companies in the electronics, electrical, and equipment industry (Baker & McKenzie, 1993).
The third set of comparisons examined the current tax rate and the effective tax rate separately for both tested multinational companies. Three years of financial statement data were collected for both experimental groups. The following accounting-based measures served as independent variables for the purposes of this study and were used as points of comparison: current tax rate (i.e., the figure applied to the tax base) and effective tax rate (i.e., the tax rate based on economic income rather than taxable income). By comparing the means of two independent samples on these measurements through a series of t tests, it was possible to determine in which areas, if any, the first group outperformed the second group for each separate business entity. By evaluating the group results on these measures as a whole, it was possible to determine the overall impact of transfer pricing policy on the current tax rate and effective tax rate in multinational companies in the electronics, electrical, and equipment industry (Bergstrand, 2012).
Instrumentation and Data Collection Procedures
The two samples were independent of each other in the sense that they came from separate samples containing different sets of individual subjects. The individual measures in Group A were in no way linked with, or related to, any of the individual measures in Group B, and vice versa. The version of the t test examined in these sections assessed the significance of the difference between the means of two such samples, provided (a) that the two samples were randomly drawn from normally distributed populations and (b) that the measures of which the two samples were composed were equal-interval. The study used a nondirectional research hypothesis due to the expectation of a difference in one direction (Ma > Mb) or the other (Mb > Ma) (Boos, 2003).
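The following minimal sketch illustrates the independent-samples t test described above; the yearly figures are hypothetical placeholders for the two groups.

```python
# Minimal sketch of the independent-samples t test described above,
# using hypothetical yearly figures for two firms (Groups A and B).
from scipy import stats

group_a = [1.20, 0.95, 1.10]   # hypothetical measure, firm A, 3 years
group_b = [0.80, 0.70, 0.90]   # hypothetical measure, firm B, 3 years

t_stat, p_value = stats.ttest_ind(group_a, group_b)  # assumes equal variances
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# At the 0.05 level, p < 0.05 would lead us to reject the null hypothesis of
# equal means; ttest_ind(..., equal_var=False) gives Welch's variant instead.
```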
Data Analysis and Test of Assumptions
A list of hypotheses related to the research questions is included below. All hypotheses were tested utilizing t tests of significance on the means of the samples. Null Hypothesis 1: There is no significant difference between firms, at a 0.05 level of significance, in profit performance and investor's return. Test Hypothesis 1: There is a positive significant difference between firms, at a 0.05 level of significance, in profit performance and investor's return. The financial results for all business entities were collected for a period of 3 years prior to this study. The mean of each subgroup was calculated on the variable of investor return. Then, a t test of significance with a confidence level of 95% was performed on all variables in order to test the null hypothesis.
Null Hypothesis 2: There is no significant difference between firms, at a 0.05 level of significance, in profit performance and earnings per share. Test Hypothesis 2: There is a positive significant difference between firms, at a 0.05 level of significance, in profit performance and earnings per share. The financial results for all business entities were collected for a period of 3 years prior to this study. The mean of each subgroup was calculated on the variable of earnings per share. Then, a t test of significance with a confidence level of 95% was performed on all variables in order to test the null hypothesis.
Null Hypothesis 3: There is no significant difference between firms, at a 0.05 level of significance, in the current tax rate and effective tax rate. Test Hypothesis 3: There is a positive significant difference between firms, at a 0.05 level of significance, in the current tax rate and effective tax rate. The financial results for all business entities were collected for a period of 3 years prior to this study. The mean of each subgroup was calculated on the variable of effective tax rate. Then, a t test of significance with a confidence level of 95% was performed on all variables in order to test the null hypothesis. The major assumption is that the company achieves outstanding performance under corporate policy over which the subsidiary manager has proper control. Poor performance may be attributed to environmental variability or to corporate policy over which the subsidiary manager has no control (Buckley, 2004).
Results
Whirlpool is the world's leading manufacturer and marketer of major home appliances. The company divides its corporate structure into four regionally based segments: North America, Europe, Latin America, and Asia. It manufactures washers, dryers, refrigerators, air conditioners, dishwashers, freezers, microwave ovens, ranges, trash compactors, air purifiers, and more. In addition to Whirlpool, the company sells its products under a bevy of brand names, including KitchenAid, Maytag, Jenn-Air, Roper, Amana, and Magic Chef (Walden, 2003). As a global enterprise, Whirlpool has intrafirm trade of goods and services between operational segments. The company uses standard full cost plus markup as the transfer price for finished goods. Full cost has been defined to include transfer cost from the factory, warehousing, engineering, inland freight, and an interest charge.
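As an illustration of the cost-plus policy just described, the sketch below computes a transfer price from the listed full-cost components; all figures and the 10% markup rate are hypothetical, not Whirlpool's actual parameters.

```python
# Minimal sketch of a standard full cost-plus markup transfer price. The
# component names follow the definition above; all figures are hypothetical.
def full_cost_plus_transfer_price(factory_cost, warehousing, engineering,
                                  inland_freight, interest_charge,
                                  markup_rate=0.10):  # hypothetical 10% markup
    full_cost = (factory_cost + warehousing + engineering
                 + inland_freight + interest_charge)
    return full_cost * (1.0 + markup_rate)

price = full_cost_plus_transfer_price(500.0, 25.0, 15.0, 10.0, 5.0)
print(f"Transfer price per unit: {price:.2f}")  # 555 * 1.10 = 610.50
```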
Discussion
These two corporations are a diverse pair of products companies. Basic information about them is summarized in Table 1. As shown in the table, both belonged in the Fortune 500 directory in 2009 and 2008. Whirlpool Corporation, the larger of the two, had about $17 billion in revenue in 2009. Eaton Corporation, the smaller of the two, had revenue of about $12 billion in the same year. As shown in Table 1, these companies' assets and stockholders' equities also vary significantly (Case, Fair, & Oster, 2009). Despite the diversity of the products between these two corporations, they share some common characteristics: they are both multinational companies with subsidiaries in Canada, Europe, Asia, South America, and Australia, but are decentralized so that their strategic business units, such as products or regional groups, are autonomous units. The headquarters of these companies are located in the Midwestern states of Michigan (Whirlpool) and Ohio (Eaton).
Research Question 1 is a comparison of profit performance and investor's return conducted to address the impact of transfer pricing accounting treatment on investor's return. Net income percentage is used to determine the proportion of income derived from all operating, financing, and other activities that an entity has engaged in during an accounting period. This figure is the one most used as a benchmark for determining a company's performance. In this study, the author measures the impact of the company's transfer pricing principles on investor's return and profit performance. Net income is the positive difference between income and expenses in the period under review and is thus the bottom line of the income statement. It shows a corporation's income for a period (Samuelson, 1982). When calculating this figure, profits or losses carried forward and additions to, or withdrawals from, open reserves are not considered. Net income is the initial figure for calculating other important ratios such as earnings per share, return on equity, or return on sales (Cadwalader, 1992). Return on equity percentage is used by investors to determine the amount of return they are receiving from their capital investment in a company. The return on equity is calculated by dividing net income, excluding extraordinary items, by total equity (i.e., the common shareholders' equity). This indicator shows the rate of return on shareholders' capital for the period. Given constant profits, the return on equity increases with a lower level of equity employed (the leverage effect). A company's goal must be to generate a return that corresponds to the interest rate on the capital markets plus an industry-dependent risk premium, in total generally between 5 and 10%. Investor's return is a strong indicator for investments and highly relevant in practice. Table 1 through Table 8 present these results. The data collected were analyzed using statistical tools. The Mann-Whitney statistic was used to test the null hypothesis that two independent samples come from the same population (a distribution-free test). The advantage of the Mann-Whitney and Wilcoxon tests over the independent-samples t test is that they do not assume normality and can be used to test ordinal variables. The Mann-Whitney and Wilcoxon tests are non-parametric, distribution-free tests used to compare two independent groups of sampled data.
An F test for the significance of the difference between the variances of the two independent samples was also applied; P > .05 indicates no significant difference detected between the variances of the two samples. The results of the one-tailed U test are given, including the U value (9.000), degrees of freedom (DF1 = 3 and DF2 = 3), and the probability value (P), which in this example is < 0.05 and indicates that we accept the hypothesis that the median for Set 1 (Profit Performance) is greater than the median for Set 2 (Investor's Return). If the null hypothesis had been that the median for Set 1 (Profit Performance) was less than the median for Set 2 (Investor's Return), then the one-tailed U test would have had a U value of 0.0 (degrees of freedom of 3 and 3) with a probability (P) of 1.000, and we would have rejected the null hypothesis that Set 1 decreased compared to Set 2. According to the findings, the IR of Eaton is higher than the IR of Whirlpool by 0.50%. In the study, an analysis of profit performance and investor's return was undertaken to address the impact of transfer pricing accounting treatment on investor's return. Whether there was any significant performance difference between the PP and IR for both listed companies was reviewed. In the paper, the t-test statistics confirm the hypothesis that Eaton Corp., with respect to IR, performed better than Whirlpool Inc. Therefore, the results show that Eaton Corp. with foreign ownership performed better than Whirlpool Inc. with foreign ownership for the period 2007-2009.
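For reference, the variance F test mentioned above can be sketched as follows; the data are hypothetical, and the two-sided p-value uses the common larger-over-smaller variance-ratio convention.

```python
# Minimal sketch of the F test for equality of two sample variances mentioned
# above (ratio of sample variances compared against the F distribution).
import numpy as np
from scipy.stats import f

set1 = np.array([1.2, 0.9, 1.5, 1.1])   # hypothetical sample 1
set2 = np.array([0.8, 0.7, 1.0, 0.9])   # hypothetical sample 2

var1, var2 = set1.var(ddof=1), set2.var(ddof=1)
f_stat = max(var1, var2) / min(var1, var2)        # larger variance on top
df1 = df2 = len(set1) - 1
p_two_sided = 2 * (1 - f.cdf(f_stat, df1, df2))   # approximate two-sided p
print(f"F = {f_stat:.2f}, p = {p_two_sided:.3f}")
# p > .05: no significant difference detected between the variances.
```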
Comparison of Profit Performance and Earnings Per Share
Research Question 2 is a comparison of profit performance and earnings per share conducted to address the impact of transfer pricing accounting treatment on earnings per share. It is useful for shareholders to determine changes in earnings per share over a period. When calculating earnings per share, the company's profits (net income), adjusted for extraordinary items, are divided by the average number of total common shares outstanding. The impact of the company's transfer pricing principles on earnings per share and profit performance is measured. This indicator is used most often to describe a company's performance over time and is one of the bases of company valuation. EPS is used in company valuations, and many analyst estimates of it are freely accessible. The accompanying tables present these results. As before, the Mann-Whitney statistic was used to test the null hypothesis that the two independent samples come from the same population, since it does not assume normality and can be used with ordinal variables. H₀: There is no significant difference between Profit Performance (PP) and Earnings Per Share (EPS); H₁: There is a significant difference between Profit Performance (PP) and Earnings Per Share (EPS).
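Before turning to the test results, a minimal sketch of the EPS calculation defined above, with hypothetical figures:

```python
# Minimal sketch of the EPS calculation described above: net income adjusted
# for extraordinary items, divided by average common shares outstanding.
# All figures are hypothetical.
def earnings_per_share(net_income, extraordinary_items, avg_shares_outstanding):
    adjusted_income = net_income - extraordinary_items
    return adjusted_income / avg_shares_outstanding

eps = earnings_per_share(net_income=1_200_000_000,
                         extraordinary_items=50_000_000,
                         avg_shares_outstanding=160_000_000)
print(f"EPS: {eps:.2f}")  # (1.20e9 - 0.05e9) / 1.6e8 = 7.19
```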
The Model-2 t test shows that, for the years 2007-2009, H₀ is rejected (p = 0.100 < 0.05). When the PPs and EPSs are compared, the company Eaton Inc. is found to differ significantly after the t test was performed (t(0.05; 3) = 25.24). According to the findings, the EPS of Eaton is lower than the EPS of Whirlpool by 0.9%. In the study, an analysis of profit performance and earnings per share was undertaken to address the impact of transfer pricing accounting treatment on earnings per share. Whether there was any significant performance difference between the PP and EPS for both listed companies was reviewed. In the study, the t-test statistics confirm the hypothesis that Whirlpool Inc., with respect to EPS, performed better than Eaton Corp. Therefore, the results show that Whirlpool Inc. with foreign ownership performed better than Eaton Corp. with foreign ownership for the period 2007-2009. The study also found that the adjustments to the transfer pricing were incorporated incorrectly at Eaton Corp. The United States Tax Court had evidence that Eaton Corporation and Subsidiaries did not make the necessary adjustments pursuant to I.R.C. sec. 482 under Docket No. 5576-12 in 2017.
Economic Partners (2017) wrote: "In December 2011, the IRS notified Eaton that it was cancelling the first APA effective January 1, 2005, and the second APA effective January 1, 2006. The IRS contended that Eaton had material deficiencies in APA compliance including noncompliance with the terms of the APAs, errors in the supporting data and computations used in the transfer pricing methodologies ("TPMs") specified in the APAs, a lack of consistency in the application of the TPMs, the use of distortive accounting, and material facts that were misrepresented, mistakenly presented, or not presented in Eaton's submissions to the APA office. With the APA cancellation, the IRS determined deficiencies in Eaton's Federal income tax for 2005 and 2006 of USD 19.7 million and USD 55.3 million along with accuracy-related penalties of USD 14.3 million and USD 37.3 million." This confirms that the present study was able to anticipate this situation for 2007 through 2009 through the financial analysis presented in this research.
Comparison of Current Tax Rate and Effective Tax Rate
Research Question 3 is a comparison of the current tax rate and the effective tax rate conducted to address the impact of transfer pricing accounting treatment on the effective tax rate. For example, the federal income tax on individuals is imposed on taxable income. The tax base multiplied by the tax rate produces the tax liability. The effective tax rate is a tax rate based on economic income rather than taxable income. Effectively connected taxable income is used specifically to describe taxable income earned in the United States that is attributable to a foreign partner within multinational companies. The tax rate describes the relationship between income taxes and income before taxes, in order to show the relative impact of taxation on earnings. In the global economy, the tax rate is often a key criterion when companies choose where to locate. It is a reasonable indicator for a company's future tax rates in relatively stable tax systems. Table 17 presents these results. Generally speaking, the current tax rate should be higher than the effective tax rate. The overall benefit of a low effective tax rate is achieved at the lowest current tax liability. As a result, the collected data possess a fundamental estimation error. Since errors in estimation reduce the beneficial role of the effective tax rate, it should be more volatile than real taxes. The results of the one-tailed U test are given, including the U value (9.000), degrees of freedom (DF1 = 3 and DF2 = 3), and the probability value (P), which in this example is < 0.05 and indicates that we accept the hypothesis that the median for Set 1 (Current Tax Rate) is greater than the median for Set 2 (Effective Tax Rate). If the null hypothesis had been that the median for Set 1 (Current Tax Rate) was less than the median for Set 2 (Effective Tax Rate), then the one-tailed U test would have had a U value of 0.0 (degrees of freedom of 3 and 3) with a probability (P) of 1.000, and we would have rejected the null hypothesis that Set 1 (Current Tax Rate) decreased compared to Set 2 (Effective Tax Rate).
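To make the contrast between the two rates concrete, the following sketch computes a tax liability from a hypothetical tax base and the resulting effective rate against economic income; all numbers are illustrative.

```python
# Minimal sketch contrasting the two rates defined above: the current tax
# rate applied to the taxable base versus the effective tax rate measured
# against economic (pre-tax book) income. All figures are hypothetical.
taxable_income = 800.0        # tax base (hypothetical, in millions)
pre_tax_book_income = 1000.0  # economic income (hypothetical, in millions)
current_tax_rate = 0.35       # statutory rate applied to the tax base

tax_liability = taxable_income * current_tax_rate         # 280.0
effective_tax_rate = tax_liability / pre_tax_book_income   # 0.28
print(f"Tax liability: {tax_liability:.1f}, ETR: {effective_tax_rate:.2%}")
# The ETR (28%) falls below the current rate (35%) because part of book
# income escapes the taxable base, e.g. via transfer-pricing allocations.
```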
In this study, a t test was performed to test whether there were any significant differences in the Current Tax Rate (CTR) and Effective Tax Rate (ETR) ratios between the firms with foreign ownership participation in their capital structure and the domestic subsidiaries.
The empirical findings obtained from the t test are summarized as follows: the Whirlpool data yielded a t statistic of 0.11 and a P value of 0.1, while Eaton returned a t statistic of -2.35 and a P value of 0.049.
The hypotheses to test the effect of the Current Tax Rate on the Effective Tax Rate are: H₀: There is no significant difference between Current Tax Rate (CTR) and Effective Tax Rate (ETR); H₁: There is a significant difference between Current Tax Rate (CTR) and Effective Tax Rate (ETR).
The Model-3 t test reveals that, for the years 2007-2009, H₀ is rejected (p = 0.100 < 0.05). When the CTRs and ETRs are compared, the company Eaton Inc. is found to differ significantly after the t test was performed (t(0.05; 3) = -2.35). According to the findings, the ETR of Eaton is higher than the ETR of Whirlpool by 43.2%.
In the study, an analysis of the current tax rate and effective tax rate was undertaken to address the impact of transfer pricing accounting treatment on the effective tax rate. Whether there was any significant performance difference between the CTR and ETR for both listed companies was reviewed.
In the paper, the t-test statistics confirm the hypothesis that Whirlpool Inc., with respect to ETR, performed better than Eaton Corp. Therefore, the results show that Whirlpool Inc. with foreign ownership performed better than Eaton Corp. with foreign ownership for the period 2007-2009. The statistical measurement shows that Whirlpool did not charge transfer pricing at an appropriate level, because its total tax liabilities increased in 2008 instead of decreasing. This indicates that the company charged its foreign subsidiary a transfer price that was too low.
Conclusions
The U.S. balance of payments summarizes all the economic transactions between the tax home country and the rest of the world. These transactions include goods, services, transfer payments, loans, and investments. Inflation and interest rates, national income growth, and changes in the money supply have a significant impact on the U.S. currency and future exchange rates. All these factors may affect the balance of payments. The balance of payments shows the net effect of all of a country's global currency transactions over a given period. When the balance of payments of a country declines over a period of several years, it is an indication of a weakening of the value of the dollar as a national currency and a threat to the currency's stability. National income, money supply, employment, and foreign exchange rates are among the most important variables affected by a deficit or surplus. Whirlpool and Eaton should be aware that when a foreign country's balance of payments shows deficits year after year, the foreign government will eventually turn to different tools to reduce its deficit. Therefore, Whirlpool and Eaton should be alert for possible restrictive fiscal policies, such as currency controls for the purpose of controlling inflation. Whirlpool and Eaton may change their global transfer pricing policies to alleviate the impact of new national policies of the United States on cash movements, the value of goods or services transferred into or out of the country, and foreign exchange risk (Cole, 2006). | 2021-05-27T02:11:31.758Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "53528ef9562624f6242cfe8054f641785c3bf88e",
"oa_license": "CCBY",
"oa_url": "https://essuir.sumdu.edu.ua/bitstream/123456789/77488/1/Kasztelnik_Causal_Comparative_Macroeconomic.pdf",
"oa_status": "GREEN",
"pdf_src": "ElsevierPush",
"pdf_hash": "6610063dd7f84c34570337966246ec87f87d810d",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": []
} |
231673736 | pes2o/s2orc | v3-fos-license | Application of Artificial Neural Network in seismic reservoir characterization: a case study from Offshore Nile Delta
The prediction of reservoir characteristics from seismic amplitude data is a major challenge, especially in the Nile Delta Basin, where the subsurface geology is complex and the reservoirs are highly heterogeneous. Modern seismic reservoir characterization methodologies span attribute analysis, deterministic and stochastic inversion methods, Amplitude Variation with Offset (AVO) interpretations, and stack rotations. These methodologies have produced good outcomes in detecting gas sand reservoirs and quantifying reservoir properties. However, when pre-stack seismic data are not available, most of the AVO-related inversion methods cannot be implemented. Moreover, there is no direct link between seismic amplitude data and most reservoir properties, such as hydrocarbon saturation; many assumptions are embedded and the results are questionable. Application of Artificial Neural Network (ANN) algorithms to predict reservoir characteristics is a newly emerging trend. The main advantage of the ANN algorithm over the other seismic reservoir characterization methodologies is its ability to build nonlinear relationships between the petrophysical logs and seismic data. Hence, it can be used to predict various reservoir properties in 3D space with a reasonable amount of accuracy. We implemented the ANN method on the Sequoia gas field, Offshore Nile Delta, to predict the reservoir petrophysical properties from the seismic amplitude data. The chosen algorithm was the Probabilistic Neural Network (PNN). One well was kept apart from the analysis and used later as a blind quality control to test the results.
Introduction
The Sequoia field, the case study, is one of the major gas fields in the WDDM and Rosetta concessions (Fig. 1) (Samuel et al. 2003). The field is located on the north-western margin of the outer slope of the Nile Delta, approximately 50 km from the nearest shoreline (Mohamed et al. 2017). Different inversion methodologies have been proposed for reservoir property characterization to predict rock/fluid properties from seismic amplitude data. Generally, rock properties (porosity, for example) are better resolved than pressures and saturations. Russell (2014) presented a comprehensive review of the modern AVO and inversion techniques. Each of these methods has its advantages and disadvantages. However, one of the common challenges is the prediction of the petrophysical properties, especially saturation, considering that they have no direct relationship with seismic elastic attributes.
To overcome this challenge, we implemented one of the Artificial Intelligence (AI) algorithms for seismic reservoir characterization. AI is a modern branch of computer science with a wide range of applications that cover almost every aspect of our modern lifestyle. Many algorithms have been proposed to solve geophysical problems, and the Artificial Neural Network (ANN) is one of the most promising. Neural networks were first inspired by the architecture of neurons in the human brain. ANN inversion gained popularity over the last decades because of its ability to establish nonlinear relationships between the input and the target property. At well locations, it "learns" the relationships that link the target log and the seismic attributes; it then applies these relationships to the rest of the seismic volume. In this study, we have implemented an integrated approach that combines the Hampson et al. (2001) proposal for ANN training and validation with the Mohamed et al. (2014) proposal for data conditioning. The output of the study was 3D volumes of Shale Volume (Vsh), Effective Porosity (Φ), and Water Saturation (Sw). Blind-well tests were performed on the resulting volumes for quality control (QC) and assessment purposes.
Geologic setting
The Nile Delta basin is one of the emerging gas provinces worldwide. Numerous multi-trillion-cubic-feet (TCF) gas discoveries during the last decades prove its remaining reserves. The estimated oil and gas reserves of the Nile Delta basin were reported in 2010 by Kirschbaum et al. (2010) as follows: the estimated mean value for recoverable oil is 1.8 billion barrels, the estimated mean value for recoverable gas is 223 TCF, and the natural gas liquids are about 6 billion barrels. Five years later, the giant "Zohr" discovery proved the high hydrocarbon potential of this basin. The first well found 654 m of biogenic gas; the calculated volume reaches 30 TCF (Cozzi et al. 2018).
The first phase of exploration across the Nile Delta Basin targeted the onshore Messinian incised valleys (Adel et al. 2017a, b). The subsequent exploration phases targeted the offshore extension of this play and other Pliocene submarine slope reservoirs (Tharwat et al. 2014). Many discoveries were made and are currently producing gas, such as the gas fields of the Rosetta concession and the West Delta Deep Marine (WDDM) concession (Fig. 2) (Rio et al. 1991).
The Sequoia field was discovered by an exploration well in 2000 and subsequently appraised by three wells (2000-2002). All wells were drilled based on seismic direct hydrocarbon indicators (i.e., bright and flat spots) and found gas sand reservoirs. Later, in 2008, the field was developed with six wells. In 2009, production started, and the cumulative production reached approximately 665 billion cubic feet.
Fig. 1 An index map shows the offshore Nile Delta basin. The study area is defined by the red box. The Sequoia field is colored in red while the other Pliocene gas fields are colored in grey. Modified from Samuel et al. (2003)
The Sequoia field is a Pliocene (El-Wastani formation) submarine slope canyon system. This canyon is filled with many turbiditic channelized reservoirs (Cross et al. 2009). The southern part of Sequoia is confined to a relatively narrow and well-defined valley incision (approximately 5 km wide). In contrast, the northern part of the field occupies a much wider incised valley which, in addition to the main central channel, contains many branches (more than 20 km wide) (Rio et al. 1991). The total length of the field exceeds 30 km (Fig. 3). The wells penetrate areas with different gas-water contacts (GWCs), indicating compartmentalization and complexity within the channel complex. GWCs get progressively deeper to the north. This is most likely due to aquifer perching as a result of fault compartmentalization (Fig. 3).
The Sequoia reservoir, generally, is a thick (up to 200 m), fining-upward succession of sandstones and mudstones. The reservoir's base is defined by a major incision that represents the base of the canyon (Fig. 4). As presented by Mohamed et al. (2017), the canyon is filled by many smaller channels that are stacked together to form the final shape of the reservoir (Fig. 3). The gas pay sand is approximately 77 m thick. The average water saturation for the reservoir is 34%, while the average effective porosity is 24%.
Methodology
The input data for this study include seismic amplitude, seismic inversion, and well-log data sets. The seismic amplitude data are represented by a reprocessed 3D full-stack seismic volume that covers the area of interest. This survey was acquired in 2006 and reprocessed in 2014. The total record duration is 6 s and the sample rate is 4 ms. The seismic pre-stack inversion volumes include the fundamental elastic volumes: P-Impedance (IP), S-Impedance (IS), and Density (Dn). Other derived volumes include P-wave velocity (VP), S-wave velocity (VS), VP/VS ratio, Lambda-Rho (λρ), and Mu-Rho (µρ) volumes. The wells used in the study comprise two exploration wells and six development wells (Mohamed et al. 2017). The exploration wells are Sapphire-2 and Rosetta-10, and the development wells are Sequoia-D1 to -D6. The Sequoia-D5 well was not included in the study but was used as a QC well. All wells have a full suite of wireline logs, including Sw, Φ, and Vsh logs. Figure 5 shows the workflow steps (Fig. 5: The proposed workflow steps to predict the reservoir properties (Sw, Φ, and Vsh) via the PNN algorithm). As a start, well-log QC and conditioning were applied to the input logs to make sure that the well logs were spike-free and consistent with the seismic data. After de-spiking, all logs were resampled at 4 ms to match the seismic scale and smoothed. Then, the full-stack seismic volume was used as an engine to generate many internal attributes (i.e., amplitude-related attributes, frequency-related attributes, phase-related attributes, etc.). The seismic-generated attributes (internal attributes) were used together with the inverted volumes (external attributes). Using the stepwise-regression method, the best set of seismic internal/external attributes was found; with these attributes, the prediction error is, statistically, the lowest. The best sets of attributes for Sw, Φ, and Vsh prediction are listed in Table 1.
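As an illustration of the attribute-selection step, the sketch below performs a simple forward stepwise selection that picks the attribute set with the lowest cross-validated prediction error; the arrays, the linear proxy model, and the fixed stopping rule are assumptions for illustration, not the exact stepwise-regression implementation used in the study.

```python
# Minimal sketch of forward stepwise selection of seismic attributes.
# `attributes` is a hypothetical (n_samples, n_attributes) array sampled at
# well locations; `target` is a conditioned log (e.g., porosity).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_select(attributes, target, max_attrs=4):
    selected, remaining = [], list(range(attributes.shape[1]))
    while remaining and len(selected) < max_attrs:
        scores = []
        for j in remaining:
            cols = selected + [j]
            # Cross-validated mean squared error with candidate attribute j
            err = -cross_val_score(LinearRegression(), attributes[:, cols],
                                   target, cv=3,
                                   scoring="neg_mean_squared_error").mean()
            scores.append((err, j))
        best_err, best_j = min(scores)   # keep the attribute that helps most
        selected.append(best_j)
        remaining.remove(best_j)
    return selected  # indices of the attribute set with lowest CV error

# selected = forward_select(attributes, target)
```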
The conditioned well-log data and the best set of seismic (internal/external) attributes at well locations were fed to the PNN analysis for the training and validation of the networks. During the training step, the weights of the input attributes are modified to fit the target log. At the same time, the networks are validated using the cross-validation technique, in which some wells are intentionally hidden and then predicted using the trained network. After minimizing the errors, the trained networks were implemented to predict the Sw, Φ, and Vsh 3D volumes through three separate runs.
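For intuition, the following minimal sketch shows a PNN-style prediction in its general-regression form, a Gaussian-kernel weighted average of the training targets; the arrays and the smoothing parameter sigma are hypothetical, and the actual software used in the study may differ in detail.

```python
# Minimal sketch of PNN-style prediction: each query sample's target value is
# a Gaussian-kernel weighted average of the training targets, with weights
# set by distance in attribute space. Arrays and sigma are hypothetical.
import numpy as np

def pnn_predict(train_attrs, train_target, query_attrs, sigma=1.0):
    # Squared distances between each query sample and each training sample
    d2 = ((query_attrs[:, None, :] - train_attrs[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
    return (w * train_target[None, :]).sum(1) / w.sum(1)

# Training pairs come from attributes/logs at well locations; the trained
# network is then applied trace by trace to the seismic volume, e.g.:
# phi_volume = pnn_predict(well_attrs, phi_log, volume_attrs, sigma=0.5)
```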
Results
To measure the accuracy of the PNN results, two quantitative analysis methods were applied. In the first method, we checked the similarity/correlation between the originally recorded logs and the modeled logs at well locations. The average normalized correlations for Sw, Φ, and Vsh were 0.90, 0.94, and 0.94, respectively. The second method is the blind-well test, in which the Sequoia-D5 well was not included in the analysis and was used for QC of the estimation products.
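The correlation measure used in both QC methods can be sketched as follows; the two short series are hypothetical.

```python
# Minimal sketch of the QC measure: the correlation between a recorded log
# and the PNN-modeled log at a well location. Both series are hypothetical.
import numpy as np

recorded = np.array([0.21, 0.24, 0.26, 0.23, 0.25])   # e.g., porosity log
modeled  = np.array([0.20, 0.25, 0.27, 0.22, 0.24])   # PNN prediction

corr = np.corrcoef(recorded, modeled)[0, 1]
print(f"Correlation: {corr:.2f}")
# Averaging this value across all wells gives figures like those reported
# above (0.90-0.94); the blind well (Sequoia-D5) is evaluated the same way.
```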
The average correlations at the Sequoia-D5 well location for Sw, Φ, and Vsh were 0.86, 0.81, and 0.82, respectively. Figures 6, 7 and 8 show the results at the blind-well location. The resulting PNN volumes (Sw, Φ, and Vsh) align well with the originally recorded well logs. Away from the well locations, these volumes honor the reservoir's lateral variability and show an interesting amount of detail. These details are crucial for investigating reservoir compartmentalization and for reassessing the remaining hydrocarbon reserves.
Conclusions
Considering the drawbacks of the conventional reservoir characterization methods, we used a PNN approach to predict shale-volume, effective porosity, and water saturation 3D volumes. Blind-well tests were performed on the resulting volumes. The predicted reservoir properties showed a very good tie to the original logs, contain fine details, and honor reservoir heterogeneity between the wells. The resulting volumes can be used to refine the construction of a reservoir model and to reassess the remaining hydrocarbon reserves. The proposed approach provides a short and efficient path to reservoir property prediction.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-01-19T15:03:47.700Z | 2021-01-19T00:00:00.000 | {
"year": 2021,
"sha1": "8b61035662c047039f02698cb09bcad4a1d91b75",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12145-021-00573-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "8b61035662c047039f02698cb09bcad4a1d91b75",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
196473233 | pes2o/s2orc | v3-fos-license | Large Retrosternal Goiter: An Otolaryngological Perspective-Case Series and Review of Literature
Introduction: Retrosternal goiter can cause respiratory distress, dysphagia, compression of the great vessels, and even sudden death. Surgery is the only effective treatment. The presence of retrosternal extension is an indication for surgery even in the absence of clinical symptoms, as the goiter will continue to grow and eventually cause pressure symptoms. Intraoperative and postoperative airway management also poses a challenge in such cases. Most retrosternal goitres can be excised via a cervical approach; however, in cases of large size with a very inferior extent, abnormal vasculature, recurrent goitres, or thyroid cancer, a midline sternotomy may be necessary for exposure, safety, and completion of the excision.
Introduction
Retrosternal goitre (RG) was first described by Albrecht von Haller in 1749 and first operated on by Klein in 1820 [1][2][3][4]. There is no uniform definition of retrosternal goiter. Extension of the thyroid gland below the thoracic inlet has been described variously in the literature as substernal, retrosternal, intrathoracic, or mediastinal goiter. However, various authors have defined different criteria for calling it an RG. These include a thyroid gland extending 3 cm below the cervicothoracic isthmus on CT performed with a hyperextended neck [5], or extension of the gland below the fourth thoracic vertebra [6,7]. The most commonly accepted definition describes a goitre as substernal or retrosternal when ≥50% of the total bulk of thyroid tissue resides below the thoracic inlet [8,9].
Management of RG, depending on its size and extension, can be a daunting task for the surgeon, and the surgical team has to work in coherence with a multidisciplinary team including the anaesthetist and a cardiothoracic surgeon, who has to be on standby in case of difficulty in extruding the tumour. Most RGs can be resected safely through a cervical incision [8]. In a minority of cases (1-11%), a sternotomy is necessary to obtain complete exposure of the goitre and relieve mediastinal compression [4]. A combined cervical and sternotomy approach, however, can lead to more intraoperative complications and slower postoperative rehabilitation.
Patient information
We treated five female patients in the age group of 38-60 years (median age 51). The duration of disease ranged from 6 to 36 months. The chief complaint of all patients was an anterior neck mass without any compressive or pressure symptoms, except for one patient who had breathlessness on exertion. On examination, all patients had a firm, non-tender, palpable anterior neck mass that moved on deglutition, with multiple nodules palpable in both lobes. The inferior border of the mass was not palpable. The vocal cords were mobile bilaterally on Hopkins telescopy. There were no signs of compression or pressure due to the retrosternal extension. The lesions ranged from a minimum dimension of 4 cm x 5 cm x 8 cm to the largest measuring 4 cm x 8 cm x 14 cm (AP x TR x CC), with retrosternal extensions ranging from 8 cm to a maximum of 10 cm (Figure 1).
Workup
All patients underwent thyroid function tests, US-guided FNAC, USG of the neck, CECT scan, and thyroid scan. On FNAC, all patients were reported as having benign nodular goiter. All patients were biochemically and clinically euthyroid.
Imaging in the form of radiographs of the neck and chest revealed a soft tissue density lesion in the superior mediastinum with well-defined lateral margins; however, the inferior margin was not well defined. The trachea was central with no evidence of any compression (Figure 2). USG of the neck and CECT scan, performed in view of the retrosternal extension, revealed extensions of 8 cm to a maximum of 10 cm. All retrosternal extensions descended into the anterior mediastinum, just abutting the level of the aortic arch. The mass was causing mild tracheal luminal compromise in three cases, and in two cases the lesion was indenting and slightly displacing the aortic arch in its postero-inferior aspect; however, the intervening fat planes were maintained. Laterally, the lesion was indenting the internal jugular vein in all cases, three on the right and two on the left. The rest of the great vessels were normally visualised (Figures 3A, 3B and 3C). A Tc-99m thyroid scan was done, which was suggestive of a grossly enlarged thyroid gland with retrosternal extension. The uptake was heterogeneous with multiple interspersed cold regions, the overall uptake ranging from 2-8% (Figure 4).
Treatment
After thorough workup, the cases were discussed in multidisciplinary clinics with cardiothoracic surgeons, in case the need arose for sternotomy, and with anaesthesiologists in view of anticipated airway complications in the intraoperative or postoperative period. Patients were counselled adequately, and informed consent was obtained for possible RLN injury, postoperative tracheostomy, and hypocalcaemia, with specific consent for possible sternotomy. A rigid bronchoscope with a ventilator connector was kept on standby in anticipation of difficult intubation. A trachea-reinforcing (flexo-metallic) endotracheal tube was inserted in all cases so that even manipulation pressure would not cause any significant compression of the tracheal tube. The left radial artery was cannulated to monitor arterial pressure to ensure a safe surgery. Continuous intra-arterial pressure monitoring was done in all cases to detect compression of the great vessels, if any, during the course of the surgery. Preparation for possible sternotomy was also made. A prophylactic antibiotic was administered prior to incision as per existing guidelines and institutional protocols.
All patients underwent total thyroidectomy. Hereafter, the surgical technique employed and the intraoperative findings for the case with the largest retrosternal goiter (up to 10 cm) are described. A low-lying horizontal neck-crease incision, about 1-1.5 cm above the sternal notch, was made in a fully extended neck, and the strap muscles were retracted after a midline split. The dissection started from the superior pole on the left side, which was comparatively smaller than the right side; the right side occupied the major part of the retrosternal component and formed its inferior-most part. This approach was chosen to ensure the safety of at least one RLN. The superior thyroid artery was identified and its branches ligated individually after identifying and preserving the EBSLN. In view of the large dimensions of the thyroid gland, it was judged prudent to identify the RLN via superior dissection after identifying the cricothyroid (CT) joint. The superior parathyroids were identified approximately 1 cm caudal to the CT joint and dorsal to the RLN on both sides, and preserved. The branches of the inferior thyroid artery were ligated on the left side after preserving the left parathyroid ventral to the RLN, but on the right side the inferior thyroid artery was ligated in its main trunk. On the right, the inferior parathyroid gland could not be identified. The left lobe, along with its retrosternal extension, was removed first. The right lobe, which had the main retrosternal extension, was dissected completely from the lateral side and then cut into two halves after identifying the right RLN and the carotid sheath, so as to give more space for manipulation of the main retrosternal part. Blunt but gentle manipulation of the retrosternal extension in the superior mediastinum was done with the fingers above the plane of the trachea and the carotid artery. The assistant surgeon applied traction superiorly, releasing it intermittently to avoid a shearing effect or excessive pressure on the great vessels. Manipulation lateral to and below the carotid vessels or the trachea was avoided, to prevent devascularisation of the trachea. The tracheoesophageal groove was not violated. In the intrathoracic part, all thick bands or adhesions were palpated with two fingers for any pulsation, and they were dissected into thinner bands before their division, to avoid any inadvertent trauma to vessels. The lower pole was palpable intraoperatively; it was gently manipulated with the index finger to free any adhesions, and the lobe was pulled up. A thickened thyroid ima artery was visualised at the lower pole of the retrosternal extension and was ligated, and the entire retrosternal part was delivered. Haemostasis was achieved, a surgical drain was placed, and the wound was closed in layers. The entire thyroid gland was sent for HPE.
All patients remained electively intubated in the post-operative period for observation in anticipation of tracheomalacia and were extubated on post-operative day one. A tracheostomy set was kept ready at the time of extubation in case of any airway emergency. In the post-operative period, three patients developed transient hypocalcaemia on post-operative day 2, which lasted for a day and was managed effectively with IV calcium gluconate and oral calcium supplements. Patients were discharged after drain removal by post-operative day 5. Bilateral vocal cord movements were assessed post-operatively by fibre-optic laryngoscopy; the cords were mobile in all cases, except one case of right vocal cord paresis, which recovered within six weeks. Tracheomalacia was not identified in any case. HPE was consistent with colloid goitre in all cases. All patients are on regular follow-up with levothyroxine supplements, and one patient is on calcium supplements (Table 1). Figure 5E: Near-total thyroidectomy specimen. Note the right lobe, left lobe, and the retrosternal part.
Discussion
RG occurs when the thyroid enlarges downwards into the chest. The majority of RG are extensions from the neck; however, intrathoracic goitres have also been reported. The definition of RG has been a matter of debate and varies from author to author.
Katlic et al. [9] suggested that RG was present when more than 50% of the goitre lay below the plane of the thoracic inlet. Candela et al. [10] defined it as any goitre descending below the plane of the thoracic inlet or extending into the anterior mediastinum by more than 2 cm. Goldenburg and co-workers, as early as 1957, defined RG as one reaching the level of the fourth thoracic vertebra [11]. The most widely accepted classification of RG, with management depending on its extension into the mediastinum, has been proposed by Huins et al. [1]. They defined three grades of goitre: grade I extends to the level of the aortic arch, grade II to the level of the pericardium, and grade III below the level of the right atrium.
RG has been reported in 1-20% of all patients undergoing thyroidectomy [12,13]. RG is most frequently found in the fifth or sixth decade of life, with a female-to-male ratio of 4:1 [14]. The vast majority of RG (85-90%) are located in the anterior mediastinum; posterior mediastinal goitres are uncommon, comprising 10-15% of all mediastinal goitres [15].
In most cases, RG shows slow-growing enlargement and usually remains asymptomatic for many years; about 20-40% of retrosternal goitres are symptomatic [14]. They carry a risk of malignancy similar to that of cervical goitres [4]. The most common symptoms are related to compression of the airway, esophagus, and RLN, and are represented by dyspnoea, choking, inability to lie flat, dysphagia, and hoarseness [2]. Superior vena cava syndrome due to SVC obstruction and Horner's syndrome due to compression of the sympathetic chain are less common.
The anterior mediastinum is the most common site for retrosternal goitre extension, where the goitre displaces and compresses the trachea. The majority of patients, in addition to a neck swelling, may present with shortness of breath or stridor; other symptoms include hoarseness and dysphagia, and about 50% of patients may be asymptomatic [16]. Thyrotoxicosis symptoms have been reported in less than 10% of cases [17].
In the diagnostic workup of RG, CECT of the thorax is the gold-standard radiological investigation, and a preoperative CT scan should be performed routinely for suspected retrosternal goitre [5]. The relationship of the RG with the trachea, esophagus, and great vessels is well appreciated on CT; this helps in planning the surgical approach, which may require input from other surgical teams, and also informs the anaesthesiologist about the airway. Contrast-enhanced MRI can also be performed in conjunction with CECT and may add information regarding the tissue planes surrounding the thyroid mass. Casella et al. [15] found that extension of the goitre below the level of the aortic arch appeared to be a significant predictive factor for the need for sternotomy; conversely, the lack of radiologic extension beyond the aortic arch predicted successful transcervical removal of mediastinal goitres without sternotomy [15].
Surgery is the definitive treatment for RG; the earlier RG is tackled, the safer it is for the patient in terms of pre-operative, intraoperative, and post-operative morbidity.
The majority of RG can be delivered and resected safely through a standard cervical approach; however, this approach carries inherent risks of damage to the great vessels during blind manipulation and poor control of haemorrhage thereafter owing to the confined space, and it is therefore generally avoided in RG with extension beyond the aortic arch. In our experience, retrosternal goitres up to 10 cm in size can be removed via the cervical route with careful digital manipulation. De Perrot et al. [5] have also highlighted the need for sternotomy in goitres larger than 10 cm, in cases of malignancy, involvement of the posterior mediastinum, and/or ectopic goitre [5], a point also made by Cohen et al. [6].
Preservation of the parathyroids, RLN, and EBSLN, as in routine thyroidectomy, cannot be emphasised enough. The RLN should preferably be localised early, via a superior or lateral approach, as it is not uncommon for the nerve to ride over a hyperplastic nodule, making it vulnerable.
The role of the assistant surgeon during manipulation is important, as he provides the necessary traction with intermittent release so that the pressure is just adequate for the surgeon to continue digital dissection. The assistant surgeon also helps to deliver the retrosternal part by applying digital pressure from the opposite side while the surgeon manipulates from the lateral and anterior sides or palpates the planes for any pulsating vessels. During final delivery, one has to palpate the fibrous attachments for any aberrant blood vessels, such as a high-riding innominate artery, which may be asymmetric in position, or a thyroid ima artery, which may be thick and can cause troublesome bleeding requiring sternotomy for exploration. Sometimes there can be difficulty in delivering the retrosternal segment due to a narrow retrosternal space. For such situations, Charles Proye described excising the normal or smaller lobe first to provide more space in the neck; a similar technique was employed in our case series. Proye also described the Toboggan technique, in which heavy silk sutures are placed into the cervical component to provide traction, followed by further sutures in series, in order to bring the retrosternal extension into view [18]. However, this technique has limited application in the presence of a soft colloid goitre [19].
The most important predictive factor for whether a goitre can safely be removed through a cervical approach is the presence of a clear tissue plane around the nodule in the mediastinum on preoperative imaging; if such a plane is not present, preparations should be made for sternotomy [13]. Ahmed et al. [12] used extension beyond the aortic knuckle on chest X-ray as their landmark for the depth of substernal extension.
Mussi et al. [20] believed that sternotomy should be employed when a goitre could not be extracted from the chest with "gentle manoeuvres," as well as in all cases of recurrent and aberrant goitres. Sand et al. [21] employed sternotomy when excessive traction was required during surgery, when the most inferior extent of the nodule could not be palpated, in revision surgery, and in the setting of acute tracheal compression, severe venous obstruction, malignancy, or an uncertain preoperative diagnosis. Sancho felt that nodules extending inferiorly to the level of the carina placed patients at high risk for sternotomy [22]. Randolph recommended sternotomy for malignant substernal nodules, posterior mediastinal goitres with contralateral extension, mediastinal goitres with a mediastinal blood supply, goitres causing superior vena cava syndrome, revision cases, difficult delivery from the chest, significant haemorrhage, and when the diameter of the mediastinal nodule significantly exceeds the diameter of the thoracic inlet [23].
The sternotomy approach is also associated with complications and post-operative morbidity, including trauma to mediastinal structures, pneumothorax, mediastinitis, sternal dehiscence, and osteomyelitis. In long-standing large goitres, the trachea can lose its structural strength, leading to tracheomalacia. Therefore, tracheomalacia should be anticipated in the post-operative period; it is advisable that the patient remain intubated for at least a day and that a tracheostomy set be kept ready during extubation [24].
In our case series of five patients, the recurrent laryngeal nerves could be identified on both sides in all cases, and both superior parathyroid glands were preserved in all cases. Nevertheless, post-operative complications occurred in the form of transient hypoparathyroidism in three cases, which resolved over a period of one week, and transient right-sided vocal cord paresis in two cases, which gradually resolved over a period of four weeks. All five patients were kept intubated overnight postoperatively, and after extubation they all underwent fibreoptic laryngoscopy to assess vocal cord mobility and examine the tracheal airway. None of our cases required post-operative tracheostomy or prolonged intubation, and none showed any signs of tracheomalacia. There were no post-operative respiratory complications, no post-operative bleeding, and no permanent RLN palsy.
Conclusion
A large retrosternal goitre presents the otolaryngologist with a challenge: whether to opt for a cervical or a sternotomy approach. The cervical approach offers benefits in terms of patient morbidity, but it is not without the possible complications described earlier. In our case series we preferred the cervical approach, although it is challenging. The cervical approach is feasible with meticulous finger dissection and good assistance, without much morbidity to the patient; hence, it is recommended to start with a cervical approach with a thoracic team on standby, ready for thoracotomy if the need arises. | 2019-03-16T13:11:43.692Z | 2017-01-18T00:00:00.000 | {
"year": 2017,
"sha1": "93743c34497da3051cfbadd4f4d0cdc19e488511",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/JOENTR/JOENTR-06-00147.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "73673250045859574a905c005410192cd037190a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257868330 | pes2o/s2orc | v3-fos-license | Dental Material Selection for the Additive Manufacturing of Removable Complete Dentures (RCD)
This research addresses the development of a formalized approach to dental material selection (DMS) in manufacturing removable complete dentures (RCD). Three types of commercially available polymethyl methacrylate (PMMA) grades, processed by an identical Digital Light Processing (DLP) 3D printer, were compared. In this way, a combination of mechanical, tribological, technological, microbiological, and economic factors was assessed. The material indices were calculated to compare the dental materials over a set of functional parameters related to feedstock cost. However, this did not solve the problem of simultaneous consideration of all the material indices, including their significance. The developed DMS procedure employs the extended VIKOR method, based on the analysis of interval quantitative estimations, which allowed a fully fledged analysis of alternatives. The proposed approach has the potential to enhance the efficiency of prosthetic treatment by optimizing the DMS procedure, taking into consideration the prosthesis design and its production route.
Introduction
Material selection is a relevant issue that is solved in various branches of science and technology, inter alia dental treatment (for example, manufacturing RCD). In this case, components are calculated according to the strength criterion (by the finite element method [1], as an example) and assigned margin factors. Therefore, reference data (primarily, manufacturers' data sheets) should be taken into account. Typically, the most appropriate materials have to be selected for various functional applications. For this purpose, (i) the elastic modulus is considered to ensure a required stiffness level, (ii) crack resistance is controlled by fracture toughness, and (iii) corrosion resistance can be characterized either qualitatively or quantitatively according to the parameters measured by strictly regulated industrial standards, etc. In addition to physical and mechanical characteristics, designers consider (i) the material manufacturability (including the possibility of 3D printing, CAD milling, etc.), (ii) the variation of properties under heat treatment (for example, annealing)/postbuild processing (additional polymerization, as an example), and (iii) machinability with various types of tools (grinding), etc. [2].
DMS with such a formalized approach can be solved if several required (target) characteristics are considered, among them: (a) physical and mechanical; (b) biological; (c) functional (color, polishability, roughness, etc.); (d) technological (processing methods, machinability, warpage); (e) cost, etc. Nevertheless, medical treatment tactics for the use of (temporal) dental prosthetics are a multifactorial problem. Therefore, the mission of DMS, including prosthesis manufacturing methods, becomes more complex. In practice, it is greatly affected by the mostly subjective relationships in the group "dentist-dental technician-patient" [3].
Despite the necessity of using temporary dentures, the attitude of maxillo-facial surgeons, dentists, and patients to such structures is rather dismissive. As a result, breakdowns and complications are frequent due to medical and technical errors. In practical dentistry, there is a need for durable polymer prostheses in the treatment of complex dental pathologies that require accurate and long-term examination. In particular, this is relevant for diagnosing gnathic problems, especially in cases of muscular-articular dysfunction. Therefore, such therapeutic and prophylactic orthopedic constructions play an important role in first-stage rehabilitation measures. These include periods of (i) temporary filling of a defect in the dentition, (ii) programming a new occlusion, (iii) osseointegration of dental implants, etc. [4][5][6]. Hence, the practice of using temporary dentures remains very important.
The State of the Art in the Additive Manufacturing of RCD
RCDs for edentulous patients are typically made of PMMA [7]. However, a high concentration of free monomer (methyl methacrylate) and the possible development of allergic stomatitis is a significant drawback of this material [8].
Up-to-date CAD/CAE/CAM systems are frequently implemented into dental practice. They provide (i) great shape and dimensional accuracy, (ii) reproducibility, (iii) minimization of medical and technical routine work, (iv) cost reduction, and therefore (v) the availability of highly efficient dentures and facial epithesis [9,10]. However, additive manufacturing (AM) demands novel classes of polymers. In addition, the CAM process (as a subtractive method) possesses the following drawbacks: (i) material waste due to grinding and milling, (ii) wear of cutting tools and expensive equipment, (iii) restrictions on the sizes of the blanks (in the form of blocks) which does not allow the fabrication, for example, of volumetric jaw prostheses-obturators [11].
In this way, 3D printing is a current trend in dental practice. Studies of the physical-mechanical characteristics and aesthetics of such volumetric structures are highly controversial when polymeric prostheses or medical devices must be designed for long-term use (for example, in cases of parafunctional phenomena, or when an occlusion is to be reprogrammed) [12]. The quality of AM structures depends on the 3D printing parameters, building accuracy, and the shape and dimensions of a virtual model, among many other factors [13]. Currently, a wide variety of 3D printers are available, based on various physical principles of layer-by-layer deposition. To this end, prosthesis accuracy is a tangible drawback [14,15]. However, the prospects of AM in dentistry are beyond dispute [16].
Laser stereolithography apparatus (SLA) employs liquid photopolymers that are cured with a laser beam or an ultraviolet (UV) source of a certain wavelength. Product quality is affected by the dimensions of a prototype, the angle of the 3D-printed object related to the platform, the locations of supports, etc. Besides economic efficiency, such procedures are convenient for planning surgical operations with parts of a complex shape and structure [17].
The durability of (temporary) RCD depends on their design features, the physical-chemical nature of the structural materials, and the production routes [18]. Karaokutan et al. studied the influence of manufacturing techniques for provisional PMMA-based crowns on their strength characteristics [19]. The authors reported that computer-controlled milling improves the strength of temporary RCD compared to those fabricated by a direct manufacturing method. Alt et al. presented a comparative study of the strength characteristics of temporary polymer bridges made by conventional and digital technologies and concluded that the manufacturing methods substantially affect their values [20].
Dikova showed that high dimensional accuracy and surface smoothness of fixed dentures can be achieved when the vertical axis of teeth coincides with the Z axis of a platform [21]. At the same time, the number of supports should be increased (at least four per tooth) to reduce warpage in 3D printing and post-build polymerization. Thus, to ensure a high-quality product in designing and planning the process, it is important to consider the following: (i) printer characteristics, (ii) model placement, (iii) number of supports, and (iv) dimensional variation during and after polymerization.
Li suggested that the high-quality manufacture of temporary polymer prostheses be provided by the SLA method based on temperature-controlled layer-by-layer deposition in 3D printing (TCMIP-SL) [22]. The TCMIP-SL process contributes to the deposition of high-viscosity polymers with excellent accuracy at high speeds.
Based on the above, it can be stated that searching for dental AM materials with improved quality has moved into the phase of developing optimal dental technologies that use industrial polymers and help to minimize fabrication process disruptions that deteriorate a product's characteristics.
This paper addresses the development of a formalized approach for DMS and the additive manufacturing of RCD. This issue was solved using a decision-making methodology. Rational ranking was illustrated using examples of three types of commercially available PMMA grades processed by the identical DLP method. The paper is structured as follows. Section 2 describes the 3D printing method and the techniques for evaluating the key properties of the AM blanks. Section 3 contains the measurement results for various characteristics of the dental materials; the calculation of the material indices is also provided. Section 4 proposes the approach to multicriteria optimization in DMS, along with examples of ranking. Section 5 discusses the obtained data. Based on examples of certain industrially produced brands, the authors propose an approach to (or a tool for) brand ranking; the variability of the results is emphasized. Recommendations to use one or another brand of dental materials remain for individual consideration.
Three-dimensional Printing (Materials and Equipment)
The test samples were fabricated in two stages: modeling and 3D printing. Virtual master models were created using the ExoCad Gateway 3.0 software (Align Technology, San Jose, CA, USA). The final files of the completed sample models (in the "*.STL" format) were imported into the (slicer) software package for 3D printing preparation. Then, (i) the models were positioned relative to the plane of the 3D printer platform, (ii) the supports were placed, (iii) the models were divided into layers, and (iv) the 3D printing parameters were adjusted in line with the recommendations of the feedstock manufacturers (Table 1).
Three-Point Bending Tests
The samples took the form of rectangular plates with dimensions of 25 × 2 × 2 mm following the Russian State Standard GOST 31574-2012. An "Instron 5982" electromechanical testing machine (Illinois Tool Works Inc., Glenview, IL, USA) was used with a crosshead speed of 0.75 mm/min. The force gauge had an upper measurement limit of ±5000 N (series 2580-108). The span was 20 mm. Before testing, the samples were conditioned in distilled water at a temperature of 37 ± 1 °C for 24 h. The tests were carried out until sample failure.
Impact Strength Tests
The Charpy impact strength aₙ of specimens without a notch was measured with a "KM-5" pendulum impact tester ("ZIP" LLC, Ivanovo, Russia). Their sizes were 80 × 10 × 4 mm according to the Russian State Standard GOST 4647-2015. There were four specimens of each material. After the tests, the average aₙ values were calculated.
Biological Tests
For the biological tests, the samples were in the form of disks 5 mm in diameter and 1 mm thick. The attachment points for supports were finished with polishers of various abrasiveness in the following sequence: 9400.204.030, 9401.204.030, 9402.204.030 (Komet, Gebr. Brasseler GmbH & Co. KG, Germany). The time lag from sample fabrication to the biological tests did not exceed 72 h. Immediately before the start of the in vitro experiment, the samples were cleaned in an ultrasonic bath for 15 min, after which they were treated with 70% ethanol.
To carry out the process of primary adhesion of microorganisms, the samples were placed in test tubes with a suspension of the test strains of the corresponding species at a standard concentration. We used the optical turbidity standard of 0.5 McFarland units, which corresponded to 10⁹ colony-forming units (CFU)/mL for bacterial cultures and 10⁸ CFU/mL for yeast ones. After quantitative inoculation, bacteria were cultivated under anaerobic conditions at a temperature of 37 °C for 7 days, and fungi at room temperature (25 °C) for 2 days. Adhesion indices were determined as the ratio of the decimal logarithm of the number of CFU obtained after sonication of the test samples to the decimal logarithm of the CFU of the initial microbial suspension. The authors described the experimental technique in detail in their previous paper [7].
Tribological Tests
In the point tribological contact, according to the "ball-on-disk" scheme, dry sliding friction tests were carried out at a load (P) of 5 N and a sliding speed (V) of 0.3 m/s. A "CH 2000" tribometer (CSEM, Neuchâtel, Switzerland) was used following ASTM G99. The tribological tests were conducted using a ceramic counterpart (a ball made of ZrO₂) with a diameter of 6 mm and an Ra surface roughness of 0.02 µm. The latter was assessed with the "New View 6200" optical interferential profilometer (Zygo Corporation, Middlefield, CT, USA). The testing distance was 1 km and the tribological track radius was 10 mm.
In the linear tribological contact, according to the "block-on-ring" scheme, dry sliding friction tests were performed using a "2070 SMT-1" friction testing machine (Tochpribor Production Association, Ivanovo, Russia). The load (P) was 60 N, while the sliding speed (V) was 0.3 m/s. The ceramic counterpart was an Al₂O₃ ring with a diameter of 35 mm and a width of 11 mm; its Ra surface roughness was 0.20 µm. The counterpart temperatures were assessed with the "CEM DT-820" non-contact infrared (IR) thermometer (Shenzhen Everbest Machinery Industry Co., Ltd., Shenzhen, China). Wear rate (WR) levels were determined by measuring the width and depth of the wear tracks by stylus profilometry (KLA-Tencor, Milpitas, CA, USA), followed by multiplication by their length to obtain the worn volume V. They were then calculated taking into account the load and sliding distance values as $WR = V/(P \cdot L)$, where P is the applied load and L is the sliding distance.
In the flat tribological contact, abrasion wear tests were conducted. The "MI-2" abrasion testing machine (Metroteks LLC, Moscow, Russia) was used to determine the weight loss values under abrasion by fixed particles, according to the "polymer pin-on-abrasive disk" scheme regulated by the Russian State Standard GOST 426-77 (the test scheme is shown in Figure 1). The dimensions of the samples were 8 × 10 × 8 mm. The average grain size of the abrasive paper (P1000) was ~17 µm. The angular sliding velocity was 40 rpm, and the load was 10 N. Weight loss was determined every 5 min during the total test duration of 20 min. The samples were weighed using the "Sartogosm LV 120-A" balance (Sartogosm LLC, Saint Petersburg, Russia) with an accuracy of 0.1 mg.
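To make the wear-rate calculation concrete, a minimal Python sketch is given below. It is illustrative only: the worn volume is approximated as width × depth × length, as the text describes, and all numeric inputs are hypothetical values rather than the study's measurements.

```python
# Sketch of the wear-rate estimate WR = V / (P * L): worn volume
# per unit load per unit sliding distance, in mm^3/(N*m).

def wear_rate(track_width_mm: float, track_depth_mm: float,
              track_length_mm: float, load_n: float,
              distance_m: float) -> float:
    """Return the wear rate in mm^3/(N*m).

    The worn volume is approximated as width * depth * length,
    following the profilometry-based procedure described above.
    """
    volume_mm3 = track_width_mm * track_depth_mm * track_length_mm
    return volume_mm3 / (load_n * distance_m)

# Hypothetical track measured by stylus profilometry after a 1 km test at 5 N;
# the track length is the circumference of a 10 mm-radius circular track.
wr = wear_rate(track_width_mm=0.8, track_depth_mm=0.01,
               track_length_mm=2 * 3.14159 * 10,
               load_n=5.0, distance_m=1000.0)
print(f"WR = {wr:.2e} mm^3/(N*m)")
```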
Polishability (via Roughness)
The protocol for grinding and polishing in the sample preparation procedure is presented below.
1. Surface treatment with a carbide cutter for polymers until the required configuration or shape is achieved.
2. Surface treatment with a carbide cutter for polymers to remove surface irregularities.
3. Sanding with 180-220 grit sandpaper for extra fine finishing.
4. Finishing with a felt and a moistened polishing powder.
5. Brushing with a grinder using a coarse bristle and a moistened polishing powder for a smooth surface.
6. Processing with a grinder using a thread brush and a fine-grained polishing paste to a mirror finish.
The Ra surface roughness was assessed with the "New View 6200" optical interferential profilometer (Zygo Corporation, Middlefield, CT, USA).
Mechanical Properties
The mechanical properties of the 3D-printed PMMA samples, registered in the three-point bending tests, are presented in Table 2. For Dental Sand (DS), the flexural strength and strain values exceed the corresponding characteristics of Free Point (FP) by ~10% and by a factor of two, respectively, and those of Nolatech (NT) by factors of 2.5 and 3.6. For all the studied PMMA grades, the flexural modulus values were at the same level of about 2.7 ± 0.1 GPa. Since PMMA RCD could experience impacts in use (for example, being accidentally dropped on ceramic tiles), their impact strength could be an important performance characteristic. The conducted Charpy impact tests showed that the aₙ values were 0 J/cm² at the minimum applied impact energies regardless of the PMMA grade. Thus, it was impossible to differentiate the materials by this parameter. Note that an increase in impact strength could be achieved by loading PMMA with fibers or nanoparticles in various concentrations, but this would change the manufacturability of the materials, including the possibility of their AM processing [24][25][26].
Biological Properties
The adhesion indices (AIs) of the normal, periodontopathogenic, and fungal microbiota on the studied materials are presented in Table 3. In the normal microbiota case, no differences were found between the FP (0.55 ± 0.06) and the NT (0.56 ± 0.06). For the DS, the value of 0.43 ± 0.06 was noticeably lower than those for the other two materials. For the periodontopathogenic microbiota, the AI value of 0.42 ± 0.05 for the NT was noticeably higher than those for the DS and the FP, between which no significant differences were found (AI = 0.34 ± 0.05). In the fungal microbiota case, some variation was observed across all the materials: the maximum level was typical for the NT (AI = 0.49 ± 0.05), and the minimum was detected for the DS (AI = 0.34 ± 0.05).
The point tribological contact involved a local impact of the ceramic ball on the polymer sample (in the form of a disk) under dry sliding friction conditions. Table 4 presents the values of the tribological properties of the materials under study. The FP and DS had the lowest coefficient of friction (CoF) values (0.276 and 0.271, respectively), while it was higher by 10% (0.303) for the NT (Figure S1). The wear rate (WR) levels were also evaluated; for the FP, the WR was half that for the NT and DS. Roughness on the wear track surfaces was at approximately the same level of 0.19 µm regardless of the PMMA grade.
In the linear tribological contact, the ring-shaped ceramic counterpart slid relative to the stationary polymer samples along the "non-renewable" surface of the wear tracks. Therefore, the specific pressure was noticeably lower compared to that in the point tribological contact. As a result, the average CoF values were lower by ~3 times for all the studied materials (0.131, 0.096, and 0.122, respectively, according to Table 5). The NT had the lowest WR level of 0.078 × 10⁻⁶ mm³/N·m, which was 2.3 times lower than that for the DS and 1.5 times lower than that for the FP (Figure S2). In contrast to the point tribological contact, the WR values were an order of magnitude lower (10⁻⁶ versus 10⁻⁵).
The Flat Tribological Contact, Abrasive Wear
Since PMMA prostheses could be worn out by hard particles, abrasive wear tests were conducted according to the "polymer pin-on-abrasive disk" scheme. It was shown that the least wear (weight loss) was observed for the FP (0.12 mg) after 20 min of testing (with abrasive particles fixed on the non-renewable surface of the abrasive counterpart), which was 1.6 times less compared to that for the DS (0.19 mg) and 2.0 times less compared to the NT (0.25 mg) ( Figure S3). Table 6 summarizes the data on the WR values in the point, linear, and flat tribological contacts used to calculate the material indices.
Technological Properties
A significant number of parameters could be qualified as "technological properties". Since the study deals with developing the DMS concept, to simplify the process the authors limited themselves to only three (Table 7). The first parameter was determined by the average duration of 3D printing and post-build polymerization processing. Its minimum value was typical for the FP (t = 63 min), while the maximum was typical for the DS (t = 110 min).
The second parameter was polishability, which was related to the ability of the 3D-printed PMMA products to achieve the required degree of gloss, determined by the surface roughness. In general, all the studied materials could be considered similar in terms of these values (Ra ≈ 0.048-0.051 µm).
The third parameter was shape distortion (warpage) [27]. This was assessed qualitatively, being associated with the ability of a 3D-printed product to retain its shape after cooling. From this point of view, the NT was the only material characterized by warpage after 3D printing and post-build polymerization processing. To use these data as a quantitative criterion, this parameter was assigned at a level of 0 in the absence of the shape distortion, and otherwise it was 1.
Economic Indicator (Cost)
The financial aspects of manufacturing PMMA prostheses via 3D printing could also be assessed by numerous criteria, including costs of logistics, purchasing licenses, deployment of a particular type of 3D printer and its maintenance, and many others. Nevertheless, the authors implemented the only criterion, namely the feedstock cost, for simplification (since the sample fabrication was carried out with the same 3D printer). Quantitative data are given in Table 8 for all the materials under study.
Ranking Materials by Indices
The following parameters were introduced in the study according to the concept of the material indices proposed by Ashby [2] (as a ratio of the data presented in Tables 2-7 to the feedstock costs in Table 8). The charts of the material indices are shown in Figure 2. Their values were obtained by dividing the factors by the feedstock costs and multiplying by 100; they are:
• M1 is the ratio of the mechanical properties to the feedstock cost (namely flexural modulus, flexural strength, and flexural strain);
• M2 is the ratio of the biological properties to the feedstock cost (all three types of the studied microbiota were considered);
• M3 is the ratio of the tribological properties to the feedstock cost (the wear resistance for all three schemes of the tribological tests);
• M4 is the ratio of the technological properties to the feedstock cost (the average duration of 3D printing and post-build polymerization processing, roughness after standard polishing, and warpage after 3D printing).
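To make the index construction concrete, here is a minimal Python sketch of the cost normalization behind M1-M4. The property values and feedstock costs are placeholders, not the data from Tables 2-8.

```python
# Sketch of a cost-normalized material index: M = property / cost * 100.
# All values below are illustrative placeholders.

costs = {"FP": 120.0, "DS": 150.0, "NT": 90.0}        # feedstock cost, arbitrary units
flexural_modulus = {"FP": 2.7, "DS": 2.8, "NT": 2.6}  # GPa (hypothetical)

def material_index(prop: dict, cost: dict) -> dict:
    """Return M = property / feedstock cost * 100 per material grade."""
    return {grade: prop[grade] / cost[grade] * 100 for grade in prop}

m1_modulus = material_index(flexural_modulus, costs)
for grade, m in sorted(m1_modulus.items(), key=lambda kv: -kv[1]):
    print(f"{grade}: M1(flexural modulus) = {m:.2f}")
```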
To this end, the use of the concept of material indices (Tables S1-S4) offered a clear tool for quantitative comparison of the dental materials in the case under study. Moreover, it was possible to choose (from the point of view of a user or an expert) a more or less significant one. However, the significance factor was very subjective, so the analysis had to be carried out either by considering the data in a multilevel space or using multicriteria optimization approaches [28].
In the first case, an efficient method could be implemented to reduce the dimension of the analyzed data space, e.g., down to two [29][30][31][32]. The solution using the second approach is introduced in the next section.
Data Interpretation-The Combined AHP-Extended VIKOR Methods
In this section, some methods for DMS were compared, taking into account their production routes, which provided a trade-off between requirements for a set of mechanical, tribological, technological, biological, and economic criteria. The authors used informal subjective assessments of experts in the field of dental prosthetics, 3D printing, and the manufacture of RCD by subtractive and additive methods (primarily at the A.I. Yevdokimov Moscow State University of Medicine and Dentistry, Russia).
The Problem Statement and Methods
Within the decision-making theory framework, the studied dental materials were qualified as decision alternatives, designated Aᵢ. The factors characterizing each alternative were quantitative assessments and qualitative indicators. Based on the factors, if criteria of (i) quality, (ii) usefulness, (iii) reliability, etc. were put forward, then the alternatives could be compared. The problem of choosing an alternative arose when there was a contradiction between the results of comparison or in the absence of an alternative with the best indicators for all factors (an ideal combination of characteristics) [33]. In this case, the problem of multicriteria optimization arose, namely the choice of a rational alternative from the available finite set, i.e., the alternative closest to an "ideal" option.
To date, a large number of methods for solving multicriteria optimization problems are known [34,35], i.e., Multicriteria Decision-Making (MCDM) methods. They include the
Analytic Hierarchy Process (AHP), Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), VIKOR, ÉLimination Et Choix Traduisant la REalité (ELECTRE), Preference Ranking for Organization Method for Enrichment Evaluation (PROMETHEE), etc.
The key difference between these methods lies in the algorithms for bringing different-scale, often qualitative, data into a single normalized space and in the subsequent choice of a metric inside it. Examples of MCDM can be found in tribology [36] and medicine [37][38][39], as well as in other areas [40,41]. Recently, MCDM based on interval estimates has been developed; for example, extended versions of both the TOPSIS and VIKOR methods were described [42,43], while their advantages and drawbacks were reported in [44][45][46]. In this paper, the authors consider the possibility of implementing the AHP and VIKOR methods for solving the problem of DMS (PMMA-based) for manufacturing RCD (including temporary ones).
Initial Data Analysis
All the data were divided into groups according to their physical meanings ( Table 9). The mechanical, tribological, technological, biological, and economic groups included the experimental data in the form of interval quantitative estimations with different scales. The remaining groups were described by point quantitative values (in contrast to the interval ones). The exception was the "Warpage after 3D printing" technological factor. It was qualitative (binary) in nature and could be coded as "0" ("no warpage"), and as "1" ("might be distorted") in this case. Since the "Roughness after polishing" technological factor turned out to be identical for all the materials, it was not used in analysis and decision-making. Table 9. The alternatives, their factors, and the criteria for the initial data analysis.
The material assessment criteria were selected separately for each factor and coded according to two principles: (+1) was the "utility" principle ("the more, the better") and (-1) was the "cost" one ("the less, the better").
Determination of Criteria Weights by the AHP Method
The AHP method was implemented to determine the weights (significance) of the criteria [47]. It is a method for supporting selection from a small number of alternatives based on pairwise comparisons. In this case, the matrix of pairwise significance of the criteria was formed by an expert, and the weights of the criteria were calculated by finding the eigenvalues of this matrix. Due to the large number of criteria and their different nature, the analysis of their pairwise significance was conducted first within the groups and second between them. The following scale was used to assess the pairwise significance: 1, the criteria are equally important; 3, the first criterion is slightly more important than the second one; 5, the first criterion is much more important than the second one; 7, the first criterion is undeniably more important than the second one, confirmed not only by experts but also in practice; 9, the first criterion is of absolutely greater importance than the second one. The tables of pairwise comparisons within the groups were filled in by experts from the respective fields. Therefore, although such an assessment is subjective, the spread of opinions within the groups was low and not of interest to the research. The results of the pairwise comparisons and the calculation of the weights are presented in Tables 10-13.
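For illustration, a minimal sketch of the eigenvector weight calculation is given below. The comparison matrix is a hypothetical example, not the experts' actual judgements from Tables 10-13, and the random index value used in the consistency check is the conventional one for a 3 × 3 matrix.

```python
import numpy as np

# Sketch of AHP weight calculation from a pairwise comparison matrix.
# Entry A[i, j] uses the 1-9 scale described above; A[j, i] = 1 / A[i, j].
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Weights = normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (CR < 0.1 is conventionally acceptable); RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```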
Ranking of the Alternatives by the Extended VIKOR Method
To rank the alternatives, the authors used the extended VIKOR method for interval estimation [42]. The VIKOR method is based on the $L_p$ metric for normalized criterion values [42,46]:

$L_{p,i} = \left\{ \sum_{j=1}^{n} \left[ \frac{w_j \left( f_j^* - f_{ij} \right)}{f_j^* - f_j^-} \right]^p \right\}^{1/p}, \quad 1 \le p \le \infty, \quad (2)$

where $f_{ij}$ is the value of the j-th criterion for the i-th alternative; $f_j^*$ is the best value of the j-th criterion among all the alternatives; $f_j^-$ is the worst value of the j-th criterion among all the alternatives; and $w_j$ is the weight of the j-th criterion. In the calculations, two special cases of this metric (2) were applied:

$S_i = L_{1,i} = \sum_{j=1}^{n} \frac{w_j \left( f_j^* - f_{ij} \right)}{f_j^* - f_j^-} \quad (3)$

was the weighted Manhattan distance to the ideal alternative consisting of the "best" factor values, and

$R_i = L_{\infty,i} = \max_j \frac{w_j \left( f_j^* - f_{ij} \right)}{f_j^* - f_j^-} \quad (4)$

was the weighted Chebyshev distance. Additionally, a weighted and normalized value was introduced as an intermediate one between the above metrics:

$Q_i = v \, \frac{S_i - S^*}{S^- - S^*} + (1 - v) \, \frac{R_i - R^*}{R^- - R^*}, \quad (5)$

where $S^* = \min_i S_i$, $S^- = \max_i S_i$, $R^* = \min_i R_i$, $R^- = \max_i R_i$, and v is the weight of the "majority of criteria" strategy. The S, R, and Q values (3)-(5) can be referred to as the pessimistic, optimistic, and rational assessments of an alternative's position in the set, respectively. Their values lie in the range of 0 to 1; for S, R, and Q, equality to zero is the ideal combination, while equality to 1 is the worst option.
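A minimal sketch of the crisp (non-interval) S, R, Q computation from Equations (2)-(5) follows; the decision matrix, weights, and criterion senses are placeholders rather than the study's data.

```python
import numpy as np

# Sketch of the crisp VIKOR metrics S, R, Q (Equations (3)-(5)).
F = np.array([          # rows: alternatives (FP, DS, NT); cols: criteria
    [2.7, 0.55, 63.0],
    [2.8, 0.43, 110.0],
    [2.6, 0.56, 75.0],
])
w = np.array([0.5, 0.3, 0.2])        # criteria weights (e.g., from AHP)
sense = np.array([+1, -1, -1])       # +1 "utility", -1 "cost" criteria

# Best f* and worst f- values per criterion, respecting the criterion sense.
f_best = np.where(sense > 0, F.max(axis=0), F.min(axis=0))
f_worst = np.where(sense > 0, F.min(axis=0), F.max(axis=0))

D = w * (f_best - F) / (f_best - f_worst)   # normalized weighted distances
S = D.sum(axis=1)                           # weighted Manhattan distance
R = D.max(axis=1)                           # weighted Chebyshev distance

v = 0.5                                     # "majority of criteria" strategy weight
Q = v * (S - S.min()) / (S.max() - S.min()) \
    + (1 - v) * (R - R.min()) / (R.max() - R.min())
print("S:", S.round(3), "R:", R.round(3), "Q:", Q.round(3))
```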
Ranking of the alternatives was carried out by ordering the Qᵢ values and comparing their differences with the 1/(m − 1) threshold, where m is the number of alternatives. For the interval values of the factors, the distances were assessed from their interval boundaries [42]. The calculation results are presented in Table S5. Figures 3 and 4 show the values of Q₁ and Q₂ at v = 0.5. The analysis of the lower boundary of the rational option Q showed that the NT was the worst alternative, since it had the best factors only in the "economic" group out of the five. Both the FP and DS possessed the best factors in two groups each (Figure 3). However, there was no obvious advantage of the DS over the NT considering the upper boundary of Q (Figure 4).
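The ranking step just described can be sketched as below: order the alternatives by Q (lower is better) and test whether the leader's advantage over the runner-up exceeds the 1/(m − 1) threshold. The Q values are illustrative.

```python
# Sketch of VIKOR ranking with the "acceptable advantage" check.

def rank_by_q(q: dict) -> tuple:
    """Return alternatives ordered by Q (lower is better) and a flag
    indicating whether the leader's advantage over the runner-up
    meets the 1/(m - 1) threshold."""
    ordered = sorted(q.items(), key=lambda kv: kv[1])
    m = len(ordered)
    threshold = 1.0 / (m - 1)
    advantage = ordered[1][1] - ordered[0][1] >= threshold
    return ordered, advantage

ranking, ok = rank_by_q({"FP": 0.12, "DS": 0.48, "NT": 0.95})
print("order:", [name for name, _ in ranking], "acceptable advantage:", ok)
```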
Ranking Analysis for All Criteria
The stage of paired comparison of the groups was the most subjective phase of the analysis. As a result, either a coordinated decision of several experts from different subject areas or a decision of a single decision maker can be reached. In the latter case, the preference for one group of characteristics of the final product could be determined, for example, by the price-to-quality ratio. Since the first four groups of factors and their corresponding criteria characterized the product quality, the quantitative expression of the price-to-quality indicator in the table of pairwise comparison of the groups was formulated as the following preference options (Table 14):
• Preference #1. The group equivalence assumption.
• Preference #2. The assumption of a small advantage of the "economic" group over all the others.
• Preference #3. The "economic" group was considered less significant relative to all the others.
Table 15 summarizes the results of the calculation of the weights by the AHP pairwise comparison method for all the studied cases. As expected:
• the preference variability for the "economic" group shifted the weight of the economic factor from the first rank (of importance) to the last one;
• the criteria of those factors (excluding the "economic" ones) recognized as the most significant within their groups had the highest weights. In this example, they were (i) the "periodontopathogenic" parameter from the "biological" group, (ii) the "warpage after 3D printing" from the "technological" group, and (iii) the "flexural modulus" from the "mechanical" group.
For all three preferences, the S, R, and Q values were calculated using both the VIKOR and the extended VIKOR methods [42,46]. The obtained results and the ranking data are presented in Table 16. According to the preferences:
• under the assumption of the equivalence of the groups, the extended VIKOR method did not reveal any obvious advantage among the alternatives, while the VIKOR method recognized an equal advantage of the FP and NT over the DS;
• under the assumption of the importance of the "economic" factors, the FP was recognized as the rational alternative according to the VIKOR method, but it was the NT according to the extended VIKOR method;
• under the assumption of the significance of all groups over the "economic" factors, both methods recognized the FP and DS as rational alternatives, with the NT the worst one.
Comparing the ranks for all the preferences, it should be noted that the subjective phase of determining the significance of the criteria made a significant contribution, but the variability of the factors was no less important. As follows from Table 16, the large spread of the measured interval factors (Table 9) caused a great dispersion of the S, R, and Q interval estimations and, accordingly, predetermined a lower "resolving capacity" of the extended VIKOR method (Table 16) for a small number of alternatives. By MCDM resolving capacity, the authors mean the ability of the method to compare the alternatives and differentiate between them [48].
Discussion
The photopolymerization process is well studied and widely used in industry [49][50][51][52][53][54][55][56][57]. SLA is based on the photopolymerization phenomenon as well. In particular, when the photoinitiator absorbs UV, the molecule splits into two radicals. The latter combines with monomers to form new radicals that group with other monomers. This reaction forms polymer chains to transform liquid photopolymerized resin into a solid state [58].
In dentistry, one of the challenges in the 3D printing of acrylate resins is residual monomers. After material curing, dental acrylates release various amounts of potentially toxic substances into saliva, where they dissolve and affect the tissues of the mouth and the human body as a whole. These substances include unpolymerized, unreacted components of the chemical system, as well as secondary polymerization products. At high concentrations they are very toxic, but the amount dissolved in saliva during denture use is negligible, depending on the possibility of their diffusion from the material. However, these substances may significantly affect a patient's well-being due to individual intolerance, since acrylates are cytotoxic substances.
It is generally accepted that the residual monomers (MMA, BuMA, EMA, and UDMA) and the crosslinkers (EGDMA, IBMA, etc.), which have not fully polymerized during the material curing procedure, are responsible for the toxic and allergenic effects of the acrylates. The amount of monomer released into the patient's oral cavity is proportional to its total residual quantity in the acrylate matrix. The residual monomer diffuses rather quickly from the polyacrylate surface layer (it is released into saliva during the first day of using a denture). However, a certain amount remains "locked" inside the polyacrylate for a long time, continuing to diffuse outwards slowly. The amount of released MMA becomes stable two weeks after denture installation.
According to ISO 1567:1999, the maximum allowable residual MMA contents are 2.2% and 4.5% for heat-cured and cold-cured dental acrylates, respectively. The residual monomer amount can be reduced by post-curing (additional thermal polymerization in boiling water or a microwave oven) and extraction (immersion and holding of dentures in a water bath, or sonication in water). Using microwave post-curing procedures, the residual monomer amount can be lowered by 25% (due to its polymerization and/or evaporation). According to the authors, the most promising method for decreasing it is preliminary polymerization under the action of ultraviolet or microwave radiation. In addition, new initiating systems (polymerizable monomers) should be developed.
The authors assume that the above aspect is very relevant from the standpoint of DMS and should be considered when developing such procedures. In the present research, this aspect was neglected for objective reasons. Nevertheless, it will be analyzed in a forthcoming paper by the authors, including the addition of appropriate quantitative indicators to the matrix for comparing the functional properties.
Dental materials used for the manufacture of RCD (for temporary use) must have a wide range of functional properties, which include bio-inertness; anti-allergenicity; a specific color palette (including stability of shades and surface textures); physical and mechanical characteristics; good polishability; no negative reaction to hygiene products; manufacturability (simplicity and ease of processing; short duration); economic viability, etc. [59]. Some of these parameters can be characterized quantitatively, and some only qualitatively. Moreover, achieving the required level of some properties may make others unattainable. Thus, the issue of DMS is very complicated and is solved empirically in most cases. This entails a great risk of errors, which transform into complications and aggravation of the patient's condition, as confirmed by the long-term practice of doctors reflected in the literature [60,61]. Presently, it is an established fact that the production route significantly affects the quality characteristics of fabricated dental products.
Even though most of the study was devoted to the analysis of the properties of the dental materials and their ranking based on the assessment of their characteristics, it should be noted that these properties were mainly determined by the structure formed during the 3D printing process. With this method and the applied AM conditions, the achievement of the key mechanical and tribological properties was determined by the PMMA molecular structure and the pattern of macromolecule arrangement. For this reason, the revealed difference in properties between the three studied dental materials was associated not so much with variations in the feedstock compositions as with the specifics of their polymerization during the 3D printing process (taking into account the high rates of product fabrication). In addition, the important influencing factors were:
• the compositions of processing additives (trade secrets of the manufacturers);
• the recommended time-dependent modes of 3D printing and post-build polymerization processing (which differed for the studied PMMA grades);
• the residual monomer contents resulting from 3D printing and post-build polymerization processing;
• the residual stresses, characterized by strains of the 3D-printed samples, etc.
The mathematical algorithm implementation could contribute to the consideration of structural characteristics in the DMS. However, this would complicate the approach (tool) applied in this research, which was based on the materials' ranking over the integral and experimentally determined characteristics (interval, quantitative or qualitative).
DMS was highly case-sensitive, depending on the preferences of an expert. Nevertheless, the proposed approach (with proper tuning) was rigorous and enabled the obtaining of quite weighted estimations. It could be effectively used for solving related problems, such as digital milling from blanks. The key aspects remained as follows: • correct selection of the factors (groups of factors); • ensuring the accuracy of their measurement and reducing errors (dispersions of the experimental data); • ensuring the most representative expert assessment; • if the risk of making a wrong decision remains informalized, the only way to minimize it is to form the right attitude of the decision maker toward expressing his/her preferences.
Note, in this research, the evaluation of the tribological performance was carried out according to the standards intended for testing structural materials (without taking into account the specifics of existing regulations for dental ones). This was not critical from the standpoint of developing an approach to DMS. However, the authors will be careful to follow the standard requirements for testing dental materials.
It should also be noted that presently the issue of DMS is solved in a very subjective way. It depends on a large number of factors: the dentist's experience, price, the patient's budget, time availability, and the particular values of a variety of functional properties. This paper proposed a concept that considers the factors formulated by experienced, long-term practicing dentists. The developed approach allows flexible corrections and can easily be adapted to solving the DMS issue for dental implants as well. The paper illustrated it through the ranking of three commercially available PMMAs. The significance of the study lies in applying mathematical tools to real problems of practicing dentists with a low degree of subjectivity in decision making.
By way of summary, the following can be concluded. The AHP method was employed for ranking the factors as more or less important. Furthermore, a compromise can be found over the set of alternatives with the VIKOR method. The proposed approach (the tool) holds great promise for enhancing the efficiency of prosthetic treatment by optimizing the DMS procedure, taking into account the prosthesis design and its production route.
Conclusions
The formalized approach to a rational ranking of dental materials aimed at RCD additive manufacturing was proposed within the framework of a multicriteria optimization algorithm. It was tested on three types of commercially available PMMAs processed by DLP. For this purpose, a combination of mechanical, tribological, technological, microbiological, and economic factors was assessed. The following results were obtained, and conclusions were drawn.
1. The calculation of material indices was carried out to compare the studied dental materials over a set of functional parameters related to feedstock cost. However, this did not solve the problem of simultaneously considering all the material indices, including their significance.
2. For the 3D printing of RCDs, the DMS problem could be solved as a multicomponent optimization. This study solved the problem by combining the AHP and extended VIKOR methods with interval estimation.
3. It was shown that the preferences formulated by experts exerted a significant impact on the decision-making results under the conflict of the adopted criteria. The proposed method of grouping the factors according to expert competencies allowed subjectivity to be reduced, at least at the stage of ranking within the groups. However, uncertainty arose for all criteria at the stage of alternative analysis.
4. The implementation of the extended VIKOR method, based on the analysis of interval quantitative estimations, enabled a full-fledged analysis of the alternatives. Its results were rather plausible; however, the method was characterized by a lower "resolving capacity", i.e., a lower ability to separate the alternatives.
As an outlook, the authors consider it necessary to note the following. The AHP method was employed to rank the factors by degree of importance, and a compromise over the set of alternatives can be found with the VIKOR method. The proposed approach is targeted at enhancing the efficiency of prosthetic treatment by optimizing the DMS procedure when the prosthesis design and its production route are taken into account. Further development of the proposed approach involves MCDM methods that take the experts' uncertainty in decision-making (estimation) into account; the methods based on fuzzy logic theory are among them [41,62,63]. | 2023-04-01T15:09:39.059Z | 2023-03-29T00:00:00.000 | {
"year": 2023,
"sha1": "d977223e47828ebfe9fc433fab0aec2262cd28dd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/7/6432/pdf?version=1680140844",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab90bb43a864f78ff147d36e560a9ada006ad943",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17831800 | pes2o/s2orc | v3-fos-license | Systemic EP4 Inhibition Increases Adhesion Formation in a Murine Model of Flexor Tendon Repair
Flexor tendon injuries are a common clinical problem, and repairs are frequently complicated by post-operative adhesions forming between the tendon and surrounding soft tissue. Prostaglandin E2 and the EP4 receptor have been implicated in this process following tendon injury; thus, we hypothesized that inhibiting EP4 after tendon injury would attenuate adhesion formation. A model of flexor tendon laceration and repair was utilized in C57BL/6J female mice to evaluate the effects of EP4 inhibition on adhesion formation and matrix deposition during flexor tendon repair. Systemic EP4 antagonist or vehicle control was given by intraperitoneal injection during the late proliferative phase of healing, and outcomes were analyzed for range of motion, biomechanics, histology, and genetic changes. Repairs treated with an EP4 antagonist demonstrated significant decreases in range of motion with increased resistance to gliding within the first three weeks after injury, suggesting greater adhesion formation. Histologic analysis of the repair site revealed a more robust granulation zone in the EP4 antagonist treated repairs, with early polarization for type III collagen by picrosirius red staining, findings consistent with functional outcomes. RT-PCR analysis demonstrated accelerated peaks in F4/80 and type III collagen (Col3a1) expression in the antagonist group, along with decreases in type I collagen (Col1a1). Mmp9 expression was significantly increased after discontinuing the antagonist, consistent with its role in mediating adhesion formation. Mmp2, which contributes to repair site remodeling, increases steadily between 10 and 28 days post-repair in the EP4 antagonist group, consistent with the increased matrix and granulation zones requiring remodeling in these repairs. These findings suggest that systemic EP4 antagonism leads to increased adhesion formation and matrix deposition during flexor tendon healing. Counter to our hypothesis that EP4 antagonism would improve the healing phenotype, these results highlight the complex role of EP4 signaling during tendon repair.
Introduction
Flexor tendons (FT) in the hand run on the palmar side of the digits and transmit the forces that allow for finger flexion. Primary repair of FT injuries in zone II remains a challenging surgical problem with a high rate of post-operative complications [1][2][3][4]. Fibrous adhesions between the tendon and surrounding tissue form to some extent in all cases of tendon repair, and up to 30-40% of cases are significant enough to result in loss of digit range of motion (ROM) and impaired hand function [5]. There are more than 30,000 tendon repair procedures a year in the US, with billions in associated healthcare costs [6]. Given this clinical challenge, there is significant interest in optimizing the repair process to improve functional outcomes following FT injury.
Great progress has been made through improving suture techniques and early rehabilitation protocols following FT surgery [7,8], and while functional outcomes have benefited from such protocols, there remains room for improvement. One area of interest for improving FT repair is the fibrous adhesions that form as a result of excessive inflammation around the injury site [1,9,10]. While some degree of inflammation is necessary for repair, excessive extracellular matrix (ECM) deposition can disrupt the near-frictionless environment of the FT gliding within its synovial sheath [11,12]. Thus, attenuating ECM deposition after injury is an apt target for improving outcomes after FT surgery.
Previous studies have targeted the inflammatory cascade in an effort to improve tendon healing [13][14][15][16]. Common among these studies has been the use of Cox-2 inhibitors, which have repeatedly shown concomitant losses in the strength of the repair, a concerning outcome for tissues that experience high loads such as the flexor tendon. While inflammation is required for repair, including recruitment of new cells that synthesize granulation tissue and collagen, excessive inflammation contributes to adhesion formation between the tendon and surrounding structures [1]. The challenge, therefore, becomes attenuating inflammation without weakening the biomechanics of the healing tendon.
Prostaglandin E2 (PGE2), an arachidonic acid metabolite, has been implicated as an inflammatory mediator in tendon injuries and tendinopathy [17][18][19][20][21]. PGE2 signals through one of four downstream receptors, EP1 through EP4, all of which belong to the superfamily of G-protein coupled receptors [22]. Work by Thampatty et al. using human patellar tendon fibroblasts identified EP4 as the specific receptor mediating degenerative changes in tendinopathy [23], suggesting a potential therapeutic role for selective EP4 receptor antagonists. While significant work has been done investigating the fibrotic effects of modified PGE2-EP4 signaling in the lung [24,25] and kidney [26,27], its effects on tendon repair remain to be understood.
In an attempt to avoid the negative effects on mechanical properties seen in previous studies using anti-inflammatory treatment [13][14][15][16], we sought to modulate inflammation through inhibition of EP4, a downstream mediator of Cox-2/PGE2 inflammation. By targeting a specific receptor downstream of Cox-2/PGE2, only a subset of prostaglandin-mediated inflammation is inhibited, while non-EP4 mediated pathways are preserved. Further, EP4 antagonists do not have the same systemic side effects commonly associated with Cox-2 inhibitors [28], which is important to consider in any translational research.
In the present study we tested the hypothesis that inhibiting PGE2-EP4 signaling following flexor tendon injury will attenuate the inflammatory response and decrease adhesion formation without compromising the strength of the repair. To test this hypothesis, we utilized a murine model of flexor tendon laceration and repair in the hind-paw, and delivered a systemic EP4 antagonist during the late inflammatory and early proliferative phase of tendon healing. We analyzed the changes in digit flexion and gliding, the biomechanical strength of the repairs, the histologic changes within the area of repair, as well as changes in genes associated with tendon catabolism and repair.
Animal Ethics
This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All animal procedures were approved by the University Committee on Animal Research (UCAR) at the University of Rochester (UCAR Number: 2014-004). All surgery was performed in the morning in a designated small animal surgery room; animals were sedated using ketamine (60 mg/kg) and xylazine (4 mg/kg), and post-operative pain was managed with a single subcutaneous injection of 0.05 mL extended-release buprenorphine (1.3 mg/mL). This protocol was based on previous studies from our group [29]. Up to five mice per cage were housed in a secure animal room with a 12 h light-dark cycle in cages with standard bedding. Animals were provided ad lib food and water, and any singly housed animals were provided small shacks for environmental enrichment. The animals' health status was monitored throughout the experiments by a health surveillance program according to guidelines from the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC International). The mice were free of all viral, bacterial, and parasitic pathogens. Experimental animals were not used for breeding purposes.
Murine Flexor Tendon Healing Model
Eight-to-ten week old female C57BL/6J mice (Jackson Laboratories, Bar Harbor, ME) underwent surgical transection and repair of the flexor digitorum longus (FDL) tendon as previously described (average weight 20 g, range 16-21 g) [29,30]. Briefly, the proximal FDL tendon was transected along the tibia at the myotendinous junction to protect the distal repair. The distal FDL tendon was exposed using a longitudinal incision along the plantar hind foot. The tendon was transected and then repaired using two horizontal 8-0 nylon sutures (Ethicon Inc., Summerville, NJ) in a modified Kessler pattern. The hind foot and tibial incisions were closed using a single 5-0 nylon suture (Ethicon Inc., Summerville, NJ). Post-operatively, mice were returned to their cage and allowed free active motion and weight bearing.
To suppress EP4 signaling, an intraperitoneal injection of 10 mg/kg EP4 antagonist (L161,982; Cayman Chemical Co, Ann Arbor, MI; CAS 147776-06-5) was administered on post-surgery days 5-8. Delayed EP4 antagonist treatment is based on previous studies demonstrating that delayed inhibition is preferable to immediate inhibition, since excessive inflammation and tissue remodeling are inhibited without disrupting the initial phases of healing [13]. Control groups were treated with the same weight-based doses of saline as a vehicle control. Mice were randomly assigned to treatment groups after surgery to avoid any surgeon-induced bias at the time of operation. Mice were sacrificed between post-operative days 3-28 for analysis of the outcomes described below.
cAMP enzyme immunoassay (EIA)
At seven days post-surgery, repaired tendons were harvested from the distal aspect of the tarsal tunnel up until the tendon bifurcated into the digits (n = 3 per treatment group). On the day of sacrifice, mice were given their respective treatments in the morning, and then sacrificed 8 hours later. Each group therefore received a total of three treatments. cAMP EIA was performed according to the manufacturer's protocol (Cayman Chemical Co, Ann Arbor, MI). Briefly, cAMP-acetylcholinesterase conjugate, mouse anti-cAMP monoclonal antibody, and either standard or sample was added to each well of pre-coated EIA plates. Standards and samples were both acetylated, and the samples were run in triplicate using 5- and 10-fold dilutions. Following 18 h incubation at 25°C, the plate was washed and Ellman's reagent was added to each well. Absorbance was determined at 405 nm and 420 nm by a Synergy Mx Monochromator-based Microplate Reader (BioTek Instruments, Winooski, VT). Concentrations are expressed as picomoles per milliliter (pmol/mL).
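The readout of a competitive EIA like this one is conventionally converted to concentration by fitting the standard curve and inverting the fit. The sketch below shows that generic step with a four-parameter logistic model; the standard concentrations and normalized responses are hypothetical placeholders, not the kit's actual values, and the kit's own analysis template may use a different curve model.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, slope, ec50, bottom):
    """Four-parameter logistic standard curve (response vs. concentration)."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** slope)

# Hypothetical acetylated cAMP standards (pmol/mL) and normalized responses
std_conc = np.array([0.078, 0.3125, 1.25, 5.0, 20.0])
std_resp = np.array([0.92, 0.78, 0.48, 0.20, 0.07])

popt, _ = curve_fit(four_pl, std_conc, std_resp, p0=[1.0, 1.0, 1.0, 0.0], maxfev=5000)

def to_concentration(resp, top, slope, ec50, bottom):
    """Invert the fitted curve to read a sample concentration from its response."""
    return ec50 * ((top - bottom) / (resp - bottom) - 1.0) ** (1.0 / slope)

# Multiply by the dilution factor (5- or 10-fold here) to obtain pmol/mL in the sample
print(to_concentration(0.35, *popt))
```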
Adhesion Testing and Gliding Coefficient
Adhesion testing was performed at post-repair days 10, 14, 21, and 28 (n = 10-12 per treatment per time-point). Immediately following sacrifice, the hind limb was disarticulated at the knee, and the FDL tendon was released from the surrounding tissue proximal to the tarsal tunnel. The proximal end of the FDL tendon was secured between two pieces of tape. The limb was fixed in a custom apparatus with the tibia rigidly gripped to prevent rotation. To standardize the neutral position, the toes were passively extended by the examiner and allowed to return to an unloaded position before a digital image was taken to determine the neutral position (zero load) of the metatarsophalangeal (MTP) joint. Incremental loads were applied to the proximal end of the FDL and digital images were taken to quantify the MTP flexion angle relative to the neutral position. The MTP flexion angles were measured by two independent observers using ImageJ software (http://rsb.info.nih.gov/ij/), and plotted versus the applied load. The gliding resistance was determined by fitting the flexion data to a single-phase exponential equation where the MTP flexion angle = β × (1 - exp(-m/α)), where m is the applied load (Prism GraphPad 6.0a; GraphPad Software, Inc., San Diego, CA). The curve fit was constrained to the maximum flexion angle (β) for normal tendons, previously determined to be 75° for the 19 g applied load [31]. Non-linear regression was used to determine the gliding resistance (α), which has been previously shown to correlate inversely with the range of MTP joint flexion [31]. Thus, the gliding resistance is a useful quantitative measure of the resistance to MTP flexion and correlates significantly with the work of joint flexion [32]. In addition, the MTP flexion range of motion (ROM) was determined as the difference in flexion angle between the applied loads of 0 g and 19 g.
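For readers who want to reproduce the fitting step outside Prism, the sketch below performs the same constrained single-phase exponential fit with SciPy. The load-angle pairs are hypothetical example data, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

BETA = 75.0  # maximum flexion angle (deg) at the 19 g load, constrained per [31]

def mtp_flexion(load_g, alpha):
    """MTP flexion angle = BETA * (1 - exp(-load/alpha)); alpha = gliding resistance."""
    return BETA * (1.0 - np.exp(-load_g / alpha))

# Hypothetical incremental loads (g) and measured flexion angles (deg)
loads = np.array([0.5, 1.0, 2.0, 4.0, 9.0, 14.0, 19.0])
angles = np.array([3.0, 6.0, 11.0, 19.0, 35.0, 46.0, 54.0])

(alpha,), _ = curve_fit(mtp_flexion, loads, angles, p0=[20.0])
rom = angles[-1]  # ROM = flexion at 19 g minus the neutral (0 g) angle, defined as 0
print(f"gliding resistance = {alpha:.1f}, MTP ROM = {rom:.1f} deg")
```

A larger fitted α means the flexion angle rises more slowly with applied load, which is why α correlates inversely with joint flexion and serves as the resistance measure.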
Biomechanical Testing
Following MTP flexion testing, the proximal extent of the FDL tendon was released from the tarsal tunnel, the calcaneus and tibia were removed, and the proximal end of the FDL was gripped in the Instron device (Instron 8841 DynaMight axial servohydraulic testing system, Instron Corporation, Norwood, MA), with the distal bones of the foot secured without disrupting the repair or the branching tendon insertion into the phalanges. The tendon was tested in tension in displacement control at a rate of 30 mm/minute until failure. Force-displacement data were automatically logged and plotted to determine the maximum load at failure (ultimate failure force) and tendon stiffness (slope of the linear portion of the load-deformation curve).
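A common way to extract these two outcomes from the logged trace is sketched below. The sliding-window steepest slope is one reasonable stand-in for "the linear portion of the load-deformation curve", which the study does not specify algorithmically, and the example trace is synthetic.

```python
import numpy as np

def tensile_outcomes(displacement_mm, force_n, window_frac=0.1):
    """Ultimate failure force and stiffness from a force-displacement trace.

    Stiffness is taken as the steepest local slope (N/mm) over a sliding
    window, a simple proxy for the linear region before failure.
    """
    x = np.asarray(displacement_mm, dtype=float)
    f = np.asarray(force_n, dtype=float)
    win = max(3, int(len(f) * window_frac))
    slopes = [np.polyfit(x[i:i + win], f[i:i + win], 1)[0]
              for i in range(len(f) - win)]
    return f.max(), max(slopes)

# Hypothetical trace at 30 mm/min: linear ramp, plateau, then failure near 2.5 mm
disp = np.linspace(0.0, 3.0, 61)
force = np.clip(2.2 * disp, 0.0, 2.6) * (disp < 2.5)
print(tensile_outcomes(disp, force))  # -> (max load in N, stiffness in N/mm)
```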
RNA Extraction and Real-Time RT-PCR
Repaired tendons were harvested from the distal aspect of the tarsal tunnel up until the tendon bifurcated into the digits (n = 3 per treatment per time-point). Tendons from each time-point (3, 7, 10, 14, 21 days post-repair) were pooled and homogenized in Trizol reagent (Invitrogen, Carlsbad, CA) using the Ultra Turrax T8 homogenizer (IKA Works, Wilmington, NC). Five hundred nanograms of RNA was reverse-transcribed to single-stranded cDNA using the iScript cDNA synthesis kit (BioRad, Hercules, CA). This cDNA served as a template for real-time PCR using PerfeCTa SYBR Green SuperMix (Quanta Biosciences, Gaithersburg, MD) and gene-specific primers (Table 1). Gene expression was standardized to the internal control β-actin and normalized to day 3 expression levels. Neither group had received any treatment in the first three days post-repair; as such, day 3 repairs are not specific to either treatment group.
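The two-step normalization described here (to β-actin, then to day 3) corresponds to the conventional 2^-ΔΔCt (Livak) calculation; the paper does not name the method explicitly, so the sketch below is the standard reading of it, with hypothetical Ct values.

```python
import numpy as np

def fold_change(ct_gene, ct_actin, ct_gene_day3, ct_actin_day3):
    """Relative expression by 2^-ddCt: beta-actin-normalized, day-3-referenced."""
    dct = np.asarray(ct_gene, float) - np.asarray(ct_actin, float)
    dct_day3 = np.mean(np.asarray(ct_gene_day3, float)
                       - np.asarray(ct_actin_day3, float))
    return 2.0 ** -(dct - dct_day3)

# Hypothetical triplicate Ct values for one gene at one time-point vs. day 3
print(fold_change([24.1, 24.3, 24.0], [17.2, 17.1, 17.3],
                  [25.5, 25.6, 25.4], [17.0, 17.2, 17.1]))
```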
Repaired tendons were harvested and analyzed in the same manner from a separate cohort of untreated eight-to-ten week old female C57BL/6J mice (Jackson Laboratories, Bar Harbor, ME) to characterize the temporal expression of EP4 in our FT repair model.
Histology
Whole hind limbs containing the repaired tendons were harvested as previously described on post-repair days 10, 14, 21 and 28 (n = 4 per treatment per time-point) [30]. Briefly, the samples were fixed in 10% neutral buffered formalin for 48 h with the tibia at 90° relative to the foot, then washed in PBS and decalcified in 14% EDTA (pH 7.2) for 14 days at room temperature. The decalcified tissues were put through a sucrose gradient and embedded in OCT compound (Tissue-Tek, Sakura Finetek U.S.A., Inc., Torrance, CA). Serial eight-micron sagittal sections through the FDL tendon plane were then cut and stained with alcian blue/hematoxylin and Orange G or picrosirius red (Polysciences Inc., Warrington, PA). Picrosirius red sections were illuminated with monochromatic polarized light, providing a visualization of collagen fiber organization; collagen I appears red, while collagen III appears yellow/green [33,34].
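The study assessed the picrosirius red polarization qualitatively. As a crude quantitative companion, a polarized-light image can be binned by hue into red (collagen I) and yellow/green (collagen III) area fractions, as sketched below; this analysis was not performed in the paper, and the file path, hue bounds, and brightness floor are all assumptions to be tuned per imaging setup.

```python
import numpy as np
import matplotlib.colors as mcolors
from imageio.v3 import imread

def collagen_fractions(path, min_brightness=0.15):
    """Approximate collagen I (red) and III (yellow/green) area fractions
    from a polarized-light picrosirius red image via HSV hue binning."""
    rgb = imread(path)[..., :3] / 255.0
    hsv = mcolors.rgb_to_hsv(rgb)
    hue, val = hsv[..., 0], hsv[..., 2]
    lit = val > min_brightness                 # drop the dark background
    red = lit & ((hue < 0.05) | (hue > 0.92))  # hue wraps around at red
    green = lit & (hue > 0.10) & (hue < 0.45)  # yellow-to-green band
    return red.mean(), green.mean()            # fractions of the full field

# Example with a hypothetical file name:
# col1_frac, col3_frac = collagen_fractions("repair_day10.png")
```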
Statistical Analysis
Results are shown as the mean ± standard error of the mean (SEM). Statistical significance was tested via multiple t-tests comparing antagonist and vehicle treated groups, correcting for multiple comparisons using the Holm-Sidak method with significance set at p < 0.05. Group sizes were based on post-hoc power analysis of previous studies for biomechanical, gliding testing and qPCR analysis [30].
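As an illustration of the stated procedure, the sketch below runs per-time-point t-tests and applies the Holm-Sidak step-down correction. The group data are randomly generated placeholders, not study measurements.

```python
import numpy as np
from scipy import stats

def holm_sidak(pvals, alpha=0.05):
    """Holm-Sidak step-down correction; returns which hypotheses are rejected."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    reject = np.zeros(len(p), dtype=bool)
    for rank, idx in enumerate(order):
        # Sidak-adjusted threshold for the (rank+1)-th smallest p-value
        if p[idx] <= 1.0 - (1.0 - alpha) ** (1.0 / (len(p) - rank)):
            reject[idx] = True
        else:
            break  # step-down: stop at the first non-significant comparison
    return reject

# Hypothetical antagonist vs. vehicle measurements at four time-points
rng = np.random.default_rng(0)
pvals = [stats.ttest_ind(rng.normal(70, 12, 10), rng.normal(30, 10, 10)).pvalue
         for _ in range(4)]
print(holm_sidak(pvals))
```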
EP4 Expression Peaked Early After Repair
An intrasynovial FDL tendon repair model was used in untreated mice to determine the expression profile of EP4 (Ptger4), which would inform the timing for EP4 antagonist treatment in the study groups (Fig 1A). While prior studies have reported increases in EP4 following tendon injury [20,21], it was important to characterize the temporal expression of EP4 in our flexor tendon injury model in order to appropriately target the peak timing of EP4 during the repair process. Expression peaked 7 days post-repair relative to day 3 (Day 3: 1.0 ± 0.3; Day 7: 3.3 ± 0.4, p<0.05); levels of EP4 expression had returned to day 3 levels by 10 days post-repair. This significant and transient peak in EP4 provided the rationale for treatment with the EP4 antagonist on days 5-8 post-repair.

EP4 Antagonism Decreased MTP ROM and Increased Gliding Resistance

MTP ROM was significantly lower in the EP4 antagonist group at 10 days post-repair (Fig 2A). This difference was maintained at 14 days, and ROM remained significantly lower in the antagonist group as far out as 21 days post-repair (Vehicle: 44.8° ± 4.1°; Antagonist: 22.9° ± 4.1°, p<0.05). There were no longer any significant differences in ROM between treatment groups at 28 days post-repair. At 10 days post-repair, EP4 antagonist treated repairs had significantly higher gliding resistance than vehicle treated controls (Vehicle: 27.5 ± 10.6; Antagonist: 70.6 ± 12.8, p<0.05), consistent with the MTP ROM at the same time-point (Fig 2B). This difference was also seen at 21 days post-repair (Vehicle: 44.8 ± 4.1; Antagonist: 22.9 ± 4.1, p<0.05). By day 28, there was no significant difference in gliding resistance between vehicle and antagonist treated repairs.
Strength and Stiffness were Unchanged in EP4 Antagonist Treated Repairs
The maximum tensile load at failure of repaired tendons was determined in the EP4 antagonist and vehicle treated groups to evaluate changes in biomechanical properties of the repair. Both groups exhibited an expected decrease in the maximum load at failure early after repair, with gradual increases in the maximum load between 10 and 28 days post-repair (Fig 3A). Maximum load at failure in the vehicle treated group increased from 10 days (1.1 N ± 0.2 N) to 28 days (2.6 N ± 0.1 N), while the EP4 antagonist group saw a similar increase over the same time (Day 10: 0.6 N ± 0.1 N; Day 28: 2.6 N ± 0.3 N). In the EP4 antagonist repairs, this represented a 3.4-fold increase in the maximum load at failure, suggesting that increasing strength is conferred over time. There were no significant differences between the two groups at any time-points. Similar to the strength measurements, there was an initial decrease in stiffness early after repair (Day 10; Vehicle: 2.2 N/mm ± 0.2 N/mm; Antagonist: 2.2 N/mm ± 0.3 N/mm), with gradual increases through 28 days post-repair (Fig 3B). The stiffness of the repairs in the EP4 antagonist and vehicle treated groups was not significantly different at any time-points. No changes in mechanical properties were observed in un-injured contralateral control tendons from vehicle or EP4 antagonist treated mice.
EP4 Antagonism Increased Early Granulation and Matrix Deposition around the Repair
Histology was used to visualize changes in morphology and cellularity of the repair site over time, and as a function of EP4 receptor antagonism. At day 10 post-repair, both groups exhibited a large zone of granulation, with proliferating cells flanking the tendon (outlined in yellow) both superior and inferior to the repair site (Fig 4; "T" indicates proximal and distal ends of the FDL tendon). At 14 days, this response was exaggerated in the EP4 antagonist group, whereas the vehicle treated repairs began to transition to a remodeling phase with some resolution of the granulation zone. By 21 days, there was no longer a clear distinction between the tendon and granulation tissue in the vehicle treated repairs, indicating that the previously disorganized matrix had been substantially remodeled. In contrast, the EP4 antagonist repairs maintained a small degree of granulation tissue between the tendon and surrounding soft tissues (black arrow). Minimal areas of granulation remained at day 28; however, both groups demonstrated greater organization of the repair site and more closely resembled the native tendon architecture.
Picrosirius red staining, which was used to assess changes in collagen organization, was consistent with histological observations. There was increased polarization for type III collagen at 10 days post-repair in the EP4 antagonist group (Fig 5, white arrows), a finding that did not occur until 14 days post-repair in the vehicle group. Both groups had minimal or no evidence of organized collagen across the repair site at 10 and 14 days post-repair, though by 28 days there was evidence in both groups that the repair site was being reorganized with increases in mature collagen I resembling the native tendon.
EP4 Antagonism Alters Macrophage and Collagen Gene Expression during Flexor Tendon Healing
The expression of genes associated with the inflammatory, proliferative and remodeling phases of healing in injured FDL tendons of EP4 antagonist and vehicle treated mice was analyzed by real-time RT-PCR and normalized to their expression level in day three repairs (Fig 6). Neither group had received treatment at this time-point; as such, day 3 has been arbitrarily labeled as vehicle at this time-point. A pan-macrophage marker, F4/80, was used to evaluate local changes in the macrophage response after tendon repair [36]. F4/80 expression was highest at day 3 post-repair, and was significantly elevated in the EP4 antagonist group at day 7 relative to vehicle treated repairs at the same time point (Vehicle: 0.86-fold decrease ± 0.06; Antagonist: 0.28-fold decrease ± 0.16, p = 0.03). The differences were reversed 10 days post-repair, at which point F4/80 expression was significantly elevated in the vehicle group relative to the antagonist group (Vehicle: 0.38-fold decrease ± 0.1; Antagonist: 0.78-fold decrease ± 0.05, p = 0.02) (Fig 6A).
Type III collagen (Col3a1) is associated with granulation tissue in the proliferative phase of tendon healing, and it is later remodeled to the more organized type I collagen that makes up the majority of the tendon structure during homeostasis. Both treatment groups had a transient and significant peak in Col3a1 expression in the early stages of healing (Fig 6B). Vehicle treated repairs had a 1.7-fold increase at 10 days post-repair (Day 3: 1.0 ± 0.4; Day 10: 2.7 ± 0.5). In contrast, EP4 antagonist treated tendons had an accelerated increase in Col3a1 expression, with a 1.8-fold increase observed 7 days post-repair (Day 3: 1.0 ± 0.4; Day 7: 2.8 ± 0.5).
PGE2 can have an inhibitory effect on the synthesis of type I collagen (Col1a1) [37,38]. This association, along with the important role of type I collagen in mature tendon, led us to investigate changes in expression patterns of Col1a1 after flexor tendon repair in the EP4 antagonist and vehicle groups (Fig 6C). At 10 days post-repair, the first time-point after discontinuing the antagonist treatment, there was a significant decrease in Col1a1 expression in the EP4 antagonist group (Vehicle: 1.2-fold increase ± 0.6; Antagonist: 1.2-fold decrease ± 0.1, p<0.05). From day 7 to day 10, this represented a 9.7-fold decrease in the antagonist group. Neither group had increases in expression at days 14 and 21 post-repair, and expression profiles were not significantly different between the groups at time-points other than 10 days.
Systemic EP4 Antagonist Changes the Expression of Mmp9 and Mmp2 during Repair
Mmp9 has been implicated in scar formation during flexor tendon repair, and it is directly associated with adhesion formation [29]. Consistent with the role of PGE2 in inducing expression of Mmp9 [39], there was decreased expression at day 7 in the EP4 antagonist group relative to vehicle, though the changes did not reach significance (Vehicle: 3.1-fold ± 1.1; Antagonist: 0.9-fold ± 0.5, p = 0.14) (Fig 6D). After discontinuing the antagonist treatment, there was a steady increase in Mmp9 expression at days 14 (4.8-fold ± 0.8) and 21 (8.2-fold ± 0.6).
Mmp2 is associated with the resolution of adhesions during the remodeling phase of tendon repair [30]. The vehicle treated repairs displayed a significant increase in expression at day 10 (6.8-fold ± 0.7) (Fig 6E). There was a gradual increase in Mmp2 expression in the EP4 antagonist group from 7 to 21 days post-repair, reaching a significant 19.4-fold increase (± 2.1) at 21 days.
Discussion
The present study demonstrates that systemic inhibition of EP4 increases matrix deposition around the repair site, resulting in greater fibrous adhesion formation during FT healing. This is supported by functional data showing impaired MTP ROM and increased gliding resistance, along with a more robust granulation response in the EP4 antagonist treated repairs. Changes in expression of F4/80, Col3a1, Col1a1, Mmp9, and Mmp2 provide insight into the molecular events behind the phenotypic changes. The EP4 antagonist group exhibited an accelerated peak of Col3a1 expression with decreases in Col1a1 at earlier time-points, along with elevated expression of F4/80 relative to vehicle. Mmp9 expression increased at 14 and 21 days post-repair, while Mmp2 continued to increase as time progressed, consistent with the greater extent of disorganized matrix in the antagonist group.
A major challenge in using biological approaches to improve tendon repair is achieving an optimal balance in inflammation. In the early stages after injury, the inflammatory response is essential to initiate repair and recruit cells to the site of injury. However, during the remodeling phase, excessive inflammation can have a negative effect on the healing environment and is associated with adhesions [9,10]. Cyclooxygenase-2 (COX-2) inhibitors are common anti-inflammatory medications used to treat tendon pathology, and their use decreases levels of PGE2 [40]. The effect of Parecoxib treatment, a COX-2 inhibitor, on tendon healing has been reported, in which treatment with Parecoxib for the first 5 days after surgery significantly decreased the strength of the tendon callus, measured 8 days after surgery [13]. The detrimental effect on the strength of the callus was reversed when the treatment was withheld until 6 days post-injury. These results informed our hypothesis that delayed inhibition of the prostaglandin inflammatory cascade further downstream than COX-2, at the level of PGE2-EP4 signaling, might achieve a balance between the necessary and unwanted inflammation that takes place after tendon injury. PGE2 was selected as a target for inhibition due to previous studies that have shown the negative effects of PGE2 on tendon, collagen, and their associated catabolic genes [23,[41][42][43]. Further, in vitro studies have shown that IL-1β treatment of tendon fibroblasts up-regulates COX-2 and stimulates EP4 receptor expression, suggesting an association with the catabolic inflammatory process in tendon pathology [23]. Measurements of cAMP in the antagonist treated repairs demonstrated that the systemic EP4 antagonist significantly decreased EP4 signaling within the repair site, and was able to modulate prostaglandin-mediated signaling pathways during FT repair in this murine model.
In this study, the strength of the repairs was no different between EP4 antagonist and vehicle treated repairs, an important finding, since a primary concern with inhibiting inflammation is that there will be a concomitant decrease in the strength of repair. The maximum load at failure slowly increased in both groups from day 10 to day 28 post-repair, which is consistent with the repair being remodeled to a more organized structure that mimics the native tendon. Along with the changes in maximum load at failure, there were no differences in the stiffness between the two groups. While maximum load at failure represents the overall strength of the repair, the stiffness is able to provide information about tissue properties other than strength that may change during the repair process. These biomechanical results were encouraging, and suggested that delayed inhibition of EP4 in our flexor tendon injury model did not detrimentally suppress the inflammatory response in terms of the biomechanical properties conferred to the healing tendon.
The functional consequence of adhesion formation is a progressive loss of digit ROM. As fibrous adhesions form between the tendon and surrounding soft tissue, there is greater resistance to the tendon gliding within its sheath. Through in situ testing, the total ROM, along with the gliding resistance, a measure of the overall work of flexion [31], was assessed. The MTP ROM was significantly lower in EP4 antagonist treated repairs at 10 and 21 days post-repair, with a similar trend present at 14 days. By day 28, there were no significant differences between the two groups. Similar changes were observed in measures of gliding resistance, with significant increases in resistance in the antagonist treated repairs at 10 and 21 days post-repair. These findings suggest that there are increasing adhesions within and around the repair site in repairs treated with an EP4 antagonist, and that these adhesions are remodeled by day 28, when there are no longer differences between groups.
After tendon injury, the repair site is initially bridged by granulation tissue composed primarily of type III collagen, which is subsequently remodeled to the more organized, mature type I collagen [1]. As such, gene expression profiles of Col1a1 and Col3a1 are important for characterizing the catabolic and anabolic responses to tendon injury. Previous studies have shown that exogenous PGE2 can inhibit type I collagen [37,38]. This effect is consistent with the decreased Col1a1 expression from 7 to 10 days post-repair in the antagonist group. Both treatment groups displayed a temporary peak in Col3a1 expression, though the antagonist group had an accelerated peak in expression. Given the phenotype of robust matrix deposition with greater adhesions in the EP4 antagonist treated repairs, accelerated Col3a1 expression suggests earlier collagen catabolism, and is consistent with the early functional detriments in this group. This is consistent with changes in F4/80 expression, a pan-macrophage marker; the antagonist group exhibited significantly higher expression at 7 days post-repair, suggesting accelerated inflammation relative to vehicle treated controls.
It has previously been shown that deletion of Mmp9 results in reduced catabolism of native tendon with fewer adhesions after injury and repair [29]. EP4 signaling increases the expression of Mmp9 [39]; therefore, the effect of an EP4 antagonist on the temporal expression of Mmp9 was of particular interest in this study. During the period of antagonist treatment, there was an expected decrease in Mmp9 expression relative to the vehicle control. The delayed increase in Mmp9 is consistent with decreased ROM at 21 days post-repair in the antagonist group, since its expression stimulates tendon catabolism and increases matrix deposition around the repair. Investigations of the role of Mmp2 in tendon repair suggest its involvement in facilitating the transition from early granulation tissue to the more organized collagen structure [30]. The expression profile of Mmp2 was consistent with this presumed role for the enzyme after tendon injury. Given the increased matrix deposition and granulation response at the early time-points, there is a greater extent of tissue that requires remodeling to restore the native structure of the tendon. The steady increases in Mmp2 from 7 to 21 days post-repair suggest a response to the robust matrix deposition seen in EP4 antagonist treated repairs.
The results of this study raise important questions regarding the role of EP4 in tendon injury and repair, since the results were counter to our original hypothesis that inhibiting EP4 would attenuate adhesion formation and matrix deposition. PGE2-EP4 signaling imparts diverse changes within different tissues, and there is evidence of both pro- and anti-fibrotic effects of this signaling pathway. In their work on PGE2 in lung fibroblasts, Huang et al. demonstrated an anti-fibrotic role for PGE2 and found that cAMP was a downstream mediator of decreased collagen expression in lung fibroblasts [25]. This is consistent with other studies that have shown inhibitory effects of PGE2 on fibroblast proliferation and collagen synthesis [44][45][46]. Since the antagonist used in this study is given systemically, it exerts its inhibitory effects across tissues and cell populations both within and outside of the repair site. While EP4 may have a pro-inflammatory role within the localized tendon environment, it remains to be seen how EP4 signaling at the systemic level contributes to the repair process in flexor tendons. These findings underscore the need to better characterize the cells that are involved with tendon repair, both locally and systemically, and to delineate the different roles of EP4 signaling across diverse cell populations. While a global inhibition of EP4 shifted the healing response toward increased matrix deposition and adhesions, a more targeted approach could achieve the desirable effect that was originally sought in this study.
This study describes the biomechanical, cellular, and molecular changes that occur following systemic EP4 antagonism in a model of flexor tendon injury. However, a number of limitations must be considered. While this model is used as a translational approach to investigate zone II injuries, we do not utilize a true zone II laceration. Given the microscopic size of hind-paw tendons in the mouse, a mid-paw laceration and repair is better able to reproduce the repair procedure used in human injuries. Also regarding the injury model, our release of the proximal myotendinous junction is not consistent with clinical practice. This is used to protect the delicate repair in the mice, since they begin active movement immediately after surgery and are not immobilized or put through controlled rehabilitation in the same way as flexor tendon repairs in the clinic. Furthermore, while relative expression of the different genes is important to observe, expression alone does not represent the full picture of translation, activity, and gene metabolism.
In summary, flexor tendon repairs treated with a systemic EP4 antagonist exhibited an increased granulation response, with greater matrix deposition, impaired early ROM, and increased gliding resistance. The biomechanical properties of the repair were no different between antagonist and vehicle treated repairs. Adhesion formation after primary repair of flexor tendon injuries remains a common clinical problem, and biologic approaches to attenuate the inflammatory response are needed to improve outcomes. This study highlights the complex role of EP4 signaling within the inflammatory cascade, and the need for future studies to characterize the specific cell populations involved in the different phases of tendon repair.
Fig 1 .
Fig 1. Temporal Expression of EP4 and Effective Inhibition with Systemic EP4 Antagonism. 1A: qPCR analysis of EP4 expression in wild-type tendons harvested between 3 and 28 days post-repair demonstrated a significant 2.3-fold increase in EP4 at 7 days post-repair. 1B: Local levels of cAMP were measured in repairs of EP4 antagonist (black bars) and vehicle treated (white bars) repairs 7 days after surgery. There were significant decreases in cAMP in the antagonist group, suggesting that the systemic EP4 antagonist effectively decreases EP4 signaling. *p<0.05. doi:10.1371/journal.pone.0136351.g001
Fig 3 .
Fig 3. Maximum Load at Failure and Stiffness were Unchanged in EP4 Antagonist Treated Repairs. 3A: Maximum load at failure, a measure of strength, was determined by tensile testing of the repaired flexor tendons. No significant differences were seen between EP4 antagonist (black bars) and vehicle treated repairs (white bars), with both groups exhibiting gradual increases in strength between 10 and 28 days. 3B: Stiffness, the slope of the linear portion of the force-displacement curve generated during tensile testing, was no different between EP4 antagonist (black bars) and vehicle treated (white bars) repairs at any time-point. *p<0.05. doi:10.1371/journal.pone.0136351.g003
Fig 4 .
Fig 4. EP4 Antagonism Increased Early Granulation and Matrix Deposition around the Repair. Histologic analysis demonstrated increased granulation (outlined in yellow) and matrix deposition around the repair site at the earlier time-points in the EP4 antagonist treated repairs. This contrast is most evident at day 14. By 21 days post-repair, some granulation remained in the antagonist group (black arrow), while greater remodeling was seen in the vehicle group. Both groups have remodeled significantly by 28 days post-repair. (Alcian blue/hematoxylin and Orange G staining; 5X magnification; scale bar = 500 microns; "T" indicates tendon; granulation tissue is outlined in yellow.) doi:10.1371/journal.pone.0136351.g004
Fig 5 .
Fig 5. Changes in Collagen Organization by Polarized Light Microscopy. Picrosirius red staining was used to visualize changes in collagen organization over time in vehicle and EP4 antagonist treated repairs. There was increased polarization for type III collagen 10 days post-repair in the EP4 antagonist group (white arrows). Both groups had minimal organized collagen across the repair site at 10 and 14 days post-repair, though by 28 days there was evidence in both groups that the repair site was being remodeled, with increases in mature collagen bridging the repair site. (10X magnification; scale bar = 300 microns; arrows identify areas of yellow/green staining representing type III collagen.) doi:10.1371/journal.pone.0136351.g005
Fig 6 .
Fig 6. EP4 Antagonism Alters Macrophage, Collagen, and Mmp Expression After Repair. Relative mRNA expression was determined in the repair site by RT-PCR. Values were normalized to the internal control β-actin, and then normalized to expression levels at day 3 post-repair. 6A: Expression of the macrophage marker F4/80 is significantly increased in the antagonist group (black bars) relative to vehicle at 7 days post-repair, consistent with more robust inflammation early after repair. 6B: Significant increases in type III collagen (Col3a1) expression were seen in the EP4 antagonist group (black bars) at 7 days post-repair, suggesting accelerated collagen catabolism. 6C: EP4 antagonism resulted in significant decreases in type I collagen (Col1a1) expression at 10 days post-repair (black bars). 6D: Mmp9 is associated with adhesion formation during flexor tendon repair. Significant increases were seen in the EP4 antagonist group (black bars) at 14 and 21 days post-repair, consistent with functional losses at these time-points. 6E: Mmp2 is associated with tissue remodeling during tendon healing. Expression of Mmp2 increases gradually over time in the antagonist group (black bars), suggesting a response to the increased extent of granulation in these repairs. *p<0.05, **p<0.01. doi:10.1371/journal.pone.0136351.g006
EP4 Signaling was Effectively Decreased by Systemic EP4 Antagonism | 2016-10-31T15:45:48.767Z | 2015-08-27T00:00:00.000 | {
"year": 2015,
"sha1": "2c186053a91a811184931565b062e4ec037d4e17",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0136351&type=printable",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2c186053a91a811184931565b062e4ec037d4e17",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
259246277 | pes2o/s2orc | v3-fos-license | Review article: new treatments for advanced differentiated thyroid cancers and potential mechanisms of drug resistance
The treatment of advanced, radioiodine refractory, differentiated thyroid cancers (RR-DTCs) has undergone major advancements in the last decade, causing a paradigm shift in the management and prognosis of these patients. Better understanding of the molecular drivers of tumorigenesis and access to next generation sequencing of tumors have led to the development and Food and Drug Administration (FDA)-approval of numerous targeted therapies for RR-DTCs, including antiangiogenic multikinase inhibitors, and more recently, fusion-specific kinase inhibitors such as RET inhibitors and NTRK inhibitors. BRAF + MEK inhibitors have also been approved for BRAF-mutated solid tumors and are routinely used in RR-DTCs in many centers. However, none of the currently available treatments are curative, and most patients will ultimately show progression. Current research efforts are therefore focused on identifying resistance mechanisms to tyrosine kinase inhibitors and ways to overcome them. Various novel treatment strategies are under investigation, including immunotherapy, redifferentiation therapy, and second-generation kinase inhibitors. In this review, we will discuss currently available drugs for advanced RR-DTCs, potential mechanisms of drug resistance and future therapeutic avenues.
Introduction
Differentiated thyroid cancers (DTCs) have an excellent prognosis in most patients, with an overall 5-year relative survival rate of 98.4% according to the SEER database (1). However, a subtype of patients, representing 5-10% of all DTCs, will develop distant metastasis, most frequently in the lungs and bones (2). Prognosis remains favorable as long as metastatic disease is radioiodine-avid (3). Yet, 50% of metastatic DTCs are refractory to radioactive iodine (RAI), which is associated with poor outcomes and a 10-year survival rate of about 10% (4). On the other hand, many patients with advanced radioiodine refractory DTC (RR-DTC) can have an indolent or slowly progressive disease for many years. Thus, as multiple advances have been made in the treatment of advanced RR-DTCs in the last decade with multiple new therapeutic options, current challenges include identifying the appropriate timing for treatment initiation as well as choice of the right therapy.
When to treat RR-DTC
The first step in treating advanced DTCs is to properly identify radioiodine refractory disease. In fact, until disease is considered unresponsive to RAI, 131 I remains the gold standard in the treatment of metastatic advanced disease (3). However, taking into account the toxicity associated with high cumulative doses of RAI, it is crucial to properly identify when this therapy is no longer beneficial to the patient. The definition of RR-DTC can be challenging in clinical practice and remains somewhat controversial. In most publications (2)(3)(4)(5)(6), RAI-refractory (RAI-R) disease is defined as either: (1) absence of RAI uptake outside the thyroid bed on the first post-therapy whole body scan, (2) loss of RAI concentration in a tumor tissue which was previously proven to be RAI-avid, (3) concentration of RAI in some tumor lesions but not in others, or (4) progression of metastatic disease despite significant concentration of RAI, within a relevant time frame, usually considered as 6-12 months after 131 I therapy. A fifth, highly debated criterion is disease progression in a patient who has received ≥ 600 millicuries (mCi) of 131 I. This is based on a single study which showed no further complete remissions after a cumulative dose of 22.2 GBq (600 mCi) (7). Therefore, factors such as response to previous therapies, duration of response, RAI uptake on diagnostic whole-body scan, as well as previous treatment toxicity and patient preference should all be taken into account when considering whether additional RAI therapy is indicated, rather than cumulative dose alone. Finally, 18 F-FDG PET/CT could also be useful in identifying RAI-R disease. For instance, a study showed that a SUVmax greater than 4.0 in 18 F-FDG avid metastases has a sensitivity of 75.3% and a specificity of 56.7% for predicting absence of 131 I avidity (8).
Although associated with risk of progression and poorer prognosis, not all RAI-R disease needs immediate therapy. In fact, RAI-R metastatic DTCs can have an asymptomatic and indolent clinical course for several years. Such patients can be managed with active surveillance and TSH suppression alone as long as disease is asymptomatic, there is no or minimal progression, and tumor burden is low (2)(3)(4)9). Active surveillance includes regular cross-sectional imaging of known sites of distant disease (every 3-12 months), serum thyroglobulin (Tg) and Tg antibody measurement, and as needed 18 FDG-PET/CT whole body imaging, especially when Tg levels are increasing without explanation from cross-sectional imaging (2,3,9).
During this surveillance period, various scenarios can occur. First, disease can remain stable and asymptomatic, thus requiring no further intervention. Alternatively, there can be significant progression in one single lesion putting the patient at risk of complications or symptoms. This should be managed by locoregional therapy when feasible, including external beam radiation, stereotactic radiosurgery, thermal ablation, transarterial chemoembolization, and/or surgery (2,3,9). Finally, when local therapies are not feasible, or there is tumor progression despite local therapy, or there is significant disease progression in multiple lesions affecting more than one organ, then systemic therapy with kinase inhibitors becomes indicated (9).
Molecular basis of differentiated thyroid cancer
Selecting the right kinase inhibitor to treat advanced progressive RR-DTC requires a comprehensive knowledge of the genetic alterations underlying these tumors. In fact, over the last decade, better understanding of the molecular mechanisms driving DTCs and RAI refractoriness has allowed the development of multiple targeted therapeutic agents (Figure 1).
The mitogen-activated protein kinase (MAPK) pathway is central to the pathogenesis of papillary thyroid carcinomas (PTCs). Mutually exclusive activating somatic alterations of genes encoding effectors in this pathway were found to represent over 80% of the known genetic alterations in these tumors in The Cancer Genome Atlas (TCGA) (10). BRAF V600E oncogenic mutations are the most frequent, encountered in about 60% of PTCs, followed by RAS point mutations and RET fusions (10). Rearrangements involving ALK and NTRK genes encoding tyrosine kinase receptors have also been described and are of particular interest since therapies targeting these mutations are now available (5, 11, 12). When no mutation in the MAPK pathway is identified, alterations in members of the phosphoinositide 3-kinase (PI3K) pathway are usually detected, including PTEN, PIK3CA and AKT1 mutations, although those are relatively rare (5, 10). EIF1AX has been described as a novel driver oncogene in approximately 1% of PTCs using the TCGA, and is mutually exclusive with MAPK mutations (10). Other mutations occasionally encountered in PTCs include fusions involving BRAF, THADA, MET, FGFR2 and ROS1 (10, 13).
Mutation profile can help predict tumor behavior and RAI refractoriness (14). In fact, it has been well described that tumors driven by BRAF V600E mutations exhibit high MAPK-signaling output and significant reduction in the expression of genes responsible for iodine uptake and metabolism, such as the sodium-iodide symporter (NIS) (10, 11). Tumors harboring BRAF V600E mutations had a significantly lower differentiation score in the TCGA cohort when compared to those with RAS mutations (10), explaining the decreased RAI uptake and responsiveness seen in BRAF-mutant tumors. A mouse model developed by Chakravarty and colleagues (15), in which oncogenic expression of BRAF V600E in thyroid follicular cells is inducible by doxycycline administration, supports this observation. Following induction of BRAF V600E expression, mice developed high-grade PTCs with increased MAPK transcriptional output and impairment of thyroid-specific gene expression, including near complete loss of NIS. Nevertheless, given the high frequency of BRAF mutations in PTC, and the indolent course of most cases, BRAF V600E mutation is likely insufficient on its own to explain the aggressive behavior of some tumors. Next generation sequencing (NGS) of advanced PTCs has shown that aggressive tumor behavior and recurrence are more likely when more than one oncogenic mutation is present, especially when TERT promoter, TP53, PIK3CA and/or AKT1 mutations coexist with BRAF V600E mutations (3,12,16,17). Moreover, these mutations may act in concert with BRAF V600E mutations to induce RAI refractoriness by leading to increased signaling in the MAPK and PI3K/AKT/mTOR pathways and further reducing NIS signaling (4,18).

Follicular thyroid carcinomas (FTCs), which represent only 2-5% of all thyroid cancers, are most often associated with mutations involving the RAS oncogene, and rarely, PAX8-PPARg rearrangements (11, 12). Mutations in genes encoding components of the PI3K/PTEN/AKT signaling pathway are also frequent: for instance, PTEN mutations are encountered in up to 10% of sporadic FTCs (19,20). TERT promoter mutations can also be found in FTCs and are associated with more aggressive disease (16).
Oncocytic thyroid carcinomas (OTCs, previously Hürthle cell carcinomas), now considered as a separate subtype of thyroid cancer, harbor a mutational profile distinct from those of PTCs and FTCs (21,22). In fact, OTCs are not associated with BRAF mutations, and rarely harbor RAS mutations or oncogenic fusions (5, 21,22), justifying that it was inappropriate to consider them as a subtype of FTC. OTCs are rather characterized by near-haploid chromosomal content in most tumors, as well as mitochondrial DNA alterations (22). Furthermore, genes found to be more frequently altered in OTCs include DAXX, TP53, TERT promoter and EIF1AX, among others (5, 21,22).
Agents targeting some of the mutations known to contribute to thyroid cancer pathogenesis have been developed in the last two decades, significantly changing the outcome of patients with advanced DTCs. Table 1 summarizes all currently FDA-approved kinase inhibitors for RR-DTCs.
Kinase inhibitors
FDA-approved non-specific tyrosine kinase inhibitors

The first two agents that were approved by the Food and Drug Administration (FDA) for the treatment of patients with locally recurrent or metastatic, progressive, RR-DTC are sorafenib (approved in 2013) and lenvatinib (approved in 2015), both multikinase inhibitors (MKI) with anti-angiogenic action through inhibition of the vascular endothelial growth factor receptors (VEGF-R) 1, 2 and 3. In fact, DTCs were shown to exhibit disorganized vasculature and cancer-cell hypoxia, leading to an increased activation and expression of VEGF-R and a dependence on its signaling for tumor survival (11). VEGF and its receptor are therefore interesting therapeutic targets. Sorafenib and lenvatinib also have variable inhibitory actions on other kinases, including RET, fibroblast growth factor (FGF) and platelet-derived growth factor (PDGF) receptors.
Sorafenib
Efficacy of sorafenib for the treatment of advanced DTC was demonstrated in the phase 3 randomized, double-blind, placebo-controlled DECISION trial (30). This study enrolled 416 patients with locally advanced or metastatic RR-DTC that had progressed in the previous 14 months and had not been previously treated with targeted therapy or chemotherapy. Median progression-free survival (PFS) was significantly longer in the sorafenib group (10.8 months) compared to the placebo group (5.8 months; hazard ratio [HR] 0.59, 95% CI 0.45-0.76; p < 0.0001). Objective response rate (ORR) and disease control rate (DCR) were also significantly higher in the sorafenib group, respectively 12.2% compared with 0.5%, and 54.1% compared with 33.8%. Overall survival (OS) did not differ significantly between the treatment groups (HR 0.80; 95% CI 0.54-1.19; p = 0.14), but patients were allowed to cross over from the placebo to the treatment arm at disease progression.
Lenvatinib
Similarly, the phase 3 SELECT trial led to the FDA approval of lenvatinib (27). This randomized, double-blind, placebo-controlled trial included 392 patients with RR-DTC that had progressed within the previous 13 months, 261 of whom were randomized to lenvatinib. Patients treated with up to one prior tyrosine kinase inhibitor (TKI) were included. Median PFS was 14.7 months longer in the lenvatinib group (18.3 versus 3.6 months; HR 0.21, 99% CI 0.14-0.31; p < 0.001), a benefit that was independent of previous TKI therapy. ORR was 64.8% in the lenvatinib group as opposed to 1.5% in the placebo group (odds ratio [OR] 28.87; 95% CI 12.46-66.86; p < 0.001), with four complete responses (CR). Although no significant difference in OS was observed between the two groups (HR for death 0.73, 95% CI 0.50-1.07; p = 0.10), there was a significant survival benefit with lenvatinib in patients over the age of 65, despite crossover from the placebo to the treatment arm at disease progression (OS not reached vs 18.4 months; HR 0.53; 95% CI 0.31-0.91; p = 0.02).
Although they led to significant prolongation of PFS, these agents were associated with adverse events (AEs) in virtually all patients, including grade ≥ 3 AEs in 75.9% of patients on lenvatinib and 37.2% of patients on sorafenib. AEs led to discontinuation of lenvatinib in 14.2% of patients and of sorafenib in 18.8%, while treatment interruptions and dose reductions due to toxicity occurred in well over 50% of patients with both agents (27, 30). The most common AEs include hypertension, palmar-plantar erythrodysaesthesia syndrome, fatigue, weight loss, diarrhea, and stomatitis.
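The endpoints quoted throughout this section (median PFS, hazard ratios with confidence intervals) come from time-to-event analyses. As a purely illustrative aid, the hedged Python sketch below reproduces the mechanics on synthetic data using the lifelines package; the event-time distributions, sample size, and censoring horizon are our own assumptions, not data from DECISION or SELECT.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Synthetic two-arm trial: exponential PFS times whose medians loosely
# echo the SELECT figures (18.3 vs 3.6 months); purely illustrative.
rng = np.random.default_rng(0)
n = 400
arm = rng.integers(0, 2, n)                    # 0 = placebo, 1 = active
median_pfs = np.where(arm == 1, 18.3, 3.6)     # assumed, not trial data
t = rng.exponential(median_pfs / np.log(2))    # median = scale * ln 2
event = (t <= 24.0).astype(int)                # administrative censoring at 24 months
t = np.minimum(t, 24.0)
df = pd.DataFrame({"time": t, "event": event, "arm": arm})

# Kaplan-Meier estimate of median PFS per arm
kmf = KaplanMeierFitter()
for a in (0, 1):
    m = df["arm"] == a
    kmf.fit(df.loc[m, "time"], df.loc[m, "event"], label=f"arm {a}")
    print(f"arm {a}: median PFS ~ {kmf.median_survival_time_:.1f} months")

# Cox model: exp(coef) on 'arm' is the hazard ratio with its 95% CI
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```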
Cabozantinib
In September 2021, a third MKI, cabozantinib, was approved by the FDA as a second-line therapy for patients with locally advanced or metastatic RR-DTC that has progressed following prior VEGF-R targeted therapy. Cabozantinib inhibits multiple tyrosine kinases involved in tumor growth and angiogenesis, including VEGF-R2, AXL, c-MET and RET (5, 23). Notably, upregulation of c-MET and AXL signaling has been shown to play a role in resistance to anti-angiogenic agents (31, 32), which serves as a premise for the use of cabozantinib after progression on prior VEGF-R targeted therapy.
FDA-approved selective kinase inhibitors
Although MKIs can significantly improve PFS in patients with advanced RR-DTCs, these therapies have multiple drawbacks. Their toxicity profile can have a major impact on patients' quality of life and may limit their long-term effective use in clinical practice. For instance, real-life studies with lenvatinib describe treatment interruption and dose reduction rates as high as 79.5% (33). Moreover, as we will discuss below, many patients will eventually develop resistance to treatment and progress. For these reasons, the quest for treatments that do not target the angiogenic pathway and provide more personalized therapeutic options for patients with advanced DTCs has continued, culminating in the FDA-approval of various selective kinase inhibitors. These agents target more specifically one or a few kinases involved in tumorigenesis, which allows for better efficacy and most importantly less toxicity.
BRAF +/- MEK inhibitors
As mentioned earlier, the BRAF V600E mutation is the most frequent oncogenic driver in PTCs, present in 60% of cases, which makes it an attractive therapeutic target. BRAF + MEK inhibitors have been used for many years in other BRAF-mutated solid tumors, mainly melanoma and non-small cell lung carcinoma (NSCLC).
Dabrafenib and trametinib
The combination of dabrafenib and trametinib, selective BRAF and MEK 1/2 inhibitors respectively, was FDA-approved in 2018 for the treatment of locally advanced or metastatic BRAF V600E-mutant anaplastic thyroid cancer (ATC) and has significantly changed the treatment paradigm for these tumors, which were previously viewed as a death sentence (34-37). More recently, based on data from the ROAR (NCT02034110) (38) and NCI-MATCH (39) basket trials, the FDA granted accelerated approval in June 2022 to dabrafenib in combination with trametinib for the treatment of patients with BRAF V600E-mutated metastatic or unresectable solid tumors, including thyroid cancers, who have progressed on prior treatment and have no other satisfactory treatment options.
Dabrafenib was first shown to be promising in patients with DTC in a phase 1 basket trial (40). This led to a randomized, multicenter, open-label phase 2 trial in patients with BRAF-mutated PTC (24). This study included 53 patients with progressive disease within the 13 months before enrollment. Patients could have received up to three prior oral MKIs, excluding other selective BRAF or MEK inhibitors. Patients were randomized to dabrafenib monotherapy or dabrafenib in combination with trametinib, and the primary endpoint was ORR in each group within the first 24 weeks of therapy. It was hypothesized that combination therapy would have superior clinical efficacy due to dual inhibition of the MAPK pathway as well as mitigation of potential mechanisms of resistance to dabrafenib through MEK kinase inhibition. Patients on dabrafenib alone were allowed to cross over to the combination group on disease progression. ORR, which included minor responses (defined as a 20 to 29% decrease in the sum of target lesions), was 42% (95% CI 23-63%) with dabrafenib and 48% (95% CI 29-68%) with dabrafenib + trametinib (p = 0.67). Median PFS was also not statistically different between the two groups (10.7 months with dabrafenib alone). Notably, of the 14 patients who crossed over at progression, 4 (29%) had an objective response, including 3 PRs and one minor response, and 8 had stable disease (SD). Grade 3 AEs occurred in about 50% of patients in both groups. The most frequent AEs associated with dabrafenib alone were skin disorders, fever, and hyperglycemia, while fever, hypophosphatemia and fatigue were most common with combination therapy. Skin disorders were strikingly less frequent with the combination than with dabrafenib alone (33% vs 65%, respectively). This trial, although not showing any superiority of combined BRAF and MEK inhibition over BRAF inhibitor therapy alone, did show prolonged PFS and OS with both treatment strategies, making dabrafenib +/- trametinib a therapeutic option for patients with advanced BRAF-mutated PTCs, especially when anti-angiogenic agents are contraindicated or associated with significant risk. This being said, there have been no direct comparisons between dabrafenib and MKIs such as lenvatinib in BRAF-mutated advanced DTCs to justify favoring one treatment over the other.
RET inhibitors
RET is a transmembrane glycoprotein receptor tyrosine kinase (RTK). Ligand binding leads to RET homodimerization followed by trans-phosphorylation of tyrosine residues within the intracellular domains and activation of several signal transduction cascades involved in cellular proliferation, including the MAPK and PI3K pathways (41). Oncogenic activation of RET can occur through three main mechanisms: mutations leading to activation of the kinase domain by ligand-independent dimerization, mutations causing direct activation of the RET kinase domain, and chromosomal rearrangements producing chimeric proteins with a constitutively active RET kinase domain (41, 42). Germline activating RET mutations are associated with the multiple endocrine neoplasia type 2 (MEN2) syndromes, while somatic RET mutations are found in ~65% of all sporadic MTCs (41). RET rearrangements, on the other hand, have been identified in various solid tumors, including about 5 to 10% of PTCs, most frequently in children and in patients with prior exposure to radiation (41). CCDC6-RET and NCOA4-RET are the most frequently identified RET fusions in PTCs.
The involvement of RET alterations in tumorigenesis makes this RTK a potentially actionable therapeutic target. Moreover, tissue-specific RET knockout studies in mice, targeting the hematopoietic, neuronal, and lymphoid tissues, suggested that RET inhibition would most likely result in few clinically significant AEs (43-45). This led to efforts to identify selective RET inhibitors for the treatment of patients with RET-altered tumors.
Selpercatinib (LOXO-292) and pralsetinib (BLU-667) are two potent and highly selective RET kinase inhibitors that were FDA-approved in 2020 for patients with RET fusion-positive DTCs and RET-mutant MTCs who require systemic therapy.
Selpercatinib
Efficacy of selpercatinib in RET-altered thyroid cancers was demonstrated in the LIBRETTO-001 trial (29). This phase 1/2 study included 19 patients with RET fusion-positive thyroid cancers, mainly PTCs (13/19). Most patients (79%) had received previous therapy with at least one MKI. In the RET fusion-positive DTC cohort, ORR was 79% (95% CI 54-94), including one CR and 14 PRs. Interestingly, 2/3 patients with poorly differentiated thyroid cancer (PDTC), 1/1 patient with OTC and 1/2 patients with ATC had PRs to therapy. Median PFS was not reached, but 64% of patients were progression-free at 1 year. Among all patients with RET-altered thyroid cancers treated with selpercatinib in the trial (n=162), grade 3 and grade 4 treatment-related AEs occurred in 28% and 2%, respectively, most frequently hypertension (in 21% of patients) and increased liver enzymes (alanine aminotransferase in 11% and aspartate aminotransferase in 9%).
Pralsetinib
ARROW is a phase 1/2 trial evaluating the efficacy of pralsetinib in patients with RET-altered locally advanced or metastatic solid tumors, including thyroid carcinomas (46). Updated data presented at the 2022 American Society of Clinical Oncology (ASCO) meeting in 21 patients with previously treated RET fusion-positive thyroid cancers showed an ORR of 86% (95% CI 64-97), including 15 PRs. Duration of response was 17.5 months (95% CI 16.0-NR) and PFS was 19.4 months (95% CI 13.0-NR) (28). Similar to selpercatinib, pralsetinib was well tolerated, with a manageable safety profile. The most frequent grade 3 AEs were hypertension (17% of all trial patients) and cytopenias (neutropenia in 13%, lymphopenia in 11% and anaemia in 10%). One case of grade 5 pneumonia also occurred (46).
In both the LIBRETTO-001 and ARROW trials, AE-related dose reductions and treatment discontinuations were relatively infrequent, with only 2% and 4% of patients discontinuing selpercatinib and pralsetinib, respectively (28, 46).
NTRK inhibitors
The tropomyosin-receptor kinase (TRK) family of RTKs includes TRKA, TRKB and TRKC, which are encoded by the neurotrophic receptor tyrosine kinase genes NTRK1, NTRK2 and NTRK3, respectively (47-49). Once activated, TRK RTKs signal through several downstream pathways involved in cellular proliferation, including MAPK and PI3K/AKT. TRK receptors play an important role in nervous system development (47). Oncogenic fusions leading to constitutive activation of the kinase domain have been described for all three NTRK genes, and these alterations have been identified in multiple solid tumors including colorectal cancer, lung cancer, and melanoma. In the thyroid, NTRK-driven malignancies are rare, found in 2-3% of thyroid cancers in adults, including PTCs, OTCs, ATCs, and PDTCs (10, 48, 49). Like RET fusions, NTRK fusions are more frequent in pediatric patients with PTCs (up to 25% of cases) as well as in patients with previous radiation exposure. Despite their rarity, NTRK fusion-positive thyroid cancers are important to identify, as there are now two FDA-approved targeted TRK inhibitors that have demonstrated clinical safety and efficacy in patients with metastatic or unresectable solid tumors harboring NTRK gene fusions.
Larotrectinib
Larotrectinib is a highly selective and potent TRK inhibitor with central nervous system (CNS) activity. A pooled analysis (26) of 28 patients with NTRK fusion-positive thyroid cancers treated with larotrectinib in three basket trials (50-52) showed an ORR of 71% (95% CI 51-87), including 2 CRs and 18 PRs; 4 additional patients had SD. All patients with CNS metastases at baseline had a PR. The 24-month PFS and OS rates were 69% and 76%, respectively. When the 7 patients with ATC were excluded, ORR increased to 86% (95% CI 64-97) and 24-month PFS to 84%. Response to therapy was irrespective of previous systemic therapy: 13 patients with DTC who had received one or more prior lines of systemic therapy had an ORR of 92%. Notably, AEs were mainly grade 1 and 2, with only two patients experiencing grade 3 treatment-related AEs (anaemia and lymphopenia). No patient required treatment discontinuation due to AEs, and only 2 patients experienced AEs leading to dose reduction.
Therefore, larotrectinib and entrectinib appear to be reliable and durable treatment options in NTRK fusion-positive thyroid cancers, including those with CNS metastases.
Other non-FDA approved kinase inhibitors studied in DTC
We are frequently faced in clinical practice with patients who progress on or do not tolerate the previously described FDA-approved treatments. In these situations, we resort to the off-label use of other anti-neoplastic agents that have been or are currently being studied in advanced DTCs and have shown some efficacy.
Vemurafenib, a selective BRAF inhibitor approved for the treatment of BRAF-mutated melanoma, was in fact the first BRAF inhibitor studied in DTC. Its efficacy was initially demonstrated in a small case series of 3 patients with metastatic BRAF V600E-mutated PTC (55). This was later confirmed in a phase 2 non-randomized, open-label, multicenter trial in which patients with recurrent or metastatic RAI-refractory BRAF V600E-mutated PTC, who were either TKI-naïve (cohort 1, n=26) or had progressed on a VEGF-R TKI (cohort 2, n=22), received single-agent vemurafenib (56). In cohort 1, DCR with vemurafenib was 73% (95% CI 52-88), with 10 (38.5%) patients having a PR and 9 (35%) having SD as best overall response. In cohort 2, response rates were lower, with 6 (27.3%) patients having a PR as best overall response and 6 having SD, yielding a DCR of 55% (95% CI 32-76). Median PFS was 18.2 months (95% CI 15.5-29.3) and 8.9 months (95% CI 5.5-NE) in cohorts 1 and 2, respectively. Median OS was not yet reached in cohort 1, while it was 14.4 months (95% CI 8.2-29.5) in cohort 2. AEs were mostly grade 1-2, including rash, fatigue, alopecia, dysgeusia, creatinine increase and weight loss. Vemurafenib therefore appears to be a valid therapeutic option for BRAF-mutated PTCs, although it has yet to be studied in a phase 3 trial.
Encorafenib is another BRAF inhibitor, currently approved in combination with the MEK inhibitor binimetinib for BRAF-mutated metastatic melanoma and colorectal carcinoma. It has a dissociation half-life more than 10 times longer than that of dabrafenib or vemurafenib, allowing more sustained target inhibition and potentially more potent antitumor activity (57). Moreover, it is associated with low rates of pyrexia and photosensitivity, which are the two main dose-limiting AEs of the dabrafenib/trametinib and vemurafenib/cobimetinib combinations, respectively (57, 58). Although no clinical data are currently available for its use in thyroid cancer, there is an ongoing phase 2 trial examining encorafenib combined with binimetinib, with or without immunotherapy (nivolumab), in patients with metastatic BRAF V600E-mutant RR-DTC (NCT04061980). In practice, this drug can be considered as an alternative when dabrafenib is not tolerated, especially due to intractable fevers.
Everolimus, an inhibitor of the mammalian target of rapamycin (mTOR), has been studied in several trials for the treatment of advanced RR-DTCs. In fact, as previously discussed, activation of the PI3K/PTEN/AKT signaling pathway is frequent in advanced thyroid cancers, often due to mutation of PTEN, a negative regulator of PI3K signaling. Parallel activation of this pathway has also been suggested as an escape mechanism from TKIs. mTOR, a serine-threonine kinase, is a downstream effector of the PI3K/AKT pathway and serves as a potential therapeutic target. The first reported trial of everolimus in thyroid cancers was a multicenter, open-label, phase 2 study in South Korea that enrolled patients with all thyroid cancer histologies, including 6 patients with ATC and 9 with MTC (59). Among the 38 patients evaluable for response, DCR was 81%, including 2 PRs (both in DTC patients), and 45% of patients showed durable SD for 24 weeks or longer. Median PFS in patients with DTC was 43 weeks. Treatment was overall well tolerated, with mostly grade 1 AEs. This study was followed by a second phase 2 trial in the Netherlands, which enrolled 28 patients with advanced DTC, 54% of whom had previously been treated with a TKI, namely sorafenib (60). Sixty-five percent of patients showed SD as their best response, with 58% having SD lasting more than 24 weeks; however, there were no PRs or CRs. Median PFS was 9 months (95% CI 4-14), and median OS was 18 months (95% CI 7-29). Hanna and colleagues further expanded on the topic with another phase 2 trial, once again in all thyroid cancer histologies (61). In the DTC cohort (n=33), in which 51% of patients had previously been treated with a TKI, the best response to therapy was SD in 82% and PR in 3%. Median PFS was 12.9 months (7.3-18.6), and median OS was not reached. Interestingly, in this trial, DTC patients with only a BRAF mutation had the longest PFS on everolimus, while patients with alterations in the PI3K/mTOR/AKT pathway did not show better response to therapy. Thus, these three phase 2 trials suggest that mTOR inhibition is a viable second-line option in patients who progress on TKI therapy.
The combination of everolimus plus sorafenib improved PFS compared with sorafenib alone in a randomized phase 2 trial in patients with RAI-refractory oncocytic thyroid carcinoma that included 34 evaluable patients (62). PFS was significantly longer in the sorafenib plus everolimus arm (24.7 months; 95% CI 6.1-upper limit not reached) than in the sorafenib arm (10.9 months; 95% CI 5.5-upper limit not reached). Response rates were similar between groups.
Pazopanib is an antiangiogenic MKI that inhibits VEGF, FGF, PDGF, KIT and RET receptors. It is currently FDA-approved for other solid tumors including renal cell carcinoma. Pazopanib was evaluated in two phase 2 trials of its efficacy in patients with RR-DTC (63, 64). In 2010, Bible and colleagues conducted a first trial in 37 patients, 18 of whom had a confirmed PR to therapy (response rate 49%; 95% CI 35-68). Responses were seen in 8/11 (73%) patients with follicular tumors, 5/11 (45%) patients with oncocytic tumors, and 5/15 (33%) patients with papillary tumors. Therapy was well tolerated, with 46% of patients taking pazopanib for 12 months or longer. The most frequent AEs were fatigue, skin and hair hypopigmentation, diarrhea, and nausea. In 2020, the same group published a larger phase 2 international study of pazopanib in 60 patients with advanced or progressive RR-DTC. In this second trial, the response rate was slightly lower, with 36.7% of patients having a PR (CI 24.6-50.1), probably because patients were more heavily pretreated than in the prior study (91.7% vs 27%). Median PFS was 11.4 months and median OS 2.6 years. Neither study showed any difference in response to therapy between histological subtypes of DTC, or according to mutation profile. There is therefore substantial evidence supporting the efficacy of pazopanib in RR-DTC, and it should be considered as a therapeutic option in patients who progress on or do not tolerate other FDA-approved therapies.
Potential mechanisms of drug resistance
Despite promising initial results, all kinase inhibitors seem to eventually become ineffective, leading to inevitable disease progression. Current research efforts are therefore focused on identifying resistance mechanisms to kinase inhibitors and ways to overcome them. To date, a few potential mechanisms of tumor resistance to kinase inhibitors have been described (73) (Figure 2).
First, acquired resistance to tyrosine kinase inhibitors can involve escape mechanisms that activate parallel signaling pathways. For instance, upregulation of alternative angiogenic signaling factors such as FGF2, PDGF or the epidermal growth factor receptor (EGFR) has been observed in tumors resistant to anti-VEGF TKIs (74-76). One possible factor underlying this phenomenon is hypoxia secondary to VEGF-R inhibition (74-76). In fact, hypoxia induces gene expression of proangiogenic factors primarily through the HIF-1α (hypoxia-inducible factor-1α) transcription factor. Moreover, activation of the PI3K/AKT pathway and reactivation of the JAK-STAT pathway have also been shown to be involved in acquired resistance to sorafenib (76).
Similarly, several studies have demonstrated that cancer cells develop resistance to BRAF inhibitors by overexpressing growth factor receptors at their surface, including KIT, c-MET, EGFR and PDGF receptor-β (PDGFR-β), leading to MAPK pathway reactivation despite BRAF inhibition (73, 77, 78). The treatment strategy to counteract this activation of alternate pathways is either to add a second TKI or to switch to another targeted systemic therapy. For example, in a multicenter phase 2 International Thyroid Oncology Group (ITOG) trial, cabozantinib conferred significant additional PFS and OS benefits (12.7 and 34.7 months, respectively) in advanced DTC patients who had progressed on prior VEGF-R targeted therapy (79).
Factors associated with the tumor microenvironment have also been implicated in resistance to kinase inhibitors. Pericytes are stromal cells that play a key role in the angiogenic microenvironment of thyroid cancers, in part by facilitating vessel maturation. Platelet-derived growth factor-BB (PDGF-BB), which promotes pericyte proliferation through interaction with PDGFR-β, has been found to be increased in BRAF V600E-mutated PTCs, and pericytes have been shown to support the growth and survival of PTC cells (80, 81). Furthermore, in vitro studies suggest that pericytes might play a role in resistance to sorafenib and vemurafenib through secretion of thrombospondin-1 (TSP-1) and TGF-β1, which trigger a rebound elevation in ERK1/2 and AKT levels, allowing tumor cells to overcome the inhibitory effects of these targeted therapies (82). Cancer-associated fibroblasts (CAFs) have also been shown to promote cancer growth and to play a role in drug resistance (83).
Epithelial-mesenchymal transition (EMT) of tumor cells, induced by secondary mutations, hypoxia and other stimulating factors from the tumor microenvironment, has also been shown to be involved in resistance to sorafenib (76) and lenvatinib (84). In fact, studies identified changes in treatment-resistant cells towards a mesenchymal morphology (76, 84, 85). Tumor cells undergoing EMT lose cell adhesion molecules such as E-cadherin and gain mesenchymal cell markers such as vimentin and N-cadherin, resulting in loss of cell-to-cell contacts and increased motility, which favor dissemination to distant sites (84, 85). In addition, EMT makes tumor cells resistant to apoptosis and antitumor drugs (84, 85). Nonetheless, the exact interaction between EMT and resistance to anti-VEGFR TKIs remains unknown.
Acquired wild-type copy number amplifications have also been identified as a resistance mechanism to BRAF inhibitors. For example, MCL1 copy number gain has been associated with resistance to vemurafenib treatment in PTC (86). MCL1 is an anti-apoptotic member of the BCL2 family, which might regulate parallel signaling pathways activating BRAF in PTCs resistant to anti-BRAF agents. Similarly, in a case report of a PTC that underwent ATC transformation while on dabrafenib (87), acquired triploidy of chromosome 7, which harbors the EGFR, RAC1, MET, and BRAF genes, was demonstrated in the progressive metastatic lesion. Copy number amplifications of these proto-oncogenes were consequently present in the dedifferentiated sample, probably contributing to tumor progression.
Finally, acquisition of secondary point mutations has also been proposed as a resistance mechanism to TKIs. For instance, a study exposing BRAF V600E-mutated KTC1 thyroid cancer cells to long-term vemurafenib showed development of secondary KRAS point mutations, allowing these cells to bypass BRAF inhibition (88). In addition to RAS point mutations, other acquired mutations that possibly confer drug resistance have been found in the RAC1, PTEN, NF1, NF2, TP53, and CDKN2A genes (73, 87, 89). Moreover, it is now well recognized that acquired mutations in the RET kinase domain cause resistance to selective RET inhibitors by interfering with drug binding. These include RET G810 solvent-front mutations as well as non-gatekeeper mutations at hinge (Y806C/N) and β2 strand (V738A) sites within the RET kinase domain (90-93).
Identification and better understanding of these resistance mechanisms pave the way for future novel therapies including combination of kinase inhibitors, potentiation of TKIs by adding immunotherapy, and redifferentiation therapy.
Immunotherapy in DTC
In the past decade, immune checkpoint inhibitors (ICIs) have revolutionized cancer therapy. These monoclonal antibodies reactivate the T-cell response against cancer cells by blocking either the lymphocyte inhibitory receptor CTLA4 or the interaction between the T-cell inhibitory receptor PD-1 and its ligands PDL-1 and PDL-2 at the surface of cancer cells. To date, seven ICIs have received FDA approval for the treatment of various neoplasms including melanoma, NSCLC, renal cell carcinoma, and many others.
As in other neoplasms, thyroid cancer cells escape immune surveillance, making ICIs an interesting therapeutic avenue. Immune escape in DTCs occurs through various mechanisms. First, deficient antigen presentation and reduced T-cell activation have been shown to play a role. This can occur by downregulation of major histocompatibility complex (MHC) class I, mutations within the T-cell receptor binding domain of MHC-I, or loss of function of β2-microglobulin, which disrupts MHC-I folding and transport to the cell surface (94, 95). Notably, MHC-I and β2-microglobulin expression were shown to be reduced or absent in 76% of PTCs (96).
Moreover, an immunosuppressive tumor microenvironment (TME) contributes to immune tolerance. Infiltration by regulatory T cells (Tregs), which facilitate self-tolerance by suppressing effector T cells, has been observed in many tumor types (94). In PTC, increased Treg infiltration has been shown to correlate with lymph node metastasis and might be indicative of more aggressive disease (95, 97, 98). Other cells of the immune system, including tumor-associated macrophages, plasmacytoid dendritic cells and tumor-associated mast cells, are all overrepresented in the TME of DTCs and contribute to immune escape. Conversely, aberrant tumor vasculature that impairs the infiltration of immune cells can also occur (78). Finally, exhausted PD1+ CD8+ T cells with defective cytokine production also play a role in the immunosuppressive milieu of DTCs (97, 98).
Several signaling pathways activated by oncogenic mutations associated with thyroid cancer can contribute to immune escape. Among these, constitutive activation of the MAPK pathway impairs the recruitment and function of tumor-infiltrating lymphocytes through increased expression of VEGF and multiple other immunosuppressive mediators.

Figure 2. Proposed mechanisms of resistance to kinase inhibitors.
Another major mechanism of immune escape in thyroid cancer as well as many other tumor types is up-regulation of inhibitory immune checkpoints, mainly PDL-1 but also PD-1, PDL-2 and CTLA4 (95). Notably, PDL-1 has been shown to be overexpressed in more advanced DTCs, with significant correlation between PDL-1 expression and lymph node metastasis, extrathyroidal invasion and disease-free survival (99). Interestingly, PDL-1 expression was higher in BRAF V600E mutant tumors, which are known to have the potential for more aggressive behavior.
Therefore, since the pathogenesis of thyroid cancers includes escape from the immune system, reactivation of the anti-tumoral immune response may prove useful in the treatment of some thyroid neoplasms. This rationale has led to various studies of ICIs in advanced DTCs.
Pembrolizumab single agent
KEYNOTE-028 is a phase 1b clinical trial of the PD-1 targeting antibody pembrolizumab in patients with PDL-1 positive, locally advanced or metastatic DTC (100). Of note, patients did not need to have radioiodine-refractory or progressive disease to be enrolled in the study. Twenty-two patients were treated with pembrolizumab 10 mg/kg every 2 weeks for 24 months or until confirmed progressive disease, unacceptable AEs, or investigator or patient decision to withdraw; 50% of patients had previously received an MKI. ORR was 9% (95% CI 1-29%), with only 2 PRs. The clinical benefit rate, defined as PR + SD for at least 6 months, was 50% (95% CI 28-72%). Median PFS was 7 months (95% CI 2-14 months). At data cutoff, median OS was not reached (95% CI 22 months to not reached), with 6- and 12-month OS rates of 100% and 90%, respectively. Treatment was overall well tolerated, the most frequent AEs being diarrhea (in 32% of patients) and fatigue (in 18%). Only one grade 3 AE occurred, namely colitis, and no grade 4 AEs or AE-related treatment discontinuations were reported.
Lenvatinib and pembrolizumab combination
An ongoing phase 2 trial (NCT02973997) explores the combination of lenvatinib and pembrolizumab as first-line treatment of RR-DTCs with disease progression less than 14 months prior to enrollment (101). In fact, VEGF has been associated with resistance to immune checkpoint blockade (94). The VEGF axis promotes a hypoxic and immunosuppressive TME by decreasing T-cell infiltration, impairing cytotoxic T-cell activity, and promoting repressive immune cell infiltration. Thus, inhibition of VEGF signaling might represent an important strategy to enhance ICI efficacy. Inhibition of VEGF-R was correlated with improved response to ICIs in renal cell carcinoma, and the combination of lenvatinib + pembrolizumab has been approved for advanced endometrial carcinoma (102). Therefore, lenvatinib + pembrolizumab was also explored in DTC. Results reported in a poster at the 2020 ASCO meeting in 30 patients showed PR in 62% of patients and SD in 35%. Median time to tumor size nadir was 7.4 months (95% CI 1.6-17.8). Notably, 14/29 evaluable patients were still on therapy at data cutoff (7.6-18.9 months) and 6/14 (43%) patients had not yet reached their tumor size nadir. Median PFS was not yet reached, but PFS at 12 months was 74%. Seventy percent of patients had grade 3 AEs and 10% had grade 4 AEs. The most common grade ≥ 3 AEs were hypertension (in 47%), weight loss (in 13%) and maculopapular rash (in 13%). The combination of lenvatinib + pembrolizumab therefore seems promising, although it remains unclear whether the addition of pembrolizumab brings any supplemental benefit over single-agent lenvatinib, as PR and SD rates with the combination are similar to those with lenvatinib alone (27). Updated data from the lenvatinib + pembrolizumab trial might help answer this question, especially if PFS or OS benefits are achieved.
Cabozantinib and atezolizumab combination
Another ongoing multinational phase 1b trial, COSMIC-021 (NCT03170960), is evaluating cabozantinib in combination with the anti-PDL-1 antibody atezolizumab in advanced solid tumors, including DTCs. Similar to lenvatinib, cabozantinib has immunomodulatory properties that counteract tumor-induced immunosuppression and may enhance response to ICIs. The combination of cabozantinib and nivolumab, a PD-1 inhibitor, has already shown efficacy in a phase 3 randomized trial in advanced renal cell carcinoma (103). Efficacy and safety results of cabozantinib + atezolizumab as first-line therapy in 31 patients with locally advanced, metastatic and/or progressive RR-DTCs included in the COSMIC-021 trial were presented in a highlighted poster at the 2022 American Thyroid Association (ATA) meeting (104). Patients who had received any other systemic anticancer therapy were excluded. Fifty-eight percent of patients had PTC and 61% of tumors harbored a BRAF mutation. Patients were treated with cabozantinib 40 mg daily and atezolizumab 1200 mg every 3 weeks. At data cutoff, with a median follow-up of 24.9 months (95% CI 14.9-33.3), ORR was 42% (95% CI 25-61), with 13 PRs; 17 additional patients had SD. Impressively, DCR was 97% (30/31), with the one remaining patient having no post-baseline assessment available. Duration of response was 22 months (95% CI 1.4-28.0), median PFS was 15.2 months (95% CI 10.4-24.3), and 28/31 patients were still alive at data cutoff. Grade 3/4 AEs occurred in 55% of patients, mainly diarrhea (13%) and transaminase elevation (10%), with 4 patients having had to stop treatment due to AEs related to one or both drugs. Overall, AEs of the combination therapy were consistent with those of the individual agents and were manageable. Therefore, first-line combination of cabozantinib + atezolizumab in advanced RR-DTC provided durable responses and a high rate of disease control across different subtypes of DTC, making it an interesting therapeutic option.
Ipilimumab plus nivolumab
Combined CTLA4 and PD-1 blockade has also shown efficacy in multiple tumors including melanoma and renal cell carcinoma. In fact, combination therapy overcomes ICI resistance: CTLA4 inhibition increases T-cell priming and reduces Tregs in the TME, while PD-1 inhibition enhances T cell effector response (95). Preclinical data suggests that this combination could also be beneficial in aggressive RR-DTCs. Therefore, an ongoing phase 2 study (NCT03246958) is looking at the combination of nivolumab (an anti-PD-1) and ipilimumab (an anti-CTLA4) in RR-DTCs, including PDTCs (105). Results in 32 patients were presented in a poster at the 2020 ASCO meeting. Three (9%) patients achieved partial response, including one near-complete response, while 14/32 (44%) had stable disease. Median PFS at data cutoff was 4.9 months.
Cabozantinib and ipilimumab plus nivolumab combination
Ipilimumab/nivolumab combination therapy has also been studied in association with cabozantinib in a multicenter phase 2 trial in locally advanced or metastatic RR-DTCs that have progressed on one previous anti-VEGFR therapy (NCT03914300). Interim results in 11 patients were presented at the 2022 ATA meeting (106). Interestingly, 45% (5/11) of patients included in the study had OTC and 18% (2/11) had PDTC. ORR within the first 6 months, the trial's primary endpoint, was 9% (1/11), while ORR at data cutoff was 18%, with 6 SDs and 2 PRs. Median PFS and OS were 9 months (3.0-NR) and 19.2 months (4.6-NR), respectively. Only 3 patients were still on trial treatment at data cutoff. Therefore, although this triple combination therapy was overall well tolerated, efficacy was very limited, and ipilimumab + nivolumab did not seem to offer any additional advantage over cabozantinib monotherapy.
Multiple other clinical trials looking at various ICIs in advanced DTC are currently underway, including a phase 2 trial studying encorafenib + binimetinib with or without nivolumab in patients with metastatic BRAF V600E mutant RR-DTC (NCT04061980), and a phase 2 trial evaluating the combination of the anti-PDL-1 durvalumab with the anti-CTLA4 tremelimumab in advanced RR-DTC (NCT03753919). Table 2 summarizes ongoing and published trials of immunotherapy in DTC.
Redifferentiation therapy
Loss of RAI sensitivity in DTCs is associated with more aggressive disease and a significantly poorer prognosis. RAI refractoriness is due to loss of thyroid differentiation features, the most important of which is Na/I symporter (NIS) function and expression. In fact, NIS mediates active iodine transport into follicular cells and is responsible for RAI entry into thyroid cancer cells. Immunohistochemistry studies have shown that NIS protein expression is significantly decreased in differentiated thyroid cancer tissues (107). Decreased targeting of NIS to the plasma membrane through reduced vesicular trafficking (108) and impaired cell-cell adhesion secondary to loss of E-cadherin (109) might also play a role in the loss of RAI uptake in advanced thyroid cancers.
It has now been well demonstrated that MAPK pathway activation is associated with dedifferentiation and decreased NIS expression (110, 111). Moreover, studies have shown that the degree of tumor dedifferentiation correlates with the magnitude of MAPK pathway activation, and that BRAF V600E mutations lead to greater MAPK activation than RAS or RTK alterations (11, 110, 111). Conversely, suppressing the MAPK pathway with BRAF or MEK inhibitors in mice was shown to restore NIS expression and RAI uptake (15). These findings paved the way for redifferentiation therapy, a treatment strategy that aims to restore RAI uptake, allowing subsequent RAI treatment of a tumor previously considered RAI-refractory.
In the first clinical study of redifferentiation therapy, 24 patients with RR-DTC were treated with a MEK inhibitor, selumetinib, for 4 weeks (112). Of the 20 evaluable patients, 60% (12/20) had increased uptake on the subsequent 124I PET-CT scan, 8 of whom reached the dosimetry threshold for radioiodine therapy (i.e., one or more lesions could be treated with a dose of ≥ 2000 cGy with an administered 131I activity of ≤ 300 mCi) and were therefore treated with RAI. During the 6-month follow-up period after radioiodine therapy, a reduction in the size of target lesions was observed in all patients, with confirmed PR in 5/8 patients and SD in 3/8 as best overall response. In the study cohort, 9 patients had tumors harboring a BRAF V600E mutation, 5 an NRAS mutation, 3 a RET fusion and 3 had no identified mutation.
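The dosimetry rule just quoted (at least one lesion predicted to absorb ≥ 2000 cGy at an administered activity of ≤ 300 mCi) reduces to a simple threshold test on per-lesion dose factors. A minimal Python sketch, with hypothetical dose factors standing in for the 124I PET-CT lesional dosimetry used in the trial:

```python
MAX_ACTIVITY_MCI = 300.0     # maximum administered 131I activity (mCi)
DOSE_THRESHOLD_CGY = 2000.0  # minimum lesional absorbed dose (cGy)

def qualifies_for_rai(dose_factors_cgy_per_mci):
    """True if any lesion reaches >= 2000 cGy at <= 300 mCi.

    dose_factors_cgy_per_mci: predicted absorbed dose per unit of
    administered activity for each lesion (hypothetical values here;
    in practice these come from lesional dosimetry).
    """
    return any(f * MAX_ACTIVITY_MCI >= DOSE_THRESHOLD_CGY
               for f in dose_factors_cgy_per_mci)

# Hypothetical patient with two lesions at 3.2 and 9.1 cGy/mCi:
# 9.1 * 300 = 2730 cGy >= 2000 cGy, so this patient would qualify.
print(qualifies_for_rai([3.2, 9.1]))   # True
print(qualifies_for_rai([3.2, 5.0]))   # False (best lesion: 1500 cGy)
```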
Interestingly, in the selumetinib redifferentiation trial, the 8 patients who reached the dosimetry threshold included all 5 patients with an NRAS mutation but only 1 patient with a BRAF mutation (112). This led to the hypothesis that MEK inhibitors may achieve only an incomplete blockade of MAPK signaling in BRAF-mutant tumors, which harbor a higher degree of pathway activation. This was therefore followed by four trials evaluating redifferentiation with BRAF inhibitors in BRAF-mutated RR-DTC.
First, Rothenberg and colleagues (113) enrolled ten patients with BRAF V600E-mutant RAI-refractory PTCs. Each patient received dabrafenib for 25 days, followed by a 131I whole-body scan (WBS). The 6/10 patients whose scans showed new sites of radioiodine uptake remained on dabrafenib for a total of 42 days, after which they received an empiric dose of 150 mCi of RAI. At 3 months, 2/6 patients had a PR and 4/6 patients had SD.
Similarly, Dunn and colleagues (114) studied redifferentiation therapy using vemurafenib in a cohort of 12 patients with BRAF-mutant RR-DTC, excluding OTCs. Patients were treated with vemurafenib for 4 weeks. 124I PET-CT lesional dosimetry was performed before and 4 weeks after the start of vemurafenib therapy. Patients in whom at least one index tumor (≥ 5 mm in maximal diameter) was predicted to absorb ≥ 2000 cGy with a clinically administered 131I activity of ≤ 300 mCi, identified as 124I responders, were subsequently treated with RAI while still on vemurafenib. 10/12 patients completed the 4-week course of vemurafenib, and 4 of them were 124I responders qualifying for RAI therapy. At 6 months, 2/4 patients had SD and 2/4 had a PR. Of these four patients, two required subsequent thyroid cancer treatment at 9 and 18 months, while the other two had not required further therapy at 22 and 33 months, suggesting prolonged benefit.
Weber and colleagues (115) performed a prospective phase 2 redifferentiation study in which 6 patients with BRAF-mutated RR-DTC were treated with dabrafenib + trametinib while 14 patients with BRAF wild-type tumors were treated with trametinib alone for 21 ± 3 days. Redifferentiation was achieved in 2/6 BRAF-mutated and 5/14 BRAF wild-type patients, all of whom received a dosimetry-guided therapeutic dose of RAI. At one year, response to therapy per RECIST 1.1 was PR in 1/7 patients, SD in 5/7 and PD in 1/7. Both BRAF-mutated patients had some decrease in tumor size following redifferentiation therapy (one PR and one SD).
Finally, Leboulleux et al. (116) recently published another prospective multicenter trial in which 21 patients with BRAF-mutated metastatic, progressive RR-DTCs were treated with dabrafenib and trametinib for 42 days and then received an empiric dose of 150 mCi of RAI irrespective of uptake on the diagnostic WBS. Only one patient had 131I uptake on the baseline diagnostic WBS, while 20 patients demonstrated uptake on the post-therapeutic WBS. Responses at six months were SD in 52% of patients, PR in 38% and PD in 10%, corresponding to a tumor control rate of 90%. Eleven patients with PR at 6 or 12 months were re-treated with a second course of dabrafenib + trametinib followed by RAI. Nine of the 10 evaluable patients in this group had abnormal 131I uptake on the second post-treatment WBS. At 6 months, 6/10 patients had a PR and 1/10 a CR. The 12-month PFS rate was 82.0% (95% CI 58.8-92.8). Notably, re-induction of 131I uptake and response rates following redifferentiation with dabrafenib and trametinib were higher in this study than reported by Weber et al. (115). Potential explanations for these differences include the longer duration of drug therapy (42 vs 21 days), higher dose of dabrafenib (150 mg vs 75 mg twice daily) and more limited tumor volume (no lesion larger than 3 cm), as well as the empiric treatment of all patients regardless of restoration of uptake on the diagnostic WBS in the trial by Leboulleux and colleagues.
Successful redifferentiation of RAS-mutant tumors with MEK inhibition in the selumetinib pilot study (112) also led to a phase 2 trial of the MEK 1/2 inhibitor trametinib for redifferentiation of RAS-mutant and RAS wild-type RR-DTCs. 15/25 patients in the RAS-mutant cohort met the dosimetry threshold for radioiodine therapy on 124I PET, 14 of whom received RAI. At 6 months, ORR was 32%, with 8 PRs (57%), 3 SDs (21%) and 2 PDs (14%). Six-month PFS in the RAS-mutant patients was 44%. In the RAS wild-type cohort (n=9), 3/4 patients with BRAF class II alterations and 1/4 patients with RET rearrangements qualified for RAI, with 3 SDs and 1 PR (in a patient with a BRAF-altered tumor) (117).
Additional retrospective studies have confirmed that redifferentiation represents a promising new therapeutic approach in patients with advanced RR-DTCs. Jaber et al. (118) described 13 patients with RR-DTC in whom targeted therapy with either a single-agent BRAF or MEK inhibitor, or the combination of dabrafenib and trametinib (in one patient), led to increased 131I uptake in 9/13 patients.

The concept of redifferentiation might also apply to tumors harboring alterations other than BRAF or RAS mutations. For instance, Groussin and colleagues (120) described one case of successful redifferentiation therapy with larotrectinib in a patient with metastatic PTC harboring an EML4-NTRK3 gene fusion. Similarly, restoration of radioiodine uptake in patients with RET fusion-positive RR-DTC has been reported following treatment with the selective RET inhibitors pralsetinib (121) and selpercatinib (122).
Thus, substantial data now show that mutation-guided MAPK pathway inhibition is an effective strategy to redifferentiate RR-DTCs (Table 3). However, the available trials are significantly heterogeneous in multiple respects, including the definition of radioiodine-refractory disease, inclusion criteria, duration of TKI therapy prior to RAI administration, choice of imaging modality to determine restoration of RAI uptake (124I PET/CT versus 123I scintigraphy) and dose of RAI (dosimetry-guided versus empiric). It also remains unclear whether increased uptake on a diagnostic WBS performed after treatment with the kinase inhibitor should be used as a criterion to select candidates for RAI administration. Therefore, more studies are needed to identify the optimal choice and duration of TKI before RAI, to better determine the characteristics of patients who are most likely to benefit from redifferentiation therapy, and to clarify the long-term risks as well as the duration of response to this therapeutic approach.
Future perspectives in radioiodine refractory DTC
When tolerated, TKIs can lead to a significant decrease in tumor size and may allow surgical resection of a previously inoperable tumor: this is referred to as neoadjuvant therapy. Most MKIs used in advanced DTC are anti-angiogenic and may thus lead to poor wound healing and fistula formation. These drugs therefore need to be discontinued several weeks before surgery, which complicates their use in the neoadjuvant setting. Nevertheless, case reports have been published in which MKIs, mostly lenvatinib (123) and sorafenib (124), were successfully used to achieve shrinkage of locally aggressive tumors invading major cervical vessels, allowing subsequent complete surgical resection. More recently, a systematic review of neoadjuvant targeted therapy in locally advanced thyroid cancer (125) reported an R0/R1 resection rate of 78.1% among 27 patients, across all thyroid cancer subtypes including ATC, MTC and PDTC. This review included 18 patients with DTC, all of whom were treated with non-selective TKIs with anti-VEGFR activity (anlotinib, lenvatinib, sorafenib). Despite this, no increased hemorrhagic risk during surgery was reported. To further explore this therapeutic avenue, a phase 2 multicenter clinical trial is currently examining the efficacy of neoadjuvant lenvatinib in patients with locally advanced DTC (NCT04321954).
Selective kinase inhibitors, on the other hand, have little to no anti-angiogenic activity, which makes them potentially safer in the neoadjuvant setting. A clinical trial of neoadjuvant use of the selective BRAF inhibitor vemurafenib in 17 patients with unresectable BRAF-mutated PTC has been reported (126). Eleven patients who completed the 56 days of treatment with vemurafenib underwent subsequent surgery: 8 had a complete resection (R0), while the remaining 3/11 patients had an incomplete resection leaving only microscopic residual disease (R1). One patient, whose tumor involved the carotid artery, had a fatal hemorrhage two weeks after surgery.
Although neoadjuvant use of targeted therapy is not standard in the management of locally advanced DTCs, this approach is promising and is being increasingly used in clinical practice, especially with the growing availability of specific kinase inhibitors. Nevertheless, more data is required to confirm the efficacy, safety, and long-term benefits of this treatment strategy. An ongoing trial looking at the use of neoadjuvant selpercatinib for locally advanced RET-altered thyroid cancers might help answer some of these concerns (NCT04759911).
Another major therapeutic avenue being explored for advanced radioiodine-refractory thyroid carcinomas resistant to existing treatments is chimeric antigen receptor T-cell (CAR-T) therapy. CAR-T cells are genetically engineered T cells that express a chimeric antigen receptor, which contains a single-chain variable fragment (scFv) responsible for antigen recognition and an intracellular signaling domain that initiates T-cell activation. CAR molecules can reprogram T cells to recognize and eliminate tumor cells expressing specific antigens (127, 128). CAR-T cells have demonstrated remarkable efficacy in hematological neoplasms and are currently being studied in various solid tumors. However, the use of CAR-T therapy is more challenging in solid tumors, due to an immunosuppressive tumor microenvironment that impedes the access of CAR-T cells to the tumor. Moreover, antigen selection in solid tumors can also be challenging, because many tumor antigens also have some low-level expression in normal tissues, exposing the patient to a risk of "on-target, off-tumor" toxicity (128). The TSH receptor, a well-known thyroid-specific antigen, appears to be a promising target for CAR-Ts in advanced DTCs in in vitro and mouse models (129). Moreover, a study assessing the safety and tolerability of autologous CAR-T cells targeting intercellular adhesion molecule-1 (ICAM-1) in advanced refractory poorly differentiated thyroid cancers is currently ongoing (NCT04420754).
Conclusion
Better understanding of the molecular mechanisms underlying thyroid cancer has revolutionized the treatment of advanced, radioiodine-refractory disease. Over the past decade, we have seen an expansion in the use of kinase inhibitors for advanced thyroid cancers, with the recent approval of six selective, less toxic, targeted agents. The increasing number of available drugs raises the question of the optimal treatment sequence, which remains to be defined. Moreover, although these drugs delay disease progression and shrink tumors, none has led to improved overall survival. For many of these agents, drug-related toxicity is non-negligible and can significantly alter quality of life. Furthermore, patients eventually develop resistance to these therapies and experience disease progression. Therefore, identification of the optimal timing for initiation of systemic therapy is crucial, taking into consideration disease burden and rate of progression, presence of symptoms, patient comorbidities, and the toxicity profiles of potential drugs. Given the limitations of currently available therapies, the search for a curative treatment for RR-DTC, with long-term persistent efficacy, continues.
Conflict of interest
RD has received grant funding from Eisai, Exelixis, AstraZeneca, and Merck and has participated in advisory boards for Exelixis and Bayer. MC has received grant funding from Genentech and Merck and has received consulting fees. NB has received grant funding from GSK, has received consulting fees from Eisai and Loxo and has participated in an advisory board for Exelixis. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
"year": 2023,
"sha1": "fc5abb9b3b69e2b77ad3b8a616029cb88ba7cf49",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "fc5abb9b3b69e2b77ad3b8a616029cb88ba7cf49",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Electro-optic controllable disordered photonic crystal for Anderson localization of light in real time
Anderson localization is a ubiquitous interference phenomenon in which waves fail to propagate in a disordered medium. Unlike in a classical resonator, satisfying the condition favorable for interference in a disordered medium is truly a statistical problem in physics. Recent progress in realizing Anderson localization has relied mainly on iterative optimization of the disordered medium. The availability of in-situ, active control over this optimization would pave the way to realizing Anderson localization and its applications. In this letter, we propose an electro-optic controllable disordered photonic crystal and demonstrate its performance in terms of Anderson localization of light, in situ and in real time, by application of an external electric field. We believe that Anderson localization in this medium is not only expected to address outstanding scientific questions but also to introduce an extra degree of freedom, i.e., tunability, into its technical applications.
The statistical distribution of phasors that maximizes the possibility of constructive interference unequivocally leads to a criterion, known as the Ioffe-Regel criterion [1,2], kℓ ~ 1, where k = 2π/λ is the wave number and ℓ is the scattering mean free path, defined as the average distance between any two consecutive scattering events. In the dilute limit of scatterers, the optical density 1/ℓ increases linearly with the particle volume fraction ∅, i.e., 1/ℓ ∝ ∅. It is observed [3,4] that, above a certain value of ∅, short-range interparticle correlation starts prevailing in the system and instead results in a decrease of the optical density 1/ℓ. Intuitively, there should be a region between the uncorrelated (dilute-limit) and correlated (high-concentration-limit) regimes where ℓ reaches a global minimum. Searching for this critical regime has been a major pursuit over the last few decades in demonstrating the Anderson localization of light [5-9], because the Ioffe-Regel criterion is most likely to be satisfied in this regime. Parameterizing this region of interest requires knowledge of the local structure factor S(q) of the disordered medium as well as of the relevant constraints, such as the absorption coefficient, coherent enhancement factor, internal reflection, etc. Note that, unless there is in-situ control, optimizing all these quantities experimentally requires multiple iterative steps. This limits the adaptivity and reproducibility of this localization technique for various photonic applications. The ability to tune the disorder, or in other words to search for the effective true statistics for constructive interference in situ, in real time, would make it possible to identify the localization regime even without solving the complicated light-diffusion equations. In this letter, we introduce in-situ control of disorder, for the first time for realizing Anderson localization of light, by application of an external electric field to a liquid crystal (LC) based disordered photonic crystal, and validate its ability in terms of the Anderson localization of light.

Fig. 1. Chiral-nematic liquid crystalline mesophase. a. Schematic diagram of the self-assembled periodic dielectric layers, which support extended Bloch waves; P: pitch length of the structural helix; WG: window glass encapsulating the liquid crystal sample; LCP: left circular polarization; RCP: right circular polarization. The chiral-nematic medium shown is a left-handed helix, so its polarization selectivity transmits RCP and reflects LCP when unpolarized light is shone on it. b. Schematic of the electric-field-induced distorted focal conic state, in which multiple light scattering can be expected from the random domains; L: thickness of the cell. c. A set of polarizing optical microscopy images (illumination at 488 nm using a lens of focal length f = 5 mm, NA = 0.41, acquired from a single cell of thickness L = 5 µm, containing 25 wt% R811, a chiral dopant, mixed in the host liquid crystal E44), showing how the focal conic domain sizes evolve with applied electric field (values shown on the respective images); scale bar 5 µm.
d. Transmission spectra of the same cell, revealing a photonic band gap (RCP 50%) at field E = 0; the shape of the transmission spectrum changes with increasing field, E > 0, and the overall transmission decreases because of multiple light scattering from random focal conic domains (as in Fig. 1b).

LC has been a very promising material for tunable optics since its glorious historical beginning. In particular, its electro-optic (e-o) control and self-assembly have made the material highly efficient for a large range of applications including displays, lasers, filters, wave plates, lenses, gratings, polarizers, photonic crystals and more [10]. The advantage of its self-assembly [11] is not only the reduced fabrication cost for various photonic applications, but also the enhanced monodispersity and reproducibility of a given structure factor S(q). The self-assembled structures of various LC mesophases and their e-o properties have been well studied [12,13]. Of interest is a mesophase, called chiral-nematic, prepared by mixing a soluble chiral dopant with a nematic LC, which is treated as a self-assembled, soft-matter photonic crystal. A complete photonic band gap appears in the visible spectral range due to selective Bragg reflection from a set of self-assembled periodic dielectric layers (Fig. 1a). Being an e-o material, its self-assembled structure factor S(q) can easily be deformed (Fig. 1c) by application of an external electric field [14-16]. The deformed state is known as the focal conic state (Fig. 1b), which scatters light strongly because multiple light scattering occurs from the set of random domains appearing in this state [16]. Here, we intend to exploit this electric-field-controlled focal conic state towards realization of the Anderson localization of light. To that end, we optimized the scattering strength of the focal conic state by varying the chiral concentration (Fig. 2a) and the electric field (Fig. 2b). We use this optimized mixture in the following section for quantifying the localization of light.
Scaling theory of localization predicts [17] that the transmission decays exponentially, i.e., $T \propto \exp(-L/\xi)$, with the thickness $L$ of the sample within the localization regime ($k\ell \sim 1$), whereas the decay is quadratic, i.e., $T \propto L^{-2}$, outside the regime. We use these predictions as one of the approaches to confirm the Anderson localization of light within our proposed e-o controllable disordered photonic crystal. Fig. 3.a shows the quadratic dependence of the transmission on the inverse thickness of the sample (the optimized one, ~5.2 wt% of chiral concentration) for a few demonstrative field values. Here, the quadratic behavior suggests that these field-induced disorders (1.5 V/µm and 2.9 V/µm) are not sufficient for light localization but only for light diffusion. The characteristic exponential decay is observed for 3.5 V/µm (Fig. 3.b), whereas the decay follows the exponential trend with large deviations for two other near-neighboring fields (3.2 V/µm and 3.8 V/µm). These collective results show the ability to tune the disorder of the soft-matter photonic crystal by application of an external electric field. Secondly, the exponential decay of transmission for the particular field value (3.5 V/µm) is likely to satisfy the criterion for Anderson localization. Comprehensively, at this optimized field value ($E_{opt}$ = 3.5 V/µm), the structure factor $S(q)$ of the deformable domains becomes highly favorable for maximizing the constructive interference among its statistically distributed random phasors: the realization of true statistics for localization.
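The distinction between the two regimes can be made quantitative by fitting measured transmission against thickness. The sketch below fits both candidate laws to synthetic data and compares their residuals; the data, the 2.4 µm localization length used to generate them, and the use of SciPy are illustrative assumptions, not the authors' analysis pipeline.

```python
# Hedged sketch: discriminating diffusive (T ~ L^-2) from localized
# (T ~ exp(-L/xi)) transmission by least-squares fitting. Synthetic data
# stand in for measured transmission; xi = 2.4 um mimics the localization
# length quoted in the text.
import numpy as np
from scipy.optimize import curve_fit

def diffusive(L, a):
    return a / L**2                      # quadratic decay outside the regime

def localized(L, a, xi):
    return a * np.exp(-L / xi)           # exponential decay inside the regime

L = np.linspace(3.0, 15.0, 12)           # cell thicknesses in um (synthetic)
T = 0.9 * np.exp(-L / 2.4) * (1 + 0.05 * np.random.default_rng(0).standard_normal(L.size))

for model, p0, name in [(diffusive, (1.0,), "diffusive"),
                        (localized, (1.0, 2.0), "localized")]:
    popt, _ = curve_fit(model, L, T, p0=p0)
    rss = np.sum((T - model(L, *popt)) ** 2)       # residual sum of squares
    print(f"{name}: params={popt}, RSS={rss:.3e}")  # smaller RSS wins
```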
The rate of the exponential decay of transmission is characterized by the localization length ($\xi$). Fig. 3.b shows a comparison between the rates of decay at three different fields. For $E$ = 3.5 V/µm, the value of $\xi$ is shortest (~2.4 µm). We image the bright spot that appears for each of these three field values (Fig. 3.c). The 3D surface profiles provide the spatial distributions of intensity for comparison (Fig. 3.d). It is observed that the tail of the distribution decays faster for $E$ = 3.5 V/µm. This in turn indicates (from the central limit theorem) that the associated number density of independent phasors (or summands) is higher. In other words, the chance of realizing the true statistics is highest at $E_{opt}$, whereas the chance is reduced gradually while moving away bi-directionally from this optimized field. Note that this bi-directional reduction is due to the e-o property of this material, which is discussed in the supplementary (S1).
Another notable approach for identifying the prerequisite of Anderson localization is employed here, based on far-field speckle statistics. The detailed development of the statistical analysis can be found elsewhere [18]. Briefly, near the localization regime the turbidity of the optical phase is higher, which is reflected in the statistical properties of the transmitted light in such a way that the far-field intensity fluctuations become larger. Here, we measure the far-field transmitted intensity (Fig. 4.a) along a line (line profile) parallel to the LC cell surface for various fields (see the supplementary for the setup). Two statistical parameters, the variance $\sigma^2$ and the mean $\langle I \rangle$ of the intensity distributions, are estimated for comparison.
The value of $\langle I \rangle$ is observed to be lower at $E_{opt}$, which is expected because, ideally, the light is localized within the sample at $E_{opt}$. Further, the value of $\sigma^2$ is found to be higher at $E_{opt}$, which is also expected as per the statistical signature of localization.
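A minimal way to extract these two statistics from a measured line profile is sketched below; the synthetic intensity trace and the fully developed (exponential) speckle model are illustrative assumptions, and the inverse contrast follows the definition $1/C = \langle I \rangle/\sigma$ used in the Fig. 4 caption.

```python
# Hedged sketch: mean, variance, and inverse contrast of a far-field
# speckle line profile. A fully developed speckle pattern (exponential
# intensity statistics) is simulated in place of measured data.
import numpy as np

rng = np.random.default_rng(1)
intensity = rng.exponential(scale=1.0, size=4096)  # stand-in line profile I(x)

mean_I = intensity.mean()                    # <I>
var_I = intensity.var()                      # sigma^2
inverse_contrast = mean_I / np.sqrt(var_I)   # 1/C = <I>/sigma

print(f"<I> = {mean_I:.3f}, sigma^2 = {var_I:.3f}, 1/C = {inverse_contrast:.3f}")
# For fully developed speckle, sigma = <I>, so 1/C ~ 1; stronger intensity
# fluctuations near localization push the contrast above unity.
```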
Finally, evaluation of the optical turbidity is done by measuring the coherent backscattering for $E_{opt}$ as well as for a near-neighboring electric field, 3.2 V/µm (Fig. 4.d). The detailed geometry of the system is illustrated in the supplementary. The values are found to be $k\ell \sim 1.9$ for $E_{opt}$ and $k\ell \sim 7.2$ for the near-neighboring field, obtained by measuring the inverse width of the backscattering cone, fitting the low-angle slope in the region of strong scattering [19]. Note that the minimum we obtained ($k\ell \sim 1.9$) from our experiment is off-centered from the global minimum ($k\ell \sim 1$) hypothesized for the Anderson localization of light. However, this difference lies well inside the range of values reported in the literature so far [5][6][7][8][9]. A few general reasons, mainly absorption loss [20][21][22][23][24] and internal reflections, are summoned most often to play the pivotal role in this difference. In our case, these possible reasons may be applicable as well. Moreover, from a statistical point of view, any disordered system always possesses some non-identical distributions, which limit the system from generating the true statistics. In our case, the probable origins of these non-identical distributions may be listed as inhomogeneity of the mixture, phase segregation, non-uniformity of the sample thickness and the corresponding dielectric coupling, etc. The presence of these most probable experimental factors can oppose the true statistical distribution of the phasors, even in our finely e-o controllable disordered photonic crystal.
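The conversion from cone width to $k\ell$ can be sketched as follows. The proportionality W ~ 0.7/(kℓ) for the full width at half maximum of the backscattering cone is a commonly quoted approximation and, together with the widths below, is an assumption for illustration rather than the fitting procedure of Ref. [19].

```python
# Hedged sketch: estimating k*l from the coherent-backscattering cone.
# A commonly quoted approximation is assumed: the cone full width at half
# maximum (in radians) scales as W ~ 0.7 / (k*l), so k*l ~ 0.7 / W.
# The widths below are hypothetical, chosen to reproduce the quoted values.

def kl_from_cone_width(fwhm_rad: float) -> float:
    """k*l estimated from the cone FWHM in radians."""
    return 0.7 / fwhm_rad

for fwhm_mrad in (368.0, 97.0):      # hypothetical widths -> k*l ~ 1.9 and ~ 7.2
    kl = kl_from_cone_width(fwhm_mrad * 1e-3)
    print(f"FWHM = {fwhm_mrad} mrad -> k*l ~ {kl:.1f}")
```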
Fig. 4 (caption fragment): c. The inverse contrast $1/C = \langle I \rangle/\sigma$ of the transmission profile for each field is plotted. d. The turbidity is estimated from the coherent backscattering to derive $k\ell$.

In conclusion, we introduce an electric-field-controllable disordered photonic crystal for realizing Anderson localization of light. The extent of the localization is analyzed from multiple approaches: firstly, a scaling-theory-based approach, which predicts exponential decay of transmission with sample thickness
within the localization regime. Secondly, we use the central limit theorem in statistics for analyzing the images of the localized spots, followed by the far-field speckle statistics of the transmitted intensity and, finally, the measurement of coherent backscattering. We found that the Ioffe-Regel criterion, $k\ell \sim 1$, for localization is closely satisfied in our proposed e-o controllable disordered photonic crystal. We believe that Anderson localization using this material is expected not only to address the scientific rigors but also to introduce an extra degree of freedom, i.e., tunability, in its technical applications.
Cell fabrication: All the liquid crystal cells used here are custom designed. No alignment layer is used on the glass substrates, in order to exploit the fullest randomness of the director orientation of the liquid crystal. Maintaining and evaluating the thickness of the cell is very important for the characteristic transmission vs. thickness plot. In order to maintain the uniformity of the thickness, spacer particles (of the required thickness) are dispersed (0.1 wt%) in isopropanol and the mixture solution is spin coated at 3000 rpm on the glass substrates in order to obtain a uniform distribution. After spin coating, the substrates are kept in an oven at 150 °C for 30 min to evaporate the isopropanol. These substrates are then used to make a cell into which the chiral-nematic LC mixture is infiltrated. Note that all the mixtures are infiltrated at the isotropic temperature of the liquid crystal (~113 °C) to maintain the homogeneity of the mixture. The effective thickness of a mixture sample is determined from the Fabry-Perot interference fringes of the cell after filling the mixture. Driving field: It is instructive to apply a continuous RF wave to a liquid crystal cell in order to avoid the influence of ionic currents [12,13] inside the cell, which can damage the cell and the liquid crystal. Because of that, we applied a continuous RF wave with a frequency of 1 kHz. The value of this frequency was kept constant during the entire experiment. For the transmission vs. applied field measurement (Fig. 2b), the readings are taken while incrementing the field gradually. This protocol is, in fact, followed during the entire study in order to avoid any hysteresis (memory) effect [14] related to reverse voltage scanning. | 2019-01-21T22:26:42.000Z | 2019-01-21T00:00:00.000 | {
"year": 2019,
"sha1": "8692594dc841230f17df95762cfbb7c1873adf59",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8692594dc841230f17df95762cfbb7c1873adf59",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
12044634 | pes2o/s2orc | v3-fos-license | Rapid identification of allergenic and pathogenic molds in environmental air by an oligonucleotide array
Background Airborne fungi play an important role in causing allergy and infections in susceptible people. Identification of these fungi, based on morphological characteristics, is time-consuming, expertise-demanding, and could be inaccurate. Methods We developed an oligonucleotide array that could accurately identify 21 important airborne fungi (13 genera) that may cause adverse health problems. The method consisted of PCR amplification of the internal transcribed spacer (ITS) regions, hybridization of the PCR products to a panel of oligonucleotide probes immobilized on a nylon membrane, and detection of the hybridization signals with alkaline phosphatase-conjugated antibodies. Results A collection of 72 target and 66 nontarget reference strains were analyzed by the array. Both the sensitivity and specificity of the array were 100%, and the detection limit was 10 pg of genomic DNA per assay. Furthermore, 70 fungal isolates recovered from air samples were identified by the array and the identification results were confirmed by sequencing of the ITS and D1/D2 domain of the large-subunit RNA gene. The sensitivity and specificity of the array for identification of the air isolates were 100% (26/26) and 97.7% (43/44), respectively. Conclusions Identification of airborne fungi by the array was cheap and accurate. The current array may contribute to deciphering the relationship between airborne fungi and adverse health effects.
Background
Fungi are widely distributed in the natural environment. Fungal spores can be easily dispersed into the air and may cause serious health problems. Exposure to fungal spores can cause a wide spectrum of allergenic reactions, such as asthma, and infections in susceptible individuals [1][2][3][4]. Asthma prevalence has considerably increased in recent decades such that it is now one of the most common chronic disorders in the world [5][6][7]. Some severe diseases, such as allergic bronchopulmonary aspergillosis and fungal sinusitis, may be found in susceptible or immunocompromised individuals through mold exposure [8,9]. The predominant genera of airborne fungi causing health concern are Alternaria, Aspergillus, Cladosporium, and Penicillium [4].
In order to decipher the relationship between fungi and potential fungal infection, it is imperative to establish methods that can accurately identify airborne fungi to the species level and that can be easily followed. Conventional methods for fungal identification are primarily based on morphological and physiological tests [10]. These tests often require several days or even weeks, and the results can be inconclusive or inaccurate [11]. Even for a mycologist, the identification of airborne fungi to the species level can be challenging, due to the high taxonomic divergence of these microorganisms. In recent years, numerous DNA-based methods have been developed to identify a variety of medically important fungi [12]. The rRNA genes have been extensively used as the targets for molecular identification [12,13]. These methods include DNA probes [14], PCR-restriction enzyme analysis [15], real-time PCR [16], and DNA sequencing [17,18]. PCR techniques are particularly promising because of their simplicity, sensitivity, and specificity. However, these methods can identify only one or a limited number of species at a time.
A variety of DNA array methods, having the capacity to simultaneously identify multiple targets, have been developed to identify pathogenic fungi [19][20][21][22][23][26][30] with high sensitivity and specificity. In contrast, reports using the array platform to detect airborne fungi are very limited, and so far only one study, using nonspecific probes, has been reported [24]. In our previous studies, oligonucleotide probes designed from the internal transcribed spacer (ITS) regions were developed to identify a wide variety of pathogenic molds [19,26] and yeasts [30], including some airborne species. The aim of this study was to expand the probe panel to identify 21 airborne fungal species (13 genera) that may cause health problems in susceptible persons.
Fungal strains
A total of 73 target strains (strains we aimed to identify) representing 21 species (13 genera) ( Table 1) and 66 nontarget strains (66 species, additional file 1) were used in this study. These strains were obtained from the Bioresources Collection and Research Center (BCRC, Hsinchu, Taiwan), the American Type Culture Collection (ATCC, Manassas, Virginia, USA), and Centraalbureau voor Schimmelcultures (CBS, Utrecht, The Netherlands).
DNA extraction
Mycelia (approximately 0.5 × 0.5 cm) grown on Sabouraud dextrose agar were transferred into a 2-ml screw cap tube (Axygen Scientific, Union City, California, USA) containing 300 mg of zirconium/silica beads (0.5 mm in diameter; Biospec Products, Bartlesville, Oklahoma, USA) in 1 ml of sterilized saline. The mycelial suspension was shaken in a cell disrupter (Mini-Beadbeater, Biospec Products) for 5 min at a speed of 4,200 rpm. A 0.1-ml aliquot of the disrupted cell suspension was transferred to a 1.5-ml centrifuge tube and centrifuged at 8,000 × g for 10 min. Fungal DNA in the supernatant was extracted by a DNA extraction kit (Viogene, Taipei, Taiwan) following the manufacturer's instructions [19].
Design of oligonucleotide probes
Species-or group-specific oligonucleotide probes (18-to 30-mers) were designed from the ITS 1 or ITS 2 regions based on sequences in the GenBank database (Table 2) or on sequences determined in this study. The positive control probe was designed from a conserved region in the 5.8S rRNA gene [26]. The designed probes were checked for melting temperature, secondary structure, and GC content by using the Vector NTI Advance 9 (Invitrogen, Carlsbad, California, USA), and checked for potential cross-reactivity with other species in GenBank by using the BLASTN program. A total of 25 probes, including 11 previously described ones [19,26,30], were used to fabricate the oligonucleotide array on nylon membrane. Ten bases of thymine were added to the 3' ends of probes that exhibited weak hybridization signals after preliminary testing [27]. An irrelevant oligonucleotide (16 bases) labelled with a digoxigenin molecule at the 5' end was used as a position marker on the array.
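As a rough illustration of the kind of screening described above, the sketch below computes GC content and a simple melting-temperature estimate for a candidate probe. The Wallace rule used here is a common approximation for short oligonucleotides and stands in for the Vector NTI checks; the probe sequence is hypothetical, not one from Table 2.

```python
# Hedged sketch: basic quality checks for a candidate oligonucleotide probe.
# GC content and the Wallace-rule Tm (2*(A+T) + 4*(G+C), a rough estimate
# for short oligos) stand in for the commercial design-software checks.

def gc_content(seq: str) -> float:
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

def wallace_tm(seq: str) -> float:
    seq = seq.upper()
    at = sum(base in "AT" for base in seq)
    gc = sum(base in "GC" for base in seq)
    return 2.0 * at + 4.0 * gc

probe = "ACGTTGCAGGTCATTCAGGAACT"   # hypothetical 23-mer, not a probe from Table 2
print(f"length={len(probe)}, GC={gc_content(probe):.1f}%, Tm~{wallace_tm(probe):.0f}C")
```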
Fabrication of arrays
The array (0.8 × 0.7 cm) contained 72 dots (9 by 8 dots), including 50 dots for species identification (duplicate dots of each of the 25 fungal-specific probes), 5 dots for negative control (probe code NC, tracking dye only), 2 dots for positive control (probe code PC), and 15 dots for the position marker (probe code M) ( Figure 1). The oligonucleotide probes (10 μM) were drawn into wells of 96-well microtiter plates, and spotted onto positively charged nylon membrane (Roche, Mannheim, Germany) as described previously [28]. The layout of all probes on the array is shown in Figure 1.
Array hybridization
The ITS region of a fungus was amplified by PCR using the forward primers (ITS1 and ITS5) and reverse primer (ITS4) as described in the previous section, with each primer being labelled with a digoxigenin molecule at the 5' end. The reagents and procedures for prehybridization, hybridization (55°C for 90 min), and color development using enzyme-conjugated anti-digoxigenin antibodies were previously described [28]. The hybridized spots (400 μm in diameter) could be read by the naked eye. A strain was identified as one of the species listed in Table 1 when both the positive control probe and the species-specific probe (or at least one of the multiple probes designed for a species) were hybridized ( Table 2). The images of the hybridization pattern were captured by a scanner (PowerLook 3000; UMAX, Taipei, Taiwan).
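The decision rule in this paragraph, namely that a species is called only when the positive control and at least one of its species-specific probes both hybridize, can be written compactly. In the sketch below, the probe-to-species mapping is a hypothetical fragment for illustration, not the full Table 2.

```python
# Hedged sketch: interpreting an array hybridization pattern. A strain is
# assigned to a species when the positive control (PC) spot and at least
# one of that species' probes are hybridized. The mapping is a toy fragment.
SPECIES_PROBES = {
    "Aspergillus fumigatus": ["Afum1"],
    "Aureobasidium pullulans": ["Apul1", "Apul2", "Apul3"],  # multiple probes per species
}

def identify(hybridized_spots: set[str]) -> list[str]:
    if "PC" not in hybridized_spots:          # assay invalid without positive control
        return []
    return [species for species, probes in SPECIES_PROBES.items()
            if any(p in hybridized_spots for p in probes)]

print(identify({"PC", "Apul2"}))   # -> ['Aureobasidium pullulans']
print(identify({"Afum1"}))         # -> [] (no positive control; assay invalid)
```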
Isolation and identification of airborne fungi
Air samples were collected from three places (one hospital, one research laboratory, and one government office). The QuickTake 30 BioStage Pump kit (SKC Inc., Eighty Four, Pennsylvania, USA) was used to collect air samples. Fungal spores were collected on malt extract agar and Sabouraud dextrose agar for 3 min at a flow rate of 28.3 L/min. Spores trapped on agar plates were grown at 25°C for 3-7 days. Seventy colonies grown on agar plates were selected for identification by the array. The species names of the fungi identified by the array were further verified by sequencing of the ITS region and the D1/D2 domain of the large-subunit RNA gene; sequences in the two regions have been found to be highly specific for a fungal species [29,30]. Species were identified by searching databases using the BLAST sequence analysis tool of the National Center for Biotechnology Information. If the result of array hybridization was in accordance with that of either ITS or D1/D2 domain sequencing, the identification made by the array was considered to be correct.
Determination of detection limit
Detection limit was the lowest amount of fungal DNA that could be detected by the array. Serial 10-fold dilutions of DNAs of Aspergillus fumigatus BCRC 30502 and A. versicolor BCRC 31488 were used to determine the detection limits.
Probe design
Initially, about 100 probes (data not shown) were designed to identify the 21 species listed in Table 1. Through extensive screening, many probes were found to cross-react with heterologous species or to produce weak hybridization signals with homologous species. Finally, 25 probes were selected for fabrication of the array (Table 2); these probes included 11 oligonucleotides published in our previous studies [19,26,30]. One or multiple probes were designed to identify a single species, depending on the availability of divergent sequences in the ITS region (Table 2). For most species, a single probe was enough to identify an individual microorganism. However, one probe (code Chcgf1) was used to identify a group of three closely related species (Chaetomium cochlioides, C. globosum, and C. funicola) due to the high interspecies similarities of the ITS sequences among these species. Conversely, some fungi displayed high intraspecies sequence divergence in the ITS regions, and hence multiple probes were constructed to identify a single species. For example, two probes were used to identify each of the following species: Mucor racemosus, Penicillium corylophilum, Scopulariopsis chartarum, and Stachybotrys chartarum, and three probes were synthesized to identify Aureobasidium pullulans (Table 2).
Sensitivity and specificity of the array
A total of 139 reference strains, including 73 target and 66 nontarget strains, were analyzed by the array. The hybridization patterns of different fungal species are shown in Figure 2. Of the 73 target strains, 72 (98.6%) were correctly identified to the species or group level by the array, with one strain (Trichoderma viride BCRC 32054) not identified (only the positive control was hybridized). Discrepancy analysis revealed that strain BCRC 32054 had ITS 1 and ITS 2 sequence similarities of 100% with Trichoderma harzianum, while the corresponding sequence similarities were only 78.7% and 87.2%, respectively, with a reference sequence of Trichoderma viride in GenBank (accession no. X93978). It was obvious that Trichoderma viride BCRC 32054 was a misidentification of Trichoderma harzianum, a nontarget species in this study. Therefore, the sensitivity of the array was 100% (72/72). In addition, a collection of 66 nontarget strains (66 species) was used for specificity testing of the array (additional file 1). No cross-hybridization was observed for any strain analyzed, and a specificity of 100% (66/66) was obtained.

[Table 2 footnotes: a. Oligonucleotide probes are arranged on the array as indicated in Figure 1. b. Multiple bases of thymine, indicated by "t", were added to the 3' end of the probe; the underlined nucleotide indicates a single mismatch base that was intentionally incorporated into the probe to avoid cross-hybridization. c. The location of the probe is shown by the nucleotide number of either ITS 1 or ITS 2; the number (1 or 2) in parentheses indicates the ITS region from which the probe was designed. d. Probe modified from a previous study [19]. e. Probes designed in a previous study [19]. f. Probe designed in a previous study [26]. g. The positive control probe was designed from a conserved region of the 5.8S rRNA gene [30].]
Detection limit of the array
Serial 10-fold dilutions of DNAs extracted from two strains (Aspergillus fumigatus BCRC 30502 and A. versicolor BCRC 31488) were used to determine the detection limits. For both strains, the detection limit of the array was 10 pg genomic DNA per assay; this amount of DNA was approximately equal to 270 cells (37 fg of DNA per cell of Candida albicans) [13].
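The cell-equivalent figure quoted above follows from simple division, as the sketch below shows; the 37 fg-per-cell genome mass is the value cited from Ref. [13].

```python
# Hedged sketch: converting the 10 pg detection limit into a cell
# equivalent using the cited genome mass of ~37 fg per Candida albicans cell.
detection_limit_pg = 10.0
genome_mass_fg = 37.0

cells = (detection_limit_pg * 1e-12) / (genome_mass_fg * 1e-15)
print(f"~{cells:.0f} cell equivalents")   # ~270
```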
Identification of fungal strains isolated from air samples
The array was used to identify 70 fungal isolates recovered from air samples in three buildings, including one hospital (24 strains), one research laboratory (11 strains), and one office (35 strains). Among the 70 strains, 27 were identified to the species level by the array, and 43 strains were not identified (nontarget species). The identified airborne fungi were Aspergillus fumigatus and A. versicolor (from the hospital), A. niger and A. versicolor (from the laboratory), and Alternaria alternata, Aspergillus flavus, Cladosporium cladosporioides, and Penicillium chrysogenum (from the office) (Table 3). Among the 27 strains identified by hybridization, 26 were correctly identified, as revealed by their morphological characteristics and sequencing of the ITS and D1/D2 domains of the rRNA operons (Table 3). One strain (no. 12) was misidentified as Acremonium strictum by the array, since the ITS sequences demonstrated that the strain was Acremonium implicatum, a nontarget species. The remaining 43 non-identified strains from air samples belonged to nontarget species, as evidenced by their ITS and D1/D2 sequences (additional file 2). Some nontarget strains were only identified to the genus level by DNA sequencing, since there were no corresponding ITS or D1/D2 sequence entries in the public database. Based on these results, the sensitivity and specificity of the array for identification of airborne fungi were 100% (26/26) and 97.7% (43/44), respectively. Among the 70 isolates recovered from air samples, 26 (37.1%) have the potential to cause allergy or adverse health problems in susceptible individuals.
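The performance figures above reduce to standard confusion-matrix ratios; the sketch below reproduces them from the reported counts for the air-isolate evaluation.

```python
# Hedged sketch: sensitivity and specificity from the reported counts.
# Air-isolate evaluation: 26 target strains all identified (TP=26, FN=0);
# 44 nontarget strains, of which 1 was misidentified (TN=43, FP=1).
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

print(f"sensitivity = {sensitivity(26, 0):.1%}")   # 100.0%
print(f"specificity = {specificity(43, 1):.1%}")   # 97.7%
```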
Discussion
In this study, an oligonucleotide array was developed to identify 21 species of airborne fungi that are of health concern (Table 1). High sensitivity and specificity of the array were demonstrated by testing a collection of 138 reference strains and 70 isolates from air samples. Compared with a glass chip, the current membrane array is relatively simple and time-saving, and the test cost is quite low. In addition, only minimal instrumentation (a shaker and an incubator) is required for hybridization. The whole procedure for fungal detection by the array can be finished within a working day (8 h), starting from isolated colonies. The prominent feature of the current method is the use of a standardized protocol encompassing DNA extraction, ITS amplification, and membrane hybridization. The examination of fungal reproductive structures, which is essential for classical identification, is not required by the present method.

Figure 1. Layout of oligonucleotide probes on the array (0.8 × 0.7 cm, 9 by 8 dots). The probe "PC" was a positive control and the probe "NC" was a negative control (tracking dye only). The probe "M", a position marker, was an irrelevant probe labeled with a digoxigenin molecule at the 5' end. The corresponding species names and sequences of all probes are listed in Table 2. All probes used for fungal identification were spotted on the array in duplicate.
In this study, one or multiple probes were designed to identify a single species, depending on the availability of divergent sequences in the ITS region (Table 2). The advantage of using multiple probes is the increased coverage of different strains of a species, but the disadvantage is the potential decrease of specificity due to unpredictable cross-hybridizations caused by other irrelevant fungal species. The Tm (melting temperature) values of probes used in this study ranged from 52.6 to 66.2°C, with some probes having Tm values lower than the hybridization temperature (55°C) (Table 2). However, clear signals were obtained for all target strains tested (Figure 2). Volokhov et al. [31] also reported the successful use of probes having Tm values lower than the hybridization temperature for bacterial identification. The addition of several thymine bases to the end of a probe (Table 2) had the benefit of reducing steric hindrance between the target DNA and the probe immobilized on a solid support [27]. In a previous study, an array targeting the 18S rRNA gene was developed to identify airborne fungi [24]. Since the 18S rRNA genes are highly conserved in closely related species, species identification there was based on hybridization patterns involving a combination of multiple probes rather than on species-specific probes [24]. The advantage of the current study is that all species were discretely identified by specific probes and the reading of results is very straightforward. The successful design of different probes was based on the known ITS sequences (Table 2), and multiple sequence alignment (interspecies and intraspecies) played an important role in finding the regions that could be utilized for probe design. The present array is a powerful tool for the identification of important airborne fungi that may cause health problems in susceptible individuals. The array has the potential to be continually extended by including more probes, without significant increase in cost or complexity. The current method permits a shorter time to achieve results as well as the correct identification of morphologically indistinguishable species. Furthermore, the array was able to identify multiple fungal species at the same time, as demonstrated in Figure 2 (the last two arrays). The DNAs of colonies from two (Acremonium strictum and Stachybotrys chartarum) or three species (Scopulariopsis chartarum, Stachybotrys chartarum, and Trichoderma viride) on agar plates were extracted in a tube, amplified by PCR, and hybridized to an array. All individual species were simultaneously identified. Furthermore, we also tried to directly detect fungi by trapping airborne spores in a buffer, followed by centrifugation, DNA extraction, and array hybridization. However, compared with culture, the direct method was less sensitive, and this might be due to the limited number of spores collected (data not shown).

Figure 2. Hybridization patterns of different fungal species. Oligonucleotide probes are arranged as indicated in Figure 1, and the corresponding sequences of the hybridized probes are shown in Table 2. All probes used for fungal identification were spotted on the array in duplicate. The last two panels show the simultaneous hybridization of two (Acremonium strictum and Stachybotrys chartarum) and three species (Scopulariopsis chartarum, Stachybotrys chartarum, and Trichoderma viride), respectively, on a single array.
It is anticipated that, by improving the sampling method and DNA extraction efficiency and by using nested PCR [26], the current array may have the potential to directly detect airborne fungi without an initial cultivation step. A possible method would be to perform PCR directly on the collected fungal spores, omitting the DNA extraction step that may lead to a significant loss of DNA for a very small sample. However, further investigation is needed to verify this hypothesis.
Conclusions
Identification of airborne fungi by the array is highly reliable and accurate. The method can be used as an effective alternative to conventional identification methods. The current array can greatly contribute to deciphering the relationship between airborne fungi and adverse health effects.
"year": 2011,
"sha1": "341fde414b19b45b28329d9a631f82e4a6fdae6a",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/1471-2334-11-91",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "341fde414b19b45b28329d9a631f82e4a6fdae6a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
265854623 | pes2o/s2orc | v3-fos-license | The Nevus Lipomatosus Superficialis of Face: A Case Report and Literature Review
Nevus lipomatosus superficialis (NLS) is a hamartoma of adipose tissue, rarely reported in the past 100 years. We treated one case, and we conducted a systematic review of the literature. A 41-year-old man presented with a cutaneous multinodular lesion in the posterior region near the right auricle. The lesion was excised and examined histopathologically. To review the literature, we searched PubMed with the keyword “NLS.” The search was limited to articles written in English and whose full text was available. We analyzed the following data: year of report, nation of corresponding author, sex of patient, age at onset, duration of disease, location of lesion, type of lesion, associated symptoms, pathological findings, and treatment. Of 158 relevant articles in PubMed, 112 fulfilled our inclusion criteria; these referred to a total of 149 cases (cases with insufficient clinical information were excluded). In rare cases, the diagnosis of NLS was confirmed when the lesion coexisted with sebaceous trichofolliculoma and Demodex infestation. Clinical awareness for NLS has increased recently. NLS is an indolent and asymptomatic benign neoplasm that may exhibit malignant behavior in terms of huge lesion size and specific anatomical location. Early detection and curative treatment should be promoted.
Introduction
Nevus lipomatosus superficialis (NLS) is a benign cutaneous hamartoma in which mature adipose tissue is deposited ectopically between collagen bundles in the dermal layer, as observed on histopathological examination. It was first reported by Hoffmann and Zurhelle in 1921 [1]. The lesions have two clinical manifestations: (1) the classical type is a zosteriform pattern consisting of nontender, soft, skin-colored or yellow papules, nodules, or plaques that are present at birth or appear during the first three decades of life, and (2) the solitary type is a single sessile, dome-shaped papule or nodule that appears in the third to sixth decades of life [2]. Malignant transformation has not been reported. Surgical excision is curative, and no recurrence has been reported. We describe a case of NLS that manifested with an unusual combination of histopathological findings of sebaceous trichofolliculoma and Demodex infestation. We then discuss the findings of a literature review and summarize the clinical manifestations.
Case
The patient, a 41-year-old man, presented with a 35 × 20 mm, cerebriform nodular skin lesion in the posterior region near the right auricle, and satellite papules on the posterior surface of the auricle; these asymptomatic lesions had been present since the age of 14 years. The lesions became prominent after the age of 16 years and gradually enlarged after the age of 29 years. The patient sought treatment for cosmetic reasons alone. Only the main lesion,
including the subcutaneous tissue, was excised elliptically. To achieve approximation of both skin margins, the skin flaps were advanced after the undermining procedure. Preoperative and postoperative findings are illustrated in ►Fig. 1. By 3 months after surgery, the lesion had not recurred.
Gross and Histopathological Findings
The excised skin specimen (52 × 23 mm) contained a 25 × 25 mm skin-colored nodule with a cerebriform surface. Serial sectioning revealed yellowish-white contents, lobulation, and deposition of adipose tissue in the dermis (►Fig. 2). Histopathological examination revealed that the nodule was composed of folliculosebaceous material and that groups of mature adipose tissue occupied the whole dermis (►Fig. 3A, B). Demodex mites were also present in follicles and sebaceous glands with perifollicular inflammation (►Fig. 3C). The pathological diagnosis was NLS with sebaceous trichofolliculoma and Demodex infestation with inflammation.
Literature Review
The number of published articles about NLS has increased significantly since 2012 (►Fig. 4). Awareness of NLS has probably increased in many countries. We conducted a literature review, for which we searched PubMed with the keyword "NLS." The search was limited to articles written in English and whose full text was available. We analyzed the following data: year of report, nation of corresponding author, sex of patient, age at onset, duration of disease, location of lesion, type of lesion, associated symptoms, pathological findings, and treatment. Among the countries of the authors (►Fig. 5), India, the Republic of Korea, and the United States were most common. In total, 112 articles about 149 cases were selected from the PubMed search. Of the 149 patients, 78 were male and 71 were female. We found no sex difference in the prevalence of NLS. In 114 cases, the lesions were the classical type; in 33 cases, the solitary type. Two patients had both classical and solitary lesions (►Fig. 6).
The lesions were located in five regions: head and neck, trunk, pelvis and genitalia, upper extremities, and lower extremities. Classical lesions were predominantly on the pelvis and trunk, and solitary lesions were frequently on the pelvis and lower extremities (►Fig. 6).
With regard to onset, findings in previous reports [2] were consistent with ours: the classical type usually was present at birth or appeared in the first three decades of life, and the solitary type appeared in the third to fifth decades of life (►Fig. 7). In 28 patients (18%), classical and solitary lesions were congenital.
"Duration of the disease" means the time interval between the patient's awareness of the lesion's presence and the visit to the clinic.Among durations of less than 20 years, five were most common (►Fig.8).
Discussion
In the United States, the Food and Drug Administration defines a rare disease as any disease that affects fewer than 200,000 Americans. In Europe, a disease is defined as rare if it affects fewer than 1 per 2,000 people [3]. Until now, no more than 200 cases of NLS have been reported, according to our PubMed search; thus, NLS could be classified as a rare disease. Data on prevalence will be required in the near future.
Three patients with NLS had relatives with the condition [4,5]. In one case of NLS, the patient had a 2p24 deletion; the authors suggested that the chromosomal abnormality played a role in NLS [6]. NLS has been described as asymptomatic. According to our literature review, however, 10 patients (6.7%) presented with symptoms such as itching, pain, foul odor, and rhinitis. One patient had dyspnea as a result of an intranasal lesion. According to another report, compressive neuropathy of the ulnar nerve was caused by firm subcutaneous nodules of thickened fibrotic dermis containing adipose tissue [7]. Systemic abnormalities and malignant changes have not been associated with NLS.

Fig. 5 The 112 selected articles concerned cases reported from all continents and 26 nations. We found no geographic preponderance.

Fig. 6 Nevus lipomatosus superficialis (NLS) has two manifestations: classical and solitary. (A) Among the selected articles, the classical type was reported more commonly than the solitary type. Two patients had both classical and solitary lesions. (B) The lesions were located in five regions: head and neck, trunk, pelvis and genitalia, upper extremities, and lower extremities. Classical lesions appeared predominantly on the pelvis and trunk. The solitary type appeared most frequently on the pelvis and lower extremities.

Fig. 7 Onset of lesion. The classical type of nevus lipomatosus superficialis (NLS) usually was present at birth or appeared in the first three decades of life. The solitary type appeared in the third to fifth decades of life. In 28 cases (18%), the classical and solitary types were congenital.
Fig. 8 "Duration of the disease" means the time interval between the patient's awareness of the lesion's presence and the visit to the clinic.Among durations of less than 20 years, five durations were most common.This suggested that nevus lipomatosus superficialis was typically present for more than 5 years and that the lesion increased gradually in size.
Several combinations of lesions have been reported. Comedones, abnormal hair growth or alopecia, and acanthosis most frequently accompanied NLS. Combinations of NLS with trichofolliculoma, folliculosebaceous hamartoma, or perifollicular fibroma were rare. In two cases, NLS was accompanied by intramuscular lipoma [8,9]. In one case, NLS was accompanied by a polypoid type of basal cell carcinoma of the scalp [10]. In our patient, NLS and sebaceous trichofolliculoma were present; in our literature review, we found only one report in which NLS was accompanied by a sebaceous trichofolliculoma [11]. Sebaceous trichofolliculoma has hair follicles with sebaceous gland components, which are epithelial components. However, there are two other, similar lesions that should be differentiated from sebaceous trichofolliculoma histopathologically: trichofolliculoma, which has only a follicular structure, and folliculosebaceous hamartoma, which contains not only epithelial components of follicles and sebaceous components but also mesenchymal structures of vessels, cartilage, and other tissues.
Plastic surgeons are not familiar with NLS. It is often misdiagnosed as a common skin lesion such as soft fibroma, skin tag (acrochordon), neurofibroma, or lipoma. The differential diagnosis of the classical type includes mucinous nevus [12], sebaceous nevus, hemangioma, hairy nevus, lymphangioma, and focal dermal hyperplasia (Goltz syndrome). The solitary type is often confused with skin tags, solitary neurofibromas, and solitary lipofibromas [13]. Treatment is usually not necessary except for cosmetic reasons. Surgical resection is curative, and no recurrence has been reported. Other treatment modalities have been attempted: CO2 laser treatment [14,15], cryotherapy [16], intralesional phosphatidylcholine injection [17], topical corticosteroid application [18], and electrodesiccation [19]. However, when the lesion is large, reconstructing the defect after lesion removal is difficult. Physicians should be aware of this rare disease because early detection allows for less invasive resection of the lesion and more conservative reconstruction of the defect.
Fig. 1 Preoperative appearance. The lesion (3 × 1.5 cm) was noted in the posterior region near the right auricle over the auriculotemporal sulcus. (A) An elliptical excision line was designed. (B) After surgery, the skin margins of the excised wound were approximated and well healed by postoperative day 15.
Fig. 2 Gross photographs of serial sections, showing skin-colored pedunculated multiple nodules with cerebriform surfaces (A) and yellowish-white lobulated nodules and underlying adipose tissue in the dermis (B).
Fig. 4 A case of nevus lipomatosus superficialis (NLS) was first reported in 1921. We analyzed 112 articles identified in the PubMed search. The number of reports of NLS has increased gradually since 2003. | 2023-12-07T16:17:32.859Z | 2023-08-03T00:00:00.000 | {
"year": 2024,
"sha1": "fedc75efd15571e6240c1e3e5e4373ef440c4b23",
"oa_license": "CCBY",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/a-2222-1226.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "26b5ecaaa0d81fe5679da122c1208c1d309d26cf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119172437 | pes2o/s2orc | v3-fos-license | On a generalization of the three spectral inverse problem
We consider a generalization of the three spectral inverse problem: for a given spectrum of the Dirichlet-Dirichlet problem (the Sturm-Liouville problem with Dirichlet conditions at both ends) on the whole interval $[0,a]$, parts of the spectra of the Dirichlet-Neumann and Dirichlet-Dirichlet problems on $[0,a/2]$, and parts of the spectra of the Dirichlet-Neumann and Dirichlet-Dirichlet problems on $[a/2,a]$, we find the potential of the Sturm-Liouville equation.
Introduction
The theory of direct and inverse spectral problems is built on the classical results of Yu. M. Berezansky, V. A. Marchenko, M. G. Krein, and B. M. Levitan (see [1], [15], [12], [14]). The so-called half-inverse problem and the three spectral problem are branches of this theory.
In the present paper we show the relations between the three spectral problem and the Hochstadt-Lieberman problem. We use the same equation (see equation (2.9)) and similar methods for recovering the potential in these problems.
It is known [18] that if the half-interval Dirichlet-Dirichlet spectra $\{\nu_k^{(1)}\}$ and $\{\nu_k^{(2)}\}$ do not intersect, then these three spectra uniquely determine the potential q.
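For reference, the following LaTeX sketch spells out the boundary-value problems behind the five spectra named in the abstract. The pairing of $\mu_k^{(j)}$ with the Dirichlet-Neumann problems and $\nu_k^{(j)}$ with the half-interval Dirichlet-Dirichlet problems is inferred from the notation used below, and the exact normalization of the equation is an assumption.

```latex
% Sturm--Liouville equation: -y'' + q(x)\,y = \lambda y
\begin{align*}
\text{on } [0,a]:\quad   & y(0)=y(a)=0       && \Rightarrow\ \{\lambda_k\},\\
\text{on } [0,a/2]:\quad & y(0)=y'(a/2)=0    && \Rightarrow\ \{\mu^{(1)}_k\},\\
\text{on } [0,a/2]:\quad & y(0)=y(a/2)=0     && \Rightarrow\ \{\nu^{(1)}_k\},\\
\text{on } [a/2,a]:\quad & y'(a/2)=y(a)=0    && \Rightarrow\ \{\mu^{(2)}_k\},\\
\text{on } [a/2,a]:\quad & y(a/2)=y(a)=0     && \Rightarrow\ \{\nu^{(2)}_k\}.
\end{align*}
```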
The aim of the present paper is to show that one may use $\{\lambda_k\}_{k=-\infty,\,k\neq 0}^{\infty}$ and certain parts of the spectra $\{\mu_k^{(1)}\}$ and $\{\mu_k^{(2)}\}$: if the $\nu_k^{(1)}$'s and $\nu_k^{(2)}$'s are given not in full, but excluding a finite number $2n_1$ of the $\nu_k^{(1)}$'s and a finite number $2n_2$ of the $\nu_k^{(2)}$'s, then it is possible to use $2n_1$ eigenvalues $\mu_k^{(2)}$ and $2n_2$ eigenvalues $\mu_k^{(1)}$ instead, to determine q on (0, a).
Also, we rewrite (1.4) and (1.5) accordingly. The spectra $\{\nu_k^{(1)}\}$ of problems (2.6) coincide with the sets of zeros of the corresponding characteristic functions, and the spectra $\{\nu_k^{(2)}\}$ of problems (2.7) coincide with the sets of zeros of the corresponding characteristic functions, where $\psi_{2,j} \in L_2(0,a)$ and $\psi_{2,j}(0) = 0$. Let us look for the solution of problem (2.1)-(2.5) as a linear combination with constant coefficients $C_j$. Then (2.4) and (2.5) imply a system of equations which possesses a nontrivial solution at the zeros of its characteristic function; the set of zeros of this function is the spectrum of problem (2.1)-(2.5). Let us notice that problem (2.1)-(2.5) is the Dirichlet-Dirichlet problem on the whole interval, and therefore its spectrum is $\{\lambda_k\}_{k=-\infty,\,k\neq 0}^{\infty}$. The corresponding direct theorem is as follows.
ii) all $\lambda_k$ are simple, and for $k > 0$, $\lambda_k = \tau_k$ if and only if $\lambda_k = \tau_{k+1}$.
Proof. It is known that the functions involved are essentially positive Nevanlinna functions. It is also known (see, e.g., [19], Sec. 4.1) that if $f$ and $g$ are essentially positive Nevanlinna functions, then so is $f + g$. The statement follows.
Inverse three spectral problem
In this section we will use known results on sine-type functions. Remark. It can happen that regular intervals do not exist.
We will use the following notation. The main result of this paper is given by the following theorem.
It is easy to prove in the same way that $\tilde{Y}(\lambda)$ is also uniquely determined by (3.4). We identify with $\nu_k^{(1)}$ the zero of $\tilde{X}$ which lies in the regular interval containing $\mu_k^{(2)}$ ($k \in N_1$), and with $\nu_k^{(2)}$ the zero of $\tilde{Y}$ which lies in the regular interval containing $\mu_k^{(1)}$ ($k \in N_2$). The procedure for recovering the potential from the spectral data $\{\lambda_k\}_{k=-\infty,\,k\neq 0}^{\infty}$ can be found in [15]. The theorem is proved.
"year": 2016,
"sha1": "e379a72f856b7b4fb62fe8147d707dfe193f4714",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e379a72f856b7b4fb62fe8147d707dfe193f4714",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
252890995 | pes2o/s2orc | v3-fos-license | A Review of Routine Laboratory Biomarkers for the Detection of Severe COVID-19 Disease
As the COVID-19 pandemic continues, there is an urgent need to identify clinical and laboratory predictors of disease severity and prognosis. Once the coronavirus enters the cell, it triggers additional events via different signaling pathways. Cellular and molecular deregulation evoked by coronavirus infection can manifest as changes in laboratory findings. Understanding the relationship between laboratory biomarkers and COVID-19 outcomes would help in developing a risk-stratified approach to the treatment of patients with this disease. The purpose of this review is to investigate the role of hematological (white blood cell (WBC), lymphocyte, and neutrophil count, neutrophil-to-lymphocyte ratio (NLR), platelet, and red blood cell (RBC) count), inflammatory (C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), and lactate dehydrogenase (LDH)), and biochemical (albumin, aspartate aminotransferase (AST) and alanine aminotransferase (ALT), blood urea nitrogen (BUN), creatinine, D-dimer, total cholesterol, low-density lipoprotein (LDL), and high-density lipoprotein (HDL)) biomarkers in the pathogenesis of COVID-19 disease and how their levels vary according to disease severity.
Introduction
Since the onset of the new coronavirus (SARS-CoV-2 (severe acute respiratory syndrome coronavirus-2), previously known as 2019-nCoV) pandemic in December 2019 [1,2], confirmed cases have been reported in countries all over the world. The World Health Organization proclaimed the 2019 coronavirus disease (COVID-19) pandemic on March 11, 2020, mostly because of the disease's pervasive development [3]. Prior to that, it began as an outbreak in mainland China, with the first reports coming from the city of Wuhan in the province of Hubei in December 2019. After the virus' genome was sequenced, the virus was given the name severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) by the International Committee on Taxonomy of Viruses. It shared genetic ancestry with the coronavirus that caused the SARS epidemic of 2003 [4]. Until July 30, 2022, COVID-19, the disease caused by SARS-CoV-2 virus infection, has posed a very significant threat to global public health, with a total of 581,182,629 reported cases and 6,418,043 mortalities documented [5]. With 3,987,543 cases, the American continent was among those with the largest number of cases, with the United States and Brazil as the leading countries (2,137,731 and 923,189, respectively) [5,6].
SARS-CoV-2 belongs to the order Nidovirales, suborder Cornidovirineae, family Coronaviridae, and subfamily Orthocoronavirinae. SARS-CoV-2 is an enveloped and symmetrical virus with spike-like projections on its membrane, giving it the shape of a crown. It has a positive-sense single-stranded RNA genome (Figure 1) [7,8].
The COVID-19 pathogenesis includes direct cytotoxicity of the virus in ACE2 (angiotensin-converting enzyme 2)-expressing cells; Renin-Angiotensin-Aldosterone System (RAAS) dysregulation secondary to virus-mediated ACE2 downregulation; immune response dysregulation; damage to endothelial cells and thromboinflammation; and tissue fibrosis (Figure 2) [9]. The heterogeneous course of COVID-19 disease is unpredictable, with most patients presenting with mild, self-limiting symptoms. The infection commonly starts with flu-like symptoms [10] and can be asymptomatic or may have a minor to severe course [11]. Despite this, up to 30% of patients require hospitalization, and up to 17% of them need intensive care support for acute respiratory distress syndrome (ARDS), hyper-inflammatory responses, and multiorgan failure [12,13].
A small percentage of infected people have no or moderate symptoms, whereas most of the cases have a severe or crucial prognosis, and some people die as a result. Although elderly patients with underlying conditions appear to be more vulnerable to serious sickness and death, there are incidences of life-threatening disease in healthy people. As a result, certain critical issues remained unresolved, as for why does illness intensity differ between individuals? How do some people have a more serious sickness than others? e variability of the COVID-19 clinical phenotype may be addressed by several parameters relating to the host, virus, and environment [20]. e interpretation and understanding of host variables, particularly genetic construction, has largely remained unclear. On the other hand, there is limited information on the pathophysiology of SARS-CoV-2 and only a few educated guesses about the virus' behavior. Scientists are just now starting to understand how host, viral, and environmental variables interplay to impact infection. In fact, people of any age, particularly older adults with comorbidities like chronic bronchitis [21], diabetes [22], hypertension [23], cardiovascular disease [24], lung and liver diseases [25], chronic kidney disease (CKD) [26], and chronic obstructive pulmonary disease (COPD) [21], may present with more severe disease. Furthermore, disorders and therapies that damage the immune system, including cancer therapy, bone marrow or organ transplantation, and long-term use of corticosteroids, may increase the risk of disease, leading to more serious prognosis, and even death [27,28]. On the other hand, despite the fact that epidemiological data in some research found no indication of a greater transmission of SARS-CoV-2 among asthma patients, it appears that patients with nonallergic asthma had a more severe COVID-19 infection than patients with allergic asthma [29]. Importantly, during metabolic syndrome, the disease's deteriorated prognosis might be attributed to the three factors of this syndrome including hypertension, type 2 diabetes, and obesity, which are all risk factors for COVID-19 severity. Along with these well-known risk factors, there are other host-related variables that might influence COVID-19's result [20]. It has been shown in numerous studies, the SARS-CoV-2 virus affects not only the respiratory tract, but also other systems and organs such as the heart, liver, and gastrointestinal system. Some studies have shown that several important biochemical and hematological indicators are altered in COVID-19 patients [30][31][32]. ese biomarkers may be useful for predicting prognosis and also for treatment management, especially in cases with comorbidities and/or a severe disease course [33].
Understanding the relationship between these biomarkers and COVID-19 outcomes would help in developing a risk-stratified approach to the treatment of patients with this disease. e purpose of this article is to investigate the role of hematological, inflammatory, and biochemical biomarkers in the pathogenesis of COVID-19 disease and how their levels vary according to the disease severity. quickly determine if a patient is anemic or infected, as well as to evaluate the blood's ability to coagulate normally [45]. White blood cell (WBC) count, lymphocyte count, neutrophil count, neutrophil-to-lymphocyte ratio (NLR), platelet count, and red blood cell (RBC) count are among the hematological biomarkers applied to stratify COVID-19 patients.
WBC Count.
2.1.1. WBC Count.
In a study of 140 hospitalized COVID-19 patients confirmed by computed tomography (CT) scan findings, Zhang et al. found that the leukocyte count was within the normal range in 68.1% of patients, decreased in 19.6% of them, and increased in 12.3% of them. They also observed that 75.4% of patients had lymphopenia [46]. On the other hand, Qin and colleagues investigated immune response dysregulation markers in 452 confirmed COVID-19 patients. They reported that severe cases had higher leukocyte counts [47]. In a meta-analysis of 21 studies involving 3377 COVID-19-positive patients, Henry et al. discovered that patients with serious and fatal disease had considerably higher WBC counts in comparison to nonsevere patients [48]. In a study conducted by Mardani et al., hematological biomarkers were compared between 70 COVID-19-positive patients and 130 COVID-19-negative patients. It was found that the RT-PCR-positive group had significantly lower WBC counts than the negative control group [49].
In a retrospective cohort study of 219 COVID-19 patients, a higher white blood cell (WBC) count was linked to an increased risk of one-month mortality [50]. Moreover, Ferrari et al., in a retrospective study, reported that patients with COVID-19 had significantly higher WBC counts compared to controls [51]. Overall, the current evidence suggests that, while the WBC count can be utilized as a predictive factor for more severe COVID-19 conditions, the research findings are not consistent and further studies are required.
2.1.2. Lymphocyte Count.
Yang et al. [52] found lymphopenia in 80% of severely ill adult COVID-19 patients, while Chen et al. [15] found it in only 25% of mildly infected patients.
These findings suggest that lymphopenia may be related to the severity of infection. As observed by Henry et al., severe and fatal cases of COVID-19 tend to have a lower lymphocyte count than mild cases [48]. These findings were also confirmed by Qin et al. [47].
Chen et al. in a multicentric, retrospective study of 548 confirmed COVID-19 patients reported that survivors and nonsurvivors had significantly different hematological biomarkers on admission and at the end point. In fact, on admission, severe and critical cases, as well as nonsurvivors, had significant lymphopenia [53]. A meta-analysis of 20 publications found statistically significant decreases in total lymphocytes, CD4+ and CD8+ T-cells, and B-cells in critically ill COVID-19 patients compared to patients with moderate or mild disease [54]. Moutchia et al., in their systematic review, also reported that severe and critical cases of COVID-19 were characterized by lower lymphocyte and CD4 counts [55]. Due to the significant prevalence of lymphopenia in COVID-19 patients and its strong correlation with disease severity, current research suggests that lymphocyte count, particularly CD4+ levels, can be employed as a predictive biomarker for disease severity.
2.1.3. Neutrophil Count.
According to the available evidence, neutrophilia is an expression of a cytokine storm and hyperinflammatory state that play a key role in the pathogenesis of COVID-19 [2,56]. Furthermore, neutrophilia can be caused by a secondary bacterial infection, which is more common in patients with advanced disease [14]. Chen and colleagues, in their study, observed that on admission, severe and critical cases, as well as nonsurvivors, had significantly increased neutrophil count compared with mild cases [53]. As reported by Mardani et al., confirmed COVID-19 patients have a significant increase in neutrophil count in comparison to the control group [49]. Moutchia et al., in a systematic review of 45 studies, observed that compared to nonsevere COVID-19 cases, patients with severe or critical COVID-19 have higher neutrophil counts [55]. In a retrospective cohort study of 201 COVID-19 patients, bivariate Cox regression analysis revealed that neutrophilia was linked to the development of acute respiratory distress syndrome (ARDS) and the progression from ARDS to death [57].
These studies demonstrated that neutrophilia in COVID-19 patients was associated with the severity of the disease.
2.1.4. Neutrophil-to-Lymphocyte Ratio (NLR). The neutrophil-to-lymphocyte ratio (NLR) was first considered in esophageal carcinoma patients under chemotherapeutic treatment, and is obtained by dividing the relative percentage of neutrophils by that of lymphocytes. While the normal reference range in a healthy population is 1-3, values greater than 3 indicate an ongoing infection, and a ratio greater than 9 indicates sepsis. Therefore, the NLR value is associated with current inflammation and serves as a prognostic biomarker [58].
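The computation and the reference bands just described can be expressed directly; in the sketch below, the thresholds come from the text while the example counts are illustrative.

```python
# Hedged sketch: computing and interpreting NLR with the reference bands
# quoted in the text (normal 1-3, >3 ongoing infection, >9 sepsis).
def nlr(neutrophils: float, lymphocytes: float) -> float:
    return neutrophils / lymphocytes

def interpret(ratio: float) -> str:
    if ratio > 9:
        return "suggestive of sepsis"
    if ratio > 3:
        return "suggestive of ongoing infection"
    return "within the normal 1-3 reference range"

ratio = nlr(neutrophils=7.2, lymphocytes=0.7)   # illustrative counts (10^9 cells/L)
print(f"NLR = {ratio:.1f}: {interpret(ratio)}")
```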
As observed by Ma et al., patients with a higher neutrophil-to-lymphocyte ratio (NLR > 9.8) had a higher incidence of ARDS (P = 0.005), as well as higher rates of nonmechanical and mechanical ventilation (P = 0.002 and P = 0.048, respectively) [38]. Chen et al. also reported that, in comparison to milder cases, severe and nonsurvivor COVID-19 cases had a higher NLR as an inflammatory biomarker and a marker of systemic inflammation [53]. Qin and colleagues also published similar findings in their study [47]. In a retrospective, cross-sectional study, 101 COVID-19-positive patients were examined by means of hematological parameters; the ratio of neutrophils to lymphocytes showed a significant relationship with disease severity (P = 0.001) [59]. Moreover, in a retrospective cohort study of 219 confirmed COVID-19 patients, a significant association between an increased NLR and an increased risk of one-month mortality was reported [50]. Given the relationship between an increased NLR and greater disease severity, NLR can be used as a prognostic marker in COVID-19 patients.
Platelet Count.
Platelet count has been suggested as a potential biomarker for COVID-19 patients because it is a simple, inexpensive, and easily accessible marker that correlates with disease severity and mortality risk in the intensive care unit (ICU). Platelet count was found to be markedly lower in COVID-19 patients [60], and it was lower in nonsurvivors in comparison to survivors [61]. Waris et al., in a retrospective cross-sectional study, found that the mean platelet count (165.0 × 10⁹/L) was significantly lower (P < 0.001) among critical COVID-19 patients compared to the mild group (217.0 × 10⁹/L) [59]. According to Chen et al., severe and nonsurvivor COVID-19 patients had significant thrombocytopenia when compared to milder cases [53]. Henry et al., in their study, also discovered similar results and reported that patients with serious and fatal disease had significantly lower platelet counts in comparison to nonsevere patients [48].
Liu et al., in a retrospective study of 383 COVID-19 patients, reported that thrombocytopenia at the time of admission was linked to a nearly threefold increase in mortality rate compared to those who did not have thrombocytopenia.
They also found that a 50 × 10⁹/L increase in platelet count was associated with a 40% reduction in mortality [62].
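If Liu et al.'s estimate is read as a log-linear (Cox-type) effect, the reported 40% mortality reduction per 50 × 10⁹/L increase corresponds to a hazard ratio of 0.6 per 50-unit step, which can be extrapolated as in the sketch below; this is a back-of-the-envelope illustration under that assumption, not a validated risk model from [62].

```python
# Illustrative only: treat the reported association (40% lower mortality
# per 50 x 10^9/L platelet increase [62]) as a log-linear hazard effect.
HR_PER_50_UNITS = 0.6  # hazard ratio per 50 x 10^9/L increase

def relative_hazard(platelet_increase: float) -> float:
    """Extrapolated hazard ratio for an arbitrary platelet increase,
    assuming the effect is multiplicative on the hazard scale."""
    return HR_PER_50_UNITS ** (platelet_increase / 50.0)

for delta in (25, 50, 100):
    print(f"+{delta} x 10^9/L -> HR ~ {relative_hazard(delta):.2f}")
# +25 -> ~0.77, +50 -> 0.60, +100 -> 0.36
```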
These studies demonstrated that a lower platelet count in COVID-19 cases was associated with disease severity.
RBC Count.
One study observed that, compared to healthy people, COVID-19 patients have lower red blood cell (RBC) counts [11]. Taneri and colleagues conducted a systematic review to evaluate the biomarkers of anemia and iron metabolism; compared with moderate COVID-19 patients, severe cases had a lower RBC count [63]. In a longitudinal cohort study of 379 COVID-19 patients, Lanini et al. reported that the mean RBC count was significantly lower in nonsurvivor patients compared to survivors. According to the temporal analysis, survivors and nonsurvivors had similar RBC counts at the time of symptom onset (P = 0.257); the model predicted that, by day 2 after the onset of symptoms, the average RBC level would be significantly different between survivors and nonsurvivors [64]. The current evidence indicates that severe COVID-19 is associated with lower RBC counts in patients.
2.2.1. CRP. C-reactive protein (CRP) is an acute-phase reactant generated by the liver and other organs in response to the release of IL-6, and it is a sensitive biomarker for a variety of inflammatory disorders such as infection and tissue injury. Many patients with severe illness have high CRP levels [14]. In severe COVID-19 patients, the CRP marker was shown to be significantly elevated in the early stages of infection; it has also been linked to disease progression and is a predictor of severe COVID-19 [66]. In a retrospective cohort study of 140 confirmed COVID-19 patients, Liu et al. found that the proportion of patients with increased serum CRP levels was significantly higher among severe cases than among mild cases [67]. Another study investigated the association between CRP levels and lung lesions on CT scans: in the early stages of COVID-19, CRP levels were strongly linked with the diameter of lung lesions and the severity of illness [68]. According to Stringer et al., a CRP level of 40 mg/L or above on admission to the hospital should be considered a reliable predictor of disease severity and increased mortality risk in COVID-19 patients [69]. Smilowitz et al. examined the relationship between serum CRP concentrations and adverse outcomes in hospitalized COVID-19 patients. The results suggested that CRP levels above the median (108 mg/L) were correlated with venous thromboembolism, acute kidney injury, critical illness, and mortality, in comparison with CRP levels below the median [70].
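As a rough illustration of how the two cutoffs cited above might be combined for crude risk flagging (the 40 mg/L admission threshold from Stringer et al. [69] and the 108 mg/L cohort median from Smilowitz et al. [70]), consider the following sketch; the tiered output and function name are our own and do not constitute clinical guidance.

```python
def crp_risk_flag(crp_mg_per_l: float) -> str:
    """Crude CRP-based risk flag using the cutoffs cited in the text:
    >= 40 mg/L at admission predicted severity/mortality [69];
    > 108 mg/L (cohort median) was linked to VTE, AKI, critical
    illness, and mortality [70]."""
    if crp_mg_per_l > 108:
        return "high risk: above cohort median linked to adverse outcomes"
    if crp_mg_per_l >= 40:
        return "elevated risk: above admission threshold for severe disease"
    return "below the cited risk thresholds"

for crp in (12.0, 75.0, 180.0):
    print(f"CRP {crp} mg/L -> {crp_risk_flag(crp)}")
```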
These studies suggest that CRP may be used for risk stratification of COVID-19 patients and early identification of disease severity, adverse outcomes, and mortality risk.
ESR.
The ESR is a measurement of the rate at which RBCs settle in a tube of anticoagulated blood over a given period. A wide range of immune and nonimmune factors can affect the ESR, including changes in RBC quality and quantity, as well as changes in the normal patterns and levels of different plasma proteins [71]. It has been demonstrated that severe COVID-19 is associated with a significant elevation in both ESR and CRP levels in the early stages of the disease; however, CRP changes are more sensitive to the disease condition [66]. Xiong and colleagues analyzed the clinical, laboratory, and high-resolution CT scan findings of 42 COVID-19 patients. According to their results, the severity of pneumonia on the initial CT scan had a significant positive correlation with the ESR. They suggested that an elevation in ESR, CRP, and LDH levels may be an indicator of the severity of inflammation or extensive tissue injury [72].
In a meta-analysis of 17 articles addressing inflammatory biomarkers in COVID-19 patients, Ghahramani et al. also confirmed the correlation between higher ESR levels and severity of the disease [73]. The available data indicate that, given the correlation between elevated ESR and higher disease severity, ESR can be used as a prognostic biomarker in COVID-19 patients.
LDH.
Lactate dehydrogenase (LDH) is an intracellular enzyme that catalyzes the oxidation of pyruvate to lactate during anaerobic glycolysis [74]. Serum LDH is routinely tested in clinical settings for a variety of diseases, and elevated serum LDH levels have been linked to a poor prognosis in a variety of conditions, particularly tumors and inflammation [56]. Despite the fact that LDH is an enzyme that originates from many organs, it increases significantly in patients with pulmonary involvement [75]. As noted by Wu et al., at the time of diagnosis, significant differences in serum LDH levels were detected between nonsevere and severe COVID-19 patients; an increase or decrease in LDH levels was also a sign of radiographic improvement or progression [76]. Han et al., in a retrospective observational study of 107 confirmed COVID-19 patients, suggested that LDH has the potential to serve as a powerful predictor for early detection of lung injury and severe cases [77]. According to a meta-analysis by Deng et al., 52% of COVID-19 patients had elevated LDH levels [31]. In a study of 123 hospitalized COVID-19 patients confirmed by RT-PCR, serum LDH levels were elevated in 89% of patients. Another finding was the strong negative correlation between LDH and the ratio of the partial pressure of arterial oxygen to the fraction of inspired oxygen (PaO2/FiO2), an indicator of patients' respiratory function. Therefore, it was concluded from this study that, to avoid a poor prognosis, LDH should be regarded as a useful test for the early identification of cases that need more aggressive supportive therapies and closer respiratory monitoring [78]. In a retrospective case-control study, Li et al. observed that an increased serum LDH level at admission is an independent risk factor for COVID-19 severity and mortality; they concluded that LDH can help with early COVID-19 detection [56]. Akdogan et al. reported that, in the early stages of COVID-19, LDH levels were strongly linked to lung lesions, possibly reflecting disease severity [79].
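Because the study above used the PaO2/FiO2 (P/F) ratio as the respiratory-function readout against which LDH was correlated, a short sketch of that computation may be helpful; the severity bands follow the widely used Berlin ARDS definition (mild 200-300, moderate 100-200, severe <100 mmHg), which is an assumption added here for context and is not taken from [78].

```python
def pao2_fio2_ratio(pao2_mmhg: float, fio2_fraction: float) -> float:
    """PaO2/FiO2 (P/F) ratio: arterial oxygen partial pressure divided by
    the fraction of inspired oxygen (FiO2 as a fraction, e.g. 0.21 on
    room air)."""
    if not 0.21 <= fio2_fraction <= 1.0:
        raise ValueError("FiO2 must be a fraction between 0.21 and 1.0")
    return pao2_mmhg / fio2_fraction

def berlin_category(pf_ratio: float) -> str:
    """Berlin-definition ARDS severity bands (an assumption added for
    context; not taken from the cited study)."""
    if pf_ratio < 100:
        return "severe ARDS range"
    if pf_ratio < 200:
        return "moderate ARDS range"
    if pf_ratio <= 300:
        return "mild ARDS range"
    return "above ARDS thresholds"

pf = pao2_fio2_ratio(pao2_mmhg=70, fio2_fraction=0.5)  # -> 140
print(f"P/F = {pf:.0f} mmHg: {berlin_category(pf)}")
```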
From the current data, it can be concluded that an increased serum LDH level is linked to the severity of COVID-19 and it can be used for early detection of lung involvement, disease severity, and mortality risk.
Inflammatory Cytokines.
Research on SARS-CoV, MERS-CoV, and SARS-CoV-2 infection routinely focuses on innate immune responses, inflammasome activation, inflammatory cell-death pathways, and cytokine release [80]. Cytokines are polypeptide signaling molecules that act on cell surface receptors to modulate a variety of biological activities [81]. Hyperproduction of proinflammatory cytokines such as interleukin 1 (IL-1), IL-6, interleukin 12 (IL-12), interferon gamma (IFN-ɣ), and tumor necrosis factor alpha (TNF-α), which selectively target lung tissue, can significantly impair the prognosis in the most severe cases [82]. In critically ill COVID-19 patients, the cytokine storm can worsen clinical symptoms or even cause premature death. To enhance the survival rate of COVID-19 patients, early management of the cytokine storm using immunomodulatory agents and cytokine antagonists is critical. Table 1 lists the most important inflammatory factors during SARS-CoV-2 infection.
Albumin.
Albumin is a negative acute-phase reactant with antioxidant properties. Therefore, under physiological circumstances, plasma albumin is a rich source of free thiols capable of scavenging reactive oxidant species. Under oxidative stress, albumin can undergo irreversible oxidation, impairing its antioxidant properties and eventually causing cell and tissue damage [113]. In a systematic review and meta-analysis of 10 studies evaluating a total of 1745 COVID-19 patients, 34% of patients demonstrated serum albumin levels below the normal range [31]. Wu et al. reported that, at admission, the most common abnormal liver biochemical marker observed in COVID-19 cases was abnormal albumin [114]. In a recent article, Violi et al. investigated the predictive value of serum albumin for COVID-19 mortality. They observed that nonsurvivors had lower albumin values than survivors. Cox regression analysis revealed that albumin was independently associated with mortality (hazard ratio: 2.48) after adjusting for sex, heart failure, chronic obstructive pulmonary disease (COPD), and CRP levels [113].
Li and colleagues found that lower levels of albumin were associated with increased severity of COVID-19 pneumonia. Even after adjusting for confounding factors, plasma albumin values in the critical group continued to have a significant correlation with the risk of mortality [76]. Due to the association between decreased serum albumin levels and higher disease severity, serum albumin can be used as a predictive biomarker for the severity of the disease in COVID-19 patients.
AST & ALT.
Abnormal liver function tests (LFTs), including elevated alanine aminotransferase (ALT) and aspartate aminotransferase (AST), are indicators of hepatocyte injury. However, it has been suggested that elevated aminotransferases in COVID-19 could also be caused by myositis rather than liver damage [115]. Deng et al., in their systematic review and meta-analysis, reported that ALT and AST values were above normal in 16% and 20% of COVID-19 patients, respectively [31]. In a systematic review by Wu and colleagues, the incidence, risk factors, and prognosis of abnormal liver biomarkers in patients with COVID-19 were evaluated. They found that, at admission, the pooled incidence of abnormal AST and ALT was 21.8% and 20.4%, respectively. Meta-analysis showed that serum AST and ALT levels were significantly increased in severe and critical cases compared to mild and moderate patients [114]. A recent systematic review and meta-analysis of 42 articles reported that, in nonsevere COVID-19 patients, an increase in ALT and AST levels was found in 30% and 21% of cases, respectively, while in severe patients it was found in 38% and 48% [116]. Canovi et al., in an observational cross-sectional study of 866 COVID-19 patients, observed that circulating concentrations of serum AST and ALT increased progressively with worsening parenchymal lesions [117]. The current evidence suggests that severe COVID-19 is correlated with higher AST and ALT levels.
Blood Urea Nitrogen (BUN).
Mudatsir et al., in a systematic review of 19 articles, observed that elevated levels of BUN were associated with severe COVID-19 [118].
Ghahramani et al., in their systematic review and meta-analysis, also reported that BUN levels showed a significant increase in severe patients compared to nonsevere ones [73]. In another meta-analysis, Danwang et al. confirmed this relation and found that severe COVID-19 patients tend to have higher levels of BUN [119]. Zhang and colleagues, in a study of 289 COVID-19 patients, observed that, even among survivors, increased BUN on admission was found in severe cases compared to cases with nonsevere disease.
They also reported that high levels of this biomarker were associated with in-hospital death [120]. Moreover, in a multicenter retrospective cohort study of 12,413 confirmed COVID-19 patients, a significant association between increased baseline BUN levels and an increased risk of all-cause mortality was reported [121]. A recent study of 266 COVID-19 patients reported that, compared with mild cases, severe patients had higher levels of BUN, suggesting that BUN could be an independent predictor of COVID-19 severity [122]. These studies showed that higher BUN concentrations in COVID-19 cases were associated with higher disease severity and mortality rates.
[Table 1 (continued), displaced here in the source: TNF-α was one of the cytokines whose overproduction was related to a poor prognosis in patients with SARS-CoV-2 [106-111]. VEGF is essential for vascular endothelial homeostasis and is present in numerous cells; VEGF hyperregulation is observed in various viral infections, especially COVID-19, and VEGF may be useful in approaches to the regeneration of lung tissue and the treatment of lung fibrosis [2,107,112].]
Serum Creatinine.
As reported by Mudatsir et al., elevated levels of serum creatinine were correlated with severe manifestations of COVID-19 [118]. Ghahramani et al. also confirmed the association between higher creatinine levels and disease severity in COVID-19 patients [73].
Danwang et al., in a meta-analysis, reported that serum creatinine levels are higher among severe COVID-19 cases than among mild cases [119]. Zhang et al. confirmed that high levels of serum creatinine were associated with in-hospital mortality [120]. Liu et al. also observed a relationship between elevated serum creatinine and increased all-cause mortality risk among COVID-19 cases [121]. Ferrando-Vivas et al., in an observational cohort study of 10,362 critically ill patients, found links between increased serum creatinine and higher 30-day mortality among COVID-19 cases, implying that deteriorating renal function was linked to a higher risk of death, as seen in many other types of critical illness [123]. Chen et al. evaluated the impact of abnormal renal function on COVID-19 patients' prognosis and the prognostic value of various renal function indicators. They found that, when adjusted for several important clinical variables, increased creatinine levels were independent predictors of mortality [124]. Current evidence suggests that serum creatinine can be used as a predictive biomarker for more severe COVID-19 and the risk of mortality.
D-Dimer.
The plasma cleavage product D-dimer is a breakdown product of cross-linked fibrin. During systemic fibrinolysis, after alpha-2 antiplasmin depletion, plasmin can degrade fibrin monomers, cross-linked fibrin polymers, and even fibrinogen [125]; the term fibrin degradation products (FDPs) refers to all of these fragments [126]. Although preventive anticoagulation in ICU patients in China was not widespread when these studies were conducted, increased D-dimer in COVID-19 patients has been linked to greater mortality in several publications. While the influence of anticoagulants on D-dimer levels in the context of COVID-19 is unclear, individuals on anticoagulation therapy frequently have very low D-dimer levels [127]. However, it has been demonstrated that there is a dynamic association between COVID-19 progression and the degree of D-dimer elevation [128].
A relation has been reported between asymptomatic deep vein thrombosis (DVT) and hospitalization for COVID-19 pneumonia; nevertheless, for the diagnosis of DVT in COVID-19 patients, higher D-dimer cut-off levels may be required [129]. Additionally, it was shown that D-dimer levels above 2.0 μg/mL may reliably predict in-hospital mortality in COVID-19 patients, suggesting that D-dimer might be a helpful marker for better COVID-19 patient diagnosis [57,130].
These results indicate that a higher D-dimer level in COVID-19 patients is associated with a higher death rate. Hence, D-dimer characterization can be one of the most valuable options for initial evaluation.
2.3.6. Total Cholesterol. In a retrospective study including 597 hospitalized COVID-19 patients, Wei et al. reported that total cholesterol (TC) levels were significantly lower among COVID-19 cases in comparison to the control group [131]. In a meta-analysis of 22 studies, Zinellu et al. observed that TC levels were significantly lower in hospitalized COVID-19 patients with severe disease or nonsurvivor status [132]. Aparisi and colleagues evaluated the correlation of lipid biomarkers with 30-day all-cause mortality among COVID-19 patients. They found that nonsurvivor cases had lower TC throughout the entire course of the disease [133]. Li et al. investigated changes in the lipid profile and their association with COVID-19 severity. They observed that TC levels showed an increasing trend in survivors but a decreasing trend in nonsurvivors [120]. On the other hand, in a retrospective study, Qin et al. found that levels of total cholesterol were negatively correlated with length of hospital stay (LOS) in COVID-19 patients.
They reported that, at admission, serum levels of TC in the LOS >29 days group were significantly lower than those in the LOS ≤29 days group [134].
From the available literature, it can be stated that a lower total cholesterol (TC) level is linked to the severity of COVID-19, a longer length of hospital stay (LOS), and mortality risk.
Low-Density Lipoprotein Cholesterol (LDL-c) and High-Density Lipoprotein Cholesterol (HDL-c). In their study, Wei et al. reported that COVID-19 cases have significantly lower LDL-c levels in comparison to healthy subjects. Moreover, they reported that, compared to mild and severe cases, high-density lipoprotein cholesterol (HDL-c) levels decreased significantly only in critical cases [131]. According to Zinellu et al., patients with severe COVID-19 have significantly lower levels of HDL-c and LDL-c when compared to patients with milder disease [132]. As noted by Aparisi et al., nonsurvivor COVID-19 patients had lower LDL-c levels than survivors during their disease course [133]. According to Li et al., in severe COVID-19 patients, LDL-c and HDL-c levels tend to be higher in survivors than in nonsurvivors [120]. Moreover, a cross-sectional retrospective study showed that COVID-19 patients had a serum HDL-c level of 1.02 ± 0.28 mmol/L, significantly lower than that of the control group (1.52 ± 0.55 mmol/L). Furthermore, the serum HDL-c level in the severe COVID-19 group was 0.83 ± 1.67 mmol/L, considerably lower than that in nonsevere COVID-19 patients (1.15 ± 0.27 mmol/L) [135]. On the other hand, Qin et al. reported that the length of hospital stay was negatively correlated with serum HDL-c and LDL-c levels [134]. They found that serum HDL-c and LDL-c values were significantly lower in the LOS >29 days group compared to the LOS ≤29 days group at admission [134]. These studies indicate that serum HDL-c and LDL-c levels can be utilized for risk stratification of COVID-19 patients, identification of disease severity, length of hospitalization, and mortality risk. Common laboratory biomarkers and their relationship to the severity of COVID-19 are summarized in Table 2.
[Table 2 row, displaced here in the source: HDL-c (normal >40 mg/dL) and LDL-c (normal <100 mg/dL) decrease in severe COVID-19 [120,131-134].]
Conclusion
Since the beginning of the COVID-19 outbreak, the capacity of hematological, biochemical, inflammatory, and immunological factors to predict severe or fatal forms of COVID-19 has been of great scientific importance. To predict the severity of the disease in its early stages, it is critical to obtain a full profile of the laboratory analysis. According to the reviewed literature, hematological, inflammatory, and biochemical parameters are associated with severe prognosis in COVID-19 cases and can thus be used as predictive factors. This can especially be facilitated by evaluating factors such as WBC, RBC, platelet, lymphocyte, and neutrophil counts and NLR as hematological biomarkers; CRP, ESR, and LDH as inflammatory biomarkers; and albumin, AST and ALT, BUN, creatinine, D-dimer, total cholesterol, LDL-c, and HDL-c as biochemical biomarkers. However, it is important to recognize that disease progression cannot be predicted by relying on only one or two factors; it is crucial to monitor most of these elements together, as sketched below. On the other hand, while various possible therapeutic options for COVID-19 have been investigated, regulating proinflammatory responses to inactivate the virus could be the best treatment option.
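To make the point about joint monitoring concrete, the sketch below aggregates several of the reviewed markers into a simple count of abnormal flags. The cutoffs are either cited in this review (NLR > 3 [58], CRP ≥ 40 mg/L [69], D-dimer > 2.0 μg/mL [57,130]) or generic laboratory reference limits assumed for illustration (LDH, albumin); the scoring scheme itself is purely illustrative and not a validated prognostic index.

```python
# Illustrative multi-marker screen; cutoffs are cited in this review where
# noted, otherwise assumed generic reference limits. Not a validated score.
THRESHOLDS = {
    "nlr": ("gt", 3.0),            # >3 suggests infection [58]
    "crp_mg_l": ("ge", 40.0),      # admission severity cutoff [69]
    "d_dimer_ug_ml": ("gt", 2.0),  # in-hospital mortality cutoff [57,130]
    "ldh_u_l": ("gt", 245.0),      # assumed upper reference limit
    "albumin_g_l": ("lt", 35.0),   # assumed lower reference limit
}

def abnormal_flags(labs: dict) -> list:
    """Return the names of markers outside the cutoffs above."""
    flags = []
    for name, (op, cut) in THRESHOLDS.items():
        value = labs.get(name)
        if value is None:
            continue  # marker not measured; skip rather than guess
        if (op == "gt" and value > cut) or \
           (op == "ge" and value >= cut) or \
           (op == "lt" and value < cut):
            flags.append(name)
    return flags

patient = {"nlr": 9.8, "crp_mg_l": 110, "d_dimer_ug_ml": 2.4,
           "ldh_u_l": 410, "albumin_g_l": 31}
print(abnormal_flags(patient))  # all five markers flagged
```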
Thus, this review provides guidance for making this prediction at hospital admission in order to reduce the adverse effects of the disease. Such a prognosis could reduce unnecessary hospitalization of patients and the costs imposed on the health care system. Furthermore, these conclusions should be constantly re-evaluated in light of new findings, since more prospective cohorts with longer follow-up will provide more useful and up-to-date data.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
"year": 2022,
"sha1": "855e4169b88e56a1e8f98408ce0da1b08277cd93",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ijac/2022/9006487.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b370fea5a9269ec0cabeceb8c253da4248b75ddb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |